[00:00] I just need to add zks to the deployment matrix
[00:00] yup
[00:00] then, if hbase or zks change, install_hive will run through the actual hive.install(**args) method
[00:02] * kwmonroe wanders off to dinner
[00:03] dinner?!
[00:31] kwmonroe: https://github.com/jamesbeedy/bigtop/blob/hive_config_add_zookeeper/bigtop-packages/src/charm/hive/layer-hive/reactive/hive.py - still giving the same result (not getting the hive zk config in the hive-site.xml)
[00:32] possibly I've missed adding something somewhere to get those picked up all the way
[00:33] I feel it's close
[00:36] oooo I think I see it
[00:36] well nm
[00:40] willing to bet that https://github.com/jamesbeedy/bigtop/blob/hive_config_add_zookeeper/bigtop-packages/src/charm/hbase/layer-hbase/reactive/hbase.py#L70 matters - that it's named 'zookeepers' somewhere outside of my peripheral
[00:40] I've decided to name mine zks https://github.com/jamesbeedy/bigtop/blob/hive_config_add_zookeeper/bigtop-packages/src/charm/hive/layer-hive/reactive/hive.py#L95
[00:41] bet if I s/zks/zookeepers/ it will be gold
[00:44] oh man .... these layers don't have a "series" tag in the metadata.yaml
[00:45] I've been deploying the wrong thing all this time, GRRRR
[00:46] it defaults to repo/builds/trusty lol oh man
[00:47] at least hive will be super consistent with hbase :)
[01:17] oooooh still not working, but I bet I need to add entries here https://github.com/apache/bigtop/blob/be9a183b4db8f183c14cc9a4ed853cf7bbbab2e5/bigtop-deploy/puppet/hieradata/bigtop/cluster.yaml#L191
[05:51] kwmonroe: I found the source of the issue
[05:51] cat /etc/hive/conf/hive-site.xml | https://paste.ubuntu.com/p/fm7mMsyvJS/
[05:52] as you can see I'm not getting the vars that I want in there, even though the template has them https://github.com/jamesbeedy/bigtop/blob/master/bigtop-deploy/puppet/modules/hadoop_hive/templates/hive-site.xml
[05:53] and the facts are there on the filesystem - cat /home/ubuntu/bigtop.release/bigtop-1.2.1/bigtop-deploy/puppet/hieradata/site.yaml | http://paste.ubuntu.com/p/dK9mhgmYvc/
[05:53] so I figured puppet must be using some template it pulls from upstream
[05:54] instead of what's in the bigtop repo
[05:54] which is correct
[05:54] and here is the template that is used https://paste.ubuntu.com/p/T5dBbw8hzm/
[05:55] /home/ubuntu/bigtop.release/bigtop-1.2.1/bigtop-deploy/puppet/modules/hadoop_hive/templates/hive-site.xml
[05:55] lol
[05:55] you could have just told me
[05:55] either way
[05:56] I think what I have in my hive_config_add_zookeeper branch will work just fine
[05:56] once we have the correct template there
[05:57] alright I can stop now
[05:57] thanks for your help today
[05:58] here's the PR btw https://github.com/apache/bigtop/pull/344/files
[06:11] setting 'bigtop_version: master' was all I needed
[06:26] geh ... that did not allow me to do what I thought it would... still no way to get what's in my branch onto the instance
[06:40] the bigtop-repo resource is the way!!
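(For reference, a rough sketch of the bigtop-repo approach landed on above, assuming the hive charm exposes a resource named bigtop-repo that overrides the upstream release tarball. The branch name comes from the chat; the zip path and the 2.x-era "juju attach" spelling are assumptions:)

    # package the patched bigtop tree from the working branch
    $ cd ~/bigtop && git archive -o /tmp/bigtop-repo.zip hive_config_add_zookeeper
    # hand it to the deployed hive application as a charm resource
    # ("juju attach" in juju 2.x; newer clients spell it "juju attach-resource")
    $ juju attach hive bigtop-repo=/tmp/bigtop-repo.zip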
=== frankban|afk is now known as frankban
[08:47] Hi. Trying to juju bootstrap with a LXC container as controller. Fails because of publickey. What's the conventional way of going about manual clouds and keys with juju?
[11:59] gsimondo1: do you mean that you're doing "juju bootstrap localhost"? Or are you doing something else?
[12:00] gsimondo1: With some of the versions of juju 2, you needed to add the credential manually, with "juju add-credential localhost".
[12:00] I think that's been fixed with the latest releases, though.
[12:02] gsimondo1: you might want to try "snap install juju" to get the latest juju, if you don't want to mess around with adding the credential.
[12:03] petevg ... he lives
[12:03] magicaltrout: I'm even in your time zone for a little while.
[12:04] jesus
[12:04] Enjoying lovely snowy Dublin.
[12:04] yeah it's pretty bleak
[12:04] i'm stuck at home today
[12:04] kids are pissing me off
[12:04] Snow day!
[12:05] kill me
[12:07] magicaltrout: it'll melt eventually. It's supposed to warm up and rain by the weekend :-)
[12:43] petevg: doing juju bootstrap manual/ip.address where ip.address is an address of a LXC container
[12:44] petevg: juju uses that default ubuntu user. I was looking into configuration options to mess with that.
[12:44] petevg: temporary solution is copying ssh keys into the container before running juju bootstrap
[12:44] petevg: currently thinking and looking if there are some conf management tool integrations for this stack - can't find any. not sure if it's a good direction either
[12:50] petevg: also running juju 2.3.4-xenial-amd64. why would I reinstall?
[12:54] gsimondo1: if you're up to date, no need to reinstall.
[12:55] I'm afraid that I don't know manual providers well.
[12:55] Anyone else have a suggestion?
[12:56] i use manual clouds all the time, but i usually have a precanned image kicking around with a key in it that becomes the base image
[12:56] magicaltrout: that makes a lot of sense
[12:57] magicaltrout: do you do cross-host lxc networking? using flannel by any chance?
[12:57] afraid not gsimondo1
[12:58] gsimondo1: according to my lunch companions, you are doing the correct thing by copying the key into the image.
[12:58] ircing on your lunch break
[12:58] what a geek
[12:58] haha
[12:59] Since juju doesn't create the machine, it can't drop the key on the machine by itself
[12:59] A geek is what I am :-)
[12:59] petevg: any way to make it use a different user than ubuntu?
[12:59] yeah
[12:59] you just alter the ssh command
[12:59] when adding the machine
[13:00] user@
[13:00] gotcha thanks
[13:00] that's what I did to set it up but just making sure I'm not hacking around it too much
[13:00] na what you're doing sounds pretty standard
[13:01] It sounds like you've got stuff figured out. We're always glad to verify, though :-)
[13:01] I'm more than grateful! :)
[13:01] don't ask petevg he only knows openstack
[13:01] he's been corrupted
[13:02] spoiled lovechild?
[13:02] those were the days... when kevin used to have team members....
[13:02] My mind has been expanded :-p
[13:03] as I told beisner last night
[13:03] openstack is so 2010
[13:03] do I feel a pitch for MaaS coming up?
[13:03] ha
[13:04] i'm only bitter because i have to run CDK on openstack
[13:04] Kubernetes on VMs.....
[13:04] cause that's a great idea
[13:05] I suddenly feel lucky using this LXC cross-host networking with a questionable amount of testing that went into it
[13:07] Exciting!
[13:08] everyone loves software networks with questionable testing
[13:08] it's what petevg lives for
[13:08] That just means I get to have fun writing tests :-)
[13:09] And making test frameworks...
[13:09] Sometimes, I write non-test code, too...
[13:10] I'll refer you at this point to my point about you IRCing at lunch
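(A minimal sketch of the manual-cloud flow discussed above; the container name, user, and address are hypothetical:)

    # seed the ssh key into the container first - juju can't drop a key
    # onto a machine it didn't create
    $ lxc file push ~/.ssh/id_rsa.pub ctl/home/myuser/.ssh/authorized_keys
    # bootstrap against the container, overriding the default ubuntu user
    $ juju bootstrap manual/myuser@10.0.3.101 lxc-controller
    # the same user@ form works when adding more machines later
    $ juju add-machine ssh:myuser@10.0.3.102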
[15:56] alright folks
[15:56] need a hand here
[15:56] kjackal: or someone
[15:57] i need to set these GC variables
[15:57] and if I update the snap's args file by hand
[15:57] your stuff zaps it
[15:57] anyone got any bright ideas?
[15:59] magicaltrout: what "stuff" zaps it? you talking about the k8s snaps?
[15:59] if so, ryebot loves the k8s snaps...
[15:59] i dunno if it's the snap
[16:00] i suspect it's the charm
[16:00] cory_fu: https://github.com/stub42/juju-relation-pgsql/pull/1/files if you are interested. The new Endpoint version of the pgsql interface is better in all respects, I think. It's more confusing than it needs to be due to backwards compatibility.
[16:00] magicaltrout: which snap/config are we talking about?
[16:01] kubelet ryebot
[16:01] the charm config doesn't let me set GC attributes
[16:01] bdx: about the series stuff... we don't set that in the individual charms since all the bigtop charms are xenial-only. that's defined in https://github.com/juju-solutions/layer-apache-bigtop-base/blob/master/metadata.yaml. but yeah, you have to build with charm build --series xenial :)
[16:01] but then when I append the args file for kubelet at some point they get removed
[16:01] magicaltrout: okay cool - there's a kubelet-extra-args option on kubernetes-worker
[16:01] aah
[16:02] k8s: almost as easy as big data.
[16:02] magicaltrout: if you have any trouble with it hit me up and I'll help out
[16:02] oh
[16:02] hold up
[16:02] run ryebot
[16:02] that's the bug we filed last week
[16:02] which impacts this
[16:02] welp, that sounds like trouble
[16:02] let me take a look
[16:02] ha
[16:03] you guys put some logic into the charm and filter out flags you don't grep
[16:03] but the GC settings aren't picked up
[16:03] or so the greek told me
[16:03] i take most things he says with a pinch of salt
[16:03] but this seemed to ring true
[16:04] magicaltrout: looks like this one? https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/498
[16:04] yeah that's me
[16:04] magicaltrout: give me a few minutes to dig into this
[16:04] thanks
[16:12] magicaltrout: Yeah, this is a bit of a pickle. Have you looked into the --eviction-hard and --eviction-soft cli options for kubelet?
[16:13] They're supposed to be the replacements for the deprecated gc flags.
[16:13] well
[16:13] no
[16:13] can i set them? :P
[16:14] You should be able to, since they're in the --help output :)
[16:14] Also look at --eviction-minimum-reclaim
[16:15] alright, i'll give it a go and see if it stops the NIST folks complaining
[16:15] they want max-dead-containers set and container-ttl set
[16:15] but i'll see if i can get away with it
[16:17] magicaltrout: If it helps, there's a deprecation/replacement table here: https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/
[16:17] hopefully that's convincing
[16:17] ha
[16:17] yeah i know that inside out these days :P
[16:17] alright thanks ryebot i'll translate what we have and see how i get on
[16:17] haha sorry to hear that xD
[16:17] alright good luck!
=== kwmonroe_ is now known as kwmonroe
=== rharper` is now known as rharper
=== frankban is now known as frankban|afk
[21:31] hey cory_fu, does charm build still require layers to be at the top level of a repo?
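(Closing the loop on the kubelet thread above: a sketch of setting ryebot's suggested eviction flags through the kubernetes-worker charm's kubelet-extra-args option. The flags-without-leading-dashes key=value formatting follows the charm's documented style, but the threshold values are purely illustrative, and it is an assumption that the charm's parser passes the nested "=" and "<" through untouched:)

    # eviction-hard and eviction-minimum-reclaim stand in for the
    # deprecated image/container GC flags
    $ juju config kubernetes-worker \
        kubelet-extra-args="eviction-hard=memory.available<100Mi eviction-minimum-reclaim=memory.available=200Mi"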