=== blahdeblah_ is now known as blahdeblah
=== frankban|afk is now known as frankban
=== freyes__ is now known as freyes
=== degville_ is now known as degville
[11:05] kjackal: stupid question, but seemingly something I've not got in any of my charms... or not that I can find
[11:05] if I set a state in a reactive charm
[11:05] how do I then unset it when the relation goes away?
[11:14] magicaltrout: Use the newer Endpoint system and it is @when_not('endpoint.foo.joined'). With the older stuff, it is rather difficult.
[11:40] Morning
[11:40] Question about openstack charms - can I mix nova-lxd and nova-kvm hypervisors in one cloud? It would be really cool
[11:43] from the charm descriptions in the store I see that it is not possible, but maybe someone knows a workaround :)
[11:52] Are there any tomcat juju pros here? I'm looking for the correct way to deploy the tomcat charm and connect (JNDI) to a deployed mysql charm, so that I can deploy a war that uses a db from the mysql server.
[12:06] elmaciej: If you mean running the kvm hypervisor on one host and the lxd hypervisor on another host, then I imagine you should be able to deploy multiple different instances of the nova-compute charm, e.g. one for kvm and one for lxd on different hosts. jamespage could probably confirm.
[12:11] zeestrat: ok, so quickly - I prepare two charm configs and deploy `juju deploy --config nova-lxd.yaml nova-compute --to 1` and `juju deploy --config nova-kvm.yaml nova-compute --to 2`
[12:19] elmaciej: Almost. You need to define the different deployments as different applications in juju. So you'd get something like `juju deploy nova-compute nova-lxd --config nova-lxd.yaml --to 1` where `nova-lxd` is the name of that application/deployment. Likewise with `juju deploy nova-compute nova-kvm --config nova-kvm.yaml --to 2`. Note that you will then need to set up relations for these individually with their
[12:19] new application name.
[12:19] Is there a recommended production setup for bare metal for canonical kubernetes with juju?
[12:21] zeestrat: that's cool, but honestly when I think about it, what are the pros/cons of having the nova lxd hypervisor? just wondering also about this one to deploy manually: https://docs.openstack.org/zun/queens/install/overview.html
[12:24] magicaltrout: You could have a look here: https://github.com/juju-solutions/interface-kube-control/blob/master/requires.py#L52
[12:26] gsimondon: what kind of setup would you expect? MaaS I guess would be nice to have so you can put cdk on top
[12:26] elmaciej: Depends on your use case. Are you comparing lxd to kvm?
[12:27] well, from an end user point of view it will still be available as an instance in horizon, so only things like latency etc will be better on lxd
[12:27] right?
[12:30] kjackal: nothing like MaaS, we have a custom solution instead of maas
[12:30] kjackal: basically I know I can create a manual cloud by adding machines, but when you spin up apps (units) juju tries to create a new machine
[12:31] kjackal: that's the part that's confusing me in this bare metal setup.
[12:31] gsimondon: let me understand something, you have MaaS already set up?
[12:32] kjackal: if you are talking about maas.io - no, we are not using that
[12:33] ok, so you want to use the manual provider over machines you already have provisioned
[12:33] kjackal: yes.
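As an aside, to make the Endpoint suggestion from 11:14 concrete: a minimal sketch of a reactive handler pair that configures something while a relation is joined and tears it down when it goes away. The endpoint name `foo` and the `charm.foo.configured` flag are illustrative assumptions, not names from the conversation.

```python
# Minimal sketch of the newer Endpoint-based pattern in charms.reactive.
# 'foo' is a hypothetical endpoint name; use the relation name declared
# in the charm's metadata.yaml.
from charms.reactive import when, when_not, set_flag, clear_flag


@when('endpoint.foo.joined')
@when_not('charm.foo.configured')
def configure_foo():
    # ... render config, open ports, etc. ...
    set_flag('charm.foo.configured')


@when('charm.foo.configured')
@when_not('endpoint.foo.joined')
def unconfigure_foo():
    # The relation went away: undo whatever configure_foo() did and drop
    # our own flag so configure_foo() can run again if the relation returns.
    clear_flag('charm.foo.configured')
```

The Endpoint base class manages the `endpoint.<name>.joined` flag for you, which is what makes the "unset it when the relation goes away" case straightforward compared with the older RelationBase style.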
[12:34] You need to pre-provision any machines: https://jujucharms.com/docs/2.2/clouds-manual and then...
[12:35] deploy the cdk bundle: https://api.jujucharms.com/charmstore/v5/canonical-kubernetes/archive/bundle.yaml
[12:36] But you might need to change this bundle a bit
[12:36] kjackal: I did that multiple times when I was evaluating this distribution the first time. Basically my questions would be 1) It sucks that I have to specify where to deploy apps. Is it possible to not do that? 2) If I deploy stuff from machine A, how can I use that machine to deploy apps to it? When you do juju status it will show only the two machines that I manually added using add-machine, but machine A
[12:36] won't be available
[12:37] kjackal: 3) Is LXD something worth considering for isolation of components in this setup?
[12:39] gsimondon: yes, creating lxd containers will work
[12:41] so to your questions: 1) in the manual provider you have to pre-provision your machines, so you should be explicit about where each app should go 2) I guess you could add machine A as a cloud machine with "juju add-machine ssh:ubuntu@machine-A"
[12:42] 3) lxd should work, I know we do this trick in orange boxes when we do not have enough nodes
[12:43] kjackal: Thanks, very helpful. Do you use LXD just for kubernetes master and workers or basically for everything being deployed by juju?
[12:45] gsimondon: have a look at how easyrsa is deployed in this bundle https://api.jujucharms.com/charmstore/v5/kubernetes-core/archive/bundle.yaml, it says --to: lxd:0. This might work for you
[12:46] My suggestion would be not to place workers inside an lxd container; although it should work, you may run into trouble when trying to expose a docker container running inside that lxd container
[12:47] Any rule of thumb on what to put in LXD and what to keep out so far?
[12:49] elmaciej: Yeah, so it depends if you and your users want those lxd advantages. Canonical has a nice marketing page for that: https://www.ubuntu.com/containers/lxd
[12:50] gsimondon: no rules, but do not put the etcds in lxd containers on the same host machine :)
[12:51] kjackal: yeah, that wouldn't be so smart. :)
[13:17] kwmonroe: are you, or do you know, a moderator of the bigdata mailing list?
[13:22] pekkari: I saw your message, do you have the charm somewhere so we can see it in action?
[13:22] We would only need to deploy zookeeper, right?
[13:23] for the testing it is enough to deploy zookeeper, kjackal
[13:23] I was firing up the charm on my laptop, so there is no environment we can share and look at unless we get on a hangout and I show it to you
=== Guest89134 is now known as zeus
[13:24] I think I know what is wrong with it
[13:25] you have somewhere:
[13:25] @when('zookeeper.started', 'leadership.is_leader', 'zkpeer.changed')
[13:25] def check_cluster_changed(zkpeer):
[13:25] any suggestion is really appreciated, as I'm running out of ideas
[13:27] no, turns out I do not...
[13:27] it's not what I thought it was, sorry
[13:30] sorry kjackal, my irc just crashed
[13:30] I'm all ears
[13:30] no, turns out I do not know what is wrong with it
[13:31] http://canacopegdl.com/images/all-ears/all-ears-9.jpg
[13:31] magicaltrout: :)
[13:32] is github down for you people or is it only me?
[13:32] https://cdn.head-fi.org/a/6941207_thumb.png or maybe this one
[13:32] just you
[13:32] hm... it is possible to get banned from gh :)
[13:33] with you
[13:33] anything is possible
[13:36] I am afraid so...
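For context on the handler quoted at 13:25: a rough sketch of the shape such a peer-change handler usually takes, including clearing the flag that triggered it so it only re-fires when the peer data changes again. The log call is illustrative; the real charm would recompute the quorum and re-render its config here, and the exact flag names depend on the interface layer.

```python
# Sketch of a peer-change handler in charms.reactive, following the
# decorator quoted above. The body is illustrative, not the real
# zookeeper charm code.
from charms.reactive import when, clear_flag
from charmhelpers.core import hookenv


@when('zookeeper.started', 'leadership.is_leader', 'zkpeer.changed')
def check_cluster_changed(zkpeer):
    # React to the peer topology change; in the real charm this is where
    # the quorum would be recomputed from zkpeer and config re-rendered.
    hookenv.log('zookeeper peer set changed, reconfiguring cluster')
    # Clear the 'changed' flag so this handler does not keep firing on
    # every dispatch until the peer relation data actually changes again.
    clear_flag('zkpeer.changed')
```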
[13:36] kjackal: I'm starting to think it's platform related, I just tried to use xenial machines and here I can see the hook triggered with no problem: http://dpaste.com/25CB8NJ
[13:37] for similarity with the end user I was using trusty
[13:38] Is it possible at some point you had a build without the (self) argument?
[13:39] no, I don't think so, this very same build I'm deploying now is what I was testing last friday, no modification to the code
[14:06] zeestrat: kjackal: Guys, do you know when Queens goes official with the juju charms? Queens is going to be released in 2 days and it brings 18.04 support
[14:07] elmaciej: https://docs.openstack.org/charm-guide/latest/release-schedule.html
[14:08] zeestrat: that's cool! thx
[14:10] elmaciej: If it ain't in the OpenStack Charm Guide, then you can ask the folks on the mailing list (https://docs.openstack.org/charm-guide/latest/find-us.html) or in #openstack-charms
=== frankban is now known as frankban|afk
[14:21] kjackal: contribution ready, thanks for the help!
[14:21] I did nothing, we are the ones to thank you
[15:04] last question for today guys, is there a way to set a default user and password when adding a machine to the model?
=== chat is now known as Guest83894
[15:09] elmaciej, just for add-machine?
[15:09] yes, I need a user to log in directly, not only by ssh
[15:19] pmatulis: I know that these images use cloud-init, so I was wondering about two options - rebuild the image and change the cloud.cfg, or somehow pass user data to create the user during creation
[15:51] elmaciej: passing user data already has some support in 2.3.1+. https://bugs.launchpad.net/juju/+bug/1535891 Note the caveats such as not being able to use the users section. Please feel free to add a new bug/request for that.
[15:51] Bug #1535891: Feature request: Custom/user definable cloud-init user-data
[15:53] zeestrat: Thanks for that. I'm trying to modify the default curtin preseed to add the default user with a one-liner
=== chat is now known as Guest60542
[19:41] kwmonroe: I have a requirement for hiveserver2 to be hooked up to zookeeper
[19:41] I found the configuration that needs to be amended here: https://www.cloudera.com/documentation/cdh/5-0-x/CDH5-Installation-Guide/cdh5ig_hiveserver2_configure.html
[19:43] kwmonroe: I'm wondering if you have any suggestions toward managing the config xml for the bigtop software
[19:43] ?
[19:57] bdx: that's gonna be tricky. the hive-site.xml that bigtop uses is here: https://github.com/apache/bigtop/blob/master/bigtop-deploy/puppet/modules/hadoop_hive/templates/hive-site.xml, but as you can see, none of those properties are available there. that means the charm doesn't have an easy way to set/change/manage them.
[19:58] bdx: so the first step to do this right would be to open an issue for bigtop to expose those config options: https://issues.apache.org/jira/secure/Dashboard.jspa -- that requires you to have an apache jira account. if you don't have one, it's of course easy to create, or i can open the jira for you.
[19:59] and then step 2 would be to get the hive charm to support the zookeeper interface. that likely wouldn't be too difficult. we'd do it very much like we do zookeeper's relation for hbase: https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/hbase/layer-hbase/lib/charms/layer/bigtop_hbase.py#L30
[19:59] and pass in the appropriate overrides to set the values in hive-site.xml.
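A very rough sketch of what "step 2" might look like in the hive charm, modeled on the bigtop_hbase.py pattern linked above. The Bigtop layer calls (`render_site_yaml`, `trigger_puppet`) follow my recollection of the apache_bigtop_base layer, and the puppet override keys are pure assumptions; they can only work once bigtop's hive-site.xml template actually exposes the hive.zookeeper.* properties ("step 1").

```python
# Hypothetical addition to a bigtop_hive.py helper module, following the
# bigtop_hbase.py approach of passing overrides into the puppet run.
from charms.layer.apache_bigtop_base import Bigtop


def configure_hive_zookeeper(zk_units):
    """zk_units: iterable of dicts like {'host': '10.0.0.1', 'port': 2181}."""
    # Build a host:port quorum string from the related zookeeper units.
    quorum = ','.join('{host}:{port}'.format(**zk) for zk in zk_units)
    overrides = {
        # Assumed parameter names; they must match whatever bigtop ends up
        # exposing for hive.zookeeper.quorum / hive.support.concurrency.
        'hadoop_hive::common_config::hive_zookeeper_quorum': quorum,
        'hadoop_hive::common_config::hive_support_concurrency': True,
    }
    bigtop = Bigtop()
    bigtop.render_site_yaml(overrides=overrides)
    bigtop.trigger_puppet()
```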
[20:03] kwmonroe: "so the first step to do this right would be to open an issue for bigtop to expose those config options" - the config options already seem to exist in hive-site.xml
[20:03] https://github.com/apache/bigtop/blob/master/bigtop-deploy/puppet/modules/hadoop_hive/templates/hive-site.xml#L39
[20:04] ahhh, the concurrency option is missing
[20:04] bdx: also, line 39 isn't hive.zookeeper.quorum :) read it again
[20:05] ooooo
[20:05] good eye
[20:05] darn
[20:05] so, cloudera just drops those properties right in there then?
[20:07] well, hive upstream probably has those options too.. the full hive-site.xml is like 1000 lines. bigtop only uses a subset of those things that *they* support as being changeable from the default.
[20:07] so we just need to get bigtop to include options related to hive.zookeeper, and then the charms can manage them.
[20:08] bdx: i need to run my kid to an appt, but should be back on in about an hour to hear you curse the day you ever mentioned expanding hive to support zookeeper. biab :)
[20:08] :) ok
[20:09] https://imgur.com/a/f1W3r - something to think about while you are waiting at your appt
[20:18] my biggest issue with kwmonroe's work.... is it puts hadoop in the hands of people who don't need it but don't know better ;)
[20:18] shut it all down kwmonroe
[20:23] hey
[20:25] coming from someone who has run hdfs on / for the entirety of all time
[20:26] magicaltrout: letting me down holmes
[20:30] haha :)
[20:31] i'm not really directing it at you bdx, but people still bug me with "I have data, i need to deploy hadoop" and it irks :)
[20:31] because then they a) don't have data that requires hadoop b) don't have the volume for hadoop c) don't have the budget for hadoop d) don't have a clue how to write stuff to run on hadoop e) don't know how to monitor hadoop
[20:31] when all they really need is a column store DB at most
[20:45] I see the same thing
[20:45] happens across all technologies
[20:46] true dat
[20:46] I like the ones in here
=== magicaltrout is now known as jujuuser
[20:47] hello, I'm trying to deploy openstack
=== jujuuser is now known as magicaltrout
[20:47] oh really, how's that going?
=== magicaltrout is now known as jujuuser
[20:47] magicaltrout: we are moving a ~10TB dataset through a slew of pipelines mon
[20:47] well I have a 5 server setup, but it's asking to deploy stuff via floppy disk
=== jujuuser is now known as magicaltrout
[20:47] shoot!
[20:47] really? What are you deploying on
=== magicaltrout is now known as jujuuser
[20:48] Pentium 4 machines I have lying around in my lab
[20:48] jujuuser: where does it say "floppy disk"?
=== jujuuser is now known as magicaltrout
[20:49] hehe
[20:49] that's honestly a real story from last year
[20:49] he was having issues because his juju powered openstack cluster was trying to boot from floppy disk
[20:50] no
[20:51] lol I believe you
[20:51] jujuuser: make sure the minimum requirements are met all the way around
[20:52] hehe
[20:52] I am excited about DruidIO coming to the Apache Foundation
[20:52] I'm gonna get that charmed up
[20:52] for high volume timeseries stuff
[20:52] that'll be cool
[20:53] i'm also tidying up my apache-drill charm which makes hooking up SQL apps to all these platforms much easier
[20:56] magicaltrout: apache-drill just hooks up to a nosql database and lets you query it with SQL?
[20:57] magicaltrout: what happens when you hook it up to hdfs?
[20:57] yeah, it's a distributed SQL interface for NoSQL platforms
[20:58] flat files, hdfs, hive, hbase, kafka, s3 are the most common ones
[20:59] so on hdfs you can query a variety of different file formats, it'll do stuff like auto schema discovery for json/csv style files, and if they follow a naming strategy it can do cross file queries etc
[20:59] you can convert files to parquet DBs quickly and easily for more effective querying at scale
[21:00] it'll push down what it can to the underlying query engine and make up the rest in memory, unlike a lot of the JDBC interfaces out there
[21:00] sponsored and largely developed by MapR and a few Dell folk IIRC
[21:02] also provides ODBC & REST interfaces for other uses
[21:02] and data federation capabilities so you can do stuff like "select * from mysql left join (select * from hbase)"
[21:04] it leverages a lot of Apache Calcite which provides the SQL interface, and has a bunch of other adapters to do SQL over X
[21:04] https://calcite.apache.org/docs/adapter.html
[21:38] super cool
[21:43] for anyone that cares to deploy volumes > 1T https://bugs.launchpad.net/juju/+bug/1751909
[21:43] Bug #1751909: turn up the volume - artificial quotas?
[21:43] 1T is too much
[21:43] no, 1T works
[21:43] but anything over seems to fail
[21:43] orr ooo
[21:43] 1000 GB works
[21:43] possibly 1T would fail
[21:44] here's the thing bdx, when you get past a couple gigs, you really need hdfs. magicaltrout, back me up.
[21:44] oooh sorry ... I'm crossing wires here, ^ is unrelated to hdfs
[21:45] when IRC needs a joy emoticon
[21:45] no bdx, i'm just being hilarious. /me just finished backscroll of how i'm pushing hadoop on people that don't need it.
[21:45] ahh yes yes
[21:45] aha
[23:25] magicaltrout: are you still looking at ranger?
[23:26] yeah, slowly
[23:27] magicaltrout: would ranger's inclusion in bigtop help you?
[23:27] absolutely
[23:28] magicaltrout: please add ranger to https://github.com/apache/bigtop/tree/master/bigtop-packages/src to include it in bigtop.
[23:28] ;)
[23:28] i'm chuckling quietly
[23:29] who am i kidding, i'm totes LOL. get it done magicaltrout.
[23:29] :'(
[23:29] sad times
[23:34] i will get around to doing something with it soon enough, I've got the LDAP bits and pieces that need finishing up because they'll work for Saiku as well
[23:34] and I had the start of a ranger snap
[23:34] but then I got sidetracked with the day job
[23:35] but now i'm circling back around to Hadoop -> Drill -> Saiku stuff for JAAS, so I have an excuse to look at it again
[23:48] magicaltrout: i know somebody on the bigtop PMC if you need support.
[23:52] i heard the Bigtop PMC was just a bunch of jerks
[23:53] We'll see, i need to get it in somehow, but I also have DruidIO, Janusgraph and some other stuff on my high priority todo list
[23:53] Not sure where the chips will land with all of that
[23:54] plus once the contracts are sorted out at JPL I'm supposed to be getting 80+ hrs a week and 20 odd on a separate cancer research project
[23:55] so I need to resource them and sleep at some point ;)
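Circling back to the Drill discussion above (20:56-21:04): a minimal sketch of submitting a query through Drill's REST interface, along the lines of the federation example magicaltrout gave. It assumes a drillbit on localhost with the default web/REST port 8047 and that the mysql and hbase storage plugins are already configured; the schema and table names are placeholders.

```python
# Minimal sketch: submit a SQL query to Apache Drill's REST API.
# Assumes a drillbit on localhost:8047 with storage plugins configured;
# 'mysql.sales.orders' and 'hbase.customers' are placeholder names.
import json
import urllib.request

DRILL_URL = 'http://localhost:8047/query.json'

query = {
    'queryType': 'SQL',
    'query': ("SELECT * FROM mysql.sales.orders o "
              "JOIN hbase.customers c ON o.customer_id = c.row_key "
              "LIMIT 10"),
}

req = urllib.request.Request(
    DRILL_URL,
    data=json.dumps(query).encode('utf-8'),
    headers={'Content-Type': 'application/json'},
)

with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# Drill returns result rows as a list of dicts under the 'rows' key.
for row in result.get('rows', []):
    print(row)
```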