[00:01] np magicaltrout, clearly my work here is done.
[00:07] magicaltrout: what's the issue URL that you filed? i'd like to know where to send my bill for all these zookeeper machines that never came up.
[00:22] hehe
[00:23] apparently hatch has fixed it already
=== frankban|afk is now known as frankban
[14:18] Hi #juju, can anyone shed some light on the following paste? http://dpaste.com/0PAMSQY
=== Guest60045 is now known as zeus
[14:23] it says it's broken
[14:24] well, I'm adding the code changes below to see if this is something I missed from how to set up a relation-changed hook in reactive charms, magicaltrout
[14:27] I suspect the issue is around line 63, in the number of parameters set. magicaltrout, does it say something to you?
=== frankban is now known as frankban|afk
=== frankban|afk is now known as frankban
=== rogpeppe1 is now known as rogpeppe
=== frankban is now known as frankban|afk
[18:21] does anyone know if there is any sort of revision tracking for juju charms that corresponds to a specific commit in the charm source (on github, for example)? For example, charms have different versions: charm/foo-1 -> charm/foo-2 -> charm/foo-3. Is there some sort of tracking that can tell me that revision 2 contains all of the specified fixes/commits in the source?
[18:24] the openstack charms have a repo-info file with that info in them
[18:25] which is super helpful, but I don't think there is a generic mechanism for that
=== zeus is now known as Guest89134
=== vern_ is now known as vern
=== zeestrat_ is now known as zeestrat
=== Dmitrii-Sh_ is now known as Dmitrii-Sh
=== coreycb_ is now known as coreycb
=== cory_fu_ is now known as cory_fu
[21:39] Hey guys! I just started looking over the bigtop charms and I have a few questions/comments that I am going to fire off.
[21:39] *boom*
[21:41] this is like the slowest cannon ever
[21:43] apache software release vs charm release?
is there any tracking going on between the charms and the apache software? i.e. is there a build system in place that builds release charms when the apache software revs?
[21:45] kwmonroe, who deals with the juju charms, is also on the bigtop PMC, heckles1000
[21:45] so there is reasonably close alignment
[21:45] I don't know how quick the release -> charm update cycle is, but i'm sure it's not that shabby
[21:47] oozie and other bigtop ecosystem software - do we have contributing guidelines for charming bigtop things? it seems some are getting sorely outdated... some things are snapped, some aren't. I'm having a hard time finding consistency here, I guess
[21:47] I think stuff outside of the bigtop ecosystem is largely up to community commits, so YMMV
[21:48] magicaltrout: for example https://github.com/juju-solutions/layer-apache-hadoop-namenode#overview shows hadoop 2.7.1
[21:48] is that just a crusty readme, or do you think the software is just not up to date with recent hadoop releases?
[21:49] ignore that
[21:49] https://github.com/juju-solutions/bigtop/tree/master/bigtop-packages/src/charm
[21:49] that's the namenode in the charmstore
[21:49] lol... oh, thanks
[21:49] if you find a charm in the charm store
[21:50] most are linked to project homepages or bug trackers
[21:50] assuming they've been maintained by the canonical folk
[21:50] so like.....
is there a place where the single source of truth for all of the legit bigtop charms lives? I feel like I'm sorting through a ton of charms... some seem irrelevant, and it's difficult for me to determine what is legit and what is stale
[21:51] readmes like ^ in the charm repo don't help
[21:51] :)
[21:51] i refer to the previous link ;)
[21:51] that appears to be the source of the bigtop charms
[21:51] although i've not committed any so i could be lying
[21:52] yea
[21:52] 8 months ago
[21:52] also
[21:52] https://github.com/juju-solutions/bigdata-community/wiki/Contributing
[21:52] oh
[21:52] link #1
[21:52] ahhh thank you
[21:52] in that page
[21:52] https://github.com/apache/bigtop/tree/master/bigtop-deploy/juju
[21:53] seems likely
[21:53] ok thanks for those
[21:53] but everything looks stale
[21:53] like where is hadoop 3.0?
[21:53] also https://github.com/juju-solutions/bigdata-community/wiki/Apache-Bigtop
[21:53] jeeze ok
[21:53] this is great info
[21:54] how can I help get this all organized?
[21:54] so... in the bigtop repo... there's bigtop-packages/charms and bigtop-deploy/juju
[21:54] those are the sources of truth for the juju / bigtop charms and bundles
[21:54] kev's here, kev is the canonical source of the truth
[21:55] or something like that
[21:55] heh - thx for taking that fire magicaltrout :)
[21:55] also heckles1000, Bigtop 1.2.1 is the latest release
[21:55] heckles1000: as for crustiness, bigtop does indeed lag behind upstream
[21:55] and it ships Hadoop 2.7.3
[21:55] https://cwiki.apache.org/confluence/display/BIGTOP/Bigtop+1.2.1+Release
[21:56] so like, hadoop-3 has just started being worked on in bigtop
[21:57] heckles1000: how did you find your way to https://github.com/juju-solutions/layer-apache-hadoop-namenode#overview? any -apache- charms are our previous gen (before bigtop) and should have been deprecated in favor of bigtop versions, e.g. hadoop-namenode (good) vs apache-hadoop-namenode (bad)
[21:57] sorry, I didn't mean to call it crusty.
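One way to tell the current generation apart from the deprecated `-apache-` charms mentioned above is to pull the charm from the store and inspect it locally. This is a sketch, not an official workflow: it assumes the `charm` CLI from charm-tools is installed, and the destination directory and grep are just examples.

```shell
# Pull the current bigtop-based namenode charm from the charm store and
# poke at it locally. The deprecated previous gen carried an apache-
# prefix (apache-hadoop-namenode), so the charm name is the tell.
charm pull hadoop-namenode ./hadoop-namenode
# see what version/metadata the README claims (example inspection only)
grep -i version ./hadoop-namenode/README.md
```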
I just realized there is a juju quickstart in the readme. I spent the last hour trying to use juju quickstart, only to realize it doesn't exist
[21:57] oh, but I was looking at the wrong readme
[21:58] ha! doh. so again, how did you come across that readme for layer-apache-hadoop? i'll update links as needed (unless google took you there)
[21:58] I see what you're saying; that is probably why I'm confused, looking at both new and old stuff
[21:58] google
[21:59] def switch to bing
[21:59] I didn't know the source for the new new. Thank you for the link
[21:59] using bing, you'll never come across the old stuff (or probably anything, really)
[22:00] while I've got you here, do you mind if I ask a few more questions?
[22:01] don't ask if you can ask
[22:01] just fire them off ;)
[22:01] the worst you'll get is silence :)
[22:01] yeah heckles1000, fire away. magicaltrout is as frisky as he's gonna get.
[22:01] kwmonroe's probably more impressed to find someone actually using his code ;)
[22:02] Is there documentation for how to physically configure the nodes, aka how do I provide disk for hdfs through the charm? I see there is a thing called juju storage; how do I use it with hdfs?
[22:02] you guys are great lol
[22:02] heckles1000: that one is super simple! you can't. the bigtop charms don't have any juju storage wired in.
[22:03] how do I tell hdfs which disk to use?
[22:03] heckles1000: at the moment, hdfs is configured on the root disk of all datanodes
[22:05] how do I give it disks then? do you just deploy nodes with a super large, super resilient filesystem?
[22:05] root filesystem*
[22:06] heckles1000: hdfs will handle the resilient part for you. as for the super large part, when you deploy the datanode (hadoop-slave is what we deploy by default; it includes datanode and nodemgr), you can specify a root-disk constraint to whatever size you need.
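The root-disk constraint just described might look like this at deploy time. A sketch only: the unit count and the 1T size are arbitrary example values, not recommendations from the discussion.

```shell
# Deploy three slave units (datanode + nodemanager) with a bigger root
# disk, since hdfs lives on the root filesystem in these charms.
# "-n 3" and "root-disk=1T" are example values.
juju deploy hadoop-slave -n 3 --constraints "root-disk=1T"
```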
[22:07] the datanodes are configured to use /var/lib/hadoop-hdfs/data on their root filesystem
[22:07] I am using maas and my hdfs server has 12 disks each
[22:08] can I configure maas to just mount the array there?
[22:08] servers*
[22:09] hmm.. you lost me. are you saying each of your servers has 12 disks?
[22:10] yes. I have 10 data nodes; each has a 64gb satadom for the os and 12 6tb disks for hdfs storage
[22:13] I'm trying to use maas to create a softraid across the disks, then juju storage to mount the array to the application/machine
[22:13] it doesn't sound like this is a thing yet?
[22:14] heckles1000: i don't know if maas can configure/expose storage for you. you could create an LVM array on each of your datanodes: add your 12 disks as PVs in a VG, create a monster LV, mount that as /mnt/mybigLV, then "rmdir /var/lib/hadoop-hdfs/data/dir && ln -s /mnt/mybigLV /var/lib/hadoop-hdfs/data/dir"
[22:14] would I be able to help contribute to making juju storage work with these charms, or is it something where I would have to wait for the charms to grow storage capability?
[22:14] ahh I see
[22:14] heckles1000: it may be a thing -- i'm just not really familiar with maas
[22:15] you can tell maas to make a software raid partition and tell it where to mount it
[22:15] that should be fine
[22:15] ah cool jhobbs - thanks!
[22:16] so yeah heckles1000, however you make the LV, mount it on the datanode and symlink the mountpoint over to $dfs.datanode.data.dir
[22:17] fwiw heckles1000, this is just how you'd have to do it today. proper storage support (juju storage) is a way better option; it's just not there today.
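Spelled out, the PV/VG/LV recipe above might look roughly like this, run as root on each datanode. This is an untested sketch: the device names (/dev/sdb through /dev/sdm), the VG/LV names, and the mount point are assumptions for a 12-data-disk machine; adjust to your hardware.

```shell
# Sketch of the LVM approach from the discussion, per datanode.
pvcreate /dev/sd{b..m}                   # one PV per data disk
vgcreate hdfsvg /dev/sd{b..m}            # pool them into a volume group
lvcreate -l 100%FREE -n hdfslv hdfsvg    # one monster LV across the VG
mkfs.ext4 /dev/hdfsvg/hdfslv
mkdir -p /mnt/mybigLV
mount /dev/hdfsvg/hdfslv /mnt/mybigLV
# point the hard-coded datanode dir at the big mount
rmdir /var/lib/hadoop-hdfs/data/dir
ln -s /mnt/mybigLV /var/lib/hadoop-hdfs/data/dir
# sanity check: print the effective data dir per hdfs-site.xml
hdfs getconf -confKey dfs.datanode.data.dir
```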
[22:17] jhobbs, kwmonroe: so I can use maas + juju to do half of that process then, by making the raid and mounting it using maas + juju storage
[22:18] then just symlink the dirs ^^
[22:18] i don't know anything about these charms, sorry :/ but maas will make the raid set, format it however you tell it to, and mount it wherever you want; that will all be done before juju and the charms take over
[22:18] awesome
[22:18] jhobbs: thanks
[22:18] heckles1000: the missing piece is that the charms wouldn't know what to do when you attached storage to them. ideally, they'd have to do the datanode config to set the mount point, etc.
[22:19] I see
[22:19] is it just 1 single config that specifies the mountpoint?
[22:20] i think so heckles1000, see dfs.datanode.data.dir: https://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
[22:20] oh awesome!
[22:21] may also need to tweak some of the fs-limits
[22:21] let's get this going
[22:21] I need that too
[22:22] kwmonroe: slacking
[22:22] heh
[22:22] i thought you needed network spaces bdx!
[22:22] I need it alllll
[22:22] you can't possibly want 2 things in a week
[22:23] it means 2 people use kev's code
[22:23] he must be due a raise
[22:23] half of 'em are using the old stuff that cory_fu wrote though :/
[22:23] there goes my monies
[22:24] just buy him a fedora or 7
[22:24] lol
[22:24] ha
[22:35] kwmonroe: how can I modify the charm to set the $dfs.datanode.data.dir config value?
[22:36] heckles1000: another simple one! you can't. but if you open an issue for me here, https://github.com/juju-solutions/bigtop/issues, i'll see what i can do ;)
[22:38] heckles1000: the datanode config is hard coded by bigtop here: https://github.com/apache/bigtop/blob/master/bigtop-packages/src/common/hadoop/conf.pseudo/hdfs-site.xml#L48
[22:39] so while you can't change it, you *can* do filesystem tricks to make it do what you want...
hence the earlier workaround to symlink wherever your big 'ol mount point is over to that hard coded location ^^
[22:43] here's the workaround all in one go, assuming your big mount point is mounted at /mnt: juju run --all "mv /var/lib/hadoop-hdfs /mnt && ln -s /mnt/hadoop-hdfs /var/lib/hadoop-hdfs"
[22:43] kwmonroe: sweet, that doesn't sound too bad.
[22:43] that'll be $300
[22:43] kwmonroe: I do have something for you...
[22:43] oh dear
[22:44] kwmonroe: https://github.com/juju-solutions/bigtop/issues/53
[22:44] that'll be $500
[22:44] haha
[22:45] kwmonroe: that's great, thanks again for the workaround
[22:45] np heckles1000 - thx for opening the issue. i'll get that scheduled in soon.
[22:46] bdx: PRs welcome for https://github.com/juju-solutions/bigtop/issues/53 ;) i'll give you $40.
[22:46] i'm skint, i'll do it for $30
[22:46] has to be a *mergeable* PR. sorry magicaltrout.
[22:47] hey, at least you don't need to get an ICLA off me :P
[22:53] kwmonroe: aha, I gotchu https://github.com/juju-solutions/bigtop/issues/54
[22:53] um, bdx, that is the opposite of a PR
[22:54] lol
[23:50] kwmonroe: is the data-node provided by hadoop-slave?
[23:51] yeah bdx, the slave includes layer-datanode and layer-nodemgr. those respective sources live at https://github.com/juju-solutions/layer-hadoop-datanode and https://github.com/juju-solutions/layer-hadoop-nodemanager
[23:51] cool, thanks
[23:52] bdx, it's not ideal, but charm build can't pull layers from repo subdirs, so we needed those layers in their own repos.
[23:52] totally
[23:52] is layer-hadoop-datanode used in any other bigtop charms?
[23:53] no
[23:54] it seems sensible to add the storage bits to layer-hadoop-datanode... but I mean... it seems more sensible to add them to the top layer
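For context on what "adding the storage bits" would buy: if layer-hadoop-datanode (or the top layer) did grow juju storage support, attaching the 12 disks could be done at deploy time instead of via symlink tricks. The whole invocation below is hypothetical, since today's charm has no storage wired in; the storage name "data" and the maas pool are made up for illustration.

```shell
# HYPOTHETICAL: what a deploy might look like if the charm declared a
# "data" storage entry; <name>=<pool>,<size>,<count> is juju's real
# --storage syntax, but this charm does not (yet) support it.
juju deploy hadoop-slave --storage data=maas,6T,12
```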