[00:03] lazyPower: I still love the manual side of ops, I just don't have the time to manually install and configure thousands of servers at a time :P
[00:03] stormmore - IRC is notoriously horrible at conveying sarcasm :D
[00:04] lazyPower: that I became well aware of during my time as an IRCop on one network and help channel op on DALnet back in the day ;-)
[00:05] DALnet! \o/ now that brings back memories
[00:06] I was a channel op in #ircnewbies on there
[00:08] love my test servers, SSD drives really make destroying and creating controllers so much faster
[00:10] stormmore: hmm, ok so deploy worked here. Still working on figuring out wtf
[00:13] rick_h_: it might be my weird-ass config, will know as soon as the bootstrap is finished
[00:14] rick_h_: just waiting for it to reconnect and finish up at this point
[00:17] bootstrapping the agent now
[00:24] rick_h_: before I try and deploy juju-gui, should I do anything?
[00:26] stormmore: no, it should be fine. I just did a bootstrap, switched to admin, and then tested the deploy
[00:27] rick_h_: definitely something obscure in my config, "juju-gui/0 maintenance executing 2.0-beta3 0 198.18.83.100 (install) installing charm software"
[00:36] stormmore: hmm, ok. Yea if you can point us at anything we can repeat we can look into it
[00:38] rick_h_: from what I can tell it goes back to the multi-gateway config setup
[00:40] stormmore: hmm, ok. Yea not sure on that.
[00:40] rick_h_: no worries, it's one of those good-to-know things for when we are designing a production setup
[00:41] stormmore: definitely
=== zul_ is now known as zul
[01:14] I must still have something wrong in my network design/implementation because the 2nd node is failing to communicate with the 1st :-/
=== natefinch-afk is now known as natefinch
[07:58] morning
[08:19] jamespage, don't suppose you have time to peek at a few reviews for me do you? ( https://review.openstack.org/#/q/owner:liam.young%2540canonical.com+status:open )
[08:23] gnuoy, i will in a bit
[08:23] ta
[08:33] gnuoy, 3/4 landing - the cinder-backup one needs a small fix I think
[08:34] jamespage, excellent, thank you
[08:45] jamespage, I've fixed the code in response to the comments you and tinwood made on https://code.launchpad.net/~gnuoy/charms/trusty/hacluster/pause-resume/+merge/289911 , are you happy for me to land that?
[09:04] gnuoy, the cinder-backup updates look fine to me
[09:04] gnuoy, approved
[09:04] ta
[09:05] gnuoy, +1 on the hacluster merge as well
[09:05] jamespage, \o/ thanks
[13:38] Hi, any tips on how to model a docker workload when using docker-compose? Do you encapsulate each container in a charm, or is there a way to provide the compose YAML file and have juju create the charms somehow?
[14:13] lazyPower: ^
[14:13] mbruzek: ^
[14:16] It looks like deanman left, but there is a way to do that.
[14:21] When deanman comes back: our docker charm has compose functionality, and you can just drop a compose.yml or compose.yaml into a directory and make a docker charm controlled by compose very quickly.
=== cos1 is now known as c0s
[14:46] https://bigtop-repos.s3.amazonaws.com/releases/1.1.0/ubuntu/vivid/ppc64el/bigtop.list
[14:53] https://github.com/apache/bigtop/blob/master/bigtop-deploy/puppet/hieradata/site.yaml - here's a minimal site file for the deployment
[14:53] https://github.com/apache/bigtop/blob/master/bigtop-deploy/puppet/hieradata/bigtop/cluster.yaml - the full list of the hiera configuration
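A rough sketch of the drop-in compose workflow described above (a minimal sketch; the layer name and compose file placement are assumptions here, not verified against the docker layer):

    $ ls my-compose-charm/
    compose.yml  layer.yaml  metadata.yaml
    $ cat my-compose-charm/layer.yaml
    includes: ['layer:docker']          # assumed layer name
    $ cat my-compose-charm/compose.yml  # an existing docker-compose file, dropped in as-is
    web:
      image: nginx
      ports:
        - "80:80"
    $ charm build my-compose-charm

The point is that the compose file itself carries the workload description; the charm just wraps it.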
[15:10] I guess I should have asked this here. I'm trying to get a juju setup going with MAAS. I have MAAS installed on Ubuntu Server 15.10 with 4 nodes connected and commissioned.
[15:11] However, when I try to bootstrap one of the nodes with juju bootstrap, the instance either goes to "started" but never "deployed", or it attempts to download something from the internet (even though the documentation mentions creating a private network between the nodes and the mother node)
[15:11] am I missing something here?
[15:11] Is there a logfile on the maas mother node or something that will point me in the right direction?
[15:14] bugrum: try running your bootstrap with --upload-tools
[15:15] LiftedKilt: okay, let me try that...
[15:23] bugrum: if you look at the server output on maas for one of the servers that is timing out and it shows that the installation is finished, you might need to adjust the bootstrap-timeout value on bootstrap. It may be that your servers are not able to install and provision within the 10 minutes juju expects them to.
[15:24] LiftedKilt: do I add that value in the environments.yaml file?
[15:24] for the bootstrap-timeout
[15:24] what juju version are you using? 1.25 or 2?
[15:24] one second, I need to look
[15:28] I'm using juju 1.25 on wily
[15:29] bugrum: I would highly recommend using 2.0
[15:30] bugrum: It's a huge change in both syntax and function, and if you are getting started it's definitely worth starting with the new version
[15:30] is there a ppa I need to add in order to use that version?
[15:30] or is it just ppa:juju/stable?
[15:30] http://paste.ubuntu.com/15560256/
[15:31] bugrum: there's a quick guide for you
[15:32] does juju 2 work with MAAS 1.9?
[15:32] bugrum: yes
[15:32] bugrum: it doesn't work with maas 2.0 yet, but will in a few weeks
[15:36] the pastebin is pointing to a development branch of juju. Is juju 2 in beta?
[15:36] it is in beta
[15:36] beta 3 went out earlier this week I believe
[15:37] certainly worth using 2.0 if you are just starting out
[15:37] saves on the upgrade pain
[15:42] thank you. Should I do an apt-get remove on juju 1.25 before installing juju 2, or can I simply do an apt-get upgrade to upgrade the current version?
[15:42] I think the install target is apt-get install juju2 anyway
[15:43] although I've not checked
[15:43] configs and stuff go in different places
[15:43] I'll err on the side of caution and remove the old one then
[15:43] dosaboy, hey - can you drop "Public network is configured existing os-public-hostname option." from the commit message, and then I think both multi-network support reviews are good to go btw...
[15:44] jamespage: looking
=== cos1 is now known as c0s
[15:45] jamespage: yep, I'll fix that
[15:48] dosaboy, awesome
[15:50] any charm command experts know how to pull a non-recommended charm?
[15:51] ah, as simple as charm pull cs:~bigdata-dev/apache-spark
[15:51] just prefix the user name
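To close the loop on bugrum's earlier question: in juju 1.25, bootstrap-timeout does live in environments.yaml, as a per-environment setting in seconds. A minimal sketch (server address and key are placeholders):

    $ cat ~/.juju/environments.yaml
    environments:
      maas:
        type: maas
        maas-server: 'http://192.168.1.2/MAAS/'
        maas-oauth: '<maas-api-key>'
        bootstrap-timeout: 1800    # default is 600, the 10 minutes mentioned above
    $ juju bootstrap -e maas --upload-tools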
[15:57] jacekn: left you a comment on the collectd charm. I gotta run out for an appointment but I'll see about promulgating it when I get back
[15:59] tvansteenburgh: yea I've just noticed. But...should that work not be part of the review/promulgation process? I don't think pointing at a personal repo is a good idea for a recommended charm
[15:59] tvansteenburgh: plus I think that warning was not there last time I checked; it's likely that charm tools are moving quicker than the review queue ;)
[15:59] jacekn: the latter is true
[16:00] jacekn: the point of the 'repo' key is to be able to find the source for the top layer of a charm
[16:00] it doesn't matter that it's a personal repo
[16:01] tvansteenburgh: so what if you promulgate the charm and I wipe that repo?
[16:01] if someone does 'charm pull-source collectd', we want to get the source of the top layer, not the built charm
[16:02] then `charm pull-source` would fall back to getting the built charm
[16:02] jacekn: why would you delete the source for the charm?
[16:02] tvansteenburgh: so IMO instead of falling back you should put that layer in the official repo owned by ~charmers
[16:03] tvansteenburgh: IDK, to clean up my LP branches? It may happen one day, IDK
[16:03] human error
[16:04] tvansteenburgh: most layers are here http://interfaces.juju.solutions/ and not in personal branches; I think mine should be the same
[16:06] jacekn: that's only because charmers have done many of the initial layers
[16:06] jacekn: i think this would be a great point to raise on the juju mailing list
[16:08] tvansteenburgh: if you do that I'm happy to comment (I'm not sure where to even start; it's been over 2 months since I submitted this charm and I don't even remember where to find the docs any more)
[16:09] marcoceppi: above highlights (again) how slow the review process is
[16:09] marcoceppi: requirements change during the review, so if I'm unlucky my charm will never make it to the charm store
[16:10] jacekn: we are aware of that and are working on it. thanks for the feedback.
[16:11] jacekn: if you think it is a mistake to keep charm layer source in a personal repo, please bring it up on the juju ML
[16:11] jacekn: i have to go now - appointment in 20 minutes
[16:11] * tvansteenburgh departs
[16:12] tvansteenburgh: when you're back - can you point me at the docs that say charm layer source must be in a personal repo?
[16:12] there is no "must"
[16:12] tvansteenburgh: or should
[16:12] tvansteenburgh: whatever doc you took that policy from
[16:12] the source is in a repo somewhere, we would just like to know where it is
[16:13] jacekn: it's from the output of `charm build`
[16:13] * tvansteenburgh leaves for real
[16:18] tvansteenburgh: another good reason not to keep official charm sources in personal repos is security - I can modify that source to do something bad, good luck to anybody using my layer
[16:28] jacekn: you mean you don't blindly trust everyone? :)
[16:28] stokachu: shocking, right?
[16:29] jacekn: what is this world coming to
[16:30] but hey, if that's what charmers want, that's what they get
[16:31] lazyPower, marcoceppi you guys have been busy. interfaces.juju.solutions is looking packed with layers :)
[16:32] Has bundletester been updated to work with juju 2? (tvansteenburgh)
[16:34] tvansteenburgh: so...stable charm tools don't complain about it. I'll comment on the bug too
[16:40] cholcombe yeah :) it's not just us though
[16:40] lazyPower, true
[16:41] can a juju deployment only be managed from the machine that bootstrapped the controller?
[16:42] or is there a way to provide the connection details to multiple machines (i.e. developers) and let them connect to their controllers/models?
[16:46] how do actions work with layers?
[16:50] the same as they do with any other charm cholcombe
[16:50] lazyPower, ok cool
[16:51] lazyPower, what I mean is if I add an actions dir to my layer charm and then do charm build, it doesn't show up in the final charm directory. Where do actions go?
[16:52] Hmm, that doesn't seem right. Is there a layer.yaml directive somewhere telling the builder to omit the actions?
[16:52] lazyPower, just the default layer.yaml which includes the basic layer
[16:53] cholcombe - I say that because the docker layer has actions that show up in the built charms.
[16:53] ah ok
[16:54] aisrael: the metrics look nice
[16:54] aisrael: would be worth a charmin
[16:56] lazyPower, so it should just copy the actions dir from the layer dir, right?
[16:56] yep
[16:57] lazyPower, got it. you have to add something to the actions dir or it ignores it
[17:02] d34dp@@l
[17:02] lazyPower: hehe
[17:02] LiftedKilt - ayyyyyyy lmao ;)
[17:04] I just need to run IRC on a completely separate box with a separate keyboard haha
[17:06] you'd be surprised, you'll wind up installing synergy for convenience and you're back in the same boat
[17:06] rick_h_: I need some help with maas 1.9 + juju 2.0
[17:06] ;)
[17:06] marcoceppi: how so?
[17:06] I can't get it bootstrapped, there's like no example for clouds.yaml
[17:06] marcoceppi - there was a maas example in the beta2 rel notes
[17:07] I don't know if that's helpful information or not...
[17:08] ugh
[17:08] endpoint
[17:08] lame
[17:08] marcoceppi: ?
[17:08] rick_h_: there we go
[17:08] marcoceppi: woot
[17:08] rick_h_: it was maas-server in the old environments.yaml, but in clouds.yaml it's endpoint
[17:09] rick_h_: the lack of documentation for add-cloud is painful
[17:09] marcoceppi: ah, gotcha. It should walk you through it?
[17:09] rick_h_: are we getting an interactive add-cloud like add-credential?
[17:09] oh, there's no interactive add-cloud yet
[17:09] rick_h_: will there be?
[17:09] marcoceppi: I *think* so but /me double checks spec
[17:10] What is the equivalent of "juju destroy-environment" in juju2? The command doesn't seem to exist anymore
[17:10] bugrum: juju destroy-model xxxx
[17:11] bugrum: or sorry, remove-model? /me can't recall
[17:11] rick_h_: also hit an issue with juju add-credential where my local:ob24 maas cloud wasn't acceptable as a cloud
[17:11] Usage: juju destroy-model [options]
[17:11] rick_h_: I'll open a bug
[17:11] rick_h_ bugrum ^
[17:11] marcoceppi: yes please
[17:23] kwmonroe: should I give that dev bundle from yesterday another try, since you've updated it?
[17:27] marcoceppi: here's what I've got for maas 1.9 + juju 2.0 - http://paste.ubuntu.com/15560256/
[17:27] not sure if that's still helpful for you or if you got everything you needed
[17:28] LiftedKilt: thanks, I also ended up with something similar, but I didn't put the oauth token in the clouds.yaml
[17:28] LiftedKilt: when you bootstrap, does juju2 acquire /all/ maas instances?
[17:30] marcoceppi: I'm not entirely sure how acquire works, but as far as showing ownership, no
[17:30] yup c0s, "juju deploy cs:~bigdata-dev/bundle/apache-hadoop-spark-zeppelin" will get you the good stuff.
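Since the paste above will not live forever, here is roughly the shape marcoceppi landed on for maas 1.9 under juju 2.0 beta (the cloud name and address are hypothetical, and the bootstrap argument order shifted between the 2.0 betas):

    $ cat maas-cloud.yaml
    clouds:
      ob24:
        type: maas
        auth-types: [oauth1]
        endpoint: http://192.168.1.2/MAAS   # this was maas-server in the old environments.yaml
    $ juju add-cloud ob24 maas-cloud.yaml
    $ juju add-credential ob24              # prompts interactively for the MAAS API key
    $ juju bootstrap ob24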
[17:30] marcoceppi: it acquires and deploys machine 0
[17:30] and then repeats that process per charm deployed
[17:30] going right now, thanks kwm
[17:31] kwmonroe: ^^
[17:31] LiftedKilt: odd, I had 13 nodes and on bootstrap juju took them all at once
[17:31] marcoceppi: that's not right
[17:31] LiftedKilt: I know ;)
[17:31] haha
[17:31] I'm on
[17:31] MAAS Version 1.9.1+bzr4543-0ubuntu1 (wily1) fwiw
[17:32] what series are you deploying?
[17:36] rick_h_: so I am talking in #maas about the network issues that I am facing
[17:37] LiftedKilt: I tried again, seems to be okay
[17:38] 1.9.1+bzr4543-0ubuntu1 (trusty1)
[17:38] was just an odd fluke
[17:40] marcoceppi: I'm hoping some of these random quirks get ironed out with the new API in maas 2
[17:41] LiftedKilt: me too
[17:44] I modified the cs:~openstack-charmers-next/bundle/openstack-lxd-51 bundle to put every charm in an lxc container so that I can run the trusty bundle on xenial machines
[17:44] but the charms freak out about a series mismatch
[17:44] even in lxc containers
[17:46] aisrael: no, bundletester doesn't work with juju2 yet
[17:46] LiftedKilt: link?
[17:46] tvansteenburgh: ack, thanks.
[17:46] marcoceppi: what kind of link?
[17:47] LiftedKilt: to the bundle?
[17:48] marcoceppi: https://paste.ubuntu.com/15561615/
[17:48] running it locally
[17:48] LiftedKilt: that's a bug
[17:51] marcoceppi: yeah, it returns https://paste.ubuntu.com/15561642/
[17:51] marcoceppi: is that a known bug?
=== skr is now known as Guest96045
[17:57] Hey guys. nihas and I want to use juju to orchestrate the deployments of our services on Digital Ocean, but we are having a hard time figuring out how to make juju manage the existing machines we currently have (the ones that weren't provisioned by juju). Is there a way of doing that?
[17:57] hackedbellini - I don't recommend that
[17:58] if juju provisioned those machines, that's a-ok. But with existing machines, if you care at all about the data on those units, the charm is likely to stomp on it and cause you grief
[17:59] lazyPower: yes, I know. The charm that we would be running is actually ours, so we have control over what it does and can work around that easily. Other than the problems related to charms and data, is there any other reason for me to avoid doing that?
[18:00] That's all I can think of at the moment.
[18:00] you can certainly add the machine as if it were a manual enlistment. juju add-machine ssh:user@host
[18:00] hackedbellini - but, again, please take the proper backups and precautions
[18:01] lazyPower: will do :). Thank you very much!
[18:02] I'm trying juju-quickstart apache-hadoop-spark-zeppelin
[18:03] juju status is empty
[18:03] and the command to kill the environment "juju destroy-environment xxx" is not working
[18:18] A-Kaser: hello
[18:18] A-Kaser: which cloud are you targeting?
[18:19] run 'juju switch' to see what your current default is
[18:20] jacekn: I think you're missing something in the new charm store world
[18:21] jacekn: the charm is, once built and promulgated, managed in the store. We don't want the topmost layer in the interface site (typically) because there's no reuse value for that. It's why my ubucon layer, which builds the ubucon charm from the django layer, isn't in the index
[18:21] jacekn: the repo key is just a nice pointer that connects layers and the charm store; we don't need your layer to be owned by anyone but you
[18:22] jacekn: since if you build a new version it'll have to go through the same review process, it's just a convenience to show where things started from, effectively "this is the upstream of the charm artifact"
[18:47] is there a charms channel?
[18:55] marcoceppi: The bug is biting me even when I don't customize the bundle.
[18:56] marcoceppi: https://jujucharms.com/u/openstack-charmers-next/openstack-lxd/bundle/51 doesn't deploy because ceph-radosgw complains of a series mismatch due to an lxc container
[18:58] LiftedKilt: so I installed juju2 and ran juju bootstrap, and one thing I noticed is that the script is asking for an IP on eth0 and eth1. I thought wily uses a different naming scheme for the network interfaces
[18:58] enp2s0 or something like that
[19:00] for the systemd changes. Would I need to set this up manually to get the bootstrap to work correctly?
[19:00] bugrum: I'm not sure who gets the credit, but I'm reasonably confident that either maas directly, or maas under juju's instruction, sets the kernel parameter to keep the old interface naming
[19:22] kwmonroe, c0s: Is there a reasonable default value for dest_dir in this action? Maybe /user/ubuntu ?
[19:22] from what I see in juju's way of deploying the Hadoop stack, that seems to be right, cory_fu
[19:23] cory_fu: kwmonroe I think I have a way to put multiple files from the net into HDFS without an intermediate store in the local FS. It is a bit ugly but should work just fine ;)
[19:23] LiftedKilt: So I tried bootstrapping a node with juju2 twice now and I keep getting this error from juju2. It says "bootstrap instance started but did not changed to deployed State"
[19:23] I didn't see any errors with the --debug flag turned on
[19:23] bugrum: did you change the bootstrap-timeout?
[19:24] bugrum: if your machines can't bootstrap within the default time of 600 seconds, you may need to up the bootstrap-timeout value
[19:24] bugrum: this is passed as an argument in your deploy command
[19:24] c0s: So, I had an idea, too, and that was to let the url param take a space-separated list. https://github.com/juju-solutions/layer-apache-hadoop-namenode/pull/13
[19:24] c0s: What was yours?
[19:25] bugrum: err, bootstrap command, not deploy, sorry
[19:25] er, no I didn't. Can I man juju bootstrap to get all the options
[19:26] or does juju help bootstrap provide them all?
[19:26] bugrum: juju bootstrap --config bootstrap-timeout=valueinseconds
[19:26] and I answered my own question :-P
[19:27] bugrum: I don't think the man page is updated yet with the new syntax
[19:28] If I wanted to auto-scale jenkins slaves, is juju the right tool for me? Would I just need to create a charm for jenkins swarm?
[19:29] I did this
[19:29] wget --spider http://data.githubarchive.org/2016-01-04-{0..2}.json.gz 2>&1 | grep json.gz | awk '{print $3}' | \
[19:29] while read i ; do echo $i; wget $i -O - | hdfs dfs -put - cos-arch/`echo $i | awk -F "/" '{print $NF}'`; done
[19:29] cory_fu: ^^
[19:29] bugrum: huge resource that has tons of info on the new syntax: https://lists.ubuntu.com/archives/juju/2016-March/006922.html
[19:30] pretty much like yours I think, but less user input
[19:31] thanks LiftedKilt
[19:32] bugrum: no problem
[19:52] c0s: How is it less user input? Also, I have to note that the {0..2} expansion is being done by bash (if you put that URL in quotes, it fails) and so won't work if that pattern is passed in as the action param (unless we eval it, which seems dangerous).
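The gotcha c0s and cory_fu are circling is easy to demonstrate: bash performs brace expansion before parameter expansion, so a pattern arriving in a variable (as an action param would) never expands without eval:

    $ pat="file{0..2}.json.gz"
    $ echo $pat                # too late: brace expansion ran before $pat was substituted
    file{0..2}.json.gz
    $ eval "echo $pat"         # works, but evaluates anything else in the string too
    file0.json.gz file1.json.gz file2.json.gz

Hence the compromise in the PR: accept explicit, space-separated URLs and skip expansion entirely.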
[19:56] I've been trying to figure out a way to do the brace expansion inside the action, but the only solutions I can come up with use eval. :(
[19:58] cory_fu: you can use single-quotes to avoid bash parsing
[19:59] or rather local expansion
[19:59] the lines I've published work fine in an interactive bash
[19:59] c0s: I'm aware. But if you put a brace pattern in a variable (pat="file{0..2}") then you can't later expand it without using some form of eval
[20:00] c0s: And: http://pastebin.ubuntu.com/15562624/
[20:05] let me check
[20:06] cory_fu: would the situation change if wget is used instead of curl?
[20:10] cory_fu: yeah, that's a fundamental problem, I guess
[20:11] we can perhaps limit the download action to just "simple" URLs? Although I am not sure how much value that will add
[20:11] c0s: Check out my comment on https://github.com/juju-solutions/layer-apache-hadoop-namenode/pull/13
[20:11] I think that seems like a reasonable compromise, no?
[20:18] cory_fu: yeah, replied to the PR
[20:20] c0s: I'm not sure how actions work with multiple values for the same option. I think space-separated might be the only way to do a list
[20:21] marcoceppi, lazyPower: I don't suppose one of you knows if you can specify multiple values for an action param and how that would work?
[20:21] mbruzek: ^?
[20:21] cory_fu: you mean, like a list?
[20:22] cory_fu - really good question. I'm not sure. I'm pretty sure action params act just like config values, in that you would have to have a convention on the values and split them apart during the action-get bits
[20:22] but again, I'm like 80% sure of that, wide margin for error there.
[20:22] cory_fu: also, as an option, the URL could point to a file containing the list
[20:22] I just updated the comment, sorry didn't think of it immediately
[20:22] marcoceppi: Yeah. Like, could you do something like: juju action do url=foo url=bar
[20:23] lazyPower: Yeah, that's the approach I took, with space-separated (chose that because of the brace expansion mentioned in the afore-linked PR)
[20:26] c0s: I think we're running up against the limitations of actions as they currently stand. You can't give them a file, only string params on the CLI. So if we put the URLs into a file, that file would have to be hosted somewhere on a public URL (or a URL the charm could hit, at least)
[20:26] Of course, there's nothing stopping the user from calling `juju action do` several times, once for each file
[20:27] cory_fu: I think I might have misspoken
[20:27] That would probably even be better because it would give you more feedback on the progress
[20:27] You still give a URL to the action.
[20:27] And the individual actions would be smaller and not block the run queue
[20:27] However, the URL will not be to a gz file, but rather to a file that contains the list of such gz-files
[20:28] then you just need to iterate over that list in while... ; do...; done and be done with it
[20:28] c0s: How would our action know if a given file contained a list of URLs or actual data to ingest? I guess it'd have to be another param
[20:28] yeah, agree
[20:29] two alternative params: a list of files; or a file with the list
[20:29] good point
[20:29] marcoceppi, lazyPower: On that note, are actions blocking the run queue something we need to worry about? Should we avoid long-running actions? This is an important question for the big data charms.
[20:29] cory_fu: they do block the execution queue
[20:29] cory_fu - yep, actions are blocking
[20:29] is there a concept of async'ed actions?
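What the space-separated param might look like wired into an action; a sketch only (the action name, param, and HDFS destination are invented, though action-get is the real tool for reading params):

    $ cat actions.yaml
    ingest:
      description: Stream one or more remote files into HDFS.
      params:
        urls:
          type: string
          description: Space-separated list of URLs to fetch.
    $ cat actions/ingest
    #!/bin/bash
    set -e
    # read the single string param and let word-splitting produce the list
    for url in $(action-get urls); do
        wget "$url" -O - | hdfs dfs -put - "/user/ubuntu/$(basename "$url")"
    done

Invoked as something like: juju action do namenode/0 ingest urls="http://host/a.gz http://host/b.gz"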
[20:30] as an example: once my HDFS layer is up, I can start pumping data into it, and at the same time the rest of the stack can be deployed
[20:30] and of course, async'ed actions open all sorts of crazy opportunities for parallel execution ;)
[20:31] c0s: you can have the action start a process and detach from it
[20:31] (and a juicy Pandora's box of bugs to go with it)
[20:31] c0s: so the action returns quickly, while the task continues on
[20:31] marcoceppi: or that ;)
[20:31] +1
[20:31] c0s: but you can't really query the status of that, since the action's completed
[20:31] again, I still have tons to learn about Juju
[20:31] no worries, good questions
[20:31] well, technically you can of course
[20:32] a detached process might still provide a callback for status reporting, and you might be able to query it once in a while
[20:32] kwmonroe: hello, I deployed the bigdata-dev spark charm (cs:~bigdata-dev/apache-spark) and the current workload status is blocked on "Waiting for relation to Hadoop Plugin". Looking at the logs it seems it is blocking pretty early on in the install hook (http://paste.ubuntu.com/15562811/) and I don't have access to the spark commands when I ssh into the unit
[20:33] arosales: kjackal is currently working on fixing that so that you can deploy Spark standalone (without Hadoop)
[20:33] is the hadoop plugin mandatory, or should I have deployed with a stand-alone config option?
[20:34] cory_fu: good to know, any workarounds? Specifically for deploying spark on Z?
[20:34] the hadoop plugin is not required for spark standalone
[20:35] kjackal: agreed
[20:35] but wait, what version are we talking about, arosales?
[20:36] In the one I am working on this is fixed, but it is not merged yet
[20:36] kjackal: the question is if the current bigdata-dev/spark charm can be deployed without needing the hadoop plugin
[20:36] no, the current one needs hadoop-plugin
[20:36] kjackal: I am using https://code.launchpad.net/~bigdata-dev/charms/trusty/apache-spark/trunk
[20:37] kjackal: any chance you could update bigdata-dev?
[20:37] yes, that one needs hadoop
[20:37] so I can deploy in stand-alone mode?
[20:37] arosales: Can you do a charm-build from kjackal's branch?
[20:37] cory_fu: I can, but I am writing instructions for Z users to follow
[20:37] * arosales would prefer for them to just do a deploy, even if from a personal namespace
[20:39] arosales: I don't think it's quite ready to be pointing users to. I don't think it's been reviewed by anyone except kjackal yet.
[20:39] I could put it in my personal namespace if there are objections to putting it at lp:~bigdata-dev
[20:39] But I suppose we could push it to the dev channel on bigdata-dev
[20:40] cory_fu: well, users here would know it is beta
[20:40] they are already using xenial
[20:40] Ok, if you want to put it in your namespace, that's fine
[20:41] * arosales prefers bigdata-dev as it looks better than arosales
[20:41] cory_fu: how about if we pushed specifically to the xenial namespace in bigdata-dev?
[20:41] works for me if kjackal points me to his latest layer
[20:41] All this talk of channels and namespaces in the store makes me happy :D
[20:42] gtd
[20:42] :-)
[20:42] no 15 min ingestion wait either
[20:42] cory_fu: I'll need xenial anyways, removes one step for me
[20:42] arosales: Sure, we don't have any charms published for xenial that I'm aware of anyway
[20:45] cory_fu: thanks, do you know where kjackal's latest layer is at?
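marcoceppi's detach approach, sketched (the worker script and log path are invented for illustration): the hook exits immediately, unblocking the queue, at the cost of the action's own status tracking:

    $ cat actions/ingest-async
    #!/bin/bash
    # hand the real work to a background process so the action returns right away
    nohup /usr/local/bin/do-ingest.sh "$(action-get urls)" \
        > /var/log/ingest.log 2>&1 &
    # record the pid so the operator (or a later action) can poll for progress
    action-set started=true pid="$!"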
[20:45] https://github.com/ktsakalozos/layer-apache-spark
[20:45] https://github.com/ktsakalozos/interface-spark-quorum
[20:45] kjackal: thanks
[20:56] arosales, since you are doing this test drive, there are two things (at least) you should know: a) the README is not updated, b) if you connect spark to zookeeper, HA gets enabled. That's about it.
=== natefinch is now known as natefinch-afk
[20:57] I would have stayed but it is too late. Sorry
[20:58] kjackal: thanks for the info, and for replying so late here
[20:59] kjackal: and thanks for a spark that works in standalone
[20:59] kjackal: good night
[21:03] nite kjackal
[21:04] marcoceppi: also, as an FYI, adding ppa:juju/stable resolved the Z charm-tools install dep issues, thanks for the pointer
[21:04] thedac, https://code.launchpad.net/~james-page/charm-helpers/network-spaces-api-endpoints/+merge/290529
[21:04] I reckon that's good to go now...
[21:05] jamespage: great. I'll take a look
[21:05] jcastro: so...does this mean we can deliver juju on windows with the ubuntu client?
[21:06] rick_h_, lol
[21:06] but actually probably yes
[21:16] cory_fu: kwmonroe: looks like complete notebook import isn't available in Zeppelin until 0.6.0
[21:16] how do you want to go with code upload for now? a jar file...? or something else?
[21:17] and it'll only land in 0.6.0 if the developers stop arguing endlessly with each other ;)
[21:17] Why did we go 0.5.6 instead of 0.6.0 again, kwmonroe?
[21:18] Oh, 0.6.0 isn't released yet?
[21:18] That answers that question
[21:18] 0.5.6
[21:20] yeah, 0.6.0 isn't out yet
[21:21] BTW, with spark we can also do script execution, so people aren't forced to compile their code into jar files.
[21:21] hence, in general, code upload could take a jar file, a scala file, or a text file.
[21:22] if a jar file is provided, then the job could be injected via spark-submit, whereas
[21:22] if script execution is desired, we'll need to provide an option to run spark-shell
[21:26] c0s: This doesn't work for importing? http://fedulov.website/2015/10/16/export-apache-zeppelin-notebooks/
[21:27] lemme check, cory_fu
[21:28] oh, wow... that's nuts ;)
[21:28] it will, if you don't mind hacking configs like that
[21:29] I was referring to a nice clean way of using the REST API ;)
[21:29] ha
[21:29] Well, maybe we can leave that for last (or down the road)
[21:32] yup
[21:33] cory_fu: for now, the application code could be imported from, say, http://paste.ubuntu.com/15563125/plain/ and then we can run it non-interactively with spark-shell
[21:34] evidently, the same code could be executed in Z, with all the bells and whistles
[21:35] like this http://54.183.150.153:9090/#/notebook/2BEMDHCFF
[21:43] cory_fu: ingestion from lp:bigdata-dev is no longer in effect since you guys are using charm push, correct?
[21:44] arosales: That is correct
[21:44] cory_fu: ok, I thought so, just wanted to confirm -- thanks
[21:53] kwmonroe: https://github.com/juju-solutions/layer-apache-hive/pull/11
[22:00] that looks nice cory_fu! does it work?
[22:00] heh heh
[22:01] kwmonroe: Yep! http://pastebin.ubuntu.com/15563255/
[22:01] you spelled "juju action do" incorrectly there ^
[22:02] Helpers, my friend. I don't like having to copy & paste the ID to a `juju action fetch --wait` command
[22:03] kwmonroe: http://pastebin.ubuntu.com/15563269/ if you're interested
[22:04] :)
[22:04] totes stolen
[22:05] that is hot!
[22:05] cory_fu: That's awesome
[22:06] lazy bugger
[22:06] No no. *efficient*!
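That pastebin is long gone, but a helper in that spirit is only a few lines of bash (the parsing assumes the "Action queued with id: <uuid>" output of the era, and the --wait flag syntax may differ by version):

    juju_do() {
        # queue the action, capture the returned id, then block on the result
        local id
        id=$(juju action do "$@" | awk '{print $NF}')
        juju action fetch --wait 0 "$id"
    }
    # usage: juju_do hive/0 some-action param=value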
[22:06] hehe
[22:07] ^__^
[22:12] admcleod1: yo
[22:12] cory_fu: hive-54 published, analytics-sql-26 published.
[22:20] arosales: hello
[22:27] admcleod1: thanks for the mail and links for HA
[22:30] arosales: no worries
[22:46] damn, spark died ;(
[22:48] tvansteenburgh: marcoceppi: I think rq ingestion needs a kick
[22:48] c0s: do you know if there's a support matrix or similar detailing HDFS fs layout version per hadoop version?
[22:51] admcleod1: not sure I follow
[22:51] what do you mean by support matrix/similar detailing?
[22:51] c0s: so I'm working on the upgrade story. 2.7.1 and 2.7.2 have the same hdfs layout version, so I only need to download the new resource and untar it.
[22:52] c0s: oh... something that tells me what layout version each hadoop version has
[22:53] ah, yeah
[22:53] there's no such thing ;)
[22:53] c0s: like, is it always the same for minor revisions?
[22:53] but you can expect them to be the same in the point-releases at least
[22:53] yes
[22:53] does 'expect' mean 'guarantee'? :)
[22:53] oh, c'mon
[22:53] yes, sure
[22:53] :)
[22:53] in some definition of "guaranteed"
[22:54] right... well, I'll deprioritize the hdfs upgrade since I'm going to initially just work on 2.7.1 to 2.7.2
[22:54] makes sense
[22:55] guys, where do the spark workers normally run?
[22:55] Would it be on the slave nodes?
[22:55] Seems I cannot find any traces of the dead executors.
[22:56] ah... I guess it is a yarn app... hence I am looking in the wrong place.
[22:57] yes, on the slave nodes
[22:59] but the logs will be in yarn somewhere, I guess
[23:09] c0s: yeah, should be in the resmgr gui or slave gui... off for lunch
[23:09] bon appetit
[23:10] looks like the damn nodemanagers are in an unhealthy state ;(
[23:10] argh...
[23:21] oh well, at least the damn thing died at the very end of the day ;)
[23:21] will continue tomorrow, switching off now
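For the record on admcleod1's question: there is no published matrix, but the layout version a namenode is actually running can be read from its storage directory (the path depends on dfs.namenode.name.dir, and the value shown is only an example):

    $ cat /path/to/dfs/name/current/VERSION   # under dfs.namenode.name.dir
    ...
    storageType=NAME_NODE
    layoutVersion=-63                         # example value; a match across releases
                                              # means the tarball-swap upgrade is enough
    # if the layout version differs, the upgrade path goes through
    # 'hdfs namenode -upgrade' and, once happy, 'hdfs dfsadmin -finalizeUpgrade'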