/srv/irclogs.ubuntu.com/2016/03/30/#juju.txt

stormmorelazyPower: I still love the manual side of ops, just don’t have the time to manually install and configure thousands of servers at a time :P00:03
lazyPowerstormmore - irc is notoriously horrible at transferring sarcasm :D00:03
stormmorelazyPower: that I am well aware of during my time as an IRCop on one network and help channel op on DALnet back in the day ;-)00:04
lazyPowerDALNet! \o/ now that brings back memories00:05
stormmorewas a channel op in #ircnewbies on there00:06
stormmorelove my test servers ssd drives really makes destroying and creating controllers so much faster00:08
rick_h_stormmore: hmm, ok so deploy worked here. Still working on figuring out wtf00:10
stormmorerick_h_: it might be my weird ass config, will know as soon as the bootstrap is finished00:13
stormmorerick_h_: just waiting for it to reconnect and finish up at this point00:14
stormmorebootstrping the agent now00:17
stormmorebootstraping*00:18
stormmorerick_h_: before I try and deploy juju-gui should I do anything?00:24
rick_h_stormmore: no, it should be fine. I just did a bootstrap, switch to admin, and then test the deploy00:26
stormmorerick_h_: definitely something obscure in my config, “juju-gui/0 maintenance     executing   2.0-beta3 0             198.18.83.100  (install) installing charm software”00:27
rick_h_stormmore: hmm, ok. Yea if you can point us at anything we can repeat we can look into it00:36
stormmorerick_h_: from what I can tell it goes back to the multi-gateway config setup00:38
rick_h_stormmore: hmm, ok. Yea not sure on that.00:40
stormmorerick_h_: no worries, it’s one of those good to know for when we are designing a production setup00:40
rick_h_stormmore: definitely00:41
=== zul_ is now known as zul
stormmoreI must still have something wrong in my network design/implementation cause the 2nd node is failing to communicate with the 1st :-/01:14
=== natefinch-afk is now known as natefinch
jamespagemorning07:58
gnuoyjamespage, don't suppose you have time to peek at a few reviews for me do you ? ( https://review.openstack.org/#/q/owner:liam.young%2540canonical.com+status:open )08:19
jamespagegnuoy, i will in a bit08:23
gnuoyta08:23
jamespagegnuoy, 3/4 landing - the cinder-backup one needs a small fix I think08:33
gnuoyjamespage, excellent, thank you08:34
gnuoyjamespage, I've fixed the code in response to the comments you and tinwood made on https://code.launchpad.net/~gnuoy/charms/trusty/hacluster/pause-resume/+merge/289911 , are you happy for me to land that?08:45
jamespagegnuoy, cinder-backup updates looks fine to me09:04
jamespagegnuoy, approved09:04
gnuoyta09:04
jamespagegnuoy, +1 on the hacluster merge as well09:05
gnuoyjamespage, \o/ thanks09:05
deanmanHi, any tips on how to model a docker workload when using docker-compose? Do you encapsulate each docker in a charm or is there a way to provide the compose yaml file and juju some way create the charms?13:38
marcoceppilazyPower: ^14:13
marcoceppimbruzek: ^14:13
mbruzekIt looks like deanman left, but there is a way to do that.14:16
mbruzekWhen deanman comes back: Our docker charm has compose functionality, and you can just drop in a compose.yml or compose.yaml in a directory and make a docker charm controlled by compose very quickly.14:21
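A minimal sketch of the kind of compose file mbruzek describes dropping into the docker charm; the service name, image, and port below are illustrative assumptions, not part of the charm:

```yaml
# Hypothetical compose.yaml for the drop-in approach described above.
# Service name, image, and port are placeholders.
version: '2'
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
```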
=== cos1 is now known as c0s
c0shttps://bigtop-repos.s3.amazonaws.com/releases/1.1.0/ubuntu/vivid/ppc64el/bigtop.list14:46
c0shttps://github.com/apache/bigtop/blob/master/bigtop-deploy/puppet/hieradata/site.yaml - here's a minimal site file for the deployment14:53
c0shttps://github.com/apache/bigtop/blob/master/bigtop-deploy/puppet/hieradata/bigtop/cluster.yaml - the full list of the hiera configuration14:53
bugrumI guess I should have asked this here. I'm trying to get a juju setup going with maas. I have maas installed on Ubuntu server 15.10 with 4 nodes connected and commissioned.15:10
bugrumHowever, when I try to bootstrap one of the nodes with juju bootstrap, the instance either goes into started but not deployed, or it attempts to download something from the internet (even though the documentation mentions creating a private network between the nodes and mother node)15:11
bugrumam I missing something here?15:11
bugrumIs there a logfile on the maas mother node or something that will point me in the right direction?15:11
LiftedKiltbugrum: try running your bootstrap with --upload-tools15:14
bugrumLiftedKilt: okay, let me try that...15:15
LiftedKiltbugrum: if you look at the server output on maas for one of the servers that is timing out and it shows that the installation is finished, you might need to adjust the bootstrap-timeout value on bootstrap. It may be that your servers are not able to install and provision within the 10 minutes juju expects them to.15:23
bugrumLiftedKilt: do I add that value in the environments.yaml file?15:24
bugrumfor the bootstrap-timeout15:24
LiftedKiltwhat juju version are you using? 1.25 or 215:24
bugrumone second I need to look15:24
bugrumI'm using juju 1.25 on wily15:28
LiftedKiltbugrum: I would highly recommend using 2.015:29
LiftedKiltbugrum: It's a huge change in both syntax and function, and if you are getting started it's definitely worth starting with the new version15:30
bugrumis there a ppa I need to add in order to use that version?15:30
bugrumor is it just ppa:juju/stable?15:30
LiftedKilthttp://paste.ubuntu.com/15560256/15:30
LiftedKiltbugrum: there's a quick guide for you15:31
bugrumdoes juju 2 work with MAAS 1.915:32
LiftedKiltbugrum: yes15:32
LiftedKiltbugrum: it doesn't work with maas 2.0 yet, but will in a few weeks15:32
bugrumthe pastebin is pointing to a development branch of juju. Is juju 2 in beta?15:36
magicaltroutit is in beta15:36
magicaltroutbeta 3 went out earlier this week I believe15:36
magicaltroutcertainly worth using 2.0 if you are just starting out15:37
magicaltroutsave on the upgrade pain15:37
bugrumthank you. Should I do an apt-get remove on juju 1.25 before installing juju 2 or can I do an apt-get upgrade simply to upgrade the current version?15:42
magicaltrouti think the install target is apt-get install juju2 anyway15:42
magicaltroutalthough i've not checked15:43
magicaltroutconfigs and stuff go in different places15:43
bugrumI'll err on the side of caution and remove the old one then15:43
jamespagedosaboy, hey - can you drop "Publi network is configured15:43
jamespageexisting os-public-hostname option." from the commit message and then I think both multi-network support reviews are good to go btw...15:43
dosaboyjamespage: looking15:44
=== cos1 is now known as c0s
dosaboyjamespage: yep i'll fix that15:45
jamespagedosaboy, awesome15:48
arosalesany charm command experts know how to pull a non-recommended charm?15:50
arosalesah, simple as charm pull cs:~bigdata-dev/apache-spark15:51
arosalesjust prefix the user name15:51
tvansteenburghjacekn: left you a comment on the collectd charm. i gotta run out for an appointment but i'll see about promulgating it when i get back15:57
jacekntvansteenburgh: yea I've just noticed. But...should that work not be part of review/promulgation process? I don't think pointing at personal repo is a good idea for recommended charm15:59
jacekntvansteenburgh: plus I think that warning was not there last time I checked, it's likely that charm tools are moving quicker than review queue ;)15:59
tvansteenburghjacekn: the latter is true15:59
tvansteenburghjacekn: the point of the 'repo' key is to be able to find the source for the top layer of a charm16:00
tvansteenburghit doesn't matter that it's a personal repo16:00
jacekntvansteenburgh: so what if you promulgate the charm and I wipe that repo?16:01
tvansteenburghif someone does 'charm pull-source collectd', we want to get the source of the top layer, not the built charm16:01
tvansteenburghthen `charm pull-source` would fall back to getting the built charm16:02
tvansteenburghjacekn: why would you delete the source for the charm?16:02
jacekntvansteenburgh: so IMO instead of falling back you should put that layer in the official repo owned by ~charmers16:02
jacekntvansteenburgh: IDK to clean up my LP branches? It may happen one day IDK16:03
jaceknhuman error16:03
jacekntvansteenburgh: most layers are here http://interfaces.juju.solutions/ and not in personal branches, I think mine should be the same16:04
tvansteenburghjacekn: that's only b/c charmers have done many of the initial layers16:06
tvansteenburghjacekn: i think this would be a great point to raise on the juju mailing list16:06
jacekntvansteenburgh: if you do that I'm happy to comment (I'm not sure where to even stat, it's been over 2 months since I submitted this charm I don't even remember where to find docs any more)16:08
jaceknmarcoceppi: above hilights (again) how slow the review process is16:09
jaceknmarcoceppi: requirements change during the review so if I'm unlucky my charm wil never make it to the charmstore16:09
tvansteenburghjacekn: we are aware of that and are working on it. thanks for the feedback.16:10
tvansteenburghjacekn: if you think it is a mistake to keep charm layer source in a personal repo, please bring it up on the juju ML16:11
tvansteenburghjacekn: i have to go now - appointment in 20 minutes16:11
* tvansteenburgh departs16:11
jacekntvansteenburgh: when you're back - can you point me at the docs that say charm layer source must be a personal repo?16:12
tvansteenburghthere is no "must"16:12
jacekntvansteenburgh: or should16:12
jacekntvansteenburgh: whatever doc you took that policy from16:12
tvansteenburghthe source is in a repo somewhere, we would just like to know where it is16:12
tvansteenburghjacekn: it's from the output of `charm build`16:13
* tvansteenburgh leaves for real16:13
jacekntvansteenburgh: another good reason not to keep official charm sources in personal repos is security - I can modify that source to do something bad, good luck to anybody using my layer16:18
stokachujacekn: you mean you don't blindly trusty everyone? :)16:28
jaceknstokachu: shocking right?16:28
stokachujacekn: what is this world coming to16:29
jaceknbut hey, if that's what charmers want that's what they get16:30
cholcombelazyPower, marcoceppi you guys have been busy.  interfaces.juju.solutions is looking packed with layers :)16:31
aisraelHas bundletester been updated to work with juju 2 (tvansteenburgh)16:32
jacekntvansteenburgh: so...stable charm tools don't complain about it. I'll comment on the bug too16:34
lazyPowercholcombe yeah :) its not just us though16:40
cholcombelazyPower, true16:40
LiftedKiltcan a juju deployment only be managed from the machine that bootstrapped the controller?16:41
LiftedKiltor is there a way to provide the connection details to multiple machines (i.e. developers) and let them connect to their controllers/models16:42
cholcombehow do actions work with layers?16:46
lazyPowerthe same as they do with any other charm cholcombe16:50
cholcombelazyPower, ok cool16:50
cholcombelazyPower, what i mean is if i add an actions dir to my layer charm and then do charm build it doesn't show up in the final charm directory.  Where do actions go?16:51
lazyPowerHmm, doesn't seem right. Is there a layer.yaml directive somewhere telling the builder to omit the actions?16:52
cholcombelazyPower, just the default layer.yaml which includes the basic layer16:52
lazyPowercholcombe - i say that because the docker layer has actions that show up in the built charms.16:53
cholcombeah ok16:53
marcoceppiaisrael: the metrics look nice16:54
marcoceppiaisrael: would be worth a charmin16:54
cholcombelazyPower, so it should just copy the actions dir from the layer dir right?16:56
lazyPoweryep16:56
cholcombelazyPower, got it.  you have to add something to the actions dir or it ignores it16:57
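As cholcombe found, an actions directory with nothing in it is skipped by charm build; a layer needs an actions.yaml entry plus a matching executable under actions/. A sketch of such an actions.yaml (the action name and description are made up for illustration):

```yaml
# Sketch of an actions.yaml entry; the matching executable would live at
# actions/smoke-test in the layer. Name and description are illustrative.
smoke-test:
  description: Run a quick smoke test of the deployed service.
```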
LiftedKiltd34dp@@l17:02
LiftedKiltlazyPower: hehe17:02
lazyPowerLiftedKilt - ayyyyyyy lmao ;)17:02
LiftedKiltI just need to run irc on a completely separate box with a separate keyboard haha17:04
lazyPoweryou'd be surprised, you'll wind up installing synergy for convenience and you're back in the same boat17:06
marcoceppirick_h_: I need some help with maas 1.9 + juju 2.017:06
lazyPower;)17:06
rick_h_marcoceppi: how so?17:06
marcoceppiI can't get it bootstrapped, there's like no example for cloud.yml17:06
lazyPowermarcoceppi - there was a maas example in the beta2 rel notes17:06
lazyPoweri dont know if thats helpful information or not...17:07
marcoceppiugh17:08
marcoceppiendpoint17:08
marcoceppilame17:08
rick_h_marcoceppi: ?17:08
marcoceppirick_h_: there we go17:08
rick_h_marcoceppi: woot17:08
marcoceppirick_h_: it was maas-server in the old env.yaml, but in clouds it's endpoint17:08
marcoceppirick_h_: the lack of documentation for add-cloud is painful17:09
rick_h_marcoceppi: ah, gotcha. It should walk you through it?17:09
marcoceppirick_h_: are we getting an interactive add-cloud like add-credential?17:09
rick_h_oh, there's no interactive add-cloud yet17:09
marcoceppirick_h_: will there be?17:09
rick_h_marcoceppi: I *think* so but /me double checks spec17:09
bugrumWhat is the equivalent to "juju destroy-environment" in juju2? the command doesn't seem to exist anymore17:10
rick_h_bugrum: juju destroy-model xxxx17:10
rick_h_bugrum: or sorry, remove-model? /me can't recall17:11
marcoceppirick_h_: also hit an issue with juju add-credential where my local:ob24 maas cloud wasn't acceptable as a cloud17:11
lazyPowerUsage: juju destroy-model [options] <model name>17:11
marcoceppirick_h_: I'll open a bug17:11
lazyPowerrick_h_  bugrum ^17:11
rick_h_marcoceppi: yes please17:11
c0skwmonroe: should I give another try to that dev bundle from yesterday as you have updated it?17:23
LiftedKiltmarcoceppi: here's what I've got for maas 1.9 juju 2.0 - http://paste.ubuntu.com/15560256/17:27
LiftedKiltnot sure if that's still helpful for you or if you got everything you needed17:27
marcoceppiLiftedKilt: thanks, I also ended up with similar, but I didn't put the oauth token in the clouds.yaml17:28
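Based on the exchange above (the old environments.yaml maas-server key became endpoint, and the OAuth token belongs in credentials rather than clouds.yaml), a sketch of what such a clouds.yaml might look like; the cloud name and address are placeholders:

```yaml
# Hypothetical clouds.yaml for MAAS 1.9 with the juju 2.0 beta.
# "endpoint" replaces the old environments.yaml "maas-server" key;
# the API key is added separately via `juju add-credential <cloud>`.
clouds:
  my-maas:
    type: maas
    auth-types: [oauth1]
    endpoint: http://192.168.1.10/MAAS
```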
marcoceppiLiftedKilt: when you bootstrap, does juju2 acquire /all/ maas instances?17:28
LiftedKiltmarcoceppi: I'm not entirely sure how acquire works, but as far as showing ownership, no17:30
kwmonroeyup c0s, "juju deploy cs:~bigdata-dev/bundle/apache-hadoop-spark-zeppelin" will get you the good stuff.17:30
LiftedKiltmarcoceppi: it acquires and deploys machine 017:30
LiftedKiltand then repeats that process per charm deployed17:30
c0sgoing right now, thanks kwm17:30
c0skwmonroe: ^^17:31
marcoceppiLiftedKilt: odd, I had 13 nodes and on bootstrap juju took them all at once17:31
LiftedKiltmarcoceppi: that's not right17:31
marcoceppiLiftedKilt: i know ;)17:31
LiftedKilthaha17:31
LiftedKiltI'm on17:31
LiftedKiltMAAS Version 1.9.1+bzr4543-0ubuntu1 (wily1) fwiw17:31
LiftedKiltwhat series are you deploying?17:32
stormmorerick_h_: so I am talking in #maas about the network issues that I am facing17:36
marcoceppiLiftedKilt: I tried again, seems to be okay17:37
marcoceppi1.9.1+bzr4543-0ubuntu1 (trusty1)17:38
marcoceppiwas just an odd fluke17:38
LiftedKiltmarcoceppi: I'm hoping some of these random quirks get ironed out with the new API in maas 217:40
marcoceppiLiftedKilt: me too17:41
LiftedKiltI modified the cs:~openstack-charmers-next/bundle/openstack-lxd-51 bundle to put every charm in a lxc container so that I can run the trusty bundle on xenial machines17:44
LiftedKiltbut the charms freak out about a series mismatch17:44
LiftedKilteven in lxc containers17:44
tvansteenburghaisrael: no, bundletester doesn't work with juju2 yet17:46
marcoceppiLiftedKilt: link?17:46
aisraeltvansteenburgh: ack, thanks.17:46
LiftedKiltmarcoceppi: what kind of link17:46
marcoceppiLiftedKilt: to the bundle?17:47
LiftedKiltmarcoceppi: https://paste.ubuntu.com/15561615/17:48
LiftedKiltrunning it locally17:48
marcoceppiLiftedKilt: that's a bug17:48
LiftedKiltmarcoceppi: yeah it returns https://paste.ubuntu.com/15561642/17:51
LiftedKiltmarcoceppi: is that a known bug?17:51
=== skr is now known as Guest96045
hackedbelliniHey guys. Me and nihas want to use juju to orchestrate the deployments of our services on digital ocean, but we are having a hard time figuring out how to make juju manage the existing machines we currently have (the ones that juju didn't provision). Is there a way of doing that?17:57
lazyPowerhackedbellini - i dont recommend that17:57
lazyPowerif juju provisioned those machines, thats a-ok. but existing machines, if you care at all about the data on those units, the charm is likely to stomp on it, and cause you grief17:58
hackedbellinilazyPower: yes I know. The charm that we would be running is actually ours, so we have control on what it would do and we can workaround that easily. Other than the problems related to charms and data, is there any other reason for me to avoid doing that?17:59
lazyPowerThats all I can think of at the moment.18:00
lazyPoweryou can certainly add the machine as if it were a manual enlistment. juju add-machine ssh:user@host18:00
lazyPowerhackedbellini - but, again, please take the proper backups and precautions18:00
hackedbellinilazyPower: will do :). Thank you very much!18:01
A-KaserI'm trying juju-quickstart apache-hadoop-spark-zeppelin18:02
A-Kaserjuju status is empty18:03
A-Kaserand the command to kill the environment "juju destroy-environment xxx" is not working18:03
arosalesA-Kaser: hello18:18
arosalesA-Kaser: which cloud are you targetting?18:18
arosalesrun 'juju switch' to see what your current default is18:19
marcoceppijacekn: I think you're missing something in the new charm store world18:20
marcoceppijacekn: the charm is, once built and promulgated, managed in the store. We don't want the top most layer in the interface site (typically) because there's no reuse value for that. It's why my ubucon layer which builds the ubucon charm from django layer isn't in the index18:21
marcoceppijacekn: the repo key is just a nice pointer that makes layers and charm store connected. we don't need your layer to be owned by anyone but you18:21
marcoceppijacekn: since if you build a new version it'll have to go through the same review process, it's just a convenience to show where things started from, effectively "this is the upstream of the charm artifact"18:22
falanxis there a charms channel?18:47
LiftedKiltmarcoceppi: The bug is biting me even when I don't customize the bundle.18:55
LiftedKiltmarcoceppi: https://jujucharms.com/u/openstack-charmers-next/openstack-lxd/bundle/51 doesn't deploy because ceph-radosgw complains of series mismatch because of a lxc container18:56
bugrumLiftedKilt: so I installed juju2 and ran juju bootstrap, and one thing I noticed is that the script is asking for an ip on eth0 and eth1. I thought Wily uses a different naming scheme for the network interfaces18:58
bugrumenp2s0 or something like that18:58
bugrumfor the systemd changes. Would I need to set this up manually to get the bootstrap to work correctly?19:00
LiftedKiltbugrum: I'm not sure who gets the credit, but I'm reasonable confident either maas directly or maas under juju's instruction sets the kernel parameter to keep old interface naming19:00
LiftedKiltreasonably*19:00
cory_fukwmonroe, c0s: Is there a reasonable default value for dest_dir in this action?  Maybe /user/ubuntu ?19:22
c0sfrom what I see in juju's way of deploying Hadoop stack that seems to be right, cory_fu19:22
c0scory_fu: kwmonroe I think I have a way to put multiple files from the net into HDFS without intermediate store in the local FS. It is a bit ugly but should work just fine ;)19:23
bugrumLiftedKilt: So I tried bootstrapping a node with juju2 twice now and I keep getting this error from juju2. It says "bootstrap instance started but did not changed to deployed State"19:23
bugrumI didn't see any errors with the --debug flag turned on19:23
LiftedKiltbugrum: did you change the bootstrap-timeout?19:23
LiftedKiltbugrum: if your machines can't bootstrap within the default time of 600 seconds, you may need to up the bootstrap-timeout value19:24
LiftedKiltbugrum: this is passed as an argument in your deploy command19:24
cory_fuc0s: So, I had an idea, too, and that was to let the url param take a space-separated list.  https://github.com/juju-solutions/layer-apache-hadoop-namenode/pull/1319:24
cory_fuc0s: What was yours?19:24
LiftedKiltbugrum: err bootstrap command not deploy, sorry19:25
bugrumer, no I didn't. can I man juju bootstrap to get all the options19:25
bugrumor juju help bootstrap provides them all?19:25
LiftedKiltbugrum: juju bootstrap --config bootstrap-timeout=valueinseconds19:26
bugrumand I answered my own question :-P19:26
LiftedKiltbugrum: I don't think the man page is updated yet with the new syntax19:27
falanxIf I wanted to auto-scale jenkins slaves, is juju the right tool for me?  Would I just need to create a charm for jenkins swarm?19:28
c0sI did this19:29
c0swget --spider http://data.githubarchive.org/2016-01-04-{0..2}.json.gz 2>&1 | grep json.gz | awk '{print $3}' | \19:29
c0s  while read i ; do echo $i; wget $i -O - | hdfs dfs -put - cos-arch/`echo $i | awk -F "/" '{print $NF}'`; done19:29
c0scory_fu: ^^19:29
LiftedKiltbugrum: huge resource that has tons of info on new syntax: https://lists.ubuntu.com/archives/juju/2016-March/006922.html19:29
c0spretty much like yours I think, but less user input19:30
bugrumthanks LiftedKilt19:31
LiftedKiltbugrum: no problem19:32
cory_fuc0s: How is it less user input?  Also, I have to note that the {0..2} expansion is being done by bash (if you put that URL in quotes, it fails) and so won't work if that pattern is passed in as the action param (unless we eval it, which seems dangerous).19:52
cory_fuI've been trying to figure out a way to do the brace expansion inside the action but the only solutions I can come up with use eval.  :(19:56
c0scory_fu: you can use single-quotes to avoid bash parsing19:58
c0sor rather local expansion19:59
c0sthe lines I've published work fine in an interactive bash19:59
cory_fuc0s: I'm aware.  But if you put a brace pattern in a variable (pat="file{0..2}") then you can't later expand it without using some form of eval19:59
cory_fuc0s: And: http://pastebin.ubuntu.com/15562624/20:00
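The problem cory_fu demonstrates is reproducible in plain bash: brace expansion happens before parameter expansion, so a pattern stored in a variable never expands. A small sketch, with a seq-based workaround that avoids eval (file names are illustrative):

```shell
#!/bin/bash
# Brace expansion runs before variable expansion, so a pattern kept in a
# variable stays literal when the variable is used.
pat="file{0..2}"
echo "$pat"    # prints the literal string file{0..2}, not file0 file1 file2

# A workaround that avoids eval: generate the sequence with seq instead.
prefix="file"
for i in $(seq 0 2); do
    echo "${prefix}${i}"
done
```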
c0slet me check20:05
c0scory_fu: would the situation change if wget is used instead of curl?20:06
c0scory_fu: yeah, that's a fundamental problem, I guess20:10
c0swe can perhaps limit the download action by just "simple" URLs? Although, I am not sure how much value it will add20:11
cory_fuc0s: Check out my comment on https://github.com/juju-solutions/layer-apache-hadoop-namenode/pull/1320:11
cory_fuI think that seems like a  reasonable compromise, no?20:11
c0scory_fu: yeah, replied to the PR20:18
cory_fuc0s: I'm not sure how actions work with multiple values for the same option.  I think space-separated might be the only way to do a list20:20
cory_fumarcoceppi, lazyPower: I don't suppose one of you knows if you can specify multiple values for an action param and how that would work?20:21
cory_fumbruzek: ^?20:21
marcoceppicory_fu: you mean, like a list?20:21
lazyPowercory_fu - really good question. I'm not sure. I'm pretty sure action params act just like config values, that you would have to have a convention on the values and split them apart during the action-get bits20:22
lazyPowerbut again, i'm like 80% sure of that, wide margin for error there.20:22
c0scory_fu: also, as an option, the URL could be to the file, containing the list in it20:22
c0sI just updated the comment, sorry didn't think of it immediately20:22
cory_fumarcoceppi: Yeah.  Like, could you do something like: juju action do url=foo url=bar20:22
cory_fulazyPower: Yeah, that's the approach I took, with space separated (chose that because of the brace expansion mentioned in the afore-linked PR)20:23
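lazyPower's suggestion of treating the param as one string and splitting it by convention can be sketched as below. get_param is a stand-in for juju's action-get hook tool, which only exists inside a running hook context; the URLs are placeholders:

```shell
#!/bin/bash
# get_param stands in for `action-get url`, available only inside a hook.
get_param() { echo "http://example.com/a.gz http://example.com/b.gz"; }

# Space-separated convention: unquoted expansion word-splits the value.
urls=$(get_param)
count=0
for u in $urls; do
    echo "would fetch: $u"
    count=$((count + 1))
done
echo "total: $count"
```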
cory_fuc0s: I think we're running up against the limitations of actions as they currently stand.  You can't give them a file, only string params on the CLI.  So if we put the URLs into a file, that file would have to be hosted somewhere on a public URL (or a URL the charm could hit, at least)20:26
cory_fuOf course, there's nothing stopping the user from calling `juju action do` several times, once for each file20:26
c0scory_fu: I think I might have misspoken20:27
cory_fuThat would probably even be better because it would give you more feedback on the progress20:27
c0sYou still give a URL to the action.20:27
cory_fuAnd the individual actions would be smaller and not block the run queue20:27
c0sHowever, the URL will not be to a gz file, but rather to a file that contains the list of such gz-files20:27
c0sthen you just need to iterate over that list in while... ; do...; done and be done with it20:28
cory_fuc0s: How would our action know if a given file contained a list of URLs or actual data to ingest?  I guess it'd have to be another param20:28
c0syeah, agree20:28
c0stwo alternative params: a list of files; or a file with the list20:29
c0sgood point20:29
cory_fumarcoceppi, lazyPower: On that note, are actions blocking the run queue something we need to worry about?  Should we avoid long-running actions?  This is an important question for the big data charms.20:29
marcoceppicory_fu: they do block execution queue20:29
lazyPowercory_fu - yep actions are blocking20:29
c0sis there a concept of async'ed actions?20:29
c0sas an example: once my HDFS layer is up, I can start pumping data into it, and at the same time the rest of the stack can be deployed20:30
c0sand of course, async'ed actions open all sorts of crazy opportunities for parallel execution ;)20:30
marcoceppic0s: you can have the action start a process and detach from it20:31
c0s(and a juicy Pandora box of bugs, related to it)20:31
marcoceppic0s: so the action returns quickly, while the tasks continues on20:31
c0smarcoceppi: or that ;)20:31
c0s+120:31
marcoceppic0s: but you can't really query status of that since the actions completed20:31
c0sagain, I still have tons to learn about Juju20:31
marcoceppino worries, good questions20:31
c0swell, technically you can of course20:31
c0sdetached process still might provide a call-back for status reporting and you might be able to query it once in a while20:32
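marcoceppi's pattern, where the action kicks off the work in the background and returns at once, can be sketched like this; a short sleep stands in for the real long-running job, and the log location is illustrative:

```shell
#!/bin/bash
# The action starts the long task detached and returns immediately.
# `sleep 1; echo done` stands in for the real job.
logfile=$(mktemp)
nohup sh -c 'sleep 1; echo done' >"$logfile" 2>&1 &
bgpid=$!
echo "task running as pid $bgpid, logging to $logfile"
# The action would exit here; status can later be checked by reading the
# log file or probing the pid, as discussed above.
```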
arosaleskwmonroe: hello, I deployed the bigdata-dev spark charm (cs:~bigdata-dev/apache-spark) and current workload status is blocked on "Waiting for relation to Hadoop Plugin"  Looking at the logs it seems it is blocking pretty early on in the install hook (http://paste.ubuntu.com/15562811/) and I don't have access to the spark commands whe I ssh into the unit20:32
cory_fuarosales: kjackal is currently working on fixing that so that you can deploy Spark standalone (without Hadoop)20:33
arosalesis the hadoop plugin mandatory or should I have deployed with a stand-aloone config option20:33
arosalescory_fu: good to know, any workarounds?  Specifically for deploying spark on Z?20:34
kjackalhadoop plugin is not required for spark standalone20:34
arosaleskjackal: agreed20:35
kjackalbut wait, what version are we talking about, arosales?20:35
kjackalThe one I am working on this is fixed, but it is not merged yet20:36
arosaleskjackal: the question is if the current bigdata-dev/spark charm can be deployed without needing to use hadoop plugin20:36
kjackalno, the current one needs hadoop-plugin20:36
arosaleskjackal: I am using https://code.launchpad.net/~bigdata-dev/charms/trusty/apache-spark/trunk20:36
arosaleskjackal: any chance you could update bigdata-dev?20:37
kjackalyes, that one needs hadoop20:37
arosalesso I can deploy in stand alone mode?20:37
cory_fuarosales: Can you do a charm-build from kjackal's branch?20:37
arosalescory_fu: I can, but I am writing instructions for Z users to follow20:37
* arosales would prefer for them to just do a deploy even if from a personal name space20:37
cory_fuarosales: I don't think it's quite ready to be pointing users to.  I don't think it's been reviewed by anyone except kjackal yet.20:39
arosalesI could put it in my personal name space if there objections to putting it at lp:~bigdata-dev20:39
cory_fuBut I suppose we could push it to the dev channel on bigdata-dev20:39
arosalescory_fu: well users here would know it is beta20:40
arosalesthey are already using Xenial20:40
cory_fuOk, if you want to put it in your namespace, that's fine20:40
* arosales preferes bigdata-dev as it looks better than arosales20:41
arosalescory_fu: how about if we pushed specifically to the xenial name space in bigdata dev?20:41
arosalesthat works for me if kjackal points me at his latest layer20:41
lazyPower All this talk of channels and namespaces in the store makes me happy :D20:41
arosalesgtd20:42
arosales:-)20:42
arosalesno 15 min ingestion wait either20:42
arosalescory_fu: I'll need xenial anyways, removes one step for me20:42
cory_fuarosales: Sure, we don't have any charms published for xenial that I'm aware of anyway20:42
arosalescory_fu: thanks, do you know where kjackal latest layer is at?20:45
kjackalhttps://github.com/ktsakalozos/layer-apache-spark20:45
kjackalhttps://github.com/ktsakalozos/interface-spark-quorum20:45
arosaleskjackal: thanks20:45
kjackalarosales since you are doing this test drive there are two things (at least) you should know. a) the README is not updated b) if you connect spark to zookeeper HA gets enabled. That's about it.20:56
=== natefinch is now known as natefinch-afk
kjackalI would have stayed but is too late. Sorry20:57
arosaleskjackal: thanks for the info, and replying so late here20:58
arosaleskjackal: and thanks for a spark that works in standalone20:59
arosaleskjackal: good night20:59
c0snite kjackal21:03
arosalesmarcoceppi: also as an fyi adding ppa:juju/stable resolved the Z charm-tools install  dep issues, thanks for the pointer21:04
jamespagethedac, https://code.launchpad.net/~james-page/charm-helpers/network-spaces-api-endpoints/+merge/29052921:04
jamespageI reckon that's good to go now...21:04
thedacjamespage: great. I'll take a look21:05
rick_h_jcastro: so...does this mean we can deliver juju on windows with the ubuntu client?21:05
jamespagerick_h_, lol21:06
jamespagebut actually probably yes21:06
c0scory_fu: kwmonroe: looks like complete notebook import isn't available in Zeppelin until 0.6.021:16
c0show do you want to go with code upload for now? a jar file..? or else?21:16
magicaltroutand it'll only land in 0.6.0 if the developers stop arguing endlessly with each other ;)21:17
cory_fuWhy did we go 0.5.6 instead of 0.6.0 again, kwmonroe?21:17
cory_fuOh, 0.6.0 isn't released yet?21:18
cory_fuThat answers that question21:18
magicaltrout0.5.621:18
c0syeah, 0.6.0 isn't out yet21:20
c0sBTW, with spark, we also can do scripts execution, so people aren't forced to compile their code into jar files.21:21
c0shence, in general, code upload could take a jar file, or a scala file or a text file.21:21
c0sif a jar file is provided, then job could be injected via spark-submit, whereas21:22
c0sif a script execution is desired, then we'll need to provide an option to run spark-shell21:22
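The jar-vs-script dispatch c0s outlines could look like the sketch below. It only echoes the command it would run (a dry run), since Spark isn't assumed to be installed here; choosing by file extension is an assumption for illustration, and non-interactive spark-shell use may need extra flags in practice:

```shell
#!/bin/bash
# Dry-run dispatcher: jars via spark-submit, scala scripts via spark-shell.
submit_code() {
    case "$1" in
        *.jar)   echo "spark-submit $1" ;;
        *.scala) echo "spark-shell -i $1" ;;
        *)       echo "unsupported file type: $1" >&2; return 1 ;;
    esac
}

submit_code app.jar
submit_code analysis.scala
```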
cory_fuc0s: This doesn't work for importing?  http://fedulov.website/2015/10/16/export-apache-zeppelin-notebooks/21:26
c0slemme check, cory_fu21:27
c0soh, wow... that's nuts ;)21:28
c0sit will, if you don't mind hacking configs like that21:28
c0sI was referring to a clean nice way of using REST API ;)21:29
cory_fuha21:29
cory_fuWell, maybe we can leave that for last (or down the road)21:29
c0syup21:32
c0scory_fu: for now, the application code could be imported from say http://paste.ubuntu.com/15563125/plain/ and then we can run it non-interactively with spark-shell21:33
c0sevidently, same code could be executed in Z, with all bells and whistles21:34
c0slike this http://54.183.150.153:9090/#/notebook/2BEMDHCFF21:35
arosalescory_fu: ingestion from lp:bigdata-dev is no longer in effect since you guys are using charm push, correct?21:43
cory_fuarosales: That is correct21:44
arosalescory_fu: ok, I thought so just wanted to confirm -- thanks21:44
cory_fukwmonroe: https://github.com/juju-solutions/layer-apache-hive/pull/1121:53
kwmonroethat looks nice cory_fu!  does it work?22:00
admcleod1heh heh22:00
cory_fukwmonroe: Yep!  http://pastebin.ubuntu.com/15563255/22:01
kwmonroeyou spelled "juju action do" incorrectly there ^22:01
cory_fuHelpers, my friend.  I don't like having to copy & paste the ID to a `juju action fetch --wait` command22:02
cory_fukwmonroe: http://pastebin.ubuntu.com/15563269/ if you're interested22:03
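cory_fu's paste is gone, but a helper of the kind he describes might look like this hypothetical sketch: queue the action, scrape the id, then fetch it with --wait so nothing has to be copied and pasted. The awk pattern is an assumption based on juju 1.x output ("Action queued with id: <uuid>"), and the exact fetch flags may differ by juju version; only the function is defined here, nothing is executed:

```shell
#!/bin/bash
# Hypothetical wrapper around `juju action do` + `juju action fetch`.
# The awk pattern assumes output like "Action queued with id: <uuid>".
jado() {
    local id
    id=$(juju action do "$@" | awk '/queued with id/ {print $NF}')
    juju action fetch --wait 0 "$id"
}
```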
kwmonroe:)22:04
kwmonroetotes stolen22:04
jrwrenthat is hot!22:05
aisraelcory_fu: That's awesome22:05
magicaltroutlazy bugger22:06
aisraelNo no. *efficient*!22:06
magicaltrouthehe22:06
cory_fu^__^22:07
arosalesadmcleod1: yo22:12
kwmonroecory_fu: hive-54 published, analytics-sql-26 published.22:12
admcleod1arosales: hello22:20
arosalesadmcleod1: thanks for the mail and links for HA22:27
admcleod1arosales: no worries22:30
c0sdamn spark died ;(22:46
aisraeltvansteenburgh: marcoceppi: I think rq ingestion needs a kick22:48
admcleod1c0s: do you know if theres a support matrix/similar detailing hdfs fs layout id / hadoop version?22:48
c0sadmcleod1: not sure I follow22:51
c0swhat do you mean by support matrix/similar detailing22:51
admcleod1c0s: so im working on the upgrade story. 2.7.1 to 2.7.2 have the same hdfs layout version so i only need to download the new resource and untar it.22:51
admcleod1c0s: oh .. something that tells me what layout version each hadoop version has22:52
c0sah, yeah22:53
c0sthere's no such thing ;)22:53
admcleod1c0s: like is it always the same for minor revisions?22:53
c0sbut you can expect them to be the same in the point-releases at least22:53
c0syes22:53
admcleod1does 'expect' means 'garauntee'? :)22:53
c0soh, c'mon22:53
admcleod1mean22:53
c0syes, sure22:53
c0s:)22:53
c0sin some definition of "guaranteed"22:53
admcleod1right.. well, ill deprioritize hdfs upgrade since im going to initially just work on 2.7.1 to 2.7.222:54
c0smakes sense22:54
c0sguys, where the spark workers are running normally?22:55
c0sWould it be on the slave nodes?22:55
c0sSeems I can not find any traces of the died executors.22:55
c0sah... I guess it is a yarn app... hence I am looking in a wrong place.22:56
admcleod1yes on the slave nodes22:57
c0sbut logs will be in yarn somewhere, I guess22:59
admcleod1c0s: yeah should be in resmgr gui or slave gui.. off for lunch23:09
c0sbon appétit23:09
c0slooks like damn nodemanagers are in unhealthy state ;(23:10
c0sargh...23:10
c0soh well, at least damn thing died at the very end of the day ;)23:21
c0swill continue tomorrow, switching off now23:21

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!