[00:18] <cholcombe> is there a list somewhere of the allowed types in actions.yaml?
[00:18] <cholcombe> i thought there was but i'm having trouble finding it now
[00:20] <cholcombe> ah ok i see. it's the json schema types.  Nevermind.  http://json-schema.org/latest/json-schema-core.html <-- for future people
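For those future people: a minimal actions.yaml sketch showing JSON Schema types in use — the action and parameter names here are invented for illustration:

```yaml
# actions.yaml — each parameter is described with JSON Schema types
backup:
  description: Run a backup of the service's data.
  params:
    target:
      type: string
      description: Destination path for the backup archive.
    compress:
      type: boolean
      default: true
    parallelism:
      type: integer
      minimum: 1
  required: [target]
```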
[00:23] <arosales> man zeppelin doesn't like xenial or spark stand-alone or both
[00:23] <arosales> debug continues
[00:23] <arosales> but dinner calls
[00:34] <cholcombe> lazyPower, 2 new interfaces today thanks to icey's help: https://github.com/cholcombe973/juju-interface-ceph https://github.com/cholcombe973/juju-interface-backup
[07:36] <jacekn> marcoceppi: so I understand what you are saying, I just completely disagree with that approach. If you "charm pull-source" for an official, curated charm you should absolutely get an official and curated layer. Not one from someone's personal branch, who may have decided to add "rm -rf" there just for a laugh
[07:37] <jacekn> marcoceppi: defaulting to more secure approach is always good, especially if it's so trivial to do
[08:07] <jamespage> gnuoy, before I raise any more reviews
[08:07] <jamespage> can you sanity check one of https://review.openstack.org/#/q/topic:network-spaces
[08:07] <gnuoy> sure
[08:07] <jamespage> the delta is pretty small
[08:08] <gnuoy> jamespage, duff grammar in README.md
[08:08] <jamespage> gnuoy, hah
[08:09] <gnuoy> jamespage, when you say "deployer" in the README do you mean that syntax is valid for both juju-deployer and juju native deployer and if not maybe it's worth being explicit?
[08:10] <jamespage> gnuoy, I was intentionally vague
[08:10] <jamespage> the juju v4 bundle format in juju supports it
[08:10] <gnuoy> haha ok :)
[08:14] <gnuoy> jamespage, other than that cinder lgtm
[08:14] <jamespage> gnuoy, can you comment on the README and I'll update all of them
[08:18] <gnuoy> jamespage, done
[08:22] <kjackal> hey people, is there a way to tell amulet to "attach" to an existing set of charms instead of deploying them from scratch? I am trying to save up some time while writing tests.
[08:25] <gnuoy> kjackal, not that I know of but let me know if you find a way!
[08:26] <kjackal> ok gnuoy
[08:44] <deanman> Hi, do you keep logs of this channel somewhere?
[08:46] <magicaltrout> deanman: http://irclogs.ubuntu.com/2016/03/31/%23juju.txt
[08:46] <deanman> magicaltrout, thanks
[08:58] <kjackal> gnuoy, running the same test for a second time will not redeploy the charms so it does more or less an attach
[08:59] <gnuoy> kjackal, well, you still lose time while it looks at the deployment, I want a way for it to jump straight to the functional tests themselves
[09:00] <kjackal> yes, true
[09:09] <Sophie_> hello my dear folks
[09:10] <Sophie_> I got a trouble situation here...
[09:54] <jacekn> would anybody have time to have a look at my collectd subordinate review? https://bugs.launchpad.net/charms/+bug/1538573
[09:55] <mup> Bug #1538573: New collectd subordinate charm <Juju Charms Collection:New> <https://launchpad.net/bugs/1538573>
[09:58] <magicaltrout> jacekn: you had your turn, back of the queue now! ;)
[10:00] <jacekn> magicaltrout: the queue is a lie! If you don't ask here you don't get a review ;)
[10:01] <magicaltrout> true
[10:32] <deanman> magicaltrout, apart from irc logs retention on the link you gave me is there any other alternative that you can suggest for following discussion here offline ?
[10:33] <magicaltrout> yeah, spin up the juju big data quickstart, ingest the log files into hadoop and run a bunch of Map Reduce textual analysis over it.....
[12:39] <neiljerram> Did the Ubuntu keyserver have a problem in the last hour?  I was seeing GPG key retrieval fail during bundle deployment; then suddenly it started working again...
[13:10] <marcoceppi> jacekn: there is no official curated layer. YOUR layer is the official one, we don't store layers, and you're the trusted upstream because we trusted you to put the charm in the store
[13:13] <jacekn> marcoceppi: trust level can change easily, social engineers just wait for holes like this (gain trust, get something into the charmstore and then exploit it by modifying their layer)
[13:32] <marcoceppi> jacekn: but modifying the layer won't affect the charm in the store
[13:35] <jacekn> marcoceppi: no but it will affect everybody who wants to develop the charm or those who want to compose right before deployment for new hotness
[13:36] <marcoceppi> jacekn: but that's on /them/ to validate the source they're building
[13:36] <jacekn> marcoceppi: I really can't see why keeping layers in non-personal branches is a bad idea
[13:36] <marcoceppi> jacekn: we don't own any branches anymore
[13:36] <marcoceppi> jacekn: launchpad ~charmers as we know it is dead
[13:36] <jacekn> marcoceppi: sorry but relying on people fails sooner or later
[13:36] <jacekn> marcoceppi: oh really? So all new charms in the charmstore are under personal branches?
[13:36] <marcoceppi> jacekn: there is no "personal branch"
[13:37] <marcoceppi> there's no branch
[13:37] <marcoceppi> it's just a blob upload
[13:37] <marcoceppi> promulgation is a pointer to the namespace it was uploaded to
[13:37] <marcoceppi> and we gate allowance of people updating promulgated charms
[13:37] <marcoceppi> we do all the QA we do now, but the person who created the charm "owns" the space
[13:37] <jacekn> marcoceppi: so what happens if I delete my LP account one day?
[13:38] <marcoceppi> jacekn: who cares? it's gone. The charms will still be in the store it's fine
[13:38] <jacekn> marcoceppi: well those who might want to submit patches will care, if there is no branch there is no way to raise MP
[13:39] <marcoceppi> jacekn: well, we have a way to change ownership of a promulgated source, and people can pull the charm - as it is compiled - and uncompile it since we have a manifest that tracks the files and what layers they came from
[13:39] <marcoceppi> it can be rebuilt somewhere else
[13:39] <marcoceppi> or just updated in place
[13:39] <marcoceppi> the charm store is the single source of truth regardless where upstream development is
[13:40] <marcoceppi> what happens if an upstream's code goes away one day
[13:40] <marcoceppi> the packages in the archive won't suddenly break
[13:40] <jacekn> marcoceppi: if I wipe my LP account there will be no layer to pull
[13:40] <marcoceppi> jacekn: I can reconstruct your layer, from the built charm
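The manifest idea marcoceppi describes can be sketched roughly as follows — the real `charm build` manifest format differs in detail; the file-to-layer mapping here is a hypothetical shape:

```python
import json

# Hypothetical build-manifest snippet: maps each file in the built charm
# to the layer it came from (the real charm-tools manifest differs in detail).
manifest_json = """
{
  "layers": {
    "metadata.yaml": "layer:collectd",
    "layer.yaml": "layer:collectd",
    "lib/charms/layer/__init__.py": "layer:basic",
    "hooks/install": "layer:basic"
  }
}
"""

def files_by_layer(manifest: dict) -> dict:
    """Invert file->layer into layer->[files], so each layer's contents
    can be pulled back out of the compiled charm."""
    out = {}
    for path, layer in manifest["layers"].items():
        out.setdefault(layer, []).append(path)
    return out

grouped = files_by_layer(json.loads(manifest_json))
print(sorted(grouped["layer:basic"]))
```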
[13:40] <mbruzek> Hi guys do we have the key/value "configuration" (in layer.yaml) documented anywhere? Like how do I get those values and what would be a good case to use them?  Asking for a friend.
[13:41] <marcoceppi> mbruzek: not quite documented much anywhere - yet, but we do have a few examples
[13:41] <jacekn> marcoceppi: wow, all this seems super weird and is IMO asking for trouble. In any case - the error that was noticed is NOT picked up by the latest stable charm-tools
[13:41] <jacekn> marcoceppi: so I can't even test the fix
[13:41] <mbruzek> marcoceppi: is it mostly the big data guys?
[13:42] <mbruzek> marcoceppi: I would like to send my friend an example.
[13:42] <jacekn> marcoceppi: and of course I'm not too happy about fixing something that's clearly not a requirement when I use stable tools
[13:43] <marcoceppi> mbruzek: https://github.com/juju-solutions/layer-basic/blob/master/lib/charms/layer/__init__.py
[13:43] <marcoceppi> from charms.layer import options
[13:43] <marcoceppi> options('layer-you-want-options-from')
[13:44] <marcoceppi> mbruzek: how to define options: https://github.com/juju-solutions/layer-basic/blob/master/layer.yaml
[13:44] <marcoceppi> mbruzek: how to set them: https://github.com/marcoceppi/layer-gitlab/blob/master/layer.yaml
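Putting those links together, a sketch of the two halves — the layer names and option values here are illustrative, not the exact contents of the files marcoceppi linked:

```yaml
# layer.yaml of the layer that DEFINES the option (e.g. layer-basic):
defines:
  packages:
    type: array
    default: []
---
# layer.yaml of a charm layer that SETS it, keyed by the defining layer's name:
includes: ['layer:basic']
options:
  basic:
    packages: ['git', 'screen']
```

At runtime the charm reads the merged values with `from charms.layer import options` and `options('basic')['packages']`.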
[13:44] <marcoceppi> jacekn: if you don't want to put a repo key. Don't.
[13:45] <mbruzek> marcoceppi: thank you
[13:45] <jacekn> marcoceppi: well that's apparently what's blocking my charmstore submission
[13:45] <marcoceppi> it's not, it was a suggestion
[13:46] <jacekn> marcoceppi: ok so it's very unclear to me what's going on with my submission. tvansteenburgh reviewed it but the only feedback he provided was a repo key. Does it mean it's approved other than that?
[13:46] <jacekn> marcoceppi: I suspect you can see that I'm getting increasingly upset by the review process ;)
[13:47] <marcoceppi> jacekn: sure, we're trying to build an extensive and fruitful ecosystem, but if you don't want to contribute to that by simply pointing to your upstream layer repo, people just won't know where to contribute to the charm and we'll just have a bunch of forks of the compiled artifact
[13:48] <marcoceppi> we'll simply do it for you, prior to promulgation
[13:48] <marcoceppi> the review process is twofold. One, to make sure charms are sane, and two - to help educate charmers as the ecosystem evolves
[13:50] <jacekn> marcoceppi: sorry but saying that I don't want to contribute is not fair. I've been trying to get this charm promulgated for the past 2 months and I chased it more times than I can count
[13:51] <marcoceppi> jacekn: yes, and the review queue is old, outdated and the team reviewing is small and over burdened
[13:51] <marcoceppi> jacekn: these aren't unique to you, it's something everyone has had to suffer through and no one is happy about
[13:51] <marcoceppi> jacekn: we're aggressively building a new review queue and process as the entire way charms are uploaded and submitted to the store is changing
[13:51] <marcoceppi> jacekn: the goal of this is to streamline the process so that review times go from months and weeks to days and hours
[13:52] <marcoceppi> jacekn: all while juju 2.0 is being built and a shift of process is happening
[13:52] <marcoceppi> jacekn: we appreciate your patience and feel the same strain
[13:53] <marcoceppi> if you have an actual problem with the idea of a repo key, I suggest mailing the list so it can be hashed out, if you're just tired of waiting and making a stand we'll just add the key for you
[13:53] <lazyPower> mbruzek - we have examples of layer options in the docker layer as well
[13:53] <jacekn> marcoceppi: that layer BTW is still being developed but because CS submissions are so slow I just froze it to get it through the review. Once it's in the charmstore there will be updates coming. So every week of delay is a bigger delta to review later
[13:53] <marcoceppi> jacekn: we understand, again, we have the same problems and no one is disagreeing
[13:54] <jacekn> marcoceppi: I don't have a real problem with that other than it's something I had no chance of knowing about because stable tools don't care and it's causing yet another delay
[13:54] <marcoceppi> in two weeks, if we can get the dev time, we'll be launching a beta of the new review queue which will allow you to iterate in the store outside of the review process
[13:54] <jacekn> marcoceppi: and I can't test it using stable tools so I can't really fix it
[13:55] <marcoceppi> jacekn: our stable tools are in this same flux, we have a 2.1.2 of charm-tools in the devel ppa and pypi - it's considered stable but we're waiting for them to land in xenial before considering them released
[13:55] <marcoceppi> welcome to the most awkward time of the ecosystem
[13:55] <marcoceppi> if you don't feel comfortable adding it, we'll take care of it
[13:56] <deanman> I have a setup of a single VM running locally, using the manual provider to bootstrap it. It works fine when deploying inside lxc, but as soon as I move inside a proxied network it gets stuck endlessly trying to set up the agent on the container. Any hints/clues what could be wrong?
[13:56] <deanman> (environments.yaml has proper proxy configuration)
[13:57] <jacekn> marcoceppi: cool, thanks. FTR feel free to point at my branch I don't mind I just can't comfortably do it myself because of testing
[13:58] <marcoceppi> jacekn: understood, we have charm-tools in pypi and ppa:juju/devel if you want to test in a lxd machine or virtualenv in the future
[13:59] <jacekn> ack
[13:59] <marcoceppi> jacekn: we hope to have it in xenial and /stable in the next two weeks
[13:59] <jacekn> marcoceppi: nice!
[14:00] <marcoceppi> jacekn: where are you doing your layer dev?
[14:01] <jacekn> marcoceppi: in a LTS VM (currently trusty)
[14:01] <marcoceppi> jacekn: ah, so not pushed anywhere?
[14:02] <marcoceppi> deanman: what version of Juju?
[14:02] <jacekn> marcoceppi: ah sorry that's what you meant. It is in LP https://code.launchpad.net/~jacekn/canonical-is-charms/collectd-composer
[14:02] <marcoceppi> jacekn: <3 thanks
[14:03] <marcoceppi> jacekn: charm LGTM, promulgating
[14:03] <deanman> marcoceppi, 1.25.3
[14:03] <jacekn> marcoceppi: thanks! It's 1.25.3-trusty-amd64 if you want to know (from the juju stable PPA). I think you can see the trend with "stable" here ;)
[14:03] <marcoceppi> jacekn: hum, so it should work, but you said manual provider so I just realized I don't think cloud-init gets run on the manual provider which means the proxy stuff won't be put in place
[14:03] <marcoceppi> deanman: ^^
[14:04] <marcoceppi> deanman: can you `grep -ri "proxy" /etc/apt/apt.conf.d` on the manual machine? do you get any hits?
[14:06] <deanman> marcoceppi, has several entries
[14:06] <jacekn> marcoceppi: sorry I don't get it. What's the problem with the proxy?
[14:06] <marcoceppi> jacekn deanman: juju passes proxy information with cloud-init which I don't think gets run in a manual provider bootstrap, but I could be wrong
[14:07] <tvansteenburgh> stub: have you used your juju-wait plugin with juju2 at all?
[14:07] <jacekn> marcoceppi: that part I understand but how does it relate to my charm?
[14:07] <deanman> marcoceppi,
[14:07] <marcoceppi> jacekn: wrong ping, sorry, meant to ping deanman
[14:07] <deanman> marcoceppi, "/etc/apt/apt.conf.d/42-juju-proxy-settings:Acquire::http::Proxy "http://......."
[14:08] <marcoceppi> jacekn: your charm is fine
[14:08] <jacekn> marcoceppi: hahaha OK!
[14:08] <marcoceppi> deanman: hum, so that's good. is it failing to bootstrap or failing to set the LXC up?
[14:09] <deanman> marcoceppi, it's failing both to deploy on lxc and on the bare machine with --to 0. "Waiting for agent to finish".
[14:10] <marcoceppi> deanman: can you grab the logs?
[14:10] <deanman> marcoceppi, machine? unit ?
[14:10] <marcoceppi> deanman: all?
[14:11] <lazyPower> mbruzek - have a sec to review a hotfix patch?
[14:11] <mbruzek> yes
[14:11] <lazyPower> mbruzek - we're broken in master :( - https://github.com/juju-solutions/layer-docker/pull/30/files
[14:13] <lazyPower> I should probably get an amulet test written to stress that compose functionality just moved into the wheelhouse... mbruzek - do you want that before you do the review? I can bust something out in the next hour
[14:14] <mbruzek> lazyPower: I didn't see tests, so I am building and will test it manually here
[14:14] <lazyPower> ack
[14:30] <deanman> marcoceppi, http://paste.ubuntu.com/15568095/
[14:31] <deanman> marcoceppi, ls -lh /var/cache/lxc/cloud-trusty/ is empty even several minutes after deploying - is it not downloading the image?
[14:31] <marcoceppi> deanman: it looks like it's failing to hit itself after the proxy
[14:32] <marcoceppi> that's the EOF
[14:32] <deanman> marcoceppi, could be a no_proxy env issue then ?
[14:32] <marcoceppi> deanman: add 127.0.0.1 and 10.0.2.15 and 192.168.11.11 to no_proxy
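marcoceppi's suggestion as a quick sketch — the IPs are the ones from this deployment; adjust to your own hosts:

```shell
# Make sure Juju's own addresses bypass the proxy so the state server
# can reach itself (addresses below are from this particular deployment).
export no_proxy="127.0.0.1,localhost,10.0.2.15,192.168.11.11"
# Verify the variable is visible to child processes:
sh -c 'echo "$no_proxy"'
```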
[14:53] <mbruzek> lazyPower: +1 merged the docker layer
[14:53] <lazyPower> ta mbruzek , thanks for the priority review
[14:54] <mbruzek> lazyPower: thanks for sending me quality code
[14:55] <deanman> mbruzek, first link on README at layer-docker points to non-valid URL.
[14:55] <mbruzek> deanman on it
[14:56] <lazyPower> oh wow it says "composer" in there
[14:56] <lazyPower> thats some old text :D
[14:57] <deanman> yeah, that confused me while trying to understand how to build my layer docker charm. I think i found some other references that need update too, sec...
[14:59] <deanman> https://jujucharms.com/docs/1.25/authors-charm-with-docker this also points to wrong "composing documentation" link.
[15:14] <mbruzek> https://github.com/juju-solutions/layer-docker/pull/32
[15:14] <A-Kaser> Hi
[15:14] <mbruzek> deanman: Please review this change based on your suggestions.
[15:19] <deanman> mbruzek, done
[15:19] <mbruzek> Thank you
[15:21] <mbruzek> deanman: thanks for helping us improve
[15:27] <lazyPower> A-Kaser o/
[15:32] <deanman> marcoceppi, are you still around ?
[15:35] <A-Kaser> do you have any tips to debug an error when I run charm build?
[15:36] <A-Kaser> I have this error "TypeError: list indices must be integers, not str" but no more information about which file
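(As diagnosed further down, the error came from a malformed metadata.yaml. Generically, this TypeError appears when YAML that should be a mapping parses as a list and code then indexes it like a dict — a standalone sketch, not charm-tools' actual code path:)

```python
# A YAML section written as a list where a mapping was expected
# parses to a Python list; dict-style lookup on it then fails.
parsed = ["packages", {"screen": None}]  # list, not a mapping

try:
    parsed["packages"]  # dict-style lookup on a list
except TypeError as e:
    message = str(e)
    print(message)
```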
[15:37] <tvansteenburgh> A-Kaser: please put the `charm build` output in a pastebin
[15:37] <marcoceppi> A-Kaser: also -l debug might help
[15:38] <marcoceppi> deanman: what's up?
[15:38] <A-Kaser> http://pastebin.com/c3LrVRi7
[15:38] <deanman> marcoceppi, the no_proxy didn't seem to work out. http://paste.ubuntu.com/15568500/
[15:40] <tvansteenburgh> A-Kaser: can you paste your layer.yaml too?
[15:40] <marcoceppi> A-Kaser: try upgrading to 1.11.2
[15:42] <tvansteenburgh> A-Kaser: actually, paste your metadata.yaml
[15:42] <deanman> lazyPower, the juju deploy local:<series>/<charm-name> works for me, is the one you are proposing for stable or for 2.0 ?
[15:42] <lazyPower> deanman - that goes away in juju 2.0
[15:42] <A-Kaser> ok brew charm-tools is only 1.11.1 I'm cloning the source
[15:43] <tvansteenburgh> A-Kaser: i think you have a malformed metadata.yaml
[15:43] <deanman> lazyPower, ok maybe have both in the README, because if you land there from the stable docs it might be confusing if it's not working ?
[15:44] <marcoceppi> A-Kaser: crud, that's my bad. I'll make sure 1.11.2 is in brew. A-Kaser it may be easier to just install from pip in a virtualenv on your OSX machine
[15:45] <marcoceppi> A-Kaser: virtualenv ~/charm-tools; . ~/charm-tools/bin/activate; pip install charm-tools==1.11.2
[15:45] <A-Kaser> metadata : http://pastebin.com/R2EhNDVg
[15:45] <marcoceppi> A-Kaser: 1.11.2 is the last 1.X build, 2.0 and beyond are a bit different and won't work as expected until they're packaged in brew
[15:46] <marcoceppi> A-Kaser: ah, your metadata is also a bit off
[15:46] <A-Kaser> marcoceppi: I installed charm-tools just 2 days ago
[15:46] <marcoceppi> A-Kaser: that requires: packages: - screen should be in layer.yaml not metadata.yaml
[15:47] <marcoceppi> A-Kaser: as an example: https://github.com/marcoceppi/layer-gitlab/blob/master/layer.yaml
[15:47] <marcoceppi> A-Kaser: that requires: section is for relations, not packages
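The fix marcoceppi describes, sketched side by side — the relation name and interface are invented for illustration:

```yaml
# metadata.yaml — "requires" declares relations, not packages
requires:
  database:
    interface: mysql
---
# layer.yaml — apt packages go under the basic layer's options
includes: ['layer:basic']
options:
  basic:
    packages:
      - screen
```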
[15:47] <A-Kaser> ok I've removed packages lines thx
[15:48] <A-Kaser> charm build works, thx
[15:49] <marcoceppi> A-Kaser: cheers
[15:49] <marcoceppi> A-Kaser: expect a new charm-tools 2.0 build in homebrew in the next week or so
[15:51] <A-Kaser> I use juju2 but charm-tools 1.x , is it a problem ?
[15:54] <marcoceppi> A-Kaser: no, no problem
[15:55] <marcoceppi> A-Kaser: charm 2.0 will be much nicer when you go to upload charms to the store, but it's more or less the same build command, etc
[16:47] <jamespage> thedac, some comments on https://code.launchpad.net/~thedac/charm-helpers/apparmor/+merge/290096
[16:47] <thedac> jamespage: thanks.
[16:54] <jamespage> ok taking a break - may be back later...
[17:42] <thedac> jamespage: pushed up fixes
[17:43] <Prabakaran> Hello team, hope you are all doing great. I was trying to use ibm-base-layer and, as confirmed by Matt, it uses juju 2.0, so I was trying to install juju 2.0 on my machine by following the link https://jujucharms.com/docs/devel/introducing-2 but found it has steps for aws setup only, and I am not able to install local lxc containers. Can you please advise on how to set up local lxc containers in juju 2.0?
[17:44] <c0s> I ingested a bigger dataset from gh-archive this time and ran some analysis on it http://54.183.80.145:9090/#/notebook/2BEZN4985
[17:49] <marcoceppi> c0s: sweet
[17:49] <c0s> wheee ;)
[17:50] <mbruzek> Prabakaran: LXD is the new name for the local lxc provider: https://jujucharms.com/docs/devel/config-LXD
[17:51] <cory_fu> c0s: Sorry I haven't been available yet to work on the actions.  RQ and meetings have taken up my whole day
[17:51] <mbruzek> Prabakaran: You might be having issues because the local directory changes for Juju 2.0.  From ~/.juju to ~/.local/share/juju
[17:52] <cory_fu> mbruzek: Is that true?  The ibm-base-layer requires 2.0?
[17:52] <cory_fu> Is it using resources?
[17:53] <mbruzek> not yet but very soon, the ibm-base-layer is using the "terms" feature
[17:53] <mbruzek> that is only in 2.0
[17:53] <cory_fu> Ah, right.
[17:53] <c0s> cory_fu: no worries
[17:53] <c0s> the main thing is that I have unblocked you ;)
[17:54] <c0s> commits to *juju* repos over a couple of weeks back in January http://54.183.80.145:9090/#/notebook/2BEZN4985/paragraph/20160331-172642_1279078054?asIframe
[17:56] <cory_fu> c0s: For the action to submit a job, is there any value to making it able to schedule the job to run repeatedly?
[17:57] <c0s> I'd say yes
[17:58] <cory_fu> c0s: And would cron be the best way to do that?
[17:58] <c0s> Not sure if Zeppelin has a REST call for that... but with spark-submit, spark-shell, and other CLI tools we should be able to
[17:58] <c0s> yup
[17:58] <c0s> cron always is the best ;)
[18:01] <cory_fu> c0s: I see the Zeppelin REST API has endpoints for cron jobs.  Does that schedule a given notebook to be re-run periodically?
[18:02] <c0s> I think yes, that's for the whole thing
[18:03] <c0s> cory_fu: kwmonroe, this http://54.183.80.145:18080/ doesn't look right
[18:03] <c0s> I have run a bunch of the Z jobs on that cluster yet it doesn't show any
[18:10] <cory_fu> c0s: Can you check the log on the Spark unit and see if it has anything?  juju debug-log -i spark/0
[18:11] <cory_fu> juju debug-log -i spark/0 --replay
[18:12] <bugrum> on juju2 what is the proper way of deploying a charm?
[18:12] <c0s> sec
[18:12] <rick_h_> bugrum: ? juju deploy mysql? what charm are you looking for?
[18:12] <bugrum> I got juju up and running (thanks to LiftedKilt) and am now trying to deploying openstack on top of it
[18:13] <bugrum> juju2 says it can't resolve URL "cs:openstack"
[18:13] <c0s> cory_fu: nothing, it's just hanging there
[18:13] <c0s> do I need to run something at the same time?
[18:14] <cory_fu> Bah.  I think the -i option doesn't work very well with 2.0
[18:14] <cory_fu> c0s: You might have to just `juju ssh spark/0` and view the log on there (sudo less /var/log/juju/unit-spark-0.log)
[18:16] <c0s> there's a bunch of juju messages (what seems to be from the deployment) cory_fu
[18:18] <kwmonroe> cory_fu: c0s, fyi, -i with juju 2.0 needs "unit-spark-0" and not "spark/0"
[18:18] <c0s> but looking under /var/log/spark I see that the process is running history server like this
[18:18] <c0s>   org.apache.spark.deploy.history.HistoryServer hdfs:///user/ubuntu/directory
[18:18] <cory_fu> ORLY?
[18:18] <cory_fu> Good to know
[18:18] <c0s> +1 kwmonroe
[18:18] <kwmonroe> c0s: what user is running that HistoryServer process?  ubuntu?
[18:19] <c0s> noticed the different name while browsing the logs manually
[18:19] <c0s> yes kwmonroe
[18:19] <kwmonroe> c0s: as ubuntu on the spark/0 unit, can you "hdfs dfs -ls -R /user/ubuntu/directory"?
[18:19] <c0s> interesting hdfs:///user/ubuntu/directory/application_1459442777673_0001.inprogress
[18:19] <kwmonroe> ahh
[18:20] <c0s> there's nothing else in this directory, so the history server is correct
[18:20] <kwmonroe> so it's writing stuff.. that's good.
[18:20] <c0s> well, sparsely
[18:20] <c0s> I have run at least 3 different jobs, they all complete now
[18:20] <c0s> that's the only trace of it
[18:20] <kwmonroe> i wonder if the spark slaves or zepp is configured to use 54.183.80.145 as the history server
[18:20] <c0s> it's not in progress for sure
[18:21] <c0s> is there any way to check the configs using juju? Or shall I just manually look at it?
[18:22] <kwmonroe> c0s: run ~/sparkpi.sh from the spark/0 unit and then check your :18080 URL.  that will tell us if spark-submit jobs are making it to the historyserver
[18:22] <c0s> so, /etc/spark/conf/ would be the correct place to check, I presume
[18:22] <c0s> ok, runnig
[18:23] <kwmonroe> yeah c0s, i dunno what (if any) history server config is possible for zepp
[18:23] <c0s> and I can see http://54.183.80.145:18080/?page=1&showIncomplete=true new job in there
[18:23] <kwmonroe> ok, so spark-submit works directly on the unit.  when you ran your z jobs earlier, were they using spark-submit through zeppelin?
[18:23] <c0s> kwmonroe: I don't know if Zepp has to do with it. These are just spark jobs
[18:24] <c0s> yeah... so, why Zepp's jobs aren't showing?
[18:24] <c0s> nits
[18:24] <kwmonroe> may need to dig into the /var/log/zeppelin/* on the spark unit
[18:24] <c0s> nuts
[18:24] <c0s> lemme check
[18:25] <cory_fu> kwmonroe: Have you run into services getting stuck on "Waiting for agent initialization to finish" and having this in the logs over and over: http://pastebin.ubuntu.com/15570047/
[18:25] <kwmonroe> yeah cory_fu
[18:25] <kwmonroe> juju upgrade-charm <foo>
[18:25] <cory_fu> Is there a bug open for this?
[18:26] <c0s> judging by this tail -f zeppelin-ubuntu-plugin-0.out
[18:26] <c0s> the spark jobs are running as needed
[18:26] <c0s> I see the output in the file
[18:26] <c0s> like ------ Create new SparkContext yarn-client -------
[18:26] <c0s> and the progress indicators
[18:27] <c0s> crazy
[18:29] <kwmonroe> cory_fu: you sure it's a bug?  i hit that when i deploy a local charm, hackity hack hack, rebuild and try to deploy the new rev of that local charm... at which point there's a mismatch between my $JR charm and unit-0's charm, which i was ok with.  upgrade-charm to refresh unit-0 seemed like the right thing to require in that case.
[18:29] <c0s> and I am quite positive that last night the jobs were displayed in the history server UI
[18:30] <cory_fu> kwmonroe: This was on a fresh deploy
[18:30] <cory_fu> Though I did keep my bootstrap node around and just created a new model
[18:31] <kwmonroe> c0s: what spark execution mode are you in?  or rather, what is the MASTER envvar set to?
[18:31] <cory_fu> Still, I think Juju should definitely handle that better.
[18:31] <c0s> yarn-client
[18:32] <c0s> according to the Zep's configuration
[18:35] <kwmonroe> c0s: is it possible spark's job history won't work in yarn-client mode?  juju expose resourcemanager and go take a look at that UI to see if your jobs are listed there.
[18:35] <c0s> well, the sparkpi stuff runs in the yarn-client mode too
[18:36] <c0s> and it is logged
[18:36] <c0s> kwmonroe: ^^
[18:37] <kwmonroe> that, c0s, is a mighty fine point.
[18:39] <c0s> :)
[18:40] <c0s> anyhow, perhaps it isn't a huge issue, but it looks like a minor bug to me. However, I am still using the bigtop-dev bundle, so it might be expected to be a bit flaky I guess
[18:41] <cory_fu> Well, we're planning to release those bundles so it shouldn't really be flaky
[18:43] <c0s> agree cory_fu
[18:43] <c0s> what I was saying is that this is still in development, so some regressions are possible
[19:19] <c0s> kwmonroe: is it possible to add say kafka to an already deployed bundle?
[19:23] <kwmonroe> you bet c0s!  "juju deploy cs:~bigdata-dev/trusty/apache-kafka kafka"
[19:23] <kwmonroe> but then it'll wait for zookeepers, so you'll need to "juju deploy cs:~bigdata-dev/trusty/apache-zookeeper zk" and "juju add-relation kafka zk"
[19:24] <c0s> ok, let me try that ;)
[19:24] <c0s> thanks!
[19:24] <c0s> what's the worst that could happen
[19:24] <kwmonroe> there is no try c0s.  just do it.
[19:24] <c0s> ok, kwmonroe - I won't try ;)
[19:24] <kwmonroe> :)
[19:34] <c0s> kwmonroe: shall I scale up ZK to 3 nodes as well? Or will 1 do?
[19:39] <admcleod1> hello
[19:40] <admcleod1> c0s: i dunno if he replied, but 1 zk is fine
[19:41] <c0s> it seems to be holding for the POC purposes, yes
[19:41] <c0s> thanks!
[20:08] <c0s> kaffffffka
[20:08] <lazyPower> shouted to the tune of "KHAAAAAAN"
[20:10] <c0s> yes lazyPower
[20:10] <c0s> along with the sound of my sobbing ;(
[20:20] <kwmonroe> c0s: sobbing because everything went great?  sobs and frownie faces of joy?
[20:34] <arosales> kwmonroe: is the latest code for the hadoop charms still at lp?
[20:34] <arosales> for example hdfs master I get directed to
[20:34] <arosales> https://code.launchpad.net/~bigdata-charmers/charms/trusty/apache-hadoop-hdfs-master/trunk
[20:34] <arosales> from https://jujucharms.com/apache-hadoop-hdfs-master/trusty/9 which is the promulgated version
[20:38] <arosales> kwmonroe: that doesn't look like a reactive charm though . . . .
[20:38] <arosales> cory_fu: ^
[20:38] <cory_fu> arosales: No, all of the new charms are on github
[20:39] <cory_fu> Also, hdfs-master is the oldness.  apache-hadoop-namenode is the newness
[20:39] <deanman> anyone using brew charm-tools 1.11.1 ?
[20:39] <arosales> cory_fu: ya I also thought all the new charms were in git
[20:40] <arosales> cory_fu: but this is the current recommended charm
[20:40] <arosales> cory_fu: is it just the case that we haven't got the git charms to be the recommended ones?
[20:40] <cory_fu> arosales: I'd like to fix that right now.  I'm going to start promulgating charms and bundles
[20:40] <cory_fu> After this meeting
[20:40] <arosales> cory_fu: ya super confusing atm
[20:41] <arosales> cory_fu: also as a result of that, if any of the charms at https://jujucharms.com/big-data change please file a bug on https://github.com/CanonicalLtd/jujucharms.com/issues
[20:47] <arosales> deanman: ah brew still probably at the 1.x versions
[20:47] <arosales> deanman: hitting any issues?
[20:47] <deanman> arosales, yeah when trying to $charm build
[20:51] <deanman> arosales, and this would be the tailed output when using -l DEBUG http://pastebin.ubuntu.com/15571120/
[20:53]  * arosales looking at pastebin
[20:53] <mbruzek> Hey deanman the brew release of charm-tools is a bit behind the other ones. I would recommend setting up a venv and pip installing charm-tools until the new version is available.
[20:55] <arosales> deanman: I saw a similar error, but on ubuntu I needed to install the venv package
[20:55] <deanman> mbruzek, No problem, i'm building from within the ubuntu VM (which runs 1.11.2).
[20:55] <arosales> deanman: you're running this on osx?
[20:55] <deanman> arosales, Yes
[20:55] <deanman> on ubuntu with 1.11.2 it works just fine
[20:55] <arosales> ok, deanman it looks like the dep is broken in brew
[20:55] <arosales> aisrael: are you still looking after charm-tools in brew?
[20:56] <aisrael> arosales: Marco's been doing it
[20:56] <aisrael> I think
[20:56] <arosales> aisrael: if so could you take an action to update to the latest, possibly even 2.0 that marcoceppi is going to put into xenial?
[20:56] <arosales> marcoceppi's brewing for osx :-)  I bet he has been keeping that a secret :-)
[20:56] <lazyPower> deanman - do you have git installed?
[20:56] <marcoceppi> arosales: it's already on my list, it'll be out tomorrow
[20:56] <arosales> aisrael: could you follow up with marcoceppi though?
[20:56] <lazyPower> i assume yeah, but there's only a few paths that end in that error
[20:57] <arosales> marcoceppi: ok
[20:57] <lazyPower> and it's typically due to not having unzip, git, or one of the other smallish dependencies that we're shelling out to invoke
[20:57] <arosales> deanman: are you ok to leverage ubuntu in the vm for now?
[20:58] <lazyPower> mbruzek - oh i figured out why it's tanked, we never got the restructuring merge done
[20:58] <deanman> arosales, Yeah it's fine, just asking if you wanted to raise an issue on github
[20:59] <lazyPower> mbruzek https://github.com/juju-solutions/charms.docker/pull/14  -- when you have time <3 it's kind of big but most of it is documentation updates for the charms.docker api
[20:59] <arosales> deanman: thanks. Sounds like marcoceppi is going to update brew soon
[20:59] <lazyPower> deanman - that particular issue has been patched. https://github.com/juju/charm-tools/issues/159
[20:59] <lazyPower> deanman - is there a remote on your git repository? or is it a local git repo only?
[21:00] <deanman> lazyPower, Yeah, I do have a single remote set up
[21:01] <lazyPower> deanman do you have bzr installed?
[21:01] <lazyPower> if any of those layers are in a launchpad repo, it's gonna die with that error as it tries to bzr branch the remote
[21:02] <deanman> lazyPower, a private remote repo, not a launchpad repo
[21:02] <lazyPower> deanman - can i see a pastebin of your layer.yaml?
[21:04] <deanman> lazyPower, if only pastebinit on mac didn't complain about "Bad API request...", gimme a sec
[21:04] <deanman> lazyPower, Single liner -> includes: ['layer:docker']
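(For future readers: deanman's layer.yaml is the minimal case. A slightly fuller layer.yaml looks along these lines; only `layer:docker` comes from this conversation, the other entries are illustrative.)

```yaml
# layer.yaml sketch; only layer:docker is taken from the conversation above,
# the rest is hypothetical.
includes:
  - 'layer:basic'
  - 'layer:docker'
  - 'interface:http'   # hypothetical interface dependency
```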
[21:04] <lazyPower> and that's not in lp
[21:04]  * lazyPower is puzzled now
[21:05] <deanman> lazyPower, fails after copying tox.ini
[21:08] <lazyPower> deanman - do you have virtualenv installed?
[21:08] <lazyPower> and/or pip
[21:09] <deanman> after installing virtualenv with pip the build goes through
[21:09] <lazyPower> \o/ paydirt
[21:09] <lazyPower> https://github.com/juju/charm-tools/blob/v1.11.1/charmtools/build/tactics.py#L622 - was the clue
[21:10] <lazyPower> that's poopy though, i'm pretty sure this error messaging was patched in the later revs
[21:10] <lazyPower> so it's more apparent what's happened
[21:10] <deanman> removed virtualenv and the bug reappeared, so installing virtualenv was definitely the key!!
[21:12] <lazyPower> https://github.com/juju/charm-tools/blob/v2.1.2/charmtools/build/tactics.py#L636  - doesn't look like it unless we patched it on the invocation of this in utils
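(The failure mode above can be sketched in a few lines: `charm build` shells out to external tools -- git, bzr, unzip, virtualenv -- and in 1.11.x a missing binary surfaced as a cryptic traceback rather than a clear message. The pre-flight check below is a hypothetical illustration; none of these names are actual charm-tools API.)

```python
# Hypothetical pre-flight check for the failure discussed above: a missing
# external binary should produce a clear "install X" message, not a cryptic
# traceback. Illustrative sketch only, not charm-tools code.
import shutil
import subprocess

REQUIRED_TOOLS = ["git", "virtualenv"]

def check_build_deps(tools=REQUIRED_TOOLS):
    """Return the external commands that are missing from PATH."""
    return [t for t in tools if shutil.which(t) is None]

def run_tool(cmd):
    """Run an external tool, turning a missing binary into a clear error."""
    try:
        return subprocess.run(cmd, check=True, capture_output=True)
    except FileNotFoundError:
        raise SystemExit("build dependency missing: %s (is it installed?)" % cmd[0])
```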
[21:15] <LiftedKilt> what does juju-gui-console-enabled do in the juju-gui charm?
[21:18] <lazyPower> LiftedKilt - it opens my garage door. plz dont toggle that option
[21:20] <jrwren> afaik, nothing. it may have been used in the past.
[21:23] <marcoceppi> arosales: was waiting for the packages to settle
[21:24] <LiftedKilt> lazyPower: I'm toggling it furiously now
[21:24] <LiftedKilt> jrwren: so no plans to re-enable it then?
[21:25] <lazyPower> LiftedKilt: shenanigans!
[21:25] <jrwren> LiftedKilt: none that I have heard.
[21:26] <jrwren> We probably did not delete it because it would break bundles which specify it, but we could mark it obsolete and document that it is unused and is a no-op
[21:28] <LiftedKilt> but it sounds like a cool feature haha
[21:32] <arosales> marcoceppi: ya I figured you wanted to update it with what we put into xenial
[21:33] <arosales> marcoceppi: appreciate all the work you have been putting into charm{tools} and getting it out there for folks
[21:33] <A-Kaser> re
[21:48] <A-Kaser> my charm creates a new service but no units
[21:48] <A-Kaser> juju debug-log shows me nothing; is there another way to see logs?
[21:49] <lazyPower> A-Kaser - did you deploy it via dragging on the gui or something? there's a scenario where you can stage a service without any allocated units, so you can do things like upload resources, tweak config
[21:49] <A-Kaser> juju deploy local:trusty/spark-notebook
[21:49] <lazyPower> *then* you can add units to kind of "flush" the whole operation into the environment, but if that's not what you did, i'm uncertain why you got a service declaration without any units
[21:49] <admcleod1> c0s: with the bigtop puppet deploy, how do you define the role of the machine you're deploying to?
[21:49] <A-Kaser> it's my own charm, maybe I have missed something
[21:50] <c0s> admcleod1: there are two ways
[21:50] <c0s> you can just specify FQDNs for master nodes (NN, spark, etc.) and then the rest of the nodes will be worker nodes.
[21:51] <c0s> or you can do more complex settings of the roles via hiera
[21:54] <c0s> admcleod1: you can check https://github.com/apache/bigtop/blob/master/bigtop-deploy/puppet/hieradata/bigtop/cluster.yaml for more info
[21:54] <c0s> search for roles
[21:55] <c0s> the complexity, mostly, comes from the fact that you need to maintain different hiera configs on different nodes
[21:55] <c0s> whereas the simpler method would mandate uniform configs
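(The two approaches c0s describes boil down to a couple of hiera keys. A rough sketch follows; key names are approximated from the Bigtop puppet tree linked above, and the hostname is hypothetical.)

```yaml
# Simple method: name the head/master node; all other nodes become workers.
bigtop::hadoop_head_node: "master-01.example.com"

# Complex method: pick per-node roles explicitly via hiera (roles differ per node).
bigtop::roles_enabled: true
bigtop::roles:
  - namenode
  - resourcemanager
```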
[22:00] <admcleod1> c0s: right, thanks
[22:00] <admcleod1> c0s: ah yeah ok pretty straight forward
[22:01] <c0s> yeah, we are simple folks ... over there ;)
[22:01] <c0s> in our corners of them woods
[22:02] <c0s> bloody hell, why doesn't damn Kafka have a thin client?
[22:02] <c0s> what's the point of having all this crap lying around if every Joe and Frank needs to write tons of custom code to handle all that parallelism ;(
[22:03] <c0s> damn hipsters
[22:03] <admcleod1> hah
[22:31] <c0s> kwmonroe: cory_fu, admcleod1, looks like apache-kafka-14 is pulling in
[22:31] <c0s>   "url": "https://s3.amazonaws.com/jujubigdata/apache/x86_64/kafka_2.10-0.8.2.1-89ede9a.tgz"
[22:31] <c0s> which has the bug KAFKA-1711
[22:31] <c0s> not sure if this is because of that; I can't even write anything into a topic
[22:36] <c0s> perhaps not, as I can write data to a topic using juju actions
[22:36] <c0s> weeeird
[23:19] <arosales> cory_fu: how would you suggest to pull in admcleod1 branches into github/juju-solutions for the hadoop HA bits?
[23:20] <kwmonroe> why would you want to do that arosales?
[23:20] <kwmonroe> it's not ready to merge into the juju-solutions namespace
[23:22] <kwmonroe> or, admcleod1 can push his HA-enabled charms to his namespace, and we can bundle that into a bundle-ha.yaml in the apache-core-batch-processing bundle (which does live in juju-solutions)
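(A hypothetical bundle-ha.yaml along the lines kwmonroe suggests -- HA charms pulled from a personal namespace while the bundle itself lives in juju-solutions. Charm URLs, service names, and unit counts here are all illustrative, not from the conversation.)

```yaml
services:
  namenode:
    charm: cs:~admcleod/trusty/apache-hadoop-namenode   # hypothetical HA build
    num_units: 2
  resourcemanager:
    charm: cs:~admcleod/trusty/apache-hadoop-resourcemanager
    num_units: 1
relations:
  - [namenode, resourcemanager]
```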
[23:23] <arosales> kwmonroe: I guess I just need to understand how the juju-solutions name space is used
[23:25] <arosales> kwmonroe: for example, if I wanted to submit a merge request against the new HA bits, I have the following questions:
[23:25] <arosales> 1. how do I find them
[23:25] <arosales> 2. where do I submit issues
[23:25] <arosales> 3. where do I submit branches against
[23:30] <cory_fu> arosales: github.com/juju-solutions/apache-hadoop-namenode is the upstream.  To work on features, we fork that to our personal namespace, work, and then when it's ready submit a PR against the upstream.
[23:31] <cory_fu> Once HA is ready, there will be a PR and once that's merged, it will be in juju-solutions
[23:31] <arosales> cory_fu: seems that doesn't give a lot of opportunity for folks to collaborate on the branch before pushing to a final version
[23:32] <arosales> but completely agree that during early dev personal name space works well
[23:32]  * arosales just wondering if there should be a dev branch off of http://github.com/juju-solutions/apache-hadoop-namenode
[23:32] <cory_fu> arosales: You can fork Andrew's repo and submit PRs back to him
[23:32] <arosales> that has early bits folks want others to test
[23:32] <c0s> I am struggling to come up with a good data ingest example for Kafka ;(
[23:33] <arosales> cory_fu: and that's one way, but now I have to follow your namespace, kwmonroe, kjackal, admcleod1, and anyone else for the dev version
[23:33] <cory_fu> arosales: We can also have branches on the juju-solutions repo, but the "typical" GitHub workflow is to fork and submit PRs from your personal namespace
[23:33] <arosales> cory_fu: +1 on personal name space to hack.
[23:33] <arosales> cory_fu: my noob question is where do I go to collaborate on the dev versions
[23:34] <arosales> cory_fu: if the answer is ping here and find out whose personal namespace has the dev branch I can do that
[23:34] <arosales> cory_fu: just wanted to be sure that was the correct process before I started filing issues against kjackal's and admcleod1's namespaces
[23:35] <cory_fu> So, with this workflow, there isn't really a single dev version.  There are multiple feature branches.  We had that before, but they lived in a single namespace and eventually got merged into bigdata-dev just prior to release.  So bigdata-dev was more like staging
[23:35] <cory_fu> arosales: If we think it would be easier, we can do branches within the upstream repo.  I'm not against that
[23:36] <arosales> cory_fu: you guys are the experts here :-) I just wanted to play nice with your workflow
[23:36] <cory_fu> We'll discuss it during sync tomorrow
[23:36] <arosales> cory_fu: whatever is working for you guys I can follow, I just simply didn't know
[23:36] <arosales> cory_fu: sounds good
[23:37] <cory_fu> arosales: You can also see all the forks pretty easily on the GH website
[23:37] <cory_fu> Just click the number next to "Fork"
[23:38] <LiftedKilt> I can deploy xenial and trusty machines on maas with juju, but maas is telling me a wily machine is deployed while my juju controller still shows pending; where would one go to troubleshoot that?
[23:39] <arosales> cory_fu: gotcha, and that works when there are only a few folks, so you can guess whose branch is the dev version to submit issues against; but when more folks get involved (i.e. start branching) that guess may be a little hard. Perhaps look at that problem when it occurs.
[23:40] <cory_fu> arosales: Or you could just submit all issues against the upstream
[23:41] <arosales> cory_fu: true, as long as I have found the right dev branch to find the issues, which is the crux of my question
[23:42] <marcoceppi> busy day, time to wind down
[23:42] <arosales> LiftedKilt: I would start with at least "juju debug-log" and see if that gives you any useful pointers
[23:42] <marcoceppi> juju kill-controller
[23:42] <marcoceppi> Waiting on 5 models, 20 machines, 22 services
[23:43] <marcoceppi> $_$
[23:43] <arosales> LiftedKilt: and I am guessing you are using maas 1.9 and juju 2.0?
[23:43] <LiftedKilt> arosales: correct
[23:44] <arosales> I *think* the maas team and juju team are working on compatibility between those two versions
[23:44] <LiftedKilt> arosales: ERROR verification failed: no macaroons
[23:45] <LiftedKilt> arosales: I know they've been working on maas 2 and juju 2 due to the new maas api, but I didn't know they had compatibility problems with 1.9?
[23:45] <arosales> any juju-devs around that can confirm that?
[23:45] <arosales> LiftedKilt: I may very well be wrong
[23:45]  * arosales is going to pop into juju-dev and confirm
[23:46] <LiftedKilt> yall got any more of them macaroons?
[23:46] <LiftedKilt> just need to borrow one real quick. I promise not to eat it.
[23:47] <arosales> macaroons way tastier than cookies
[23:49] <arosales> c0s: were you looking for a stream set?
[23:50] <c0s> not exactly
[23:50] <rick_h_> arosales: LiftedKilt we're working on juju 2 and maas 2.0
[23:50] <rick_h_> arosales: LiftedKilt 1.9 should be peachy
[23:50] <arosales> LiftedKilt: juju-core folks say maas 1.9 and juju 2.0 should work together
[23:50] <LiftedKilt> rick_h_ arosales: any idea why juju debug-log is complaining about no macaroons?
[23:51] <c0s> here's the main dilemma: Kafka is poorly suited to work with a variety of data sources out of the box. It is a message bus, after all, so you need all sorts of sinks, consumers, etc for each particular case
[23:51] <rick_h_> LiftedKilt: hmm, that sounds like a bug to be honest.
[23:51] <c0s> hence, developing a generic enough data flow (i.e. without writing a bunch of special code) is a challenge
[23:52] <c0s> I was thinking maybe we should experiment with something like SpringXD to provide a good pipeline-building tool, which is integrated with Kafka pretty well, it seems
[23:52] <arosales> LiftedKilt: can you post what you are seeing at http://paste.ubuntu.com/
[23:52] <LiftedKilt> I noticed all my wily charms were failing, so I tried to add-machine a couple wily machines to see if I could independently get them to work
[23:52] <c0s> or Gobblin to build ETL-like flows
[23:53] <rick_h_> LiftedKilt: they're failing to deploy via deploying wily based charms?
[23:53] <rick_h_> LiftedKilt: can you speak to what you're deploying and how?
[23:53] <LiftedKilt> arosales: https://paste.ubuntu.com/15572264/
[23:53] <c0s> or if we are to focus on collection- or aggregating-like use cases, then Flume+Kafka
[23:54] <c0s> but bare Kafka seems to be too bare to start using without any prep.
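(The "custom code" c0s is lamenting is usually a small per-source producer. A minimal sketch follows; kafka-python is an assumption, since the conversation names no client library, and the topic and broker address are placeholders. The serializer is kept separate so it can be exercised without a running broker.)

```python
# Hypothetical per-source "glue" producer along the lines c0s describes.
# kafka-python, the topic name, and the broker address are all assumptions.
import json

def encode_record(record):
    """Serialize a dict record into the bytes Kafka topics carry."""
    return json.dumps(record, sort_keys=True).encode("utf-8")

def publish(records, topic="ingest", bootstrap="localhost:9092"):
    """Push an iterable of dict records into a Kafka topic."""
    from kafka import KafkaProducer  # deferred: needs kafka-python + a broker
    producer = KafkaProducer(bootstrap_servers=bootstrap)
    for record in records:
        producer.send(topic, encode_record(record))
    producer.flush()

# usage (requires a reachable broker):
#   publish([{"host": "web-1", "metric": "load", "value": 0.42}])
```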
[23:54] <LiftedKilt> rick_h_: sure - attempting to deploy the openstack-lxd bundle
[23:54] <arosales> c0s: perhaps prep the data before sending over to kafka
[23:54] <rick_h_> LiftedKilt: do me a favor, try juju status --format=yaml
[23:54] <c0s> arosales: you still need to connect to data
[23:55] <LiftedKilt> rick_h_: when I noticed it was hanging on provisioning the machines, I created a new model and tried adding a machine there manually
[23:55] <LiftedKilt> https://paste.ubuntu.com/15572272/
[23:55] <c0s> having a source isn't enough. You need to have a source-specific sink
[23:55] <c0s> or producer
[23:55] <arosales> c0s: ya, the data format for a general case . . .
[23:56] <rick_h_> LiftedKilt: gotcha, so can you juju ssh 0 ?
[23:56]  * arosales also looking up SpringXD c0s
[23:56] <rick_h_> LiftedKilt: and check out if anything is fishy in the /var/log/juju/xxxxx log files
[23:56] <c0s> BTW, out of the box, setting up kafka on top of the spark-zeppelin bundle doesn't let me use their producer script in the shell session. Not sure if it's a bug or not
[23:56] <LiftedKilt> rick_h_: juju ssh 0 hangs
[23:57] <rick_h_> LiftedKilt: hmm, can you manually ssh to the machine IP?
[23:58] <LiftedKilt> rick_h_: It would let me it I had the key set up
[23:58] <arosales> c0s: does seem like an issue
[23:58] <rick_h_> LiftedKilt: the yaml status says the machine is deployed?
[23:58] <rick_h_> LiftedKilt: oh heh
[23:59] <arosales> c0s: got it on the board to investigate
[23:59] <c0s> arosales: it is either a difference in the environment between what I get via 'juju ssh kafka/0' and 'juju action do kafka/0...'
[23:59] <c0s> or else
[23:59] <c0s> ok, will add a ticket now
[23:59] <LiftedKilt> rick_h_: yeah it's bizarre - it sees that it's up, yet at the same time doesn't
[23:59] <c0s> will dig more into this later... getting to the end of my rope today :/