[00:18] is there a list somewhere of the allowed types in actions.yaml? [00:18] i thought there was but i'm having trouble finding it now [00:20] ah ok i see. it's the json schema types. Nevermind. http://json-schema.org/latest/json-schema-core.html <-- for future people [00:23] man zeppelin doesn't like zenial or spark stand-alone or both [00:23] debug continues [00:23] but dinner calls [00:23] s/zenial/xenial :-) [00:34] lazyPower, 2 new interfaces today thanks to icey's help: https://github.com/cholcombe973/juju-interface-ceph https://github.com/cholcombe973/juju-interface-backup === natefinch-afk is now known as natefinch === marlinc_ is now known as marlinc === xnox_ is now known as xnox === stokachu_ is now known as stokachu [07:36] marcoceppi: so I understand what you are saying, I just completely disagree with that approach. If you "charm pull-source" for an official, curated charm you should absolutely get the official and curated layer. Not one from someone's personal branch who may have decided to add "rm -rf" there just for a laugh [07:37] marcoceppi: defaulting to the more secure approach is always good, especially if it's so trivial to do [08:07] gnuoy, before I raise any more reviews [08:07] can you sanity check one of https://review.openstack.org/#/q/topic:network-spaces [08:07] sure [08:07] the delta is pretty small [08:08] jamespage, duff grammar in README.md [08:08] gnuoy, hah [08:09] jamespage, when you say "deployer" in the README do you mean that syntax is valid for both juju-deployer and the juju native deployer, and if not maybe it's worth being explicit? [08:10] gnuoy, I was intentionally vague [08:10] the juju v4 bundle format in juju supports it [08:10] haha ok :) [08:14] jamespage, other than that cinder lgtm [08:14] gnuoy, can you comment on the README and I'll update all of them [08:18] jamespage, done [08:22] hey people, is there a way to tell amulet to "attach" to an existing set of charms instead of deploying them from scratch? I am trying to save some time while writing tests. [08:25] kjackal, not that I know of, but let me know if you find a way! [08:26] ok gnuoy === SaMnCo-laptop is now known as SaMnCo-desktop [08:44] Hi, do you keep logs of this channel somewhere? [08:46] deanman: http://irclogs.ubuntu.com/2016/03/31/%23juju.txt [08:46] magicaltrout, thanks [08:58] gnuoy, running the same test for a second time will not redeploy the charms so it does more or less an attach [08:59] kjackal, well, you still lose time while it looks at the deployment, I want a way for it to jump straight to the functional tests themselves [09:00] yes, true [09:09] hello my dear folks [09:10] I got a trouble situation here... [09:54] would anybody have time to have a look at my collectd subordinate review? https://bugs.launchpad.net/charms/+bug/1538573 [09:55] Bug #1538573: New collectd subordinate charm [09:58] jacekn: you had your turn, back of the queue now! ;) [10:00] magicaltrout: the queue is a lie! If you don't ask here you don't bet review ;) [10:00] s/bet/get/ [10:01] true [10:32] magicaltrout, apart from the irc log retention at the link you gave me, is there any other alternative you can suggest for following the discussion here offline? [10:33] yeah, spin up the juju big data quickstart, ingest the log files into hadoop and run a bunch of Map Reduce textual analysis over it..... [12:39] Did the Ubuntu keyserver have a problem in the last hour? I was seeing GPG key retrieval fail during bundle deployment; then suddenly it started working again...
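[Editor's note: for the actions.yaml question above — parameter types are the JSON Schema primitive types (string, number, integer, boolean, object, array, null). A minimal sketch; the "snapshot" action and its params here are hypothetical, not from the log:]

```yaml
# actions.yaml -- hypothetical "snapshot" action; param types are JSON Schema types
snapshot:
  description: Take a snapshot of the application's data.
  params:
    filename:
      type: string
      description: Name of the file to write the snapshot to.
    compress:
      type: boolean
      default: false
  required: [filename]
```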
[13:10] jacekn: there is no official curated layer. YOUR layer is the official one, we don't store layers, and you're the trusted upstream because we trusted you to put the charm in the store [13:13] marcoceppi: trust level can change easily, social engineers just wait for holes like this (gain trust, get something into the charmstore and then exploit it by modifying their layer) [13:32] jacekn: but modifying the layer won't affect the charm in the store [13:35] marcoceppi: no but it will affect everybody who wants to develop the charm or those who want to compose right before deployment for new hotness [13:36] jacekn: but that's on /them/ to validate the source they're building [13:36] marcoceppi: I really can't see why keeping layers in non-personal branches is a bad idea [13:36] jacekn: we don't own any branches anymore [13:36] jacekn: launchpad ~charmers as we know it is dead [13:36] marcoceppi: sorry but relying on people fails sooner or later [13:36] marcoceppi: oh really? So all new charms in the charmstore are under personal branches? [13:36] jacekn: there is no "personal branch" [13:37] there's no branch [13:37] it's just a blob upload [13:37] promulgation is a pointer to the namespace it was uploaded to [13:37] and we gate allowance of people updating promulgated charms [13:37] we do all the QA we do now, but the person who created the charm "owns" the space [13:37] marcoceppi: so what happens if I delete my LP account one day? [13:38] jacekn: who cares? it's gone. The charms will still be in the store, it's fine [13:38] marcoceppi: well those who might want to submit patches will care, if there is no branch there is no way to raise an MP [13:39] jacekn: well, we have a way to change ownership of a promulgated source, and people can pull the charm - as it is compiled - and uncompile it since we have a manifest that tracks the files and what layers they came from [13:39] it can be rebuilt somewhere else [13:39] or just updated in place [13:39] the charm store is the single source of truth regardless of where upstream development is [13:40] what happens if an upstream's code goes away one day [13:40] the packages in the archive won't suddenly break [13:40] marcoceppi: if I wipe my LP account there will be no layer to pull [13:40] jacekn: I can reconstruct your layer, from the built charm [13:40] Hi guys do we have the key/value "configuration" (in layer.yaml) documented anywhere? Like how do I get those values and what would be a good case to use them? Asking for a friend. [13:41] mbruzek: not quite documented much anywhere - yet, but we do have a few examples [13:41] marcoceppi: wow all this seems super weird and is IMO asking for trouble. In any case - the error that was noticed is NOT picked up by the latest stable charm-tools [13:41] marcoceppi: so I can't even test the fix [13:41] marcoceppi: is it mostly the big data guys? [13:42] marcoceppi: I would like to send my friend an example. [13:42] marcoceppi: and of course I'm not too happy about fixing something that's clearly not a requirement when I use stable tools [13:43] mbruzek: https://github.com/juju-solutions/layer-basic/blob/master/lib/charms/layer/__init__.py [13:43] from charms.layer import options [13:43] options('layer-you-want-options-from') [13:44] mbruzek: how to define options: https://github.com/juju-solutions/layer-basic/blob/master/layer.yaml [13:44] mbruzek: how to set them: https://github.com/marcoceppi/layer-gitlab/blob/master/layer.yaml [13:44] jacekn: if you don't want to put a repo key. Don't.
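[Editor's note: a minimal sketch of the layer-options mechanism marcoceppi points at above, based on the linked layer-basic and layer-gitlab examples. The layer name "mylayer" and the "ports" option are hypothetical:]

```yaml
# layer.yaml of the layer that DEFINES an option (hypothetical "mylayer")
defines:
  ports:
    type: array
    default: []
    description: Ports the consuming charm wants opened.
```

```yaml
# layer.yaml of a charm CONSUMING mylayer, setting a value for that option
includes: ['layer:basic', 'layer:mylayer']
options:
  mylayer:
    ports: [80, 443]
```

```python
# reading the options at runtime, per the API quoted in the log
from charms.layer import options

opts = options('mylayer')      # dict of option values for that layer
ports = opts.get('ports', [])  # -> [80, 443] in this sketch
```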
[13:45] marcoceppi: thank you [13:45] marcoceppi: well that's apparently what's blocking my charmstore submission [13:45] it's not, it was a suggestion [13:46] marcoceppi: ok so it's very unclear to me what's going on with my submission. tvansteenburgh reviewed it but the only feedback he provided was a repo key. Does it mean it's approved other than that? [13:46] marcoceppi: I suspect you can see that I'm getting increasingly upset by the review process ;) [13:47] jacekn: sure, we're trying to build an extensive and fruitful ecosystem, but if you don't want to contribute to that by simply pointing to your upstream layer repo, people just won't know where to contribute to the charm and we'll just have a bunch of forks of the compiled artifact [13:48] we'll simply do it for you, prior to promulgation [13:48] the review process is twofold. One to make sure charms are sane and two - to help educate charmers as the ecosystem evolves [13:50] marcoceppi: sorry but saying that I don't want to contribute is not fair. I've been trying to get this charm promulgated for the past 2 months and I chased it more times than I can count [13:51] jacekn: yes, and the review queue is old, outdated and the team reviewing is small and overburdened [13:51] jacekn: these aren't unique to you, it's something everyone has had to suffer through and no one is happy about [13:51] jacekn: we're aggressively building a new review queue and process as the entire way charms are uploaded and submitted to the store is changing [13:51] jacekn: the goal of this is to streamline the process so that review times go from months and weeks to days and hours [13:52] jacekn: all while juju 2.0 is being built and a shift of process is happening [13:52] jacekn: we appreciate your patience and feel the same strain [13:53] if you have an actual problem with the idea of a repo key, I suggest mailing the list so it can be hashed out, if you're just tired of waiting and making a stand we'll just add the key for you [13:53] mbruzek - we have examples of layer options in the docker layer as well [13:53] marcoceppi: that layer BTW is still being developed but because CS submissions are so slow I just froze it to get it through the review. Once it's in the charmstore there will be updates coming. So every week of delay is a bigger delta to review later [13:53] jacekn: we understand, again, we have the same problems and no one is disagreeing [13:54] marcoceppi: I don't have a real problem with that other than it's something I had no chance of knowing about because stable tools don't care and it's causing yet another delay [13:54] in two weeks, if we can get the dev time, we'll be launching a beta of the new review queue which will allow you to iterate in the store outside of the review process [13:54] marcoceppi: and I can't test it using stable tools so I can't really fix it [13:55] jacekn: our stable tools are in this same flux, we have a 2.1.2 of charm-tools in the devel ppa and pypi - it's considered stable but we're waiting for them to land in xenial before considering them released [13:55] welcome to the most awkward time of the ecosystem [13:55] if you don't feel comfortable adding it, we'll take care of it [13:56] I have a setup of a single VM running locally and using the manual provider to bootstrap it and it works fine when deploying inside lxc, but as soon as i move inside a proxied network it gets stuck endlessly trying to set up the agent on the container. Any hints/clues what could be wrong?
[13:56] (environments.yaml has proper proxy configuration) [13:57] marcoceppi: cool, thanks. FTR feel free to point at my branch, I don't mind, I just can't comfortably do it myself because of testing [13:58] jacekn: understood, we have charm-tools in pypi and ppa:juju/devel if you want to test in a lxd machine or virtualenv in the future [13:59] ack [13:59] jacekn: we hope to have it in xenial and /stable in the next two weeks [13:59] marcoceppi: nice! [14:00] jacekn: where are you doing your layer dev? [14:01] marcoceppi: in an LTS VM (currently trusty) [14:01] jacekn: ah, so not pushed anywhere? [14:02] deanman: what version of Juju? [14:02] marcoceppi: ah sorry that's what you meant. It is in LP https://code.launchpad.net/~jacekn/canonical-is-charms/collectd-composer [14:02] jacekn: <3 thanks [14:03] jacekn: charm LGTM, promulgating [14:03] marcoceppi, 1.25.3 [14:03] marcoceppi: thanks! It's 1.25.3-trusty-amd64 if you want to know (from the juju stable PPA). I think you can see the trend with "stable" here ;) [14:03] jacekn: hum, so it should work, but you said manual provider so I just realized I don't think cloud init gets run on the manual provider, which means the proxy stuff won't be put in place [14:03] deanman: ^^ [14:04] deanman: can you `grep -ri "proxy" /etc/apt/apt.conf.d` on the manual machine? do you get any hits? [14:06] marcoceppi, has several entries [14:06] marcoceppi: sorry I don't get it. What's the problem with the proxy? [14:06] jacekn deanman: juju passes proxy information with cloud-init which I don't think gets run in a manual provider bootstrap, but I could be wrong [14:07] stub: have you used your juju-wait plugin with juju2 at all? [14:07] marcoceppi: that part I understand but how does it relate to my charm? [14:07] marcoceppi, [14:07] jacekn: wrong ping, sorry, meant to ping deanman [14:07] marcoceppi, "/etc/apt/apt.conf.d/42-juju-proxy-settings:Acquire::http::Proxy "http://......." [14:08] jacekn: your charm is fine [14:08] marcoceppi: hahaha OK! [14:08] deanman: hum, so that's good. is it failing to bootstrap or failing to set the LXC up? [14:09] deanman, it's failing both to deploy on lxc and on a bare machine with --to 0. "Waiting agent to finish". [14:10] deanman: can you grab the logs? [14:10] marcoceppi, machine? unit ? [14:10] deanman: all? [14:11] mbruzek - have a sec to review a hotfix patch? [14:11] yes [14:11] mbruzek - we're broken in master :( - https://github.com/juju-solutions/layer-docker/pull/30/files [14:13] I should probably get an amulet test written to stress that compose functionality just moved into the wheelhouse... mbruzek - do you want that before you do the review? I can bust something out in the next hour [14:14] lazyPower: I didn't see tests, so I am building and will test it manually here [14:14] ack === cos1 is now known as c0s [14:30] marcoceppi, http://paste.ubuntu.com/15568095/ [14:31] marcoceppi, ls -lh /var/cache/lxc/cloud-trusty/ is empty even several minutes after deploying, not downloading the image? [14:31] deanman: it looks like it's failing to hit itself after the proxy [14:32] that's the EOF [14:32] marcoceppi, could be a no_proxy env issue then ? [14:32] deanman: add 127.0.0.1 and 10.0.2.15 and 192.168.11.11 to no_proxy [14:53] lazyPower: +1 merged the docker layer [14:53] ta mbruzek , thanks for the priority review [14:54] lazyPower: thanks for sending me quality code [14:55] mbruzek, first link on README at layer-docker points to an invalid URL.
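[Editor's note: on the proxy thread above — a quick sketch of checking and working around missing proxy exclusions on a manually provisioned machine. The addresses are the ones marcoceppi suggests; whether cloud-init writes these for you on the manual provider was left unresolved in the discussion:]

```bash
# check whether juju wrote apt proxy settings on the manual machine
grep -ri proxy /etc/apt/apt.conf.d/

# hypothetical workaround: keep local and host-only addresses off the proxy
export no_proxy="127.0.0.1,10.0.2.15,192.168.11.11"
```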
[14:55] deanman on it [14:56] oh wow it says "composer" in there [14:56] that's some old text :D [14:57] yeah, that confused me while trying to understand how to build my layer docker charm. I think i found some other references that need updating too, sec... [14:59] https://jujucharms.com/docs/1.25/authors-charm-with-docker this also points to the wrong "composing documentation" link. [15:14] https://github.com/juju-solutions/layer-docker/pull/32 [15:14] H [15:14] Hi [15:14] deanman: Please review this change based on your suggestions. [15:19] mbruzek, done [15:19] Thank you [15:21] deanman: thanks for helping us improve [15:27] A-Kaser o/ [15:32] marcoceppi, are you still around ? [15:35] have you any tips to debug an error when i run charm build [15:36] I have this error "TypeError: list indices must be integers, not str" but no more information about which file [15:37] A-Kaser: please put the `charm build` output in a pastebin [15:37] A-Kaser: also -l debug might help [15:38] deanman: what's up? [15:38] http://pastebin.com/c3LrVRi7 [15:38] marcoceppi, the no_proxy didn't seem to work out. http://paste.ubuntu.com/15568500/ [15:40] A-Kaser: can you paste your layer.yaml too? [15:40] A-Kaser: try upgrading to 1.11.2 [15:42] A-Kaser: actually, paste your metadata.yaml [15:42] lazyPower, the juju deploy local:/ works for me, the one you are proposing is also for stable or for 2.0 ? [15:42] deanman - that goes away in juju 2.0 [15:42] ok brew charm-tools is only 1.11.1, I'm cloning the source [15:43] A-Kaser: i think you have a malformed metadata.yaml [15:43] lazyPower, ok maybe have both at README because if you land there from stable docs then it might be confusing if it's not working ? [15:44] A-Kaser: crud, that's my bad. I'll make sure 1.11.2 is in brew. A-Kaser it may be easier to just install from pip in a virtualenv on your OSX machine [15:45] A-Kaser: virtualenv ~/charm-tools; . ~/charm-tools/bin/activate; pip install charm-tools==1.11.2 [15:45] metadata : http://pastebin.com/R2EhNDVg [15:45] A-Kaser: 1.11.2 is the last 1.X build, 2.0 and beyond are a bit different and won't work as expected until they're packaged in brew [15:46] A-Kaser: ah, your metadata is also a bit off [15:46] marcoceppi: I installed charm-tools there 2 days ago [15:46] A-Kaser: that requires: packages: - screen should be in layer.yaml not metadata.yaml [15:47] A-Kaser: as an example: https://github.com/marcoceppi/layer-gitlab/blob/master/layer.yaml [15:47] A-Kaser: that requires: section is for relations, not packages [15:47] ok I've removed the packages lines thx [15:48] charm build works, thx [15:49] A-Kaser: cheers [15:49] A-Kaser: expect a new charm-tools 2.0 build in homebrew in the next week or so [15:51] I use juju2 but charm-tools 1.x , is it a problem ? [15:54] A-Kaser: no, no problem [15:55] A-Kaser: charm 2.0 will be much nicer when you go to upload charms to the store, but it's more or less the same build command, etc [16:47] thedac, some comments on https://code.launchpad.net/~thedac/charm-helpers/apparmor/+merge/290096 [16:47] jamespage: thanks. [16:54] ok taking a break - may be back later... [17:42] jamespage: pushed up fixes [17:43] Hello Team, hope you are all doing great. I was trying to use ibm-base-layer and as confirmed by matt it uses juju 2.0, so i was trying to install juju 2.0 on my machine by following the link https://jujucharms.com/docs/devel/introducing-2 and found it has steps for aws setup only and I am not able to install local lxc containers.
Can you please advise on how to set up local lxc containers in juju 2.0. [17:44] I ingested a bigger dataset from gh-archive this time and ran some analysis on it http://54.183.80.145:9090/#/notebook/2BEZN4985 [17:49] c0s: sweet [17:49] wheee ;) [17:50] Prabakaran: LXD is the new name for the local lxc provider: https://jujucharms.com/docs/devel/config-LXD [17:51] c0s: Sorry I haven't been available yet to work on the actions. RQ and meetings have taken up my whole day [17:51] Prabakaran: You might be having issues because the local directory changes for Juju 2.0. From ~/.juju to ~/.local/share/juju [17:52] mbruzek: Is that true? The ibm-base-layer requires 2.0? [17:52] Is it using resources? [17:53] not yet but very soon, the ibm-base-layer is using the "terms" feature [17:53] that is only in 2.0 [17:53] Ah, right. [17:53] cory_fu: no worries [17:53] the main thing is that I have unblocked you ;) [17:54] commits to *juju* repos over a couple of weeks back in January http://54.183.80.145:9090/#/notebook/2BEZN4985/paragraph/20160331-172642_1279078054?asIframe [17:56] c0s: For the action to submit a job, is there any value to making it able to schedule the job to run repeatedly? [17:57] I'd say yes [17:58] c0s: And would cron be the best way to do that? [17:58] Not sure if Zeppelin has a REST call for that... but with spark-submit, spark-shell, and other CLI tools we should be able to [17:58] yup [17:58] cron always is the best ;) [18:01] c0s: I see the Zeppelin REST API has endpoints for cron jobs. Does that schedule a given notebook to be re-run periodically? [18:02] I think yes, that's for the whole thing [18:03] cory_fu: kwmonroe, this http://54.183.80.145:18080/ doesn't look right [18:03] I have run a bunch of the Z jobs on that cluster yet it doesn't show any [18:10] c0s: Can you check the log on the Spark unit and see if it has anything? juju debug-log -i spark/0 [18:11] juju debug-log -i spark/0 --replay [18:12] on juju2 what is the proper way of deploying a charm? [18:12] sec [18:12] bugrum: ? juju deploy mysql? what charm are you looking for? [18:12] I got juju up and running (thanks to LiftedKilt) and am now trying to deploy openstack on top of it [18:13] juju2 says it can't resolve URL "cs:openstack" [18:13] cory_fu: nothing, it's just hanging there [18:13] do I need to run something at the same time? [18:14] Bah. I think the -i option doesn't work very well with 2.0 [18:14] c0s: You might have to just `juju ssh spark/0` and view the log on there (sudo less /var/log/juju/unit-spark-0.log) [18:16] there's a bunch of juju messages (what seems to be from the deployment) cory_fu [18:18] cory_fu: c0s, fyi, -i with juju 2.0 needs "unit-spark-0" and not "spark/0" [18:18] but looking under /var/log/spark I see that the process is running the history server like this [18:18] org.apache.spark.deploy.history.HistoryServer hdfs:///user/ubuntu/directory [18:18] ORLY? [18:18] Good to know [18:18] +1 kwmonroe [18:18] c0s: what user is running that HistoryServer process? ubuntu? [18:19] notices the different name while browsing the logs manually [18:19] yes kwmonroe [18:19] c0s: as ubuntu on the spark/0 unit, can you "hdfs dfs -ls -R /user/ubuntu/directory"? [18:19] interesting hdfs:///user/ubuntu/directory/application_1459442777673_0001.inprogress [18:19] ahh [18:20] there's nothing else in this directory, so the history server is correct [18:20] so it's writing stuff.. that's good.
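[Editor's note: for reference on the history-server debugging above — the Spark history server reads completed-application event logs from a configured directory. A sketch of the relevant spark-defaults.conf properties, assuming the hdfs:///user/ubuntu/directory path seen in the log; the charm may well manage these values itself:]

```
# /etc/spark/conf/spark-defaults.conf (sketch)
spark.eventLog.enabled           true
spark.eventLog.dir               hdfs:///user/ubuntu/directory
spark.history.fs.logDirectory    hdfs:///user/ubuntu/directory
```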
[18:20] well, sparsely [18:20] I have run at least 3 different jobs, they all complete now [18:20] that's the only trace of it [18:20] i wonder if the spark slaves or zepp is configured to use 54.183.80.145 as the history server [18:20] it's not in progress for sure [18:21] is there any way to check the configs using juju? Or shall I just manually look at it? [18:22] c0s: run ~/sparkpi.sh from the spark/0 unit and then check your :18080 URL. that will tell us if spark-submit jobs are making it to the historyserver [18:22] so, /etc/spark/conf/ would be the correct place to check, I presume [18:22] ok, running [18:23] yeah c0s, i dunno what (if any) history server config is possible for zepp [18:23] and I can see http://54.183.80.145:18080/?page=1&showIncomplete=true new job in there [18:23] ok, so spark-submit works directly on the unit. when you ran your z jobs earlier, were they using spark-submit through zeppelin? [18:23] kwmonroe: I don't know if Zepp has to do with it. These are just spark jobs [18:24] yeah... so why aren't Zepp's jobs showing? [18:24] nits [18:24] may need to dig into the /var/log/zeppelin/* on the spark unit [18:24] nuts [18:24] lemme check [18:25] kwmonroe: Have you run into services getting stuck on "Waiting for agent initialization to finish" and having this in the logs over and over: http://pastebin.ubuntu.com/15570047/ [18:25] yeah cory_fu [18:25] juju upgrade-charm [18:25] Is there a bug open for this? [18:26] judging by this tail -f zeppelin-ubuntu-plugin-0.out [18:26] the spark jobs are running as needed [18:26] I see the output in the file [18:26] like ------ Create new SparkContext yarn-client ------- [18:26] and the progress indicators [18:27] crazy [18:29] cory_fu: you sure it's a bug? i hit that when i deploy a local charm, hackity hack hack, rebuild and try to deploy the new rev of that local charm... at which point there's a mismatch between my $JR charm and unit-0's charm, which i was ok with. upgrade-charm to refresh unit-0 seemed like the right thing to require in that case. [18:29] and I am quite positive that last night the jobs were displayed in the history server UI [18:30] kwmonroe: This was on a fresh deploy [18:30] Though I did keep my bootstrap node around and just created a new model [18:31] c0s: what spark execution mode are you in? or rather, what is the MASTER envvar set to? [18:31] Still, I think Juju should definitely handle that better. [18:32] yarn-client [18:32] according to Zep's configuration [18:35] c0s: is it possible spark's job history won't work in yarn-client mode? juju expose resourcemanager and go take a look at that UI to see if your jobs are listed there. [18:35] well, the sparkpi stuff runs in the yarn-client mode too [18:36] and it is logged [18:36] kwmonroe: ^^ [18:37] that, c0s, is a mighty fine point. [18:39] :) [18:40] anyhow, perhaps it isn't a huge issue, but it looks like a minor bug to me. However, I am still using the bigtop-dev bundle, so it might be expected to be a bit flaky I guess [18:41] Well, we're planning to release those bundles so it shouldn't really be flaky [18:43] agree cory_fu [18:43] what I was saying is that this is still in development, so some regressions are possible [19:19] kwmonroe: is it possible to add say kafka to an already deployed bundle? [19:23] you bet c0s!
"juju deploy cs:~bigdata-dev/trusty/apache-kafka kafka" [19:23] but then it'll wait for zookeepers, so you'll need to "juju deploy cs:~bigdata-dev/trusty/apache-zookeeper zk" and "juju add-relation kafka zk" [19:24] ok, let me try that ;) [19:24] thanks! [19:24] what the worst could happen [19:24] there is no try c0s. just do it. [19:24] ok, kwmonroe - I won't try ;) [19:24] :) === blr_ is now known as blr [19:34] kwmonroe: shall I scale up ZK to 3 nodes as well? Or 1 will do ? [19:39] hello [19:40] c0s: i dunno if he replied, but 1 zk is fine [19:41] it seems to be holding for the POC purposes, yes [19:41] thanks! [20:08] kaffffffka [20:08] shouted to the tune of "KHAAAAAAN" [20:10] yes lazyPower [20:10] alongside with the sound of my sobbing ;( [20:20] c0s: sobbing because everything went great? sobs and frownie faces of joy? [20:34] kwmonroe: is the latest code for the hadoop charms still at lp? [20:34] for example hdfs master I get directed to [20:34] https://code.launchpad.net/~bigdata-charmers/charms/trusty/apache-hadoop-hdfs-master/trunk [20:34] from https://jujucharms.com/apache-hadoop-hdfs-master/trusty/9 which is the promulgated version [20:38] kwmonroe: that doesn't look like a reactive charm though . . . . [20:38] cory_fu: ^ [20:38] arosales: No, all of the new charms are on github [20:39] Also, hdfs-master is the oldness. apache-hadoop-namenode is the newness [20:39] anyone using brew charm-tools 1.11.1 ? [20:39] cory_fu: ya I also thought all the new charms were in git [20:40] cory_fu: but this is the current recommeded charm [20:40] cory_fu: it is just the case that we haven't got the git charms to be the recommeded ones? [20:40] arosales: I'd like to fix that right now. I'm going to start promulgating charms and bundles [20:40] After this meeting [20:40] cory_fu: ya super confusing atm [20:41] cory_fu: also as a result of that, if any of the charms at https://jujucharms.com/big-data change please file a bug on https://github.com/CanonicalLtd/jujucharms.com/issues [20:47] deanman: ah brew still probably at the 1.x versions [20:47] deanman: hitting any issues? [20:47] arosales, yeah when trying to $charm build [20:51] arosales, and this would be the tailed output when using -l DEBUG http://pastebin.ubuntu.com/15571120/ [20:53] * arosales looking at pastebin [20:53] Hey deanman the brew release of charm-tools is a bit behind the other ones. I would recommend setting up an venv and pip installing charm tools until the new version is available. [20:55] deanman: I saw a similar error but on ubuntu I needed to install venv package [20:55] mbruzek, No problem mbruzek, i'm building from within the ubuntu VM (which runs 1.11.2). [20:55] deanman: your running this on osx? [20:55] arosales, Yes [20:55] on ubuntu with 1.11.2 it works just fine [20:55] ok, deanman it looks like the dep is broken in brew [20:55] aisrael: are you still looking after charm-tools in brew? [20:56] arosales: Marco's been doing it [20:56] I think [20:56] aisrael: if so could you take an action to update to the latest, possibly even 2.0 that marcoceppi is going to put into xenial? [20:56] marcoceppi's brewing for osx :-) I bet he has been keeping that a secret :-) [20:56] deanman - do you have git installed? [20:56] arosales: it's already on my list, it'll be out tomomrrow [20:56] aisrael: could you follow up with marcoceppi though? 
[20:56] i assume yeah but there's only a few paths that end in that error [20:57] marcoceppi: ok [20:57] and it's typically due to not having unzip, git, or one of the other smallish dependencies that we're shelling out to invoke [20:57] deanman: are you ok to leverage ubuntu in the vm for now? [20:58] mbruzek - oh i figured out why it tanked, we never got the restructuring merge done [20:58] arosales, Yeah it's fine, just asking if you wanted to raise an issue on github [20:59] mbruzek https://github.com/juju-solutions/charms.docker/pull/14 -- when you have time <3 it's kind of big but most of it is documentation updates for the charms.docker api [20:59] deanman: thanks. Sounds like marcoceppi is going to update brew soon [20:59] deanman - that particular issue has been patched. https://github.com/juju/charm-tools/issues/159 [20:59] deanman - is there a remote on your git repository? or is it a local git repo only? [21:00] deanman, Yeah i do have a single remote setup [21:00] lazyPower, .... [21:01] deanman do you have bzr installed? [21:01] if any of those libs are in a launchpad repo, it's gonna die with that error as it tries to bzr branch the remote [21:01] s/libs/layers [21:02] lazyPower, a private remote repo, not a launchpad repo [21:02] deanman - can i see a pastebin of your layer.yaml? [21:04] lazyPower, if only pastebinit on mac didn't complain about "Bad API request...", gimme a sec [21:04] lazyPower, Single liner -> includes: ['layer:docker'] [21:04] and that's not in lp [21:04] * lazyPower is puzzled now [21:05] lazyPower, fails after copying tox.ini [21:08] deanman - do you have virtualenv installed? [21:08] and/or pip [21:09] after installing virtualenv with pip the build goes through [21:09] \o/ paydirt [21:09] https://github.com/juju/charm-tools/blob/v1.11.1/charmtools/build/tactics.py#L622 - was the clue [21:10] that's poopy though, i'm pretty sure this error messaging was patched with the future revs [21:10] so it's more apparent what's happened [21:10] removed virtualenv and the bug reappeared, so definitely installing virtualenv was the key !! [21:12] https://github.com/juju/charm-tools/blob/v2.1.2/charmtools/build/tactics.py#L636 - doesn't look like it unless we patched it on the invocation of this in utils [21:15] what does juju-gui-console-enabled do in the juju-gui charm? [21:18] LiftedKilt - it opens my garage door. plz dont toggle that option [21:20] afaik, nothing. it may have been used in the past. [21:23] arosales: was waiting for the packages to settle [21:24] lazyPower: I'm toggling it furiously now [21:24] jrwren: so no plans to re-enable it then? [21:25] LiftedKilt: shenanigans! [21:25] LiftedKilt: none that I have heard. [21:26] We probably did not delete it because it would break bundles which specify it, but we could mark it obsolete and document that it is unused and is a no-op [21:28] but it sounds like a cool feature haha [21:32] marcoceppi: ya I figured you wanted to update it with what we put into xenial [21:33] marcoceppi: appreciate all the work you have been putting into charm{tools} and getting it out there for folks [21:33] re === scuttle|afk is now known as scuttlemonkey [21:48] my charm creates a new service but no units [21:48] juju debug-log shows me nothing, is there another way to see logs? [21:49] A-Kaser - did you deploy it via dragging on the gui or something?
there's a scenario where you can stage a service without any allocated units, so you can do things like upload resources, tweak config [21:49] juju deploy local:trusty/spark-notebook [21:49] *then* you can add units to kind of "flush" the whole operation into the environment, but if that's not what you did, i'm uncertain why you got a service declaration without any units though [21:49] c0s: with the bigtop puppet deploy, how do you define the role of the machine you're deploying to? [21:49] it's my own charm, may be I have missing something [21:50] admcleod1: there are two ways [21:50] s/missing/missed/ [21:50] you can just specify FQDNs for master nodes (NN, spark, etc.) and then the rest of the nodes will be worker nodes. [21:51] or you can do more complex settings of the roles via hiera [21:54] admcleod1: you can check https://github.com/apache/bigtop/blob/master/bigtop-deploy/puppet/hieradata/bigtop/cluster.yaml for more info [21:54] search for roles [21:55] the complexity, mostly, comes from the fact that you need to maintain different hiera configs on different nodes [21:55] whereas the simpler method would mandate uniform configs [22:00] c0s: right, thanks [22:00] c0s: ah yeah ok pretty straightforward [22:01] yeah, we are simple folks ... over there ;) [22:01] in our corners of them woods [22:02] bloody hell, what, damn Kafka doesn't have a thin client? [22:02] what's the point of having all this crap lying around if every Joe and Frank needs to write tons of custom code to handle all that parallelism ;( [22:03] damn hipsters [22:03] hah [22:31] kwmonroe: cory_fu, admcleod1, looks like apache-kafka-14 is pulling in [22:31] "url": "https://s3.amazonaws.com/jujubigdata/apache/x86_64/kafka_2.10-0.8.2.1-89ede9a.tgz" [22:31] which has the bug KAFKA-1711 [22:31] not sure if this is because of that, I cannot even write anything into a topic [22:36] perhaps not, as I can write data to a topic using juju actions [22:36] weeeird [23:19] cory_fu: how would you suggest pulling in admcleod1's branches into github/juju-solutions for the hadoop HA bits? [23:20] why would you want to do that arosales? [23:20] it's not ready to merge into the juju-solutions namespace [23:22] or, admcleod1 can push his HA-enabled charms to his namespace, and we can bundle that into a bundle-ha.yaml in the apache-core-batch-processing bundle (which does live in juju-solutions) [23:23] kwmonroe: I guess I just need to understand how the juju-solutions namespace is used [23:25] kwmonroe: for example if I wanted to submit a merge request against the new HA bits. I have the following questions [23:25] 1. how do I find them [23:25] 2. where do I submit issues [23:25] 3. where do I submit branches against [23:30] arosales: github.com/juju-solutions/apache-hadoop-namenode is the upstream. To work on features, we fork that to our personal namespace, work, and then when it's ready submit a PR against the upstream.
[23:31] Once HA is ready, there will be a PR and once that's merged, it will be in juju-solutions [23:31] cory_fu: seems that doesn't give a lot of opportunity for folks to collaborate on the branch before pushing to a final version [23:32] but completely agree that during early dev a personal namespace works well [23:32] * arosales just wondering if there should be a dev branch off of http://github.com/juju-solutions/apache-hadoop-namenode [23:32] arosales: You can fork Andrew's repo and submit PRs back to him [23:32] that has early bits folks want others to test [23:33] cory_fu: and that's one way, but now I have to follow your namespace, kwmonroe, kjackal, admcleod1, and anyone else for the dev version [23:33] arosales: We can also have branches on the juju-solutions repo, but the "typical" GitHub workflow is to fork and submit PRs from your personal namespace [23:33] cory_fu: +1 on personal namespace to hack. [23:33] cory_fu: my noob question is where do I go to collaborate on the dev versions [23:34] cory_fu: if the answer is ping here and find out whose personal namespace has the dev branch, I can do that [23:34] cory_fu: just wanted to be sure that was the correct process before I started filing issues against kjackal and admcleod1's namespaces [23:35] So, with this workflow, there isn't really a single dev version. There are multiple feature branches. We had that before, but they lived in a single namespace and eventually got merged into bigdata-dev just prior to release. So bigdata-dev was more like staging [23:35] arosales: If we think it would be easier, we can do branches within the upstream repo. I'm not against that [23:36] cory_fu: you guys are the experts here :-) I just wanted to play nice with your workflow [23:36] We'll discuss it during sync tomorrow [23:36] cory_fu: whatever is working for you guys I can follow, I just simply didn't know [23:36] cory_fu: sounds good [23:37] arosales: You can also see all the forks pretty easily on the GH website [23:37] Just click the number next to "Fork" [23:38] If I can deploy xenial and trusty machines on maas with juju, and maas is telling me that a wily machine is deployed but my juju controller still shows pending, where would one go to troubleshoot that? [23:39] cory_fu: gotcha, and that works when there are a few folks to guess whose branch is the dev version I should submit issues against, but when more folks get involved (ie start branching) that guess may be a little hard, but perhaps look at that problem when it occurs. [23:40] arosales: Or you could just submit all issues against the upstream [23:41] cory_fu: true, as long as I have found the right dev branch to find the issues, which is the crux of my question [23:42] busy day, time to wind down [23:42] LiftedKilt: I would start with at least "juju debug-log" and see if that gives you any useful pointers [23:42] juju kill-controller [23:42] Waiting on 5 models, 20 machines, 22 services [23:43] $_$ [23:43] LiftedKilt: and I am guessing you are using maas 1.9 and juju 2.0? [23:43] arosales: correct [23:44] I *think* the maas team and juju team are working on compatibility between those two versions [23:44] arosales: ERROR verification failed: no macaroons [23:45] arosales: I know they've been working on maas 2 and juju 2 due to the new maas api, but I didn't know they had compatibility problems with 1.9? [23:45] any juju-devs around that can confirm that?
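[Editor's note: a sketch of the fork-and-PR workflow cory_fu describes above, using the apache-hadoop-namenode repo as the example; the fork owner and branch name are hypothetical:]

```bash
# fork juju-solutions/apache-hadoop-namenode on GitHub first, then:
git clone git@github.com:YOUR-USER/apache-hadoop-namenode.git
cd apache-hadoop-namenode
git remote add upstream https://github.com/juju-solutions/apache-hadoop-namenode.git
git checkout -b ha-feature
# hack, commit, push to your fork...
git push origin ha-feature
# ...then open a pull request against juju-solutions/apache-hadoop-namenode
```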
[23:45] LiftedKilt: I may very well be wrong [23:45] * arosales is going to pop into juju-dev and confirm [23:46] yall got any more of them macaroons? [23:46] just need to borrow one real quick. I promise not to eat it. [23:47] macaroons way tastier that cookies [23:47] s/that/than/ [23:49] c0s: were you looking for a stream set? [23:50] not exactly [23:50] arosales: LiftedKilt we're working on juju 2 and maas 2.0 [23:50] arosales: LiftedKilt 1.9 should be peachy [23:50] LiftedKilt: juju-core folks say maas 1.9 and juju 2.0 should work together [23:50] rick_h_ arosales: any idea why juju debug-log is complaining about no macaroons? [23:51] here's the main dilemma: Kafka is poorly suited to work with a variety of data sources out of the box. It is a message bus, after all, so you need all sorts of sinks, consumers, etc for each particular case [23:51] LiftedKilt: hmm, that sounds like a bug to be honest. [23:51] hence, developing a generic enough data flow (ie without writing a bunch of special code) is a challenge [23:52] I was thinking maybe we should experiment with something like SpringXD to provide a good pipeline-building tool. Which is integrated with Kafka pretty well, it seems [23:52] LiftedKilt: can you post what you are seeing at http://paste.ubuntu.com/ [23:52] I noticed all my wily charms were failing, so I tried to add-machine a couple wily machines to see if I could independently get them to work [23:52] or Gobblin to build ETL-like flows [23:53] LiftedKilt: they're failing to deploy via deploying wily-based charms? [23:53] LiftedKilt: can you speak to what you're deploying and how? [23:53] arosales: https://paste.ubuntu.com/15572264/ [23:53] or if we are to focus on collection- or aggregating-like use cases, then Flume+Kafka [23:54] but bare Kafka seems to be too bare to start using it without any prep. [23:54] rick_h_: sure - attempting to deploy the openstack-lxd bundle [23:54] c0s: perhaps prep the data before sending it over to kafka [23:54] LiftedKilt: do me a favor, try juju status --format=yaml [23:55] rick_h_: when I noticed it was hanging on provisioning the machines, I created a new model and tried adding a machine there manually [23:55] https://paste.ubuntu.com/15572272/ [23:55] having a source isn't enough. You need to have a source-specific sink [23:55] or producer [23:55] c0s: ya the data format for a general case . . . [23:56] LiftedKilt: gotcha, so can you juju ssh 0 ? [23:56] * arosales also looking up SpringXD c0s [23:56] LiftedKilt: and check out if anything is fishy in the /var/log/juju/xxxxx log files [23:56] BTW, out of the box setting up kafka on top of the spark-zeppelin bundle doesn't let me use their producer script in the shell session. Not sure if it's a bug or not [23:56] rick_h_: juju ssh 0 hangs [23:57] LiftedKilt: hmm, can you manually ssh to the machine IP? [23:58] rick_h_: It would let me if I had the key set up [23:58] c0s: does seem like an issue [23:58] LiftedKilt: the yaml status says the machine is deployed? [23:58] LiftedKilt: oh heh [23:59] c0s: got it on the board to investigate [23:59] arosales: it is either a difference in the environment between what I get via 'juju ssh kafka/0' and 'juju action do kafka/0...' [23:59] or else [23:59] ok, will add a ticket now [23:59] rick_h_: yeah it's bizarre - it sees that it's up, but yet at the same time doesn't [23:59] will dig more into this later... getting to the end of my rope today :/
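[Editor's note: on the "no thin client" complaint above — Kafka 0.8.x (the version the charm pulls in) does ship simple console clients, which may be what the producer-script discussion refers to. A sketch with placeholder broker/zookeeper addresses; whether the scripts are on PATH in the charm's environment is an assumption:]

```bash
# write messages to a topic from stdin (0.8.x syntax)
kafka-console-producer.sh --broker-list localhost:9092 --topic test
# read them back
kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
```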