=== natefinch-afk is now known as natefinch
=== valeech_ is now known as valeech
[04:52] kwmonroe: huh. charm-helpers should have bootstrapped itself and installed the python3-apt package already by that point.
[04:54] kwmonroe: It's always used the deb package for python-apt, as the pip install hasn't worked historically.
[04:58] kwmonroe: oh right, so no include_system_packages. Which will break charm-helpers at various points unless yaml, six, apt and other dependencies get pip installed
[04:58] (charm-helpers doesn't bootstrap python3-apt - it is assumed installed on the image, as a system package)
[05:00] kwmonroe: How did it used to work? I don't think the clear_removed_package_states update will have changed anything, except changing the point at which it fails.
[05:02] kwmonroe: I'm tempted to say this is a base layer or charm-tools problem, since it is both including charm-helpers and providing an option to stop it finding its dependencies.
[05:12] kwmonroe: python3-apt must become pip installable, or it is by definition incompatible with venvs not using system packages. At which point we can add it to wheelhouse.txt. It's either that or rewriting the relevant bits of the apt layer or charm-helpers to use subprocess.call() rather than apt.foo().
[05:19] kwmonroe: (rewriting isn't much work - the only thing I see using python3-apt is charmhelpers.fetch.apt_cache(), which is used by charmhelpers.fetch.filter_installed_packages())
[05:21] kwmonroe: which is only ever called by clear_removed_package_states(). I see. So I just need to replace that call in the apt layer and it is back to being compatible with system_packages disabled.
=== frankban|afk is now known as frankban
=== cppforlife__ is now known as cppforlife_
=== degville- is now known as degville
[08:52] kwmonroe: I've pushed a fix to the apt layer
[09:01] Hello Juju World!
=== mpavone1 is now known as mpavone
[12:39] admcleod: hey are you there?
[12:54] kjackal: hi, yes
[12:55] admcleod: I wanted to ask you, when we had hadoop in HA mode did we wait for a specific number of Zookeeper units to register with the namenode?
[12:56] kjackal: min 1
[12:58] kjackal: https://github.com/juju-solutions/layer-apache-hadoop-namenode/blob/master/reactive/namenode_failover.py
[12:59] admcleod: I see. Thanks
[12:59] kjackal: 1 being sufficient for testing, etc, but not for prod
[13:12] marcoceppi: ping if you get a sec
[13:12] rick_h_: pong, got some secs
[13:13] marcoceppi: https://hangouts.google.com/hangouts/_/canonical.com/rick?authuser=1 please
[13:26] rick_h_ - follow-up re-pong from yesterday. Did you still want to sync sometime today?
[13:26] lazyPower: no, thanks. I'm ok for now.
[13:26] ack
=== mhall119_ is now known as mhall119
=== redelmann_ is now known as redelmann_wfh
=== redelmann is now known as redelmann_brb
=== scuttle|afk is now known as scuttlemonkey
[15:03] Hi, I've asked a question about glance-charm on the OpenStack dev mailing list - http://lists.openstack.org/pipermail/openstack-dev/2016-August/100660.html - is this a suitable place for the question, or is there a better one?
[15:31] kwmonroe, kjackal: do either of you remember offhand where bigtop typically logs the output from a smoke test?
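A minimal sketch of the subprocess route stub describes above: replace the python3-apt-backed charmhelpers.fetch.filter_installed_packages() with a dpkg-query check, so the lookup works inside a venv built without system site-packages. This only illustrates the idea; it is not the actual fix pushed to the apt layer.

    import subprocess

    def filter_installed_packages(packages):
        """Return the subset of `packages` that still needs installing.

        Shells out to dpkg-query instead of importing python3-apt, so it
        works in a virtualenv created without system packages.
        """
        missing = []
        for pkg in packages:
            try:
                status = subprocess.check_output(
                    ['dpkg-query', '-W', '--showformat=${Status}', pkg],
                    stderr=subprocess.DEVNULL, universal_newlines=True)
            except subprocess.CalledProcessError:
                # dpkg-query exits non-zero for packages it has never seen
                missing.append(pkg)
                continue
            if 'ok installed' not in status:
                # known to dpkg but currently removed/deinstalled
                missing.append(pkg)
        return missing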
[15:38] petevg: not sure about bigtop smoke tests, but *we* put some logs in /opt, e.g.: https://github.com/juju-solutions/bigtop/blob/master/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/actions/smoke-test#L48
[15:39] petevg: i would guess the bigtop smoke tests just log to the respective application log (/var/log/zeppelin/*.out, for example)
[15:39] kwmonroe: awesome. That is exactly where I needed to look. thx.
[15:40] kwmonroe: I was grepping through the app logs, and wasn't finding things.
[15:40] I do get a helpful error message in /opt, though.
[15:40] kwmonroe: spark fails smoke tests due to "Error: Master must start with yarn, spark, mesos, or local"
[15:40] cool.. though perhaps /opt isn't a great place.. it might be better to shove those in the application log dir, since that's where most people would think to look.
[15:40] I thought that we had some code in the charm to deal with the case where spark is running standalone.
[15:40] Maybe it broke.
[15:41] kwmonroe: yes. Sticking logs in /opt is a little weird :-p
[15:41] petevg: what is 'spark_execution_mode' set to? perhaps we don't translate the default "standalone" to "local" properly.
[15:42] * D4RKS1D3 Hi
[15:42] kwmonroe: how do I check that value? I don't see it in any of the files in /etc/spark/conf ...
[15:45] petevg: grep MASTER /etc/spark/conf, but based on the error, the charm may not be setting that value because it thinks it's invalid.. if you haven't explicitly changed it, the default will be in /var/lib/juju/agents/unit-*/charm/config.yaml. i'm guessing it's "standalone".
[15:48] kwmonroe: Hmmm. It looks like we're setting SPARK_MASTER_IP as ourself in spark-env.sh, which seems like plausibly correct behavior.
[15:50] kwmonroe: I think that I'm going to just file a ticket for now, and see if I can move on to leaving comments/testing some of the other upstream stuff.
[15:50] petevg: where the heck is that "Error: Master..." coming from? i can't find that string in the spark branch of our bigtop fork
[15:50] petevg: thank you for doing this
[15:51] petevg: is this on AWS only?
[15:51] kwmonroe: It's coming from /opt/sparkpi-results/.log. No idea what part of the software stack is generating the error.
[15:52] kjackal: it might be aws only. (Might be an issue w/ the public ip being different than the private ip?)
[15:53] petevg: not sure, I am just asking for the setup I have to replicate
[15:54] oh hey petevg, it's coming from spark-submit itself. petevg can you check to see if there's a $MASTER envar set on your spark unit?
[15:54] spark-pi is sourcing /etc/environment, which is where the apache spark charm would have set a MASTER envar... but we don't do that in bigtop-spark (i don't think... we don't do much of anything in /etc/environment in bigtop charms)
[15:55] kwmonroe: ubuntu doesn't have MASTER set in its environment.
[15:55] so petevg, we may need to include a "--master=$(grep MASTER /etc/spark/conf/foo | awk -whatever-)" to get the right master value sent to spark-submit.
[15:56] kwmonroe: got it ... we should probably set that up so that it happens automagically, though, right?
[15:56] It's weird that it's the bigtop smoke test that fails.
[15:56] You'd expect Bigtop to have set things up in a place where it can find it ...
[15:57] or even better petevg, make spark-pi a python script and call charms.layer.bigtop_spark.Spark().get_master_url.. that is, call this method to determine the master: https://github.com/juju-solutions/bigtop/blob/spark/bigtop-packages/src/charm/spark/layer-spark/lib/charms/layer/bigtop_spark.py#L31
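A rough sketch of the fix being discussed: have the spark-pi / smoke-test action determine the master URL itself and pass it to spark-submit explicitly with --master, rather than relying on a MASTER variable in /etc/environment. The no-argument get_master_url() call and the examples jar path are assumptions for illustration only; the real signature and paths live in layer-spark.

    #!/usr/bin/env python3
    # Hypothetical spark-pi action rewritten in Python (a sketch, not the charm's code).
    # It looks up the master URL via the layer helper and hands it to spark-submit,
    # so the job no longer depends on MASTER being exported in /etc/environment.
    import subprocess

    from charms.layer.bigtop_spark import Spark  # helper provided by layer-spark

    # Assumption: get_master_url() can be called with no arguments here; check
    # layer-spark's bigtop_spark.py for the actual signature before reusing this.
    master_url = Spark().get_master_url()

    subprocess.check_call([
        'spark-submit',
        '--master', master_url,  # e.g. local[*], yarn, or spark://host:7077
        '--class', 'org.apache.spark.examples.SparkPi',
        '/usr/lib/spark/lib/spark-examples.jar',  # placeholder path to the examples jar
        '10',  # number of partitions for SparkPi
    ])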
[15:57] nope, not weird petevg, because we're not calling the bigtop smoke test.. we're calling the charm smoke-test action, which is just a symlink to the spark-pi action that we wrote.
[15:58] kwmonroe: Got it. That makes more sense :-)
[15:58] and by "wrote", i mean "copied from apache-spark", which *did* set the MASTER value in /etc/environment
[15:59] Cool. Going to go grab lunch. I assigned myself to the ticket, and will implement that fix when I get back ...
[16:06] stub: you rock. thanks for updating layer-apt to use subprocess vs fetch.filter_x. just tested with our problematic charm and it looks good.
[16:06] np. if you are bored you can cargo cult it into charm-helpers ;)
[16:07] (thought the apt layer was better here, to not block on a new charm-helpers release)
[16:07] roger that stub -- and fwiw, i agree that it would be nice if python-apt were pip installable
[16:09] I could get wheelhouse.txt to download the latest version from Launchpad, but it was all C extensions and I didn't want to deal with architectures.
[16:33] mbruzek - when you've got time, we spoke about this briefly and posted to the list about it, here's the PR to follow up on that line of thought: https://github.com/juju-solutions/layer-etcd/pull/36
[16:34] Did someone just say immutable?
[16:42] I posted a couple of MPs to fix the bip charm client ssl support bug #1604894
[16:42] Bug #1604894: 'client_side_ssl' is not supported as a config value
[16:43] First MP is a preparation & cleanup one:
[16:43] https://code.launchpad.net/~josvaz/charms/trusty/bip/charmhelpers-cleanup/+merge/301499
[16:43] Second MP actually adds the ssl charmhelper and ssl support with certificate autogeneration:
[16:44] https://code.launchpad.net/~josvaz/charms/trusty/bip/client_side_ssl-with_helper-lp1604894/+merge/301802
[16:44] please take a look
=== frankban is now known as frankban|afk
[19:49] icey: following up on our conversation from earlier - the consul-agent would then also need to register to be part of the consul quorum too?
[19:49] think so bdx
[19:50] icey: so cluster-relation-joined would need to fire on consul-agent joined too then, eh?
[19:50] icey
[19:50] icey: got it, thanks