[04:52] <stub> kwmonroe: huh. charm-helpers should have bootstrapped itself and installed the python3-apt package already by that point.
[04:54] <stub> kwmonroe: It's always used the deb package for python-apt, as the pip install hasn't worked historically
[04:58] <stub> kwmonroe: oh right, so no include_system_packages. Which will break charm-helpers at various points unless yaml, six, apt and other dependencies get pip installed
[04:58] <stub> (charm-helpers doesn't bootstrap python3-apt - it is assumed installed on the image, as a system package)
[05:00] <stub> kwmonroe: How did it used to work? I don't think the clear_removed_package_states update will have changed anything, except changing the point at which it fails.
[05:02] <stub> kwmonroe: I'm tempted to say this is a baselayer or charm-tools problem, since it is both including charm-helpers and providing an option to stop it finding its dependencies.
[05:12] <stub> kwmonroe: python3-apt must become pip installable, or it is by definition incompatible with venvs not using system packages. At which point we can add it to wheelhouse.txt. It's either that or rewriting the relevant bits of the apt layer or charm-helpers to use subprocess.call() rather than apt.foo().
[05:19] <stub> kwmonroe: (rewriting isn't much work - the only thing I see using python3-apt is charmhelpers.fetch.apt_cache(), which is used by charmhelpers.fetch.filter_installed_packages())
[05:21] <stub> kwmonroe: which is only ever called by clear_removed_package_states(). I see. So I just need to replace that call in the apt layer and it is back to being compatible with system_packages disabled.
[08:52] <stub> kwmonroe: I've pushed a fix to the apt layer
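The subprocess-based replacement stub describes could look roughly like this. This is a hypothetical sketch, not the actual layer-apt patch: it swaps the python3-apt bindings for a `dpkg -s` call so the check works inside a venv built without system site-packages (the `_is_installed` hook is only there so the logic can be exercised without touching dpkg):

```python
import subprocess

def filter_installed_packages(packages, _is_installed=None):
    """Return the subset of `packages` that are not installed.

    Uses `dpkg -s` via subprocess instead of the python3-apt
    bindings, so no apt C extension is needed in the venv.
    """
    def dpkg_installed(pkg):
        # dpkg -s exits non-zero when the package is unknown or
        # not currently installed
        return subprocess.call(
            ['dpkg', '-s', pkg],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL) == 0

    is_installed = _is_installed or dpkg_installed
    return [pkg for pkg in packages if not is_installed(pkg)]
```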
[09:01] <kjackal> Hello Juju World!
[12:39] <kjackal> admcleod: hey are you there?
[12:54] <admcleod> kjackal: hi, yes
[12:55] <kjackal> admcleod: I wanted to ask you, when we had hadoop in HA mode did we wait for a specific number of Zookeeper units to register with the namenode?
[12:56] <admcleod> kjackal: min 1
[12:58] <admcleod> kjackal: https://github.com/juju-solutions/layer-apache-hadoop-namenode/blob/master/reactive/namenode_failover.py
[12:59] <kjackal> admcleod: I see. Thanks
[12:59] <admcleod> kjackal: 1 being sufficient for testing, etc, but not for prod
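The gating logic being described (min 1 for testing, more for prod) boils down to something like this sketch; the name `failover_ready` is hypothetical, the real handler lives in the namenode_failover.py linked above:

```python
def failover_ready(zookeeper_units, minimum=1):
    """Decide whether enough Zookeeper units have registered with
    the namenode to configure HA failover.

    One unit is enough to test against, but production wants an
    odd-sized quorum (3 or 5) to tolerate failures.
    """
    return len(zookeeper_units) >= minimum
```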
[13:12] <rick_h_> marcoceppi: ping if you get a sec
[13:12] <marcoceppi> rick_h_: pong got some sec's
[13:13] <rick_h_> marcoceppi: https://hangouts.google.com/hangouts/_/canonical.com/rick?authuser=1 please
[13:26] <lazyPower> rick_h_ - follow up re-pong from yesterday. Did you still want to sync sometime today?
[13:26] <rick_h_> lazyPower: no, thanks. I'm ok for now.
[13:26] <lazyPower> ack
[15:03] <andrey-mp> Hi, I've asked a question about glance-charm on the OpenStack dev mailing list - http://lists.openstack.org/pipermail/openstack-dev/2016-August/100660.html  Is that a suitable place for the question, or is there a better place?
[15:31] <petevg> kwmonroe, kjackal: do either of you remember offhand where bigtop typically logs the output from a smoke test?
[15:38] <kwmonroe> petevg: not sure about bigtop smoke tests, but *we* put some logs in /opt, eg: https://github.com/juju-solutions/bigtop/blob/master/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/actions/smoke-test#L48
[15:39] <kwmonroe> petevg: i would guess the bigtop smoke tests just log to the respective application log (/var/log/zeppelin/*.out, for example)
[15:39] <petevg> kwmonroe: awesome. That is exactly where I needed to look. thx.
[15:40] <petevg> kwmonroe: I was grepping through the app logs, and wasn't finding things.
[15:40] <petevg> I do get a helpful error message in /opt, though.
[15:40] <petevg> kwmonroe: spark fails smoke tests due to "Error: Master must start with yarn, spark, mesos, or local"
[15:40] <kwmonroe> cool.. though perhaps /opt isn't a great place.. it might be better to shove those in the application log dir, since that's where most people would think to look.
[15:40] <petevg> I thought that we had some code in the charm to deal with the case where spark is running standalone.
[15:40] <petevg> Maybe it broke.
[15:41] <petevg> kwmonroe: yes. Sticking logs in /opt is a little weird :-p
[15:41] <kwmonroe> petevg: what is 'spark_execution_mode' set to?  perhaps we don't translate the default "standalone" to "local" properly.
[15:42]  * D4RKS1D3 Hi 
[15:42] <petevg> kwmonroe: how do I check that value? I don't see it in any of the files in /etc/spark/conf ...
[15:45] <kwmonroe> petevg: grep MASTER /etc/spark/conf, but based on the error, the charm may not be setting that value because it thinks it's invalid.. if you haven't explicitly changed it, the default will be in /var/lib/juju/agents/unit-*/charm/config.yaml.  i'm guessing it's "standalone".
[15:48] <petevg> kwmonroe: Hmmm. It looks like we're setting SPARK_MASTER_IP as ourself in spark-env.sh, which seems like plausibly correct behavior.
[15:50] <petevg> kwmonroe: I think that I'm going to just file a ticket for now, and see if I can move on to leaving comments/testing some of the other upstream stuff.
[15:50] <kwmonroe> petevg: where the heck is that Error: Master... coming from?  i can't find that string in the spark branch of our bigtop fork
[15:50] <kjackal> petevg: thank you for doing this
[15:51] <kjackal> petevg: is this on AWS only?
[15:51] <petevg> kwmonroe: It's coming from /opt/sparkpi-results/<id>.log. No idea what part of the software stack is generating the error.
[15:52] <petevg> kjackal: it might be aws only. (Might be an issue w/ the public ip being different than the private ip?)
[15:53] <kjackal> petevg: not sure I am just asking for the setup I have to replicate
[15:54] <kwmonroe> oh hey petevg, it's coming from spark-submit itself.  petevg can you check to see if there's a $MASTER envar set on your spark unit?
[15:54] <kwmonroe> spark-pi is sourcing /etc/environment, which is where the apache spark charm would have set a MASTER envar... but we don't do that in bigtop-spark (i don't think... we don't do much of anything in /etc/env in bigtop charms)
[15:55] <petevg> kwmonroe: ubuntu doesn't have MASTER set in its environment.
[15:55] <kwmonroe> so petevg, we may need to include a "--master=$(grep MASTER /etc/spark/conf/foo | awk -whatever-)" to get the right master value sent to spark-submit.
[15:56] <petevg> kwmonroe: got it ... we should probably set that up so that it happens automagically, though, right?
[15:56] <petevg> It's weird that it's the bigtop smoke test that fails.
[15:56] <petevg> You'd expect Bigtop to have set things up in a place where it can find them ...
[15:57] <kwmonroe> or even better petevg, make spark-pi a python script and call charms.layer.bigtop_spark.Spark().get_master_url.. that is, call this method to determine the master: https://github.com/juju-solutions/bigtop/blob/spark/bigtop-packages/src/charm/spark/layer-spark/lib/charms/layer/bigtop_spark.py#L31
[15:57] <kwmonroe> nope, not weird petevg, because we're not calling the bigtop smoke test.. we're calling the charm smoke-test action, which is just a symlink to the spark-pi action that we wrote.
[15:58] <petevg> kwmonroe: Got it. That makes more sense :-)
[15:58] <kwmonroe> and by "wrote", i mean "copied from apache-spark", which *did* set the MASTER value in /etc/environment
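The "pull --master out of the spark conf" idea from above could be sketched like this. It's a hypothetical helper (the suggested real fix was to call `Spark().get_master_url()` from bigtop_spark.py instead): it parses spark-defaults.conf-style text and falls back to `local[*]` so spark-submit never sees a missing master, which is what triggers the "Master must start with yarn, spark, mesos, or local" error:

```python
def master_from_conf(conf_text, default='local[*]'):
    """Extract the master URL from spark-defaults.conf-style text.

    Lines look like `spark.master  yarn`; comment lines start
    with `#`. Falls back to local[*] when no master is set.
    """
    for line in conf_text.splitlines():
        line = line.strip()
        if line.startswith('#'):
            continue
        if line.startswith('spark.master'):
            parts = line.split(None, 1)
            if len(parts) == 2:
                return parts[1].strip()
    return default
```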
[15:59] <petevg> Cool. Going to go grab lunch. I assigned myself to the ticket, and will implement that fix when I get back ...
[16:06] <kwmonroe> stub: you rock.  thanks for updating layer-apt to use subprocess vs fetch.filter_x.  just tested with our problematic charm and it looks good.
[16:06] <stub> np. if you are bored you can cargo cult it into charm-helpers ;)
[16:07] <stub> (though the apt layer was better here, to avoid blocking on a new charm-helpers release)
[16:07] <kwmonroe> roger that stub -- and fwiw, i agree that it would be nice if python-apt were pip installable
[16:09] <stub> I could get wheelhouse.txt to download the latest version from Launchpad, but it was all C extensions and I didn't want to deal with architectures.
[16:33] <lazyPower> mbruzek - when you've got time, we spoke to this briefly and posted to the list about it, here's the pr to follow up on that line of thought   https://github.com/juju-solutions/layer-etcd/pull/36
[16:34] <mbruzek> Did someone just say immutable?
[16:42] <josvaz>  I posted a couple of MPs to fix the bip charm client ssl support bug #1604894
[16:42] <mup> Bug #1604894: 'client_side_ssl' is not supported as a config value <bip (Juju Charms Collection):New> <https://launchpad.net/bugs/1604894>
[16:43] <josvaz> First MP is a preparation & cleanup one:
[16:43] <josvaz> https://code.launchpad.net/~josvaz/charms/trusty/bip/charmhelpers-cleanup/+merge/301499
[16:43] <josvaz> Second MP is actually adding the ssl charmhelper and adding ssl support with certificate autogeneration
[16:44] <josvaz> https://code.launchpad.net/~josvaz/charms/trusty/bip/client_side_ssl-with_helper-lp1604894/+merge/301802
[16:44] <josvaz> please take a look
[19:49] <bdx> icey: following up on our conversation from earlier - the consul-agent would then also need to register to be part of the consul quorum then too?
[19:49] <icey> think so bdx
[19:50] <bdx> icey: so cluster-relation-joined would need to fire when a consul-agent joins too then, eh?
[19:50] <bdx> icey
[19:50] <bdx> icey: got it, thanks