[00:09] <lazyPower> gugpe - how long has the unit sat in pending?
[00:09] <lazyPower> gugpe - what series charm are you deploying?
[00:10] <gugpe> infinite
[00:10] <gugpe> xenial
[00:12] <gugpe> ive noticed progress
[00:12] <gugpe> I disabled ufw
[00:13] <gugpe> things seem to be rolling now
[00:13] <mattrae> thanks arosales and bdx for the suggestions :) i'll see if that page has what i need or i'll try the juju mailing list
[00:13] <arosales> mattrae: that was all bdx, but if you are still stuck be sure to mail the list cc maas list as well
[00:14] <arosales> gugpe: sorry missed your reply, reading backscroll now
[00:14] <gugpe> confirmed
[00:14] <gugpe> ufw was blocking juju agent
[00:14] <arosales> gugpe: ok, if the machine is pending then the issue is with the provider, not your machine
[00:14] <gugpe> the provider is localhost lxd
[00:14] <lazyPower> gugpe - ah yeah thats in the docs :)
[00:15] <lazyPower> tricky one to debug at first too
[00:15] <arosales> gugpe: if this was your first time using local provider the initial deploy can take some time as it needs to download the cloud image
[00:15] <arosales> hmm seems ufw got you resolved
[00:15] <arosales> lazyPower: is that in the docs?
[00:16] <gugpe> It wasn't a bootstrap. I actually ran into this earlier.
[00:16] <arosales> ok
[00:16] <gugpe> That's kind of how I came to this conclusion. I didn't see it in the docs though.
[00:16] <lazyPower> i'm full of lies, i swore i saw that in the getting started guide though
[00:16] <arosales> I don't see it here, and that is most likely where it should live
[00:17] <arosales> but I haven't had to do this on stock Xenial cloud image
[00:17] <arosales> perhaps desktop and cloud are different though . . .
[00:17] <arosales> hmmm, there was an askubuntu question on this sometime ago
[00:17] <arosales> folks weren't really happy about disabling ufw, which is understandable
[00:18] <gugpe> right, read that SO post. I suspect the better way is to add rules to ufw.
[00:18] <gugpe> ufw rules to let the juju agent operate locally over lxdbr0
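The rules gugpe has in mind might look something like the following sketch. This is an assumption, not a rule set anyone in the channel posted; only the bridge name lxdbr0 comes from the discussion, and the exact ports juju needs may vary by version:

```shell
# Sketch: allow traffic on the LXD bridge so the juju agent can reach
# the controller, instead of disabling ufw entirely. Assumed rules.
sudo ufw allow in on lxdbr0
sudo ufw allow out on lxdbr0
sudo ufw reload
```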
[00:19] <arosales> or juju should do this on the user's behalf on install, but be very transparent about it
[00:19] <lazyPower> arosales i filed a bug about it https://github.com/juju/docs/issues/1142
[00:19] <gugpe> I'd be fine either way. Better docs, or do this as default.
[00:20] <arosales> lazyPower: well I am not sure we should just tell folks to disable their firewall
[00:20] <lazyPower> i think the package should be adjusted to apply those ufw rules on install
[00:20] <arosales> lazyPower: I think we can do this a little more tastefully with juju core
[00:20] <arosales> lazyPower: ya seems juju core should be more knowledgable here, and also observable
[00:20] <lazyPower> but until that happens, a doc callout to investigate UFW rules/disable is better than the nothing we have right now.
[00:21] <arosales> signal to the user that this firewall rule is getting updated
[00:21] <lazyPower> my humble opinion :)
[00:21] <arosales> lazyPower: gugpe: I am activating ufw on my system for a test to see if I can reproduce. If so I'll file a bug
[00:22] <arosales> lazyPower: thats reasonable. I'll comment in the issue with the lp bug
[00:22] <arosales> lazyPower: I thought there was a command to get leader on a service
[00:22] <arosales> lazyPower: did I imagine that command?
[00:22] <lazyPower> arosales is-leader
[00:22] <lazyPower> juju run --service foobar 'is-leader' --format=yaml
[00:22] <lazyPower> i've been hacking around this with my more recent work
[00:23] <arosales> ah
[00:23] <arosales> juju help commands | grep leader
[00:23] <arosales> didn't return anything for me
[00:23] <lazyPower> so, what i propose is this
[00:24] <lazyPower> https://www.evernote.com/l/AX6rlgwUROtLw7DCTvObHDxZ3s3gRfL4MjwB/image.png
[00:24] <arosales> gugpe: btw have you seen https://jujucharms.com/gitlab/precise/5
[00:25] <gugpe> arosales: thanks. I have seen the existing gitlab charms.
[00:25] <gugpe> as with most things my requirements are more specialized.
[00:25] <gugpe> I'm actually deploying the gitlab omnibus package with mattermost chat server and what not.
[00:26] <lazyPower> gugpe - highly recommended you look into layers then
[00:26] <lazyPower> a lot of that work has been done for you, such as deploying mattermost
[00:26] <gugpe> I'll publish when I can. I'll look at layers.
[00:26] <lazyPower> cmars wrote an excellent mattermost layer
[00:26] <arosales> gugpe: ah good to hear
[00:26] <cmars> lazyPower, thanks. gugpe, one word of caution, the charm only works with juju 2.0
[00:26] <bdx> gugpe: the gitlab solution is legit
[00:26] <arosales> lazyPower:  I like it. The swarm bits currently say something like, "Swarm leader running "
[00:26] <cmars> because it uses resources
[00:26] <cmars> mattermost, that is
[00:27] <bdx> gugpe: you can mod it post deploy to your specialized config
[00:27] <lazyPower> arosales - yeah thats an older pattern. it was a full takeover of potentially useful health information.
[00:27] <lazyPower> i want leader status regardless of actual message coming along on the pipeline. its a transparent way to know it as an admin this unit is special
[00:27] <cmars> gugpe, it could be adapted to install from mattermost releases on github, just something to watch for if you try installing the latest cs:~cmars/mattermost
[00:28] <arosales> lazyPower: would this only be for leaders or would non-leaders also have this message
[00:28] <arosales> I would prefer to only see this on applications which have leaders
[00:28] <lazyPower> arosales non-leaders get no special messaging additions. they get plain status output of whatever you sent through.
[00:28] <lazyPower> oh sure, if you mean something like say.. wordpress that has no real leader doing coordination
[00:29] <arosales> yup
[00:29]  * lazyPower nods
[00:29] <lazyPower> i intend this to be useful for charms making use of the feature
[00:29] <arosales> you putting that into core?
[00:30] <lazyPower> I have a little patch method at the bottom of my layer
[00:30] <lazyPower> this could be adjusted in charm-helpers, or in charms.* namespace. I don't think it needs to go in core
[00:30] <lazyPower> considering its more up to the author to determine if the leader is special
[00:36] <arosales> lazyPower: gotcha
[00:47] <arosales> lazyPower: if you're still around I am looking at https://github.com/juju-solutions/bundle-observable-swarm/blob/master/README.md
[00:48] <arosales> the scp of creds needs to come from the leader
[00:48] <lazyPower> arosales - there should be swarm credentials on any swarm node
[00:48] <arosales> so we could do something like juju run --service swarm "is-leader" --format=yaml | grep -A 1 True | awk '{ print $2 }'
[00:48] <arosales> but not sure how elegant that is
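The pipeline arosales sketches can be dry-run against canned output. The sample YAML below is fabricated (the real field order of `juju run --format=yaml` is an assumption), and the awk filter is tightened slightly so only the UnitId value is printed:

```shell
# Fabricated sample of `juju run --service swarm 'is-leader' --format=yaml`
# output; field names and ordering are assumptions for illustration.
sample='- MachineId: "1"
  Stdout: |
    False
  UnitId: swarm/0
- MachineId: "2"
  Stdout: |
    True
  UnitId: swarm/1'

# grep the True line plus the line after it, then print the UnitId value
leader=$(printf '%s\n' "$sample" | grep -A 1 'True' | awk '/UnitId/ { print $2 }')
echo "$leader"   # swarm/1
```

With live output the first stage would be the real `juju run` call instead of `printf`.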
[00:48] <arosales> lazyPower: not in my testing
[00:48] <lazyPower> if it's not currently, it will be soon
[00:48] <arosales> oh
[00:49] <lazyPower> i have a branch that has that enabled. i'm sorry i'm so far behind
[00:49] <arosales> ok then. I'll just file a bug as a reminder :-)
[00:49] <arosales> lazyPower: thanks
[00:49] <lazyPower> delivering on beats + etcd has been a very labor intensive spike. I'm about to the point where I can switch feet and finish that last little bit, and get my bundles ready for the store.
[00:50] <arosales> *fixing* beats + ects has been labor intensive :-)
[00:50] <arosales> etcd, that is
[00:51] <lazyPower> packetbeat code is ready (i think) - it needs all the project meta and it'll be ready for the bundle as well
[00:52] <lazyPower> dockerbeat is lagging, i need to ping the maintainer about -stable release and it will need the same as pb, then its g2g
[00:53] <arosales> thanks for working on that
[10:06] <kjackal> admcleod: have you ever used Mahout?
[10:08] <admcleod> kjackal: no
[10:08] <admcleod> kjackal: why do you ask?
[10:09] <kjackal> I am trying to figure out what the valid use-cases are so that I make sure the Mahout library is installed in the right places
[10:12] <admcleod> kjackal: are you doing it as a subordinate?
[10:12] <kjackal> admcleod: yes
[10:16] <admcleod> kjackal: well you can use it with mapreduce, spark, flink.. i think you only need it on the unit you're executing the job from
[10:18] <kjackal> I think so too, but then again there is this mahout shell
[10:18] <admcleod> kjackal: i have a cluster running, ill install it on one slave and see what happens
[10:18] <kjackal> thanks
[10:40] <admcleod> kjackal: for MR it only needs to be on the unit which is submitting the job, as the libs are bundled as part of the MR job and distributed to the slaves. for spark/flink im not sure but would assume the same
[10:41] <kjackal> so, yes. Since it is essentially a library you can deliver it in a "fat" jar everywhere
[10:41] <kjackal> But you can always add it to your classpath, right?
[10:42] <admcleod> kjackal: yes. i think we only need it on the client
[10:46] <admcleod> kjackal: the only difference adding it to any other units will make is that if it is a slave/worker the job may potentially begin to execute slightly (a few seconds) faster, if its a non-slave/worker you will be able to submit jobs from that unit
[10:49] <admcleod> kjackal: .. or its own unit, i.e. pig
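Putting Mahout on the submitting unit's classpath, as discussed above, could look like this. The install path and jar name are assumptions for illustration, not taken from the charm:

```shell
# Assumed install location and jar name; adjust to whatever the
# subordinate charm actually lays down.
export MAHOUT_HOME=/usr/lib/mahout
export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:$MAHOUT_HOME/mahout-mr-job.jar"
echo "$HADOOP_CLASSPATH"
```

Only the client needs this; as admcleod notes, MR bundles the libs into the job jar and ships them to the slaves itself.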
[10:49] <kjackal> I will try to follow https://github.com/hixiaoxi/hixiaoxi.github.io/wiki/Installing-and-Testing-Mahout and see what happens
[10:50] <admcleod> kjackal: yeah thats more or less what i just did
[10:50] <kjackal> nice thanks!
[11:15] <gnuoy> tinwood, https://github.com/openstack-charmers/charms.openstack/pull/8 is up for review if you have some time.
[11:15] <tinwood> gnuoy, yep, I'll take a look.
[11:15] <gnuoy> tinwood, thanks
[12:48] <gnuoy> tinwood, thanks for the review
[12:53] <gnuoy> jamespage, I have a pull request to add ha support to the charms_openstack module. Do you want to give it the once over? Otherwise I'll ask tinwood to hit the button.
[12:53] <gnuoy> https://github.com/openstack-charmers/charms.openstack/pull/8
[13:16] <jcastro> mbruzek: lazyPower: wrt. the view code discussion on the list, I am having a hard time finding the source to the elasticsearch charm
[13:18] <mbruzek> jcastro: if it is recommended you can get the source using "charm pull"
[13:18] <mbruzek> jcastro:  I am in meeting right now, but I can help later
[13:19] <jcastro> ack
[14:02] <lazyPower> jcastro - it still lives in onlineservices-charmers namespace
[14:03] <lazyPower> https://launchpad.net/~onlineservices-charmers/charms/trusty/elasticsearch/trunk
[14:18] <jcastro> hey lazyPower, nice work fixing that one.
[14:19] <lazyPower> o/ i do what i can
[14:21] <bryan_att> gnuoy: ping
[14:25] <gnuoy> bryan_att, hi
[14:26] <bryan_att> gnuoy: I tried your script, and ran into an error - the error is "ERROR cannot add service "mysql": service already exists" (since I just deployed OPNFV via the JOID installer)
[14:28] <gnuoy> bryan_att, that should be ignorable
[14:29] <bryan_att> gnuoy: ok, then the next error is "WARNING failed to load charm at "/home/ubuntu/save/joid/trusty/congress": open /home/ubuntu/save/joid/trusty/congress/metadata.yaml: no such file or directory"
[14:31] <bryan_att> gnuoy: maybe related to the earlier error "build: Unable to locate layer:openstack-api. Do you need to set LAYER_PATH?" ?
[14:33] <bryan_att> gnuoy: also not sure what the function of the http_proxy setting is (using the script from http://paste.ubuntu.com/16952298/)
[14:36] <gnuoy> bryan_att, yeah, it'll be the http_proxy causing the problem, that's specific to my env. Try removing the "export http_proxy=..." line and rerunning
[14:38] <bryan_att> gnuoy: I removed the http_proxy setting and it got farther; note also the keystone charm did not work as there is also a keystone service defined. I'm watching now to see how far it gets
[14:38] <gnuoy> bryan_att, keystone will be a problem because the version already deployed does not have the congress fix.
[14:39] <bryan_att> gnuoy: how do I get the JOID installer to include your keystone patch? (ping: narindergupta)
[14:43] <narindergupta> bryan_att: you can destroy service and redeploy
[14:46] <gnuoy> narindergupta, JOID will redeploy the same (wrong) version of keystone won't it? (I haven't used joid)
[14:49] <tinwood> gnuoy, is charmhelpers source only on bzr or is it on git now too?
[14:50] <gnuoy> narindergupta, where does JOID pick up the keystone from ? the charm store? If so we could run JOID and then do and a juju upgrade-charm --switch to upgrade to an updated local version
[14:50] <gnuoy> tinwood, just bzr I believe
[14:51] <tinwood> gnuoy, I thought so. Now I just have to find that page I found for you re: bzr specs for pip.
[14:51] <jamespage> beisner, btw I added some newton and 'branch' targets to oct
[14:51] <jamespage> for newton at least
[14:52] <narindergupta> gnuoy: i am using git location from openstack
[14:53] <gnuoy> narindergupta, is it straight forward for bryan_att to update joid to pick up the charm from an alternative repo?
[14:53] <narindergupta> gnuoy: i am git clone first then use charm: local:trusty/keystone
[14:54] <narindergupta> he has to change the fetch.sh file
[14:54] <gnuoy> narindergupta, fantastic, that should be simple
[14:54] <narindergupta> he has to change fetch_charm.sh and swap in the git clone command in place of the bzr one
[14:54] <narindergupta> to download then rest will fall through
[14:57] <gnuoy> narindergupta, I'd like him to try using 'git@github.com:gnuoy/charm-keystone.git' . Is there more to it than just changing git@github.com:openstack/charm-keystone.git to git@github.com:gnuoy/charm-keystone.git in fetch.sh ?
[14:57] <beisner> jamespage, ack thx
[14:58] <narindergupta> gnuoy: this is file for opendaylight https://gerrit.opnfv.org/gerrit/gitweb?p=joid.git;a=blob;f=ci/odl/fetch-charms.sh
[14:59] <narindergupta> gnuoy: bryan_att: please change ./joid/ci/odl/fetch-charms.sh
[14:59] <narindergupta> and search for keystone and change the location of keystone as suggested by the gnuoy
[14:59] <gnuoy> which is: git@github.com:gnuoy/charm-keystone.git
[15:00] <narindergupta> gnuoy: currently i am using: git clone https://github.com/openstack/charm-keystone.git $distro/keystone
[15:00] <bryan_att> gnuoy: the repo reference doesn't work for me. I had to change it to "https://github.com/gnuoy/charm-keystone.git"
[15:00] <narindergupta> bryan_att: thats correct bryan_att
[15:00] <gnuoy> bryan_att, yep, sounds good.
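The fetch-charms.sh edit being described could be scripted. The original and replacement URLs come from the discussion (including bryan_att's https form); the sed one-liner and the exact file path are assumptions:

```shell
# Point the keystone charm at gnuoy's fork instead of openstack's.
# File path assumed from the JOID tree discussed above (nosdn scenario).
sed -i 's|https://github.com/openstack/charm-keystone.git|https://github.com/gnuoy/charm-keystone.git|' \
    ./joid/ci/nosdn/fetch-charms.sh
```

After rerunning the fetch, JOID's `charm: local:trusty/keystone` reference picks up the patched clone.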
[15:04] <bryan_att> narindergupta: if I'm deploying for "nosdn", do I need to modify the file in the ODL folder or somewhere else?
[15:04] <narindergupta> it should be ./nosdn/ folder
[15:05] <bryan_att> ok, trying now
[15:37] <jamespage> gnuoy, can I get your opinion on a keystone charm thing
[15:37] <jamespage> gnuoy, ?
[15:41] <gnuoy> jamespage, sure
[15:41] <jamespage> gnuoy, okies
[15:41] <jamespage> gnuoy, so the newton keystone package ships a site config called 'keystone'
[15:41] <jamespage> gnuoy, and the charm enables one called wsgi-keystone
[15:41] <jamespage> gnuoy, should I rework the charm to overwrite the package provided one, or just disable the package provided one and keep with the wsgi-keystone named version?
[15:42] <jamespage> my tendency is to the former and make that the standard going forward
[15:42] <gnuoy> jamespage, agreed
[15:53] <jamespage> gnuoy, https://review.openstack.org/#/c/326597/
[15:59] <jamespage> gnuoy, nope that does not work...
[16:00]  * jamespage tries again
[16:32] <cholcombe> jamespage, my first stab at fixing this rgw race: https://gist.github.com/cholcombe973/2a6601456cd0ae1e6612695776b7e5a9  what do you think?
[16:56] <bdx> cory_fu: is there any functional difference between https://github.com/jamesbeedy/layer-puppet-agent/blob/7f84fcdcb8c615c5de91f0f94163cde64c4f550d/reactive/puppet_agent.py#L95-L104 and https://github.com/jamesbeedy/layer-puppet-agent/blob/master/reactive/puppet_agent.py#L110-120
[16:58] <bdx> cory_fu: I guess I'm unfamiliar with what that is doing .... is PuppetConfigs.puppet_active(p) over p.puppet_active() just a style thing?
[17:00] <bdx> I feel like it has deeper implications though
[17:27] <bdx> admcleod, cory_fu: I want to revert to the p.puppet_active() syntax for consistency if it makes no difference to you
[17:33] <rick_h_> cargonza: do you have the release schedule for OS handy?
[17:34] <bac> jujucharms.com is down and the issue is being investigated.  This will affect new deploys for charms from the charmstore.
[17:38] <cargonza> rick_h_ Ubuntu OS release?
[17:38] <rick_h_> cargonza: charm release schedule
[17:39] <rick_h_> cargonza: e.g. what's the target for the 16.07 release?
[17:40] <cargonza> 16.07 charms is end of July - 7/22 from our last discussions
[17:40] <rick_h_> k
[18:06] <petevg> I've got a testing question: in juju 2.0 beta8, I'm still getting an error about environments.yaml being missing when I run "juju test". Is there a different method that I can use to run amulet tests right now? (I can unpack everything in the wheelhouse into a virtualenv, and just run the tests manually, or with nose, but that gets annoyingly clunky fairly quickly ...)
[18:12] <bac> jujucharms.com is back
[18:12] <magicaltrout> \o/
[18:15] <lazyPower> petevg - we in ~containers tend to use tip of the testing tooling. We're using python-jujuclient and juju-deployer from tvansteenburgh's ppa, and we're using bundletester to execute the tests
[18:16] <lazyPower> petevg - fyi, this is also included in charmbox:devel if you're into that sort of thing. You should be able to pick up charmbox, point it at a cloud, cd into the charms dir and then kick off bundletester
[18:18] <petevg> lazyPower: got it. Thank you.
[18:21] <skay> squeeee I got an easter egg from a typo. :D
[18:21] <skay> it's gone in the new version though
[18:21] <skay> I'm gonna typo everything now
[18:25] <mramm> skay:  :)
[19:20] <ejat> hi
[19:20] <ejat> ERROR autorest:WithErrorUnlessStatusCode POST <-- im getting this while trying to connect to azure
[19:20] <stokachu> cherylj: ^ seen this before?
[19:21] <ejat> /oauth2/token?api-version=1.0 failed with 400 Bad Request
[19:22] <ejat> ERROR autorest:WithErrorUnlessStatusCode POST https://login.microsoftonline.com/fb30bf07-xxxx-xxxx-xxxx-02ef08680fb9/oauth2/token?api-version=1.0 failed with 400 Bad Request
[19:23] <cherylj> no, I've never seen that before.  Is that juju 2.0?
[19:23] <stokachu> yea beta8
[19:23] <ejat> PS C:\Users\Lenovo> juju --version
[19:23] <ejat> 2.0-beta8-win10-i386
[19:26] <cherylj> ejat: are you getting this during bootstrap?
[19:26] <ejat> cherylj: yups
[19:27] <cherylj> ejat: can you run bootstrap with --debug and pastebin the output?
[19:29] <ejat> cherylj: http://paste.ubuntu.com/17098969/
[19:36] <cherylj> ejat: okay, let me poke around a bit, see what I can figure out
[19:37] <ejat> im checking my subscription id .. and authentication with azure cli
[19:42]  * ejat done .. double checked my subscription id and tenant id
[19:53] <ejat> its working fine if i tried to bootstrap in aws
[21:17] <ejat> cherylj: managed to pokes someone?
[21:19] <cherylj> ejat: I haven't gotten much time with it yet, but one of the more knowledgeable folks wrt azure is going to be coming online soon
[21:19] <cherylj> that would be axw
[21:19] <ejat> okie cherylj thanks so much
[21:50] <cherylj> ejat: are you using a user/pass authentication for azure?
[21:50] <ejat> auth-type : userpass
[21:51] <cherylj> ok
[22:17] <cherylj> ejat: can I ask you to open a bug for your issue?  It'll help me pass it on to someone
[22:17] <ejat> under juju @ juju-core
[22:18] <cherylj> yeah, here:  https://bugs.launchpad.net/juju-core
[22:24] <ejat> https://bugs.launchpad.net/juju-core/+bug/1590172
[22:24] <mup> Bug #1590172: ERROR cmd supercommand.go:448 autorest:WithErrorUnlessStatusCode POST https://login.microsoftonline.com/fb30bf07-xxxx-xxxx-xxxx-02ef08680fb9/oauth2/token?api-version=1.0 failed with 400 Bad Request <juju-core:New> <https://launchpad.net/bugs/1590172>
[22:52] <bryan_att> gnuoy: ping