[03:14] <jose> marcoceppi: hey, do you know if anyone submitted a talk for txlf? CFP closes in less than 2h and I wanna make sure we have a juju talk
[06:22] <stub> lutostag: https://bugs.launchpad.net/postgresql-charm is probably best, but I should see a report just about anywhere you can make one :)
[06:26] <stub> lutostag: Huh. I assumed there might be some minor glitches, but I certainly didn't assume that one :-/
[07:58] <simonklb> after upgrading to 2.0-beta6 my local charm has started complaining about hook missing - I let them be auto-generated by the reactive template and that worked fine before
[07:58] <simonklb> anyone know what changed and what needs to be done for it to find the hooks again?
[07:59] <simonklb> I should add that it seems that it finds the hooks if I create them manually, but then I get "ImportError: No module named 'charms'"
[07:59] <simonklb> So it's like it looks in the root folder of the charm now instead of in ./trusty or whatever
[08:33] <hoenir> simonklb, you tried to debug the hook?
[08:36] <simonklb> hoenir: is it possible to manually execute hooks?
[08:47] <hoenir> I don't think so
[08:48] <hoenir> you tried to read the docs about hooks?
[08:49] <simonklb> hoenir: yea, I mean, it hasn't been an issue for the last couple of weeks
[08:49] <simonklb> only now, after updating to a newer version it's starting to act up
[09:17] <wesleymason> simonklb: you can use juju run to manually execute a hook, e.g. juju run --unit {unit_name} 'hooks/hook-name'
[09:19] <wesleymason> or as hoenir said use debug-hooks and run 'hooks/hook-name' inside the tmux session
[09:50] <simonklb> wesleymason: /tmp/juju-exec390172100/script.sh: line 1: hooks/install: No such file or directory
[09:51] <wesleymason> simonklb: that definitely looks like the PWD for hooks has changed :-/
[09:51] <simonklb> yup
[09:51] <simonklb> is there a changelog somewhere?
[09:51]  * wesleymason is still on 1.25.x for current infra so haven't tested 2.0 hooks yet
[09:51] <simonklb> right right
[09:53] <wesleymason> simonklb: https://jujucharms.com/docs/devel/temp-release-notes
[09:53] <wesleymason> not really a proper changelog though
[09:56] <simonklb> thanks, I'll read it and see if I can find something about the hooks working directory
[09:59] <RAJITH> Hello
[10:02] <RAJITH> I am working on a layered charm; a simple charm is stuck at WORKLOAD-STATE: maintenance, AGENT-STATE: idle, with the message "Updating apt cache". Please let me know what the issue could be
[11:29] <stub> RAJITH: If it hangs there, your units likely have a networking or apt issue, like an inaccessible or unavailable apt proxy. It's trying to run 'sudo apt update', and it is failing.
[11:29] <stub> RAJITH: I'd connect to the unit, kill the hook, and try running 'sudo apt update' yourself and debugging from there.
[11:34] <RAJITH> sure will try that
[11:57] <RAJITH> if I try to add a new machine, agent-state goes to pending
[11:59] <RAJITH> have tried these steps: 1. Destroy the juju environment, if running (use the --force option if destroy fails); uninstall juju; reboot the system using the reboot command. 2. Ensure the bind9 DNS process is stopped, if running: ps -ef | grep named; if named is running: service bind9 stop (to do: ensure that named does not start on reboot, maybe uninstall it). 3. Reinstall juju. 4. Run the following command: ifconfig | grep lxcbr0. If lxcbr0
[12:00] <RAJITH> lxc-net; service start lxc-net. Run ifconfig | grep lxcbr0. If lxcbr0 not present, do: brctl addbr lxcbr0; ifconfig lxcbr0 10.0.3.1 netmask 255.255.255.0 up. 5. Run the following command: sudo iptables-save | grep lxcbr0. The output should be as follows: -A POSTROUTING -o lxcbr0 -p udp -m udp --dport 68 -j CHECKSUM --checksum-fill; -A INPUT -i lxcbr0 -p tcp -m tcp --dport
[13:12] <simonklb> hoenir: wesleymason: I wasn't able to find the cause but works if you deploy from the build folder instead of the root
[13:13] <simonklb> my guess is that they changed something in juju-deployer
[13:15] <wesleymason> simonklb: well juju-deployer definitely hasn't changed, it's been in maintenance mode for a while...but I think local repo got removed, so it would need to be a full path to the charm itself, not the "root" dir, so perhaps before it was deploying $ROOT and not $ROOT/$SERIES/$CHARM
[13:16] <simonklb> something like that
[14:17] <gnuoy> wolsen, I assume you have no major objection to https://github.com/openstack-charmers/charm-interface-hacluster/pull/1 ?
[14:18] <wolsen> gnuoy: that's correct
[14:18] <tvansteenburgh> ahasenack: re juju-deployer packaging, python-bzrlib should be removed, python-six added
[14:18] <tvansteenburgh> ahasenack: (deps)
[14:20] <ahasenack> tvansteenburgh: ok
[14:20] <gnuoy> wolsen, do you have the power to hit merge?
[14:20] <ahasenack> tvansteenburgh: and tox added
[14:20] <ahasenack> and other stuff
[14:20] <tvansteenburgh> well not really, make test will do that
[14:21] <wolsen> gnuoy: it doesn't appear that I do
[14:21] <gnuoy> wolsen, well that's not right. thanks for looking
[14:22] <wolsen> gnuoy: sure, thanks for looking at that finally
[14:22] <gnuoy> wolsen, np, thanks for doing all that work
[14:23] <gnuoy> jamespage, can you add thedac, dosaboy and wolsen to the openstack-charmers on github please? (Or enable me to)
[14:24] <jamespage> gnuoy, done (enabled you)
[14:24] <gnuoy> thanks
[14:24] <SaltySolomon> hi
[14:26] <gnuoy> wolsen, with any luck you've got an invite to join
[14:26] <wolsen> gnuoy: awesome, thanks
[14:27] <gnuoy> wolsen, np ... any chance of hitting the button on my pull request if you have a sec?
[14:27] <wolsen> gnuoy: I'm logging back in now
[14:27] <gnuoy> \o/
[14:27] <gnuoy> thanks
[14:28] <wolsen> gnuoy: "only those with write permissions"
[14:28] <gnuoy> argh, let me look
[14:30] <wolsen> gnuoy: could be on my end too - let me look
[14:30] <wolsen> gnuoy: it's my mistake
[14:30] <gnuoy> ah, tip top
[14:30] <wolsen> gnuoy: done
[14:30] <gnuoy> thanks
[14:37] <gnuoy> wolsen, tinwood I've created a starter for 10 for enabling management of HA resources in the Openstack layer https://github.com/openstack-charmers/charm-layer-openstack/pull/4/files . Please feel free to comment on it if you have any thoughts/objections
[14:37] <gnuoy> tinwood (I know the importing is not standards compliant and I'll fix that)
[14:38] <tinwood> gnuoy, thanks for the heads-up; I'm wrangling barbican as we speak.
[14:56] <wolsen> gnuoy: will do - it'll likely be later this week - I'm in Austin this week
[15:54] <beisner> thedac, here's the sentry unit foo as an ex: https://review.openstack.org/#/c/314773/3..4/tests/basic_deployment.py
[15:56] <icey> any estimate of when charmhelpers tip will get onto pip?
[16:05] <thedac> beisner: thanks, I'll start consuming that now.
[16:19] <thedac> beisner: gnuoy I added ODL to the mojo specs https://code.launchpad.net/~thedac/openstack-mojo-specs/odl/+merge/294311
[16:19] <gnuoy> thedac, excellent, thanks
[16:20] <cory_fu> kjackal_: Hey, are we dropping the separate execution_mode config option for the Big Top Spark charm and moving to auto-switching to yarn mode based on hadoop.yarn.ready?
[16:21] <cory_fu> (Also, if we keep the config, can we shorten the name a bit?  spark_execution_mode seems unnecessarily long, and somewhat redundant.)
[16:47] <cory_fu> kjackal_: When you have a chance, can you take a look at https://github.com/juju-solutions/layer-apache-bigtop-base/pull/2  Those are some of the refactors I had played around with when looking at your Spark charm.  I'm thinking it should let you clean up some of the code  in that charm layer to something along these lines: http://pastebin.ubuntu.com/16365652/
[16:47] <cory_fu> Anyway, that's all completely untested of course, and was just what I had from the time I spent looking at it yesterday.  Let me know what you think
[17:19] <tvansteenburgh> icey: that has historically been by request. i just took a quick look and there are a number of failing tests.
[17:20] <icey> tvansteenburgh: ok, that's unfortunate; there's stuff that we've been using in the openstack charms for a while that aren't in the pip version, making a migration to layered charms more difficult -_-
[17:22] <tvansteenburgh> icey: i would fix them myself but i just don't have time right now
[17:22] <icey> tvansteenburgh: same boat -_-
[17:34] <aisrael> Any ideas on a way to force destroy a controller in juju 2 beta 6? There's no more --force flag
[17:35] <rye_> How can I remove a service that has a unit in an error state in juju2?
[17:35] <tvansteenburgh> aisrael: do you really want to destroy the controller, or just the model?
[17:36] <aisrael> tvansteenburgh: Either at this point. I think I've found a bug that's preventing destroy-controller, destroy-model, and kill-controller from working
[17:36] <tvansteenburgh> rye_: you need to `juju resolve` the unit first
[17:37] <tvansteenburgh> aisrael: juju destroy-model -y default # or whatever your model name is <- that doesn't work for you?
[17:39] <aisrael> tvansteenburgh: Nope, the model is stuck in destroying
[17:39] <rye_> tvansteenburgh: thanks!
[17:40] <tvansteenburgh> aisrael: well if you just want to keep working, bootstrap a new controller :D
[17:40] <aisrael> tvansteenburgh: Good point. That'll do while I file a bug
[17:40] <rye_> Hmm, upon resolution, it continues onto fail another hook (not unexpected). Is there a way to halt that process so that I can remove it?
[17:41] <tvansteenburgh> rye_: you just have to keep resolving until it finishes what's queued
[17:42] <rye_> tvansteenburgh: fair enough, thanks again
[17:45] <icey> is it possible to have a function execute _after_ every hook run with reactive?
[17:46] <icey> specifically, with reactive+layers
[17:48] <tvansteenburgh> icey: hookenv.atexit
[17:49] <icey> tvansteenburgh: would I add an empty @when to get that to register on each invocation?
[17:49] <icey> also, thanks tvansteenburgh!
[17:50] <tvansteenburgh> icey: i think you could just do it in module scope
[17:51] <icey> got it, the leadership layer has a nice example of atstart :)
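The pattern discussed above is `charmhelpers.core.hookenv.atexit`, which queues callbacks at module scope and fires them once the current hook body finishes. A self-contained sketch of that registry mechanism (simulated here rather than importing charmhelpers, so the flow is visible; in a real reactive charm you would call `hookenv.atexit(...)` directly):

```python
# Standalone simulation of the hookenv.atexit callback registry.
# In a reactive charm this would be:
#     from charmhelpers.core import hookenv
#     hookenv.atexit(my_cleanup)   # at module scope, runs after every hook
# charmhelpers documents that callbacks run in reverse registration order,
# which the pop() loop below mimics.

_atexit_callbacks = []

def atexit(callback, *args, **kwargs):
    """Register a callback to run after the hook body completes."""
    _atexit_callbacks.append((callback, args, kwargs))

def run_hook(hook_body):
    """Run a hook body, then drain the registry (last registered first)."""
    hook_body()
    while _atexit_callbacks:
        callback, args, kwargs = _atexit_callbacks.pop()
        callback(*args, **kwargs)

events = []
atexit(events.append, "cleanup-1")
atexit(events.append, "cleanup-2")
run_hook(lambda: events.append("hook"))
print(events)  # → ['hook', 'cleanup-2', 'cleanup-1']
```

This is why registering in module scope is enough: the handler module is imported on every hook invocation, so the registration happens each run without needing an `@when` decorator.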
[17:57] <zeus`> is there a way to modify the default image user "ubuntu" to another custom user when doing a juju bootstrap?
[18:05] <gnuoy> thedac, if you get a sec would you mind taking a look at https://github.com/openstack-charmers/charm-tempest/pull/5
[18:05] <thedac> gnuoy: sure
[18:05] <gnuoy> thanks
[19:20] <LiftedKilt> is maas 2 support merged yet?
[19:21] <LiftedKilt> I thought I saw something about that a bit ago
[19:26] <rick_h_> LiftedKilt: yes, in trunk and will be in the next beta
[19:26] <rick_h_> LiftedKilt: it comes out of feature flag in this next release
[19:26] <LiftedKilt> rick_h_: What's the release timeline on beta7?
[19:27] <rick_h_> LiftedKilt: soooooon :) hopefully next two days
[19:28] <LiftedKilt> haha ok
[19:28] <LiftedKilt> my maas 1.9 has gotten buggy and I wanted to blow it away and move to 2.0
[19:28] <LiftedKilt> gotta wait for integration though
[19:29] <rick_h_> gotcha, yea there's rough work in the beta6 behind a feature flag, but I'd wait the day/two for the next one before jumping
[19:30] <DavidRama> hello folks, trying to deploy the openstack charm but got ceph-mon lxc's in maintenance/executing status since the start (about 3 hours now) any idea why ? Running on Xenial/juju2.0
[19:31] <LiftedKilt> rick_h_: for sure - I'll just hang tight
[19:31] <bdx> DavidRama: I've got the same thing going on here
[19:31] <bdx> icey: ^
[19:32] <bdx> icey: is that a thing right now?
[19:32] <rick_h_> DavidRama: bdx so there's a corner case there for lxc, because it's going away and should be lxd?
[19:33] <bdx> rick_h_, DavidRama: my bad ... I've been experiencing that issue on lxd
[19:34] <bdx> as well as lxc
[19:34] <DavidRama> same
[19:34] <DavidRama> ceph-mon/0              maintenance     executing   2.0-beta4 3/lxc/0                10.0.3.181     Bootstrapping MON cluster
[19:35] <bdx> icey, rick_h_, can we get some <3 in that^ area please
[19:36] <icey> bdx: DavidRama is that stable or next?
[19:36] <bdx> stable
[19:36] <rick_h_> bdx: yes, there's work ongoing (one of the reasons we're still beta) to finish pulling lxc and making lxd 100% equiv.
[19:36] <bdx> cs:xenial/ceph-0
[19:37] <rick_h_> bdx: DavidRama what provider is this on?
[19:37] <bdx> rick_h_: lxd
[19:37] <rick_h_> bdx: ah, so this is the issue, you can't do nested containers in lxd by default, you have to use the 'docker' profile lxd offers
[19:37] <rick_h_> bdx: that's got a set of discussions at our sprint next week on how to handle this, as lxd doesn't enable it ootb as a security issue
[19:38] <bdx> oooh, rick_h_: I'm not nesting...
[19:38] <rick_h_> bdx: ? so on lxd you're deploying openstack? or am I misreading?
[19:38] <rick_h_> bdx: is this on maas then? with the lxd container on there?
[19:39] <bdx> rick_h_: `juju bootstrap lxd lxd-test; juju deploy cs:xenial/ceph-0`
[19:39] <rick_h_> bdx: oic, ok. So this is different than what DavidRama pasted, my bad
[19:39]  * rick_h_ is confusing the different issues
[19:39] <bdx> oh my bad
[19:40] <rick_h_> so one at a time, bdx there is love going into it
[19:40] <bdx> nice, thx
[19:40] <rick_h_> DavidRama: can you try beta6? lots of lxd work between beta4 and 6
[19:40] <rick_h_> DavidRama: and nested lxd (on the lxd provider) isn't going to work atm, still more to be done
[19:40]  * rick_h_ thinks that's the summary for the moment
[19:41] <DavidRama> i'm on local provider
[19:42] <bdx> icey, rick_h_: while I've got you both here, whats the status of cs:xenial/{ceph,ceph-osd}-0 on for the MAAS provider?
[19:42] <rick_h_> bdx: so the maas provider will be out of feature flag and available for testing in the next beta release hopefully out by EOW
[19:42] <rick_h_> bdx: so we'll know more then once the charm maintainers can validate/etc
[19:44] <bdx> rick_h_: entirely .... what are we looking at here, 2-3 weeks, or the 16.07 charm release?
[19:45] <rick_h_> bdx: in between? for final GA of 2.0 I think
[19:45] <rick_h_> bdx: the charm will be there/working with 2.0 beta in a week I'd say, but GA of 2.0 is on the long end of that 2-3 weeks but well before July
[19:48] <bdx> rick_h_: ok, great. So what you are saying is that ceph deploys will hopefully be squared away within around a week's time ... e.g. ceph, ceph-osd will deploy successfully using 16.04 in a weekish?
[19:49] <rick_h_> bdx: I'd hope so, cholcombe is there something else to watch out for besides juju2/maas2? ^
[21:30] <DavidRama> rick_h_ got the same symptom with beta6:
[21:30] <DavidRama> ID                      WORKLOAD-STATUS JUJU-STATUS VERSION   MACHINE PORTS          PUBLIC-ADDRESS MESSAGE
[21:30] <DavidRama> ceph-mon/0              maintenance     executing   2.0-beta6 2/lxc/0                10.0.3.247     Bootstrapping MON cluster
[21:37] <magicaltrout> kwmonroe:
[21:53] <magicaltrout> marcoceppi: you around?
[21:59] <cory_fu> magicaltrout: I don't think kwmonroe is on IRC, but you can ping him on Telegram (or I can ping him if you need me to)
[21:59] <magicaltrout> its alright thanks cory_fu i figured it out
[21:59] <magicaltrout> is marcoceppi on holiday or something?
[22:00] <cory_fu> I don't think so, but he might be travelling
[22:00] <magicaltrout> bleh
[22:00] <magicaltrout> fair enough
[22:03] <magicaltrout> cory_fu: actually just quickly, can i juju expose hdfs from a remote location?
[22:03] <cory_fu> What do you mean?
[22:03] <cory_fu> Do you mean, use a Juju-deployed HDFS with something outside of Juju?
[22:03] <cory_fu> Or vice-versa?
[22:03] <magicaltrout> yeah so if I have one of your bundles running
[22:04] <magicaltrout> expose HDFS so I can write from outside the juju network
[22:04] <magicaltrout> hdfs://.... uri
[22:07] <cory_fu> magicaltrout: You can, but there are two issues right now.  One is that the port is not opened by default.  That's easy enough to fix with `juju run --service namenode "open-port 8020"; juju expose namenode`
[22:09] <magicaltrout> is that one issue? or both the issues?
[22:10] <cory_fu> The other is that I'm not sure if it listens on the public interface by default.  We had a change put in to make absolutely sure it does, but that's not been released yet.
[22:10] <magicaltrout> ah right, i'm sure I can hack that around
[22:10] <magicaltrout> its only for this afternoons demo
[22:11] <cory_fu> magicaltrout: Here's the change to force it, if it doesn't by default: https://github.com/juju-solutions/jujubigdata/commit/0f0ff1bd98eb06661375c699a4172b3e2b94396b
[22:11] <magicaltrout> ta
[22:11] <cory_fu> You can just add those to the /etc/hadoop/conf/hdfs-site.xml file manually
[22:11] <cory_fu> You'll still need to do the open-port & expose thing, tho
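For reference, the manual edit cory_fu describes amounts to adding bind-host overrides to /etc/hadoop/conf/hdfs-site.xml. A sketch of the fragment, using the standard Hadoop bind-host properties; whether the linked jujubigdata commit sets exactly these properties is an assumption, so check the diff before relying on it:

```xml
<!-- Sketch: make the NameNode listen on all interfaces so external
     clients can reach hdfs://<public-address>:8020.  These are standard
     Hadoop properties (hdfs-default.xml); verify against the linked
     commit for the exact set it forces. -->
<property>
  <name>dfs.namenode.rpc-bind-host</name>
  <value>0.0.0.0</value>
</property>
<property>
  <name>dfs.namenode.servicerpc-bind-host</name>
  <value>0.0.0.0</value>
</property>
```

As noted in the chat, this only changes what the daemon binds to; the port still has to be opened and the service exposed on the Juju side.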
[22:12] <magicaltrout> yup
[22:12] <cory_fu> magicaltrout: Let me know if you need me to ping kwmonroe for you on telegram to arrange a rendezvous
[22:13] <magicaltrout> it's alright i can normally hear his texan drawl from a distance away ;)
[22:13] <cory_fu> lol, true
[22:31] <rye_> I'm trying to run bundletester, but jujuclient.py is throwing an EnvironmentNotBootstrapped exception. After digging into it a bit, it looks like I'm missing the account['password'] field here: http://bazaar.launchpad.net/~juju-deployers/python-jujuclient/trunk/view/head:/jujuclient.py#L291
[22:31] <rye_> I do have a password set up, however (as admin@local)
[22:31] <rye_> Any suggestions for what else I can check?
[23:14] <tvansteenburgh> rye_: juju1 or 2?
[23:17] <tvansteenburgh> rye_: make sure you have latest jujuclient from pypi. if you're testing on juju2, you must be bootstrapped before running bundletester, and you need to pass -e $(juju switch)
[23:18] <tvansteenburgh> rye_: also make sure you have latest bundletester from pypi
[23:18] <tvansteenburgh> rye_: if none of those things fixes your problem, pastebin the output
[23:37] <rye_> tvansteenburgh: juju2, and thanks!