[06:34] <kklimonda> can I define (and later reference) variables in bundles?
[06:34] <kklimonda> I have to pass the same gpg key to bunch of services
[07:39] <anrah> that should be a config-option on some layer / charms
[07:40] <anrah> If I understood your question correctly
[07:41] <anrah> for example on layer-apt there is config option install_keys which can be set on bundle-file
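Since bundles are plain YAML, a YAML anchor also gives the "define once, reference later" behaviour kklimonda asked about. A hedged sketch — charm names and the key are placeholders, and layer-apt's install_keys option takes a YAML-formatted list of key blocks:

```yaml
# Hypothetical bundle fragment: declare the key once with a YAML anchor
# (&gpg_key) and reference it (*gpg_key) from every service that needs it.
applications:
  service-a:
    charm: cs:~me/service-a        # placeholder charm
    options:
      install_keys: &gpg_key |
        - |
          -----BEGIN PGP PUBLIC KEY BLOCK-----
          ...
          -----END PGP PUBLIC KEY BLOCK-----
  service-b:
    charm: cs:~me/service-b        # placeholder charm
    options:
      install_keys: *gpg_key
```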
[08:15] <kjackal> Good morning Juju world!
[11:42] <SimonKLB> anyone know if it's possible to run actions in the virtualenv created by the basic layer?
[11:42] <SimonKLB> or perhaps in its own virtual environment and then with its own wheelhouse
[13:25] <kjackal> SimonKLB: Have a look here: https://github.com/juju-solutions/layer-cwr/blob/master/actions/build-on-commit
[13:26] <kjackal> You need to call activate_env found in the basic layer
[13:29] <SimonKLB> kjackal: perfect! thanks
[13:44] <SimonKLB> kjackal: do you know how to preserve, for example, sys.argv when activating the venv?
[13:46] <kjackal> SimonKLB: I do not know that. For sure you can work around this by storing information in a file.
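kjackal's file-based workaround can be sketched like this (the helper names and file path are hypothetical; the idea is to stash argv before layer-basic's activate_env() re-execs the script under the venv interpreter, then read it back afterwards):

```python
import json
import os
import sys
import tempfile

# Hypothetical scratch file for carrying argv across the re-exec.
ARGS_FILE = os.path.join(tempfile.gettempdir(), 'action-argv.json')

def save_argv():
    """Write the current sys.argv to disk before activating the venv."""
    with open(ARGS_FILE, 'w') as f:
        json.dump(sys.argv, f)

def load_argv():
    """Recover the argv that was recorded before the re-exec."""
    with open(ARGS_FILE) as f:
        return json.load(f)
```

Call save_argv() at the top of the action script, then load_argv() after activate_env() has restarted it.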
[13:48] <jacekn_> hello. Any idea when juju 2.1 might be out? Is it days/weeks/months away?
[14:06] <lazyPower> Ankammarao_ hello. I dont respond to private messages :)
[14:06] <lazyPower> Ankammarao_ how may I assist you?
[14:06] <Ankammarao_> i am getting error while publishing the promulgated charm
[14:07] <SimonKLB> kjackal: https://github.com/juju-solutions/layer-basic/pull/90
[14:07] <SimonKLB> i think that does it
[14:07] <Ankammarao_> getting error like "denied access for the user"
[14:08] <Ankammarao_> lazypower: but i am able to push to other channels like edge,beta
[14:09] <lazyPower> Ankammarao_ - You can push to any channel that is not stable when your charm is promulgated. Promulgation means you must submit your charm for review (http://review.jujucharms.com) and a ~charmer can promote your edge,beta channel release to stable
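Roughly, the flow lazyPower describes — user and charm names are placeholders, and the revision number printed by charm push is illustrative:

```shell
# You can push and release to non-stable channels yourself:
charm push . cs:~myuser/mycharm              # prints the new revision, e.g. mycharm-12
charm release cs:~myuser/mycharm-12 --channel edge

# For a promulgated charm, stable is gated on review: submit at
# http://review.jujucharms.com and a ~charmer promotes your
# edge/beta release to stable.
```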
[14:15] <Ankammarao_> lazypower : do i need to mention the charm is promulgated when submit for a review
[14:17] <kjackal> Ankammarao_: you do not have to but please do
[14:18] <lazyPower> ^
[14:18] <Ankammarao_> kjackal, lazypower : ok, thank you
[14:31] <lazyPower> np
[14:33] <Zic> lazyPower: HELP... oh, sorry, all is working actually :)
[14:33] <Zic> (hello :p)
[14:33] <lazyPower> \o/ YESSSSS
[14:33] <lazyPower> ITS HAPPENINGGGG
[14:33] <lazyPower> Zic :) I have new goodies for you
[14:34] <Zic> Juju tshirts? :p
[14:41] <lazyPower> nah thats something jcastro does
[14:41] <lazyPower> i have a new etcd charm
[14:41] <lazyPower> with new etcd snap stuff
[14:41] <lazyPower> well its a new old charm :)
[14:45] <lazyPower> Zic https://github.com/juju-solutions/layer-etcd/pull/77
[14:50] <catbus1> stokachu: Hi, conjure-up imports the ubuntu .root.tar.gz image for novakvm, shouldn't it be the ubuntu -disk1.img? or will the lxd image work on kvm as well?
[14:50] <stokachu> catbus1, hmm i dont think so
[14:51] <stokachu> lemme look at the spells
[14:52] <stokachu> catbus1, xenial-server-cloudimg-amd64-disk1.img is the correct one right?
[14:53] <stokachu> catbus1, for now you can do juju config nova-compute virt-type=lxd
[14:53] <stokachu> catbus1, ill fix the spell so it imports kvm though
[14:54] <Zic> lazyPower: oh nice, so my apt pinning is living its last days :)
[14:54] <lazyPower> Zic - thats the idea. channels + snaps will make it a fantastic experience
[14:54] <catbus1> stokachu: I believe so, that's what the configuration example shows in the openstack-base charm.
[14:54] <stokachu> catbus1, ok ill fix that in the spell
[14:55] <lazyPower> Zic - if you feel brave, build from that branch, deploy the current etcd and upgrade to that assembled charm (juju upgrade-charm etcd --path=./etcd)  and you'll see "the magic" at work.
[14:55] <stokachu> catbus1, once my tests complete ill push a new version to the --edge channel for you to test
[14:56] <catbus1> stokachu: I can launch lxd instances fine, though. what's the difference between lxd image and -disk1.img?
[14:56] <stokachu> catbus1, nova-compute supports both so depending on if you want KVM or LXD as your compute nodes
[14:57] <catbus1> stokachu: no, I mean, it imports the lxd image and I can launch instances with that image on the novakvm machines. horizon shows they were launched and running.
[14:58] <stokachu> catbus1, and you can access them via ssh?
[14:58] <catbus1> stokachu: I couldn't, but I haven't worked out whether that's a network connectivity issue or not.
[14:59] <stokachu> catbus1, yea i think horizon may be showing you incorrect info
[14:59] <stokachu> but yea lemme know if you figure out if its network or not
[14:59] <catbus1> ok.
[14:59] <stokachu> that would be an interesting test to add
[14:59] <stokachu> afaik the default compute node type is kvm
[14:59] <stokachu> so it shouldnt be able to run lxd containers
[15:00] <catbus1> ok.
[15:00] <catbus1> in the mean time I can import the kvm images manually.
[15:00] <stokachu> catbus1, ok cool, do you need help with that or are you good?
[15:00] <catbus1> I am good.
[15:01] <stokachu> catbus1, ok cool, ping me with what you find out, i gotta step away for a few hours
[15:01] <catbus1> ok
[15:16] <Zic> it's not really on-topic but if you have any starting points: do you have any experience running a DBMS (MySQL, MariaDB, PostgreSQL) inside a K8s cluster without Vitess?
[15:17] <Zic> I'm looking at this currently: https://github.com/bitnami/charts/tree/master/incubator/mariadb-cluster
[15:17] <Zic> (but as you see, it's in incubator for now)
[15:18] <lazyPower> Zic - i myself tend to use lxd for my stateful workloads
[15:19] <lazyPower> however, if you're looking to run a chart in prod, as you've noticed, they are in incubation. I don't think any databases other than crate have really been identified as a production ready process-container database. Thats not to say you cant run PG containers in prod... just that they haven't actively promoted it as an option.
[15:21] <Zic> yeah, the only solution I saw is Vitess, but in the end we're giving up on that solution
[15:21] <Zic> the pdo-grpc PHP extension for Vitess's client isn't mature enough for production code
[15:22] <Zic> so we're falling back to the old php-mysqli connector
[15:22] <Zic> (yeah, my customer is using PHP for his backend :p)
[15:26] <Zic> I don't think YouTube is using PHP so I guess it's normal after all
[15:28] <tvansteenburgh> i need a reality check. anyone know if lxd-in-lxd is *supposed* to work?
[15:29] <tvansteenburgh> for example, deploy a machine that's a lxd container, then deploy a lxd container to that machine, then deploy a charm to that container
[15:30] <tvansteenburgh> marcoceppi, lazyPower ^
[15:31] <lazyPower> tvansteenburgh - i'm pretty sure it's going to give you trouble if it's expected to be a privileged container
[15:32] <lazyPower> tvansteenburgh - however we did find that as of the 2.1 update nested lxd seems to have changed behavior. i dont think it works any longer, no.
[15:32] <tvansteenburgh> lazyPower: :(
[15:32] <tvansteenburgh> lazyPower: thanks for the info
[16:03] <SimonKLB> is it possible to list all actions that have been run on a unit?
[16:03] <SimonKLB> all action ids that is
[16:05] <SimonKLB> for example if i run an action using amulet, how can i see the result later on if its not logged in the test
[16:08] <tvansteenburgh> SimonKLB: the action uuid is returned from Deployment.action_do(...)
[16:09] <tvansteenburgh> or UnitSentry.run_action(...)
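A minimal sketch of that, assuming a bootstrapped environment — action_do and run_action are confirmed above, while the charm, the action name, and the action_fetch call are assumptions:

```python
import amulet

d = amulet.Deployment(series='xenial')
d.add('mycharm')                 # placeholder charm with a 'do-thing' action
d.setup(timeout=900)

# action_do returns the action uuid -- print/store it, because amulet
# won't give you a list of past uuids later.
uuid = d.action_do('mycharm/0', 'do-thing')
print(uuid)
result = d.action_fetch(uuid)    # assumed fetch-by-uuid helper
```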
[16:09] <SimonKLB> tvansteenburgh: yea but there is no way to get hold of action ids that have been run before if you dont have any test logs where you printed the ids?
[16:10] <SimonKLB> for example if i want more historical data from actions that have been run the past week or such
[16:14] <tvansteenburgh> SimonKLB: yeah with amulet that'll be difficult. what's the use case?
[16:15] <SimonKLB> tvansteenburgh: right now i just wanted to see the result from an action while debugging my code and i didnt print the action id in the test
[16:15] <SimonKLB> so i just got curious if the action ids and/or results were stored anywhere so that they could be retrieved at a later time
[16:17] <tvansteenburgh> SimonKLB: they're not. we could add something that shells out to `juju show-action-status` or something i guess
[16:19] <SimonKLB> im not speaking about amulet specifically, but rather actions as a whole, i assume they are stored somewhere since you're able to fetch the results at a later time as long as you have the IDs
[16:20] <SimonKLB> it would be nice to just have a command that could list all actions that have been run on a unit, like `juju action-ids [unit]`
[16:20] <tvansteenburgh> juju show-action-status
[16:20] <SimonKLB> ah :D
[16:20] <SimonKLB> nvm then!
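For the record, the two commands that cover this, run against the current model (the action id is whatever show-action-status prints):

```shell
juju show-action-status          # lists actions that have run, with ids and status
juju show-action-output <id>     # fetches the stored result for one action
```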
[16:22] <SimonKLB> tvansteenburgh: btw, have you seen this ? https://github.com/juju/amulet/issues/171
[16:23] <SimonKLB> it might be another miss on my part, but from the looks of it there doesnt seem to be a way for a test to both keep the model but also upgrade the charm?
[16:25] <SimonKLB> now the options are, do it all from scratch or run the test on the same charms as before, right?
[16:27] <marcoceppi> tvansteenburgh: it should work, but you have to have the right profiles, iirc
[16:27] <tvansteenburgh> marcoceppi: where can i find info on that?
[16:28] <marcoceppi> tvansteenburgh: idk, I've been trying to get my lxd containers set up to support it and #lxcontainers is pretty void
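The profile knob marcoceppi refers to is presumably LXD's security.nesting key — a sketch, with the container name as a placeholder:

```shell
# Allow an existing container to host its own LXD containers:
lxc config set juju-machine-0 security.nesting true

# Or enable it in the profile that new containers inherit:
lxc profile set default security.nesting true
```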
[16:29] <tvansteenburgh> marcoceppi: k
[16:29] <tvansteenburgh> SimonKLB: gimme a few, otp
[16:30] <tvansteenburgh> SimonKLB: i think you are right though, there's no good way to do that right now
[16:31] <tvansteenburgh> SimonKLB: i saw your issue when you submitted it, but i haven't had time to do anything with it
[16:31] <SimonKLB> tvansteenburgh: okok! it would be nice to have it when youre doing CI on charms - it's not a biggie having to reset, but the tests would run a lot faster if you could just keep it and upgrade the charms that have changed
[16:32] <tvansteenburgh> SimonKLB: yeah i hear ya
[16:38] <cory_fu> kjackal: I updated https://github.com/juju-solutions/layer-cwr/pull/92 so the only remaining issue should be the lxd storage
[16:38] <kjackal> cory_fu: I guess so, I havent had a full test yet
[16:39] <kjackal> cory_fu: should I try it tomorrow?
[16:39] <cory_fu> kjackal: Yeah.  Though if kwmonroe or petevg want to help out on reviewing it, I'd like to get it in so you guys can stop giving me merge conflicts.  ;)
[16:40] <kjackal> :) sounds reasonable!
[16:40] <kwmonroe> i was gonna change the indentation on all the xml jobs real quick.  i'll look at cory_fu's PR after that.
[16:40] <cory_fu> lol
[18:37] <zeestrat> Hey juju folks, what's the status of the promulgated nagios charm and its maintenance? Would love to help on https://bugs.launchpad.net/nagios-charm/+bug/1605733 as it's breaking for us (and others), but it's unclear who's involved in the nagios-charmers and how it's maintained.
[18:37] <mup> Bug #1605733: Nagios charm does not add default host checks to nagios <canonical-bootstack> <family> <nagios> <nrpe> <unknown> <Nagios Charm:New> <nagios (Juju Charms Collection):Won't Fix> <https://launchpad.net/bugs/1605733>
[18:39] <stormmore> o/ Juju world
[18:43] <kjackal> hi zeestrat, the metadata.yaml has the maintainers. In addition to contacting them I would suggest you send an email to the juju list stating your intention. The discussions here are a bit ... ephemeral
[18:43] <erlon> kwmonroe: hey, have a few seconds?
[18:46] <kwmonroe> sorry erlon, i'm out the door to go pick up my kid for the next few (like 30) minutes.  ask away though and i'll see the message when i get back..
[18:47] <erlon> kwmonroe: hmm, sure ill leave a question around
[19:04] <erlon> all: kwmonroe: guys, I have deployed the openstack-base bundle (https://api.jujucharms.com/charmstore/v5/openstack-base/archive/bundle.yaml) and some services seem to be very broken (some services don't even have rabbit or mysql configured properly), is there a problem with the charm?
[19:08] <kwmonroe> beisner: icey ^^.  erlon, can you link to a 'juju status' paste?
[19:10] <erlon> https://www.irccloud.com/pastebin/o2dkfrUN/
[19:12] <erlon> this is neutron.conf in neutron-gateway https://www.irccloud.com/pastebin/av0fuRhU/neutron.conf
[19:14] <erlon> This is in nova-cloud-controller https://www.irccloud.com/pastebin/aYIiSWsH/nova.conf
[19:15] <erlon> *one is mitaka, the other kilo*
[19:17] <erlon> just for reference a service that seems to be working fine https://www.irccloud.com/pastebin/rWRTvf15/cinder.conf
[19:27] <beisner> hi kwmonroe, erlon - i'd need to see the juju unit logs from the failed units neutron-gateway/0 and nova-cloud-controller/0 in order to advise;  the bundle is known-good, but certain customizations may be needed depending on the metal that it's laid down on.
[19:28] <erlon> beisner: those are kvm VMs, the only thing I changed in the charms was 1 - the name of the network interfaces from eth1 to ens1, 2 - the name of the disk from sdb to vdb
[19:29] <erlon> beisner: it's a flat topology, 5 machines connected to an admin network, and another interface (virtual but bridged to pub) providing the public interface
[19:30] <erlon> beisner: how/where do I get that?
[19:34] <beisner> erlon, /var/log/juju/unit-* from those two machines/vms
[19:46] <erlon> beisner: didn't find a way to post the full logs, but here is the error
[19:47] <erlon> https://www.irccloud.com/pastebin/pPfMlade/unit-neutron-gateway-0.log%20
[19:50] <erlon> beisner: it seems it's the network configuration, but now how do I tell juju to pass ens4 through to the container?
[19:56] <erlon> beisner: some services it deploys in containers, others in the host VMs, and for each of those the neutron interface will be different
[20:04] <erlon> https://www.irccloud.com/pastebin/7QgsM3tg/unit-nova-cloud-controller-0.log
[20:16] <erlon> beisner: may be the nova error is related to the neutron: 2017-02-20 21:53:00 ERROR juju.worker.dependency engine.go:547 "metric-collect" manifold worker returned unexpected error: failed to read charm from: /var/lib/juju/agents/unit-neutron-gateway-0/charm: stat /var/lib/juju/agents/unit-neutron-gateway-0/charm: no such file or directory
[20:16] <erlon> beisner: this is the first error in unit-nova-cloud-controller-0.log
[20:16] <erlon> beisner: and then after some time those one above
[20:20] <beisner> erlon, neutron-gateway shouldn't be in a container with that bundle.
[20:23] <beisner> erlon, the other failure (cloud-compute-relation-changed answers = dns.resolver.query(address, rtype)) indicates that the machines in your lab can't reverse-resolve themselves/peers.
[20:24] <erlon> beisner: that reverse resolving should be done by the DNS server in MAAS, shouldn't it?
[20:26] <erlon> beisner: hmmm got the neutron gateway error, it is running in a vm with only 1 interface
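A quick way to test beisner's reverse-DNS theory from each machine — the address is a placeholder; on a healthy MAAS deployment the lookup should print a hostname:

```shell
dig -x 10.0.0.12 +short
```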
[21:18] <lazyPower> stokachu o/
[21:18] <lazyPower> er
[21:19] <lazyPower> stormmore \o
[21:19] <stokachu> lazyPower, \o/
[21:19] <lazyPower> haha
[21:19] <lazyPower> eyyyyy
[21:20] <lazyPower> stokachu - incoming fun here https://github.com/juju-solutions/layer-etcd/pull/77
[21:21] <stokachu> lazyPower, hah can't wait for that
[21:21] <lazyPower> if you're feeling froggy, i still need to port the actions but i think that's literally it
[21:21] <lazyPower> then its all snaps all the time
[21:21] <lazyPower> i have verified it works on lxd :D
[21:22] <marcoceppi> lazyPower: make sure you install squashfuse <3
[21:22] <lazyPower> marcoceppi - LAYER-SNAP ALREADY DOES THISSSSS
[21:22] <lazyPower> YASSSSSSS
[21:22] <marcoceppi> YAASSSSSS QUEEENNN
[21:22] <lazyPower> ikr
[21:22] <stokachu> lazyPower, made a few comments
[21:22] <marcoceppi> oh crap, this isn't #eco
[21:22] <lazyPower> stokachu ta
[21:22] <lazyPower> appreciate the wip review
[21:22] <lazyPower> it'll only help when i go to actually land this
[21:23] <stokachu> :D
[21:23] <lazyPower> anywho i need to jet, catch a flight in little over an hour
[21:23] <lazyPower> cheers
[21:23] <stokachu> safe flight
[21:59] <cory_fu> petevg, kwmonroe: All of the issues that kjackal hit in https://github.com/juju-solutions/layer-cwr/pull/92 are resolved, if you guys were up for giving it a review
[22:01] <kwmonroe> excellent cory_fu.. is this deployed on j.d-i.n?  looks like cwr-52 maybe?
[22:01] <cory_fu> kwmonroe: Yep.  Port 8081 for Jenkins.
[22:01] <kwmonroe> cool, i'll check it out
[22:09] <xavpaice> anyone in here working on python-libjuju?
[22:11]  * xavpaice is trying to ppa build a package that needs it, but it's not yet packaged - wondering who to talk to so I can help out with that
[22:33] <cory_fu> xavpaice: Hey, I've done some work on python-libjuju, as has tvansteenburgh and petevg.  By package you mean a Debian package?
[22:35] <petevg> cory_fu, xavpaice: I believe that tvansteenburgh was working on adding it to pypi.python.org, but there's a naming conflict (it's called juju, and so is an older Python lib).
[22:35] <tvansteenburgh> it's there
[22:35] <cory_fu> petevg: That's been resolved.  Pypi is correct now
[22:35] <petevg> Awesome! I'm behind the times, apparently.
[22:35] <cory_fu> :)
[22:35] <xavpaice> yeah, it's on pypi and can be grabbed via pip, but I want to put something in a PPA that needs it
[22:35] <xavpaice> so would prefer to get a deb
[22:36] <xavpaice> I'm working on putting a source package into a ppa now, just don't want to duplicate someone else's work
[22:36] <tvansteenburgh> xavpaice: no one else has done it that i know of
[22:36] <cory_fu> xavpaice: I don't think a deb is being worked on, and I personally have no experience packaging debs but am happy to help out in any way that I can
[22:37] <xavpaice> awesome - thanks
[22:37] <tvansteenburgh> cory_fu, petevg: while we're all standing around the water cooler... https://github.com/juju/python-libjuju/pull/56
[22:46] <xavpaice> xenial package: https://launchpad.net/~canonical-bootstack/+archive/ubuntu/bootstack-ops/+packages
[22:52] <petevg> tvansteenburgh: left comments.
[22:53] <tvansteenburgh> petevg: ty
[22:53] <petevg> np
[22:58] <cory_fu> tvansteenburgh: I got an error running the add_machines.py: http://pastebin.ubuntu.com/24043000/  I think it's a timing issue but the example probably needs to watch for the container to be fully removed
[23:18] <tvansteenburgh> cory_fu: interesting, thanks. i didn't hit that
[23:46] <kwmonroe> cory_fu: containerization looks great.  i'd really like to get multiple cwr jobs fired off at once.  any idea how to do that with 1 jenkins slave?
[23:46] <cory_fu> kwmonroe: You just have to increase the number of executors, under Manage Jenkins
[23:46] <cory_fu> I'm not sure how we'd do that in the charm.
[23:47] <kwmonroe> neat!  what's a good setting?  like 1000?
[23:47] <kwmonroe> i'll try 2 for now.
[23:47] <cory_fu> heh
[23:47] <cory_fu> That should probably be a config option on the Jenkins charm, I think
[23:48] <kwmonroe> HOLY CRAP cory_fu.  it's working.  it's really working!
[23:48] <cory_fu> :)