=== thumper is now known as thumper-afk
=== thumper-afk is now known as thumper
[06:34] can I define (and later reference) variables in bundles?
[06:34] I have to pass the same gpg key to a bunch of services
[07:39] that should be a config option on some layer / charms
[07:40] If I understood your question correctly
[07:41] for example on layer-apt there is a config option install_keys which can be set in the bundle file
[08:15] Good morning Juju world!
=== jamespag` is now known as jamespage
[11:42] anyone know if it's possible to run actions in the virtualenv created by the basic layer?
[11:42] or perhaps in its own virtual environment and then with its own wheelhouse
[13:25] SimonKLB: Have a look here: https://github.com/juju-solutions/layer-cwr/blob/master/actions/build-on-commit
[13:26] You need to call activate_env found in the basic layer
[13:29] kjackal: perfect! thanks
=== rogpeppe1 is now known as rogpeppe
[13:44] kjackal: do you know how to preserve, for example, sys.argv when activating the venv?
[13:46] SimonKLB: I do not know that. For sure you can work around this by storing information in a file.
[13:48] hello. Any idea when juju 2.1 might be out? Is it days/weeks/months away?
=== jacekn_ is now known as jacekn
=== scuttle|afk is now known as scuttlemonkey
[14:06] Ankammarao_ hello. I dont respond to private messages :)
[14:06] Ankammarao_ how may I assist you?
[14:06] i am getting an error while publishing the promulgated charm
[14:07] kjackal: https://github.com/juju-solutions/layer-basic/pull/90
[14:07] i think that does it
[14:07] getting an error like "denied access for the user"
[14:08] lazypower: but i am able to push to other channels like edge, beta
[14:09] Ankammarao_ - You can push to any channel that is not stable when your charm is promulgated. Promulgation means you must submit your charm for review (http://review.jujucharms.com) and a ~charmer can promote your edge/beta channel release to stable
[14:15] lazypower : do i need to mention that the charm is promulgated when i submit it for review
[14:17] Ankammarao_: you do not have to but please do
[14:18] ^
[14:18] kjackal, lazypower : ok, thank you
[14:31] np
[14:33] lazyPower: HELP... oh, sorry, all is working actually :)
[14:33] (hello :p)
[14:33] \o/ YESSSSS
[14:33] ITS HAPPENINGGGG
[14:33] Zic :) I have new goodies for you
[14:34] Juju tshirts? :p
[14:41] nah thats something jcastro does
[14:41] i have a new etcd charm
[14:41] with new etcd snap stuff
[14:41] well its a new old charm :)
[14:45] Zic https://github.com/juju-solutions/layer-etcd/pull/77
[14:50] stokachu: Hi, conjure-up imports ubuntu .root.tar.gz images for novakvm, shouldn't it be ubuntu -disk1.img? or is it that the lxd image will work on kvm as well?
[14:50] catbus1, hmm i dont think so
[14:51] lemme look at the spells
=== gonzo___ is now known as dgonzo
[14:52] catbus1, xenial-server-cloudimg-amd64-disk1.img is the correct one right?
[14:53] catbus1, for now you can do juju config nova-compute virt-type=lxd
[14:53] catbus1, ill fix the spell so it imports kvm though
[14:54] lazyPower: oh nice, my apt pinning is living its last days then :)
[14:54] Zic - thats the idea. channels + snaps will make it a fantastic experience
[14:54] stokachu: I believe so, that's what the configuration example shows in the openstack-base charm.
[14:55] catbus1, ok ill fix that in the spell
[14:55] Zic - if you feel brave, build from that branch, deploy the current etcd and upgrade to that assembled charm (juju upgrade-charm etcd --path=./etcd) and you'll see "the magic" at work.
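For reference, the action pattern kjackal points to above boils down to a small preamble at the top of the action script. A minimal sketch, assuming the helper is exposed as charms.layer.basic.activate_venv (the name in current layer-basic, vs. "activate_env" in the chat) and that the charm was built with use_venv enabled; the action name and result keys are made up:

    #!/usr/bin/env python3
    # actions/my-action -- hypothetical action for a charm built on layer-basic
    import sys
    sys.path.append('lib')  # make charms.layer importable outside hook context

    from charms.layer.basic import activate_venv  # assumed helper name
    activate_venv()  # re-execs this script under the charm's wheelhouse venv

    # Imports below here resolve against the venv.
    from charmhelpers.core.hookenv import action_get, action_set

    params = action_get()
    action_set({'outcome': 'ran inside the venv'})

On the 13:44 sys.argv question: the re-exec historically dropped argv, which is what the layer-basic PR #90 linked above addresses.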
[14:55] catbus1, once my tests complete ill push a new version to the --edge channel for you to test
[14:56] stokachu: I can launch lxd instances fine, though. what's the difference between the lxd image and -disk1.img?
[14:56] catbus1, nova-compute supports both so it depends on whether you want KVM or LXD as your compute nodes
[14:57] stokachu: no, I mean, it imports the lxd image and I can launch instances with that image on the novakvm machines. horizon shows they were launched and running.
[14:58] catbus1, and you can access them via ssh?
[14:58] stokachu: I couldn't, but I haven't checked whether that's a network connectivity issue or not.
[14:59] catbus1, yea i think horizon may be showing you incorrect info
[14:59] but yea lemme know if you figure out if its network or not
[14:59] ok.
[14:59] that would be an interesting test to add
[14:59] afaik the default compute node type is kvm
[14:59] so it shouldnt be able to run lxd containers
[15:00] ok.
[15:00] in the meantime I can import the kvm images manually.
[15:00] catbus1, ok cool, do you need help with that or are you good?
[15:00] I am good.
[15:01] catbus1, ok cool, ping me with what you find out, i gotta step away for a few hours
[15:01] ok
[15:16] it's not really on-topic but if you have any starting points: do you have any experience with a DBMS (MySQL, MariaDB, PostgreSQL) inside a K8s cluster without Vitess?
[15:17] I'm looking at this currently: https://github.com/bitnami/charts/tree/master/incubator/mariadb-cluster
[15:17] (but as you see, it's in the incubator for now)
[15:18] Zic - i myself tend to use lxd for my stateful workloads
[15:19] however, if you're looking to run a chart in prod, as you've noticed, they are in incubation. I don't think any databases other than crate have really been identified as a production-ready process-container database. Thats not to say you cant run PG containers in prod... just that they haven't actively promoted it as an option.
[15:21] yeah, the only solution I saw is Vitess, but in the end we're on the path to giving up on it
[15:21] the pdo-grpc PHP extension for Vitess's client is not mature enough for production code
[15:22] we're falling back to the old php-mysqli connector then
[15:22] (yeah, my customer is using PHP for his backend :p)
[15:26] I don't think YouTube is using PHP so I guess it's normal after all
[15:28] i need a reality check. anyone know if lxd-in-lxd is *supposed* to work?
[15:29] for example, deploy a machine that's an lxd container, then deploy an lxd container to that machine, then deploy a charm to that container
[15:30] marcoceppi, lazyPower ^
[15:31] tvansteenburgh - i'm pretty sure its going to give you trouble if its expected to be a privileged container
[15:32] tvansteenburgh - however we did find that as of the 2.1 update nested lxd seems to have changed behavior. i dont think it works any longer, no.
[15:32] lazyPower: :(
[15:32] lazyPower: thanks for the info
[16:03] is it possible to list all actions that have been run on a unit?
[16:03] all action ids that is
[16:05] for example if i run an action using amulet, how can i see the result later on if its not logged in the test
[16:08] SimonKLB: the action uuid is returned from Deployment.action_do(...)
[16:09] or UnitSentry.run_action(...)
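A sketch of the amulet calls named above, on the assumption that action_fetch is the companion call for retrieving a result by UUID (signatures vary by amulet release); the charm and action names are placeholders:

    import amulet

    d = amulet.Deployment(series='xenial')
    d.add('mycharm')  # placeholder charm name
    d.setup(timeout=900)

    unit = d.sentry['mycharm'][0]
    action_id = unit.run_action('do-thing')  # returns the action UUID
    # deployment-level equivalent: d.action_do('mycharm/0', 'do-thing')

    # As discussed below, results are only retrievable later if you keep the UUID:
    print(action_id)
    result = d.action_fetch(action_id, timeout=300, full_output=True)
    print(result)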
[16:09] tvansteenburgh: yea but there is no way to get hold of action ids that have been run before if you dont have any test logs where you printed the ids?
[16:10] for example if i want more historical data from actions that have been run in the past week or so
[16:14] SimonKLB: yeah with amulet that'll be difficult. what's the use case?
[16:15] tvansteenburgh: right now i just wanted to see the result from an action while debugging my code and i didnt print the action id in the test
[16:15] so i just got curious if the action ids and/or results were stored anywhere so that they could be retrieved at a later time
[16:17] SimonKLB: they're not. we could add something that shells out to `juju show-action-status` or something i guess
[16:19] im not speaking about amulet specifically, but rather actions as a whole, i assume they are stored somewhere since you're able to fetch the results at a later time as long as you have the IDs
[16:20] it would be nice to just have a command that could list all actions that have been run on a unit, like `juju action-ids [unit]`
[16:20] juju show-action-status
[16:20] ah :D
[16:20] nvm then!
[16:22] tvansteenburgh: btw, have you seen this? https://github.com/juju/amulet/issues/171
[16:23] it might be another miss on my part, but from the looks of it there doesnt seem to be a way for a test to both keep the model and also upgrade the charm?
[16:25] now the options are, do it all from scratch or run the test on the same charms as before, right?
[16:27] tvansteenburgh: it should work, but you have to have the right profiles, iirc
[16:27] marcoceppi: where can i find info on that
[16:28] tvansteenburgh: idk, I've been trying to get my lxd containers set up to support it and #lxcontainers is pretty quiet
[16:29] marcoceppi: k
[16:29] SimonKLB: gimme a few, otp
[16:30] SimonKLB: i think you are right though, there's no good way to do that right now
[16:31] SimonKLB: i saw your issue when you submitted it, but i haven't had time to do anything with it
[16:31] tvansteenburgh: okok! it would be nice to have it when youre doing CI on charms - it's not a biggie having to reset, but the tests would run a lot faster if you could just keep it and upgrade the charms that have changed
[16:32] SimonKLB: yeah i hear ya
[16:38] kjackal: I updated https://github.com/juju-solutions/layer-cwr/pull/92 so the only remaining issue should be the lxd storage
[16:38] cory_fu: I guess so, I havent had a full test yet
[16:39] cory_fu: should I try it tomorrow?
[16:39] kjackal: Yeah. Though if kwmonroe or petevg want to help out on reviewing it, I'd like to get it in so you guys can stop giving me merge conflicts. ;)
[16:40] :) sounds reasonable!
[16:40] i was gonna change the indentation on all the xml jobs real quick. i'll look at cory_fu's PR after that.
[16:40] lol
=== mup_ is now known as mup
[18:37] Hey juju folks, what's the status of the promulgated nagios charm and its maintenance? Would love to help on https://bugs.launchpad.net/nagios-charm/+bug/1605733 as it's breaking for us (and others), but it's unclear who's involved in nagios-charmers and how it's maintained.
[18:37] Bug #1605733: Nagios charm does not add default host checks to nagios
[18:39] o/ Juju world
[18:43] hi zeestrat, the metadata.yaml has the maintainers. In addition to contacting them I would suggest you send an email to the juju list stating your intention. The discussions here are a bit ... ephemeral
[18:43] kwmonroe: hey, have a few seconds?
[18:46] sorry erlon, i'm out the door to go pick up my kid for the next few (like 30) minutes. ask away though and i'll see the message when i get back.
[18:47] kwmonroe: hmm, sure ill leave a question here
[19:04] all: kwmonroe: guys, I have deployed the openstack-base bundle (https://api.jujucharms.com/charmstore/v5/openstack-base/archive/bundle.yaml) and some services seem to be very broken (like some services not even having rabbit or mysql configured properly), is there a problem with the bundle?
[19:08] beisner: icey ^^. erlon, can you link to a 'juju status' paste?
[19:10] https://www.irccloud.com/pastebin/o2dkfrUN/
[19:12] this is neutron.conf in neutron-gateway https://www.irccloud.com/pastebin/av0fuRhU/neutron.conf
=== frankban is now known as frankban|afk
[19:14] This is in nova-cloud-controller https://www.irccloud.com/pastebin/aYIiSWsH/nova.conf
[19:15] *one is mitaka, the other kilo*
[19:17] just for reference, a service that seems to be working fine https://www.irccloud.com/pastebin/rWRTvf15/cinder.conf
[19:27] hi kwmonroe, erlon - i'd need to see the juju unit logs from the failed units neutron-gateway/0 and nova-cloud-controller/0 in order to advise; the bundle is known-good, but certain customizations may be needed depending on the metal that it's laid down on.
[19:28] beisner: those are kvm VMs, the only things I changed in the charms were 1 - the name of the network interfaces from eth1 to ens1, 2 - the name of the disk from sdb to vdb
[19:29] beisner: its a flat topology, 5 machines connected to an admin network, and another interface (virtual but bridged to pub) making the public interface
[19:30] beisner: how/where do I get that?
[19:34] erlon, /var/log/juju/unit-* from those two machines/vms
[19:46] beisner: didn't find a way to post the full logs, but here is the error
[19:47] https://www.irccloud.com/pastebin/pPfMlade/unit-neutron-gateway-0.log%20
[19:50] beisner: it seems its the network configuration, but, now, how will I tell juju to pass ens4 through to the container?
[19:56] beisner: some services it deploys in containers, others in the host VMs, and for each of those the neutron interface will be different
[20:04] https://www.irccloud.com/pastebin/7QgsM3tg/unit-nova-cloud-controller-0.log
[20:16] beisner: maybe the nova error is related to the neutron one: 2017-02-20 21:53:00 ERROR juju.worker.dependency engine.go:547 "metric-collect" manifold worker returned unexpected error: failed to read charm from: /var/lib/juju/agents/unit-neutron-gateway-0/charm: stat /var/lib/juju/agents/unit-neutron-gateway-0/charm: no such file or directory
[20:16] beisner: this is the first error in unit-nova-cloud-controller-0.log
[20:16] beisner: and then after some time the ones above
[20:20] erlon, neutron-gateway shouldn't be in a container with that bundle.
[20:23] erlon, the other failure (cloud-compute-relation-changed answers = dns.resolver.query(address, rtype)) indicates that the machines in your lab can't reverse-resolve themselves/peers.
[20:24] beisner: that reverse resolving should be done by the DNS server in MaaS, shouldn't it?
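The failing call beisner quotes is dnspython's dns.resolver.query, so the lab's reverse DNS can be sanity-checked with the same library. A hedged sketch; the address is a placeholder for one of the lab machines:

    import dns.resolver
    import dns.reversename

    address = '10.0.0.5'  # placeholder: a compute node from the lab

    # Build the in-addr.arpa name and ask for its PTR record, mirroring
    # what the cloud-compute-relation-changed hook attempts.
    rev_name = dns.reversename.from_address(address)
    try:
        print(dns.resolver.query(rev_name, 'PTR')[0])
    except dns.resolver.NXDOMAIN:
        print('no PTR record -- reverse DNS (e.g. in MAAS) needs fixing first')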
[20:26] beisner: hmmm got the neutron gateway error, it is running in a vm with only 1 interface
[21:18] stokachu o/
[21:18] er
[21:19] stormmore \o
[21:19] lazyPower, \o/
[21:19] haha
[21:19] eyyyyy
[21:20] stokachu - incoming fun here https://github.com/juju-solutions/layer-etcd/pull/77
[21:21] lazyPower, hah can't wait for that
[21:21] if you're feeling froggy, i still need to port the actions but i think thats literally it
[21:21] then its all snaps all the time
[21:21] i have verified it works on lxd :D
[21:22] lazyPower: make sure you install squashfuse <3
[21:22] marcoceppi - LAYER-SNAP ALREADY DOES THISSSSS
[21:22] YASSSSSSS
[21:22] YAASSSSSS QUEEENNN
[21:22] ikr
[21:22] lazyPower, made a few comments
[21:22] oh crap, this isn't #eco
[21:22] stokachu ta
[21:22] appreciate the wip review
[21:22] it'll only help when i go to actually land this
[21:23] :D
[21:23] anywho i need to jet, catch a flight in a little over an hour
[21:23] cheers
=== jog_ is now known as jog
[21:23] safe flight
[21:59] petevg, kwmonroe: All of the issues that kjackal hit in https://github.com/juju-solutions/layer-cwr/pull/92 are resolved, if you guys were up for giving it a review
=== frankban|afk is now known as frankban
[22:01] excellent cory_fu.. is this deployed on j.d-i.n? looks like cwr-52 maybe?
[22:01] kwmonroe: Yep. Port 8081 for Jenkins.
[22:01] cool, i'll check it out
[22:09] anyone in here working on python-libjuju?
[22:11] * xavpaice is trying to PPA-build a package that needs it, but it's not yet packaged - wondering who to talk to so I can help out with that
=== mup_ is now known as mup
=== mup_ is now known as mup
[22:33] xavpaice: Hey, I've done some work on python-libjuju, as have tvansteenburgh and petevg. By package you mean a Debian package?
[22:35] cory_fu, xavpaice: I believe that tvansteenburgh was working on adding it to pypi.python.org, but there's a naming conflict (it's called juju, and so is an older Python lib).
[22:35] it's there
[22:35] petevg: That's been resolved. Pypi is correct now
[22:35] Awesome! I'm behind the times, apparently.
[22:35] :)
[22:35] yeah, it's on pypi and can be grabbed via pip, but I want to put something in a PPA that needs it
[22:35] so would prefer to get a deb
[22:36] I'm working on putting a source package into a ppa now, just don't want to duplicate someone else's work
[22:36] xavpaice: no one else has done it that i know of
[22:36] xavpaice: I don't think a deb is being worked on, and I personally have no experience packaging debs but am happy to help out in any way that I can
[22:37] awesome - thanks
[22:37] cory_fu, petevg: while we're all standing around the water cooler... https://github.com/juju/python-libjuju/pull/56
[22:46] xenial package: https://launchpad.net/~canonical-bootstack/+archive/ubuntu/bootstack-ops/+packages
[22:52] tvansteenburgh: left comments.
[22:53] petevg: ty
=== frankban is now known as frankban|afk
[22:53] np
=== thumper is now known as thumper-dogwalk
[22:58] tvansteenburgh: I got an error running the add_machines.py: http://pastebin.ubuntu.com/24043000/ I think it's a timing issue but the example probably needs to watch for the container to be fully removed
=== scuttlemonkey is now known as scuttle|afk
[23:18] cory_fu: interesting, thanks. i didn't hit that
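For context on the add_machines.py error above, the example amounts to roughly the following against the python-libjuju API of the era (connect_current, add_machine); a hedged sketch, with a comment marking where a wait-for-removal step would address the timing issue cory_fu hit:

    import asyncio
    from juju.model import Model

    async def main():
        model = Model()
        await model.connect_current()  # attach to the currently active model
        try:
            machine = await model.add_machine()
            container = await model.add_machine('lxd:{}'.format(machine.id))
            # ... exercise the container here ...
            await container.destroy(force=True)
            # Timing issue: destroy() returns before removal completes, so a
            # poll for the container actually disappearing would belong here.
            await machine.destroy(force=True)
        finally:
            await model.disconnect()

    asyncio.get_event_loop().run_until_complete(main())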
[23:46] cory_fu: containerization looks great. i'd really like to get multiple cwr jobs fired off at once. any idea how to do that with 1 jenkins slave?
[23:46] kwmonroe: You just have to increase the number of executors, under Manage Jenkins
[23:46] I'm not sure how we'd do that in the charm.
[23:47] neat! what's a good setting? like 1000?
[23:47] i'll try 2 for now.
[23:47] heh
[23:47] That should probably be a config option on the Jenkins charm, I think
[23:48] HOLY CRAP cory_fu. it's working. it's really working!
[23:48] :)
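Until executors become a config option on the Jenkins charm, the bump cory_fu describes can also be scripted against Jenkins' standard /scriptText console endpoint. A hedged sketch: the URL reuses the port mentioned above, the credentials are placeholders, and newer Jenkins versions may additionally require a CSRF crumb:

    import requests

    JENKINS = 'http://localhost:8081'  # placeholder host; port from above
    GROOVY = 'Jenkins.instance.setNumExecutors(2); Jenkins.instance.save();'

    resp = requests.post(
        JENKINS + '/scriptText',
        auth=('admin', 'password'),  # placeholder credentials
        data={'script': GROOVY},
    )
    resp.raise_for_status()
    print('executor count updated')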