=== lathiat_ is now known as lathiat
=== Spads_ is now known as Spads
[00:42] coreycb that one has the bzr import issue and isn't in sync @ LP
[00:45] beisner, oh.. so maybe I'm picking up the LP version
=== cos1 is now known as c0s
=== redir is now known as redir_afk
=== bdx_ is now known as bdx
[05:56] Hi all, having trouble getting juju installed on wily. Just did in an install on a new machine yesterday and everything went just fine. Now today I decided to re-install on my dev machine to make sure I'm charm dev-ing in an identical environment. Eveything fails to start for some reason. No running jujud and my /var/lib/juju is completely empty
=== med_ is now known as Guest46337
=== bradm_ is now known as bradm
[07:13] hi guys, someone could help me with some commands in juju? I'm trying to remove a pending machine with the command juju remove-machine 0
[07:14] but when I do juju status I'm getting the machine.. and it does not remove
[07:15] here is my output on juju status
[07:15] https://paste.ubuntu.com/15824849/
[07:16] I was deploying this machine when my machine was suddenly shut down and now I have this machine in "pending.." state how can I remove it and start again the deployment?
[07:17] opss, to "machine" words..
[07:18] anyway reading the doc on juju it says that I should run the juju destroy-environment but how can I know what I'm in?
[07:18] hoenir: juju switch will tell you which env you're currently in
=== blahdeblah_ is now known as blahdeblah
[07:19] ohh but something tells me that I should not remove this... anyway how can I remove that problematic machine?
[07:21] or should I realease from the "MAAS web interface" that machine?
[07:24] I tried from the MAAS web interface, my node is "released" but from juju status I'm still getting that machine in pending state any thoughts on this?
[07:26] juju status
[07:43] hoenir: no thoughts, but if destroy-environment does not work you can try --force
[07:44] tried to destroy controller but I hangs.. any advice?
[07:45] I just wanted to reset everything from 0 to bootstrap again
[07:45] with --show-log --force?
[07:46] on juju destroy-controller --show-log --force mycontroller
[07:46] like this?
[07:46] yes
[07:47] "force dosen't exist"
[07:48] with just --show-log opt it's just "dailing"
[07:48] you are using juju2 i guess, i think its juju kill-controller
[07:48] hoenir: this with juju2 ? you can try juju2 kill-controller
[07:49] yeee, it worked..thanks a lot guys!
[07:50] and yeah I'm using the 2.0 juju version
[07:50] so what's the primary difference with destroy and kill?
[07:50] could anyone clarify this?
[07:51] I think kill is the new --force
[07:53] oh, thanks a lot again !
[07:53] here is some explanation: https://jujucharms.com/docs/devel/commands
[07:54] also destroys the model
[07:56] Does somebody know where in Juju2 you should put the proxy configuration? (Juju 1 had set-env or you could put it in the environments.yaml)
[08:05] gnuoy, urgh
[08:05] xenial specific issue:
[08:05] http://10.245.162.19/test_charm_pipeline_amulet_full/openstack/charm-neutron-gateway/305121/1/2016-04-13_18-21-57/test_charm_amulet_full/juju-stat-tabular-collect.txt
[08:06] gnuoy, https://bugs.launchpad.net/charms/+source/neutron-gateway/+bug/1547122
[08:06] Bug #1547122: xenial: nova-api-metadata not running post deployment
[08:06] I rasied a but a while back, but had not hit it since...
[08:06] systemd is less aggressive with restarting services that shutdown straight away
=== terje is now known as Guest76597
[08:37] jamespage, so you think previously is was crashing straight away but upstart was restarting it till it worked?
[08:38] gnuoy, yeah
[08:38] gnuoy, its racey so you don't see it all of the time on systemd based installs - this is the 3rd hit i've seen since I started testing xenial
[08:38] jamespage, on the plus side its nice the way workload status calls it out clearly
[08:39] gnuoy, yeah
[08:58] gnuoy, ok fixes up for nova-cc and neutron-gateway
[08:58] jamespage, kk, thanks
[09:02] jamespage, it looks like you're not gating the restart on paused status for neutron-gateway
[09:02] gnuoy, oh good point
[09:17] hello. Can somebody check why my charm is still not recommended in the charm store? It was meant to happen 2 weeks ago. I know ingestion was broken but I'm pretty sure it's fixed now. https://bugs.launchpad.net/charms/+bug/1538573
[09:17] Bug #1538573: New collectd subordinate charm
=== jacekn_ is now known as jacekn
[09:21] gnuoy, that restart service on nonce change is a pattern we should model
[09:21] but later...
[09:25] === terje is now known as Guest32809
[09:59] gnuoy, ok both of the reviews for https://bugs.launchpad.net/charms/+source/neutron-gateway/+bug/1547122 are good
[09:59] Bug #1547122: xenial: nova-api-metadata not running post deployment
[09:59] not proposing a full recheck as the change is not specific to xenial
[10:09] gnuoy, just running smoke testing on staging now so we can promote through to updates...
[10:09] coreycb, hit a haproxy problem last night which has now been resolved..
[10:25] dosaboy_, rebased https://review.openstack.org/#/c/300164/ for you
[10:25] turns out we can all contribute to each others changes !
[10:25] gnuoy, ^^
[10:27] kk
=== dosaboy_ is now known as dosaboy
[10:49] jamespage: you are too kind
[10:54] gnuoy, how long should simple_os_checks.py take? I think I've got a 'stuck' test?
[10:55] tinwood, typically 2 or 3 mins
[10:55] oh.
[10:55] gnuoy, that's not good then. I'll have to see where it got stuck.
[10:56] tinwood, give me a shout if you want a hand
[10:57] gnuoy, will do.
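[Editor's note] The upstart-vs-systemd difference jamespage describes at [08:06]/[08:38] — upstart kept restarting a daemon that crashed straight away until it came up, while systemd gives up after its start-limit burst — is governed by the unit's restart policy. A hedged sketch of the kind of drop-in override a charm could ship to approximate the old behaviour (the unit name `nova-api-metadata` comes from the bug above; the directive values are illustrative, not what the xenial package actually sets):

```
# /etc/systemd/system/nova-api-metadata.service.d/override.conf
# Illustrative override: keep retrying a service that exits immediately at
# start-up, instead of failing permanently after systemd's default burst.
[Service]
Restart=on-failure
RestartSec=5
# Allow up to 10 restarts within 5 minutes before giving up
# (systemd 229 / xenial spelling; later releases moved these to [Unit]
# as StartLimitIntervalSec=).
StartLimitInterval=300
StartLimitBurst=10
```

Applied with `systemctl daemon-reload` followed by a restart of the service, assuming the override path above.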
[11:04] gnuoy: might wanna squeeze https://bugs.launchpad.net/charms/+source/swift-proxy/+bug/1570314 into 16.04
[11:04] Bug #1570314: can't set min-part-hours back to zero
[11:04] jamespage: ^^
[11:09] jacekn: which is your collectd charm? ingestion fix doesn't touch that part (just makes it fast again and deal with disk space)
[11:10] urulama: https://jujucharms.com/u/jacekn/collectd/trusty/0 . Marked as "Fix Released" with +1 from marcoceppi here: https://bugs.launchpad.net/charms/+bug/1538573
[11:10] Bug #1538573: New collectd subordinate charm
[11:14] jacekn: the logs see revision 0, but not newer one
[11:15] jacekn: i'll put it on the list. if you want, you can always push it direclty with the new charm command from juju/devel ppa
[11:48] beisner: no idea why that won't merge
[11:48] oh yeah I do, it depended on the abandoned C-H sync commit, let me re make the commit on master
[11:50] urulama - is there a way that I can flip the necessary bits in launchpad to enable things like bugreporting on a package that doesn't exist in the old launchpad structure? as an example try filing a bug here and watch it complain - https://bugs.launchpad.net/charms/+source/nexentaedge-swift-gw/+filebug
[11:51] beisner: https://review.openstack.org/#/c/305780/
[11:51] jamespage: this replicates the abandoned merge's functionality
[11:53] lazyPower: not sure i understand correctly. you'd like to have just bug reporting done in LP, but the code would be somewhere else?
=== Guest46337 is now known as med_
=== CyberJacob is now known as zz_CyberJacob
[12:44] urulama - correct, but i think i'm going to move out of the charms collection to to do this, as there are some restrictions in there that just dont make sense for this application.
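[Editor's note] The "push it directly with the new charm command from juju/devel ppa" route urulama mentions at [11:15] (and which jcastro walks BrunoR through later, around [14:04]) looked roughly like the following with charm-tools of that era. This is a sketch, not an exact transcript — flag and channel names changed during the 2.x series, so verify against `charm help` for your version; the `cs:~jacekn/collectd` URL is just the charm from this conversation used as an example:

```
# Install charm-tools from the PPA mentioned in the discussion
sudo add-apt-repository ppa:juju/devel
sudo apt-get update && sudo apt-get install charm-tools

# Publish straight from a working directory -- no bzr/Launchpad hosting needed
cd collectd-charm/
charm login
charm push . cs:~jacekn/collectd            # uploads a new revision to your namespace
charm publish cs:~jacekn/collectd-1 --channel stable   # make that revision public
```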
[12:56] jamespage, gnuoy - i believe ceilometer is legitimately failing to pause/resume @ wily-liberty (re: https://review.openstack.org/#/c/304188/)
[12:57] beisner, \o/
[12:57] craple
[12:57] beisner, jamespage, I can grab that and check it out
[12:58] i'd say let's land that amulet test update, because it is identical to ceilometer-agent, except for the pause/resume, and i've not touched that.
[12:58] gnuoy, I have my test in odl atm
[12:58] test/head
[12:58] you can see what its doing to me...
[12:58] beisner, +1
[12:58] glad your head is in test, jamespage
[12:58] ;-)
[12:58] beisner, would you mind filing a bug for that and I'll grab it?
[12:59] gnuoy, yep sec
[12:59] beisner, btw I have a ch branch that switches our amulet to github
[12:59] however
[12:59] amulet borks on it atm
[12:59] I tried working the feature for a hour this morning and got tangled - hoping marcoceppi might repay me my upload work for him in amulet features...
[13:00] jamespage, :) saw that wip. fyi, already had a card in backlog, added your name to it and a link to that review, to revisit in case we don't push that now.
[13:00] it looks like its close
[13:00] beisner, what package provides juju-test ?
[13:00] charm-tools i believe
[13:00] beisner, oh
[13:00] ok
[13:03] gnuoy, https://bugs.launchpad.net/charms/+source/ceilometer/+bug/1570375
[13:03] Bug #1570375: pause/resume failing for wily-liberty (blocked: apache2 still running)
[13:03] she's baaaack. the blessed apache2
[13:03] beisner, ta
[13:04] yw gnuoy thx for looking
=== skay_ is now known as skay
[13:07] icey, ha! good catch, i totally missed that review dependency.
[13:08] heh yeah beisner, new change up also updates the C-H in tests but should be passing shortly :)
[13:20] jamespage: let me know what features you need, you got it.
[13:21] marcoceppi, https://github.com/juju/amulet/issues/127
[13:21] marcoceppi, just uploaded charm 2.1.1 btw
[13:21] jamespage: <3 thank you so much. That's it for packaging from me. I'll just turn everyone else down from this point forward
[13:22] jamespage: where would this show up?
[13:22] jamespage: in like the the Deployment.add() ?
[13:22] marcoceppi, yah so
[13:22] * jamespage looks for references
[13:23] right now we use the bzr branches on lp
[13:23] marcoceppi, I want todo https://review.openstack.org/#/c/304477/1/tests/charmhelpers/contrib/openstack/amulet/deployment.py
[13:24] marcoceppi, the location field gets handed to amulet here
[13:24] https://review.openstack.org/gitweb?p=openstack/charm-neutron-api.git;a=blob;f=tests/charmhelpers/contrib/amulet/deployment.py;h=d451698d344942d957a922529d7caf352e31f1ec;hb=80108561c5b7dba5b7c62811c66a2d5b69e772f0#l68
[13:24] jamespage: gotchya, I'll see if we can get a patch to amulet this week
[13:26] jamespage: this has ramifications on deployer, which we use to underpin amulet, but I'm sure we can make this happen
[13:26] marcoceppi, deployer already supports this format :-)
[13:26] jamespage: then this will be easy
[13:36] beisner, gnuoy: hey so are we going todo the series in metadata thing now that 1.25.5 is out?
[13:37] it would mean that anyone who wants to use the new charms would have to upgrade to latest 1.25 first...
[13:37] jamespage - i tested that using whats in proposed and it still choked on series-in-metadata
[13:38] lazyPower, well that answers that question - I thought mgz_ had fixed that...
[13:38] I dont want to cry wolf if this was fixed since Monday
[13:38] lazyPower jamespage series in medata choked 1.25.5?
[13:38] lazyPower: yes, 1.25.5 that went out yesterday has the fix?
[13:38] rick_h_ - was that release staged in -proposed ppa?
[13:38] * lazyPower pulls latest stable to test
[13:38] yes
[13:39] ill re-verify, 1 sec
[13:39] it was in proposed for a bit and hit released yeterday
[13:39] yeah i was using what was in proposed
[13:39] and it tanked on me on monday, so lemme flex this again and turn myself into a liar - which is best case scenario here
[13:39] lazyPower: k, yes please let me know what the metadata looks like and verify version
[13:39] lazyPower: because we specifically held onto 1.25.5 to get that fixed
[13:39] ubuntu@charmbox:~$ juju --version
[13:39] 1.25.5-trusty-amd64
[13:42] rick_h_ - i'm full of lies, looks like it works. Apologies for the noise :)
[13:42] jamespage ^
[13:42] lazyPower, lol
[13:42] lazyPower: ok, phew. I'll not go nuts then :)
=== beuno_ is now known as beuno
[13:44] lazyPower: I meant to mention it yesterday at standup, but we tested it at bigdata sprint and it works
[13:44] marcoceppi - its weird tho the proposed ppa was choking on series in metadata so we reverted it. we must have been running on -stable and not -proposed like we thought we were or something
[13:44] * lazyPower shrugs
[13:44] i'm just happy to see that i was wrong :D
[13:44] lazyPower: all good, it happens to the best of us. I mean, there was that one time a few years ago it happened to me :P
[13:45] rick_h_ - thats pretty much the bar i set for every week. Find one thing i've been complaining about thats been fixed. I'll gladly be the guy thats always wrong if you keep fixing my bugs :D
[13:46] #thingsilearnedfromjorge
[13:46] hah
[13:46] * rick_h_ sees lazyPower turning that into a coffee table book
[13:50] rick_h_ - i thought you were on vacation this week?
[13:50] lazyPower: starts tomorrow
[13:50] lazyPower: so one more day to bug you all before I go :P
[13:51] * marcoceppi gets my long list of last min asks for rick_h_
[13:51] rick_h_ i want a pony, can you stuff a pony in before release?
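[Editor's note] For reference, the "series in metadata" feature being verified against 1.25.5 above moves the list of supported Ubuntu series into the charm's metadata.yaml instead of encoding the series in the store URL. A sketch of what that looks like (field layout per the Juju docs of that era; the name and series list are illustrative):

```yaml
# metadata.yaml (sketch) -- multi-series charm, supported from juju 1.25.5 on.
# The first entry in `series` is the default series used at deploy time.
name: my-charm
summary: Example multi-series charm
description: |
  Deployable on any of the series listed below with a single charm artifact.
series:
  - trusty
  - xenial
```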
[13:51] * aisrael adds to the list
[13:51] * rick_h_ notes all lists must be mailed in triplicate and delivered by USPS...no change of it getting here before I'm on a plane now!
=== med_ is now known as Guest90748
[13:57] Anyone got a working openstack bundle for xenial?
[13:59] bbaqar, yeah
[13:59] one second
[13:59] bbaqar, I do - I'll try push it to lp today (have a couple of meetings todo)
[14:00] bbaqar, it will appear here - https://jujucharms.com/u/openstack-charmers-next/openstack-base
[14:02] Hi. The submission to the charmstore is described using bazaar, how can I use git instead?
[14:02] cmars: stokachu: hey guys, any of your layered charms worth polishing off and pushing into review for the store?
[14:02] james, much appreciated
[14:02] jcastro, possibly. does that mean i have to use LP?
[14:02] hah no
[14:02] jcastro, how do i submit a charm for review straight out of CS?
[14:02] https://jujucharms.com/docs/devel/authors-charm-store
[14:02] sweet
[14:03] ^^ this will cover you too BrunoR
[14:03] jcastro, cs:~cmars/gogs is close
[14:03] jcastro, i've also got mattermost in devel
[14:03] I see it, that's awesome.
[14:03] I'd like to like, tell those projects what you're up to, but ideally it'd be something solid and testable, something they could be proud of if you know what I mean
[14:04] BrunoR: the new store doesn't care about vcs, you just use the charm command to publish to the store from a working directory
[14:04] but you will need the latest version of the charm tools from the ppa
[14:08] jcastro: ok, thanks. that means I can publish the charms on github and do charm push (with my launchpad-account) from my working directory?
=== Guest90748 is now known as medberry
[14:11] BrunoR: yep.
[14:11] BrunoR: from there you can basically publish into a devel and stable channels as you see fit
[14:12] cmars jcastro that won't get it submitted for review, that will just get it into the charm store
[14:12] right
[14:12] I was just about to get to the review queue portion
[14:13] so we're working on a new review queue where you'll just submit whatever version you just published in your stable channel for review
[14:13] and then at some point jujucharms.com/foo will point to your version of the charm
[14:14] or you can leave it in your personal namespace, depending on what you want.
[14:22] jamespage, beisner: may have found a bug in the dpdk code in neutron-openvswitch whilst testing dvr (mojo): https://bugs.launchpad.net/charms/+source/neutron-openvswitch/+bug/1570411
[14:22] Bug #1570411: custom add_bridge_port(...) function doesn't bring up interface
[14:28] tinwood, woot :) appreciate the test spike on that.
[14:29] beisner, kk :) I'm trying out a fix -- let you know how it goes.
=== cos1 is now known as c0s
[14:45] beisner: you've got 2 community approves on https://code.launchpad.net/~xfactor973/charm-helpers/ceph-pool-replicas/+merge/291827 :)
[14:46] icey, cholcombe - ok thanks guys. will merge shortly. will both ceph and ceph-mon need resync'd?
[14:46] I believe so, yes
[14:46] and thanks beisner!
[14:47] also, I'll see about writing a couple of tests to cover this case in the future
[14:59] tinwood, ugh thats possible
[14:59] jamespage, I'm trialling a fix at the moment.
=== scuttle|afk is now known as scuttlemonkey
[15:17] cholcombe, c-h change landed. clear to propose re-syncs. (fyi jhobbs )
[15:17] beisner, thanks
[15:17] tyvm cholcombe jhobbs
[15:19] beisner, so how many charms do you think are affected? i think ceph, ceph-mon, cinder, glance, radosgw. anything else?
[15:19] cholcombe, let's get jamespage 's assessment on that. i'm not sure.
[15:19] ok
[15:29] jamespage, fyi bug 1565120 ^
[15:29] Bug #1565120: incorrect replica count in a single-unit ceph deployment
[15:30] beisner, cholcombe: does the pool creation happen in the ceph and ceph-mon charms right?
[15:30] jamespage, i believe it does
[15:30] i'm going to resync both of them
[15:30] cholcombe, I think that scope makes sense to me
[15:31] that was the intent of the broker - shove everything serverside in ceph itself...
[15:31] so i can skip cinder/glance/radosgw
[15:48] beisner, ok I switch the xenial test for odl to use BE - point testing that before I do a recheck-all
[15:48] full rather
[15:49] ack jamespage thx
[15:50] beisner, graaddhdhdhhdhsafdchjk cvsdjhn]#
[15:50] 14:43:20 Permission denied (publickey).
[15:50] 14:43:20 ConnectionReset reading response for 'BzrDir.open_2.1', retrying
[15:50] 14:43:20 Permission denied (publickey).
[15:50] on a recheck-full on mitaka-xenial
[15:50] that's like the last test...
[15:50] wee. i suspect the IS-outage is affecting us.
[15:50] ggrrrreeat!
[15:50] also hit rockstar's lxd test
[15:51] Yup. I was waiting for that to get sorted before charm-recheck
[15:51] is it just me or has everything been working against us this week....
[15:54] jamespage: it's not just you, but I don't have the luxury of drinking a beer at the end of the day. :)
[15:54] http://i.imgur.com/KZyNequ.gif
[16:07] guys, any references to the nodejs layer source code? I can not find anything on the github ;(
[16:08] stokachu - you did the node layer didn't you?
[16:08] c0s https://github.com/battlemidget/juju-layer-node
[16:08] yea
[16:08] yep, thanks lazyPower - just found it too ;)
[16:09] c0s: interfaces.juju.solutions points to the upstream git repo too
[16:09] k!
[16:09] thanks stokachu
[16:09] np, patches very welcome too ;)
[16:11] Sure, but I won't touch nodejs :)
[16:11] I want to take a look at it so I can do puppet layer ;)
[16:11] total noob with charms
=== Spads_ is now known as Spads
[16:31] c0s: whats your strategy on the puppet layer?
[16:32] c0s: heres what I have going on so far for a puppet agent layer -> https://github.com/jamesbeedy/layer-puppet-agent
[16:35] beisner, oh chicken and egg
[16:35] I need https://review.openstack.org/#/c/305121/ to land before I can get the xenial amulet tests to pass again...
[16:36] for odl-controller
[16:54] bdx I am doing a master-less puppet layer, so we can get fixed version of the puppet and (mostly) hiera in the trusty
[16:55] this is will be pretty dumb-ass one: just installing packages from a correct puppetlabs repo
[16:56] jamespage, cholcombe, rockstar - i've squashed the check that has been intermittently causing "ERROR:root:One or more units are not ok (machine 0 ssh check fails)" .. you'll need a recheck if you saw that. the underlying issue is unidentified, but the bootstrap node is clearly not accessible at the moment we were checking.
[16:57] will add card to revisit and t-shoot as a potential juju issue
[16:57] beisner, yeah i saw a bunch of failures that seemed wrong because they passed tests locally
[16:57] zul: if you please - https://review.openstack.org/#/c/305896/
[16:58] fyi that is re: the charm single test. compounding that, it looks like several tests were false-failed by an internet/infra hiccup (lp bzr ssh fails)
[16:58] recheck ftw
[16:58] rockstar: done
[17:14] c0s: look at my layer
[17:14] c0s: that is exactly what it does
[17:15] c0s: if you `charm build` layer-puppet-agent, you will get a charm that does what you are talking about
[17:15] c0s: I made layer-puppet-agent for that purpose alone
=== redelmann is now known as rudi_meet
[17:34] Getting a ERROR cannot resolve charm URL "cs:xenial/ubuntu": charm not found when i deploy a ubuntu charm or xenial .. where should i pull it from .. should i just use the trusty one?
=== matthelmke is now known as matthelmke-afk
[17:52] bbaqar: just use the trusty one
[17:52] bbaqar: why do you need the ubuntu charm anyways?
[17:52] * marcoceppi is curious
[17:58] What is the user/pass for logging into mysql
=== redir_afk is now known as redir
[18:01] bbaqar: https://jujucharms.com/u/landscape/ubuntu/xenial/0 if you need it :)
[18:09] bbaqar: that's created when you create a relation, the root password is available on the unit in /var/lib/mysql/mysql.passwd
[18:10] lazyPower: 2.1.1 backported to juju/stable should be built in a few mins
[18:13] \nn/,
[18:15] sparkiegeek: thanks :)
[18:21] hey beisner - just curious when this charmhelpers fix will be synced to the -next ceph charm https://code.launchpad.net/~xfactor973/charm-helpers/ceph-pool-replicas/+merge/291827
[18:21] any ideas?
[18:22] jhobbs, cholcombe has the syncs proposed and pending CI/functional tests now
[18:22] hoping to have results and land tonight
[18:22] beisner: excellent, thanks
[18:23] marcoceppi: there is no /var/lib/mysql/mysql.passwd .. i am on cs:trusty/percona-cluster-32
[18:24] bbaqar: I'm not sure where percona-cluster puts it
[18:27] jhobbs, yw & thx for raising that. fyi we've got a card to implement that scenario as a regression test before 16.07.
[18:28] beisner: np, glad we decided to start testing on -next. regression test sounds good, that will ease my mind about keeping that scenario working
[18:36] jamespage, looks like your neutron-gateway change is tripping over bug 1570032 (raised yesterday, tldr: all precise is borky on this)
[18:36] Bug #1570032: Precise-Icehouse: publicURL endpoint for network service not found
[18:36] thedac, did you have any cycles to poke at that ?^
[18:37] beisner: I have not yet. I'll try and look this afternoon
[18:37] much appreciated thedac
[18:38] in HA why doesn't horizon ask for provider type and segmentation id, when creating a network from admin?
[18:38] Is there any config that needs to be done for it?
[18:39] I can use CLI to make an external network with provider type and segmentation id but why cant i do that in horizon
[18:53] hmm... interestingly it seems that apt layer doesn't work ;(
[18:54] c0s: how so?
[18:54] I use it frequently and works quite well
[18:55] beisner coreycb do you have an answer for bbaqar's question? ^^
[18:56] after apt layer is done there's no sources added to the /etc/apt/sources.apt.d/
[18:56] as the results, wrong version of the package is getting installed
[18:58] marcoceppi: I am using it for 2 install sources, not sure if that causes the problem
[18:58] hi bbaqar_, marcoceppi - i've only done that sort of admin network create/wiring foo via nova and neutron cli. if the behavior is unexpected, please raise a bug and be sure to include details of what is deployed (openstack version, ubuntu version, bundle if you have it, etc).
[18:58] https://github.com/c0s/juju-layer-puppet/blob/master/config.yaml
[18:59] c0s: remote the quotes?
[18:59] c0s: https://git.launchpad.net/layer-apt/tree/README.md#n28
[18:59] ok, lemme try
[18:59] yeah, I read it...
[18:59] options: apt: ....
[19:00] aren't you missing the apt key?
[19:01] tvansteenburgh: these are config.yaml not layer.yaml
[19:01] it's right there in the config
[19:02] what marcoceppi said
=== cos1 is now known as c0s
[19:05] beisner: could I get a Workflow +2 on this? https://review.openstack.org/#/c/305896/
[19:08] nope, marcoceppi - removing quotes doesn't have any effect
[19:16] rockstar, we'd need to update bundles in o-c-t (and possibly the charm store <-- jamespage ) and i want to make sure we have that all queued up to land along side this. otherwise deploy tests will start to fail if we land this alone.
[19:16] beisner: are those gate tests?
[19:17] rockstar, no, but they run every day, and are part of the release process
[19:17] beisner: do you want me to do that?
[19:19] rockstar, yes please. lp:openstack-charm-testing ...the *next* files in the bundles/lxd and bundles/multi-hypervisor dirs will need MPs. then after release, the *default* files in the same dirs will need MPs.
[19:21] rockstar, fwiw, your gate would be ++many-hours if we did all that on commits ;-) hence the scheduled runs.
[19:24] beisner: cool. Branch coming.
[19:59] marcoceppi: is there a need to do configre_sources call from apt layer to make these sources available somehow?
[19:59] c0s: shouldn't
[19:59] c0s: stub wrote the layer, he might be able to help
[20:04] thanks marcoceppi. stub - if you can help to figure out why apt doesn't configure sources in this commit
[20:04] https://github.com/c0s/juju-layer-puppet/commit/59bdb701af0dc65054bc33e935c993fd97cbb2ab
[20:04] I would be real grateful ;)
[20:12] c0s: layer is off to a good start
[20:18] hey so marcoceppi
[20:18] https://jujucharms.com/docs/devel/getting-started
[20:18] so if get-started ends up going here I have some concerns
[20:18] so first off, this is a kickass local/zfs setup, which is fine
[20:18] jcastro: some is an understatement
[20:19] but it feels a little full on if someone just wants to try it
[20:19] jcastro: yes, but it's not terrible
[20:19] right
[20:19] I am talking from a zero-to-workload aspect
[20:20] obviously if someone is using juju for more than just trying it we'd want to give them this experience
[20:20] I am just concerned that it's like "want to try juju? step one, ZFS IN YOUR FACE."
[20:20] c0s: you are using the layer apt to install the sources, but not the packages
[20:20] it's like dang all I wanted to do was fire up mysql
[20:21] c0s: see -> https://github.com/jamesbeedy/layer-kibana
[20:22] jcastro: lets talk Monday about it
[20:22] ack
[20:32] thanks bdx - checking
[20:33] bdx - I am actually installing the packages in the layer
[20:34] bdx, the call is right here https://github.com/c0s/juju-layer-puppet/blob/master/reactive/puppet.py#L48
[20:45] bdx, I see what you're saying
[20:45] I think you're right - let me try
[20:48] c0s: also, you must run `apt update` after you add the puppet sources
[20:48] that is the root of your issue
[20:48] ok, got it - let do just that.
[20:53] c0s: https://github.com/c0s/juju-layer-puppet/pull/1
[20:54] that should get you going
[21:04] c0s: per ^ - I said that backwards .. you are installing the packages with apt layer, but not the sources
[21:04] c0s: which shouldn't really matter, but consistency is also nice
[21:09] c0s: also, you will need the packages puppet + puppet-common
[21:09] just puppet doesn't cut it unfortunately
=== redir is now known as redir_afk
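[Editor's note] Pulling together the apt-layer thread above (bdx's points at [20:48] and [21:09], and the "remove the quotes" / `options: apt:` confusion around [18:59]): the apt layer reads its package list from layer.yaml, while user-configurable sources typically arrive via config options parsed as YAML, and a fresh `apt update` has to run after a new source is added before the pinned versions resolve. A hedged sketch of the two files involved — package names come from the discussion, but the exact option spelling should be double-checked against the layer-apt README linked at [18:59]:

```yaml
# layer.yaml (sketch): packages for the apt layer to install.
includes: ['layer:basic', 'layer:apt']
options:
  apt:
    packages: [puppet, puppet-common]   # per [21:09]: puppet alone doesn't cut it

# config.yaml (sketch): user-settable sources as an unquoted YAML list in a
# string default (quoting the list was one suspected culprit at [18:59]).
options:
  install_sources:
    type: string
    default: |
      - deb http://apt.puppetlabs.com trusty main
    description: APT sources for the Puppet packages.
  install_keys:
    type: string
    default: |
      - null
    description: GPG keys for install_sources, one entry per source.
```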