[00:42] <beisner> coreycb that one has the bzr import issue and isn't in sync @ LP
[00:45] <coreycb> beisner, oh..  so maybe I'm picking up the LP version
[05:56] <nullagent> Hi all, having trouble getting juju installed on wily. Just did an install on a new machine yesterday and everything went just fine. Now today I decided to re-install on my dev machine to make sure I'm charm dev-ing in an identical environment. Everything fails to start for some reason. No running jujud and my /var/lib/juju is completely empty
[07:13] <hoenir> hi guys, someone could help me with some commands in juju? I'm trying to remove a pending machine with the command juju remove-machine 0
[07:14] <hoenir> but when I do juju status I'm getting the machine.. and it does not remove
[07:15] <hoenir> here is my output on juju status
[07:15] <hoenir> https://paste.ubuntu.com/15824849/
[07:16] <hoenir> I was deploying this machine when my machine was suddenly shut down and now I have this machine in "pending.." state how can I remove it and start the deployment again?
[07:17] <hoenir> oops, too many "machine" words..
[07:18] <hoenir> anyway reading the doc on juju it says that I should run the juju destroy-environment <environment> but how can I know what <environment> I'm in?
[07:18] <blahdeblah_> hoenir: juju switch will tell you which env you're currently in
[07:19] <hoenir> ohh but something tells me that I should not remove this... anyway how can I remove that problematic machine?
[07:21] <hoenir> or should I release that machine from the "MAAS web interface"?
[07:24] <hoenir> I tried from the MAAS web interface, my node is "released" but from juju status I'm still getting that machine in pending state any thoughts on this?
[07:26] <hoenir> juju status
[07:43] <chrido> hoenir: no thoughts, but if destroy-environment does not work you can try --force
[07:44] <hoenir> tried to destroy the controller but it hangs.. any advice?
[07:45] <hoenir> I just wanted to reset everything from 0 to bootstrap again
[07:45] <chrido> with --show-log --force?
[07:46] <hoenir> on juju destroy-controller --show-log --force mycontroller
[07:46] <hoenir> like this?
[07:46] <chrido> yes
[07:47] <hoenir> "force dosen't exist"
[07:48] <hoenir> with just the --show-log opt it's just "dialing"
[07:48] <chrido> you are using juju2 i guess, i think its juju kill-controller
[07:48] <bradm> hoenir: this with juju2 ?  you can try juju2 kill-controller
[07:49] <hoenir> yeee, it worked..thanks a lot guys!
[07:50] <hoenir> and yeah I'm using the 2.0 juju version
[07:50] <hoenir> so what's the primary difference between destroy and kill?
[07:50] <hoenir> could anyone clarify this?
[07:51] <bradm> I think kill is the new --force
[07:53] <hoenir> oh, thanks a lot again !
[07:53] <chrido> here is some explanation: https://jujucharms.com/docs/devel/commands
[07:54] <chrido> also destroys the model
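For anyone hitting the same stuck-machine problem, a rough sketch of the commands discussed above, assuming the Juju syntax of this era (the environment and controller names are placeholders):

    # Find out which environment/model you are currently on.
    juju switch

    # Try force-removing the stuck machine first.
    juju remove-machine 0 --force

    # Juju 1.x: tear down the whole environment, forcing if it hangs.
    juju destroy-environment <environment> --force

    # Juju 2.x: destroy-controller has no --force; kill-controller is the
    # forceful variant, and it also destroys the hosted models.
    juju kill-controller mycontroller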
[07:56] <chrido> Does somebody know where in Juju2 you should put the proxy configuration? (Juju 1 had set-env or you could put it in the environments.yaml)
[08:05] <jamespage> gnuoy, urgh
[08:05] <jamespage> xenial specific issue:
[08:05] <jamespage> http://10.245.162.19/test_charm_pipeline_amulet_full/openstack/charm-neutron-gateway/305121/1/2016-04-13_18-21-57/test_charm_amulet_full/juju-stat-tabular-collect.txt
[08:06] <jamespage> gnuoy, https://bugs.launchpad.net/charms/+source/neutron-gateway/+bug/1547122
[08:06] <mup> Bug #1547122: xenial: nova-api-metadata not running post deployment <openstack> <xenial> <neutron-gateway (Juju Charms Collection):New> <https://launchpad.net/bugs/1547122>
[08:06] <jamespage> I raised a bug a while back, but had not hit it since...
[08:06] <jamespage> systemd is less aggressive than upstart about restarting services that shut down straight away
[08:37] <gnuoy> jamespage, so you think previously it was crashing straight away but upstart was restarting it till it worked?
[08:38] <jamespage> gnuoy, yeah
[08:38] <jamespage> gnuoy, it's racy so you don't see it all of the time on systemd-based installs - this is the 3rd hit i've seen since I started testing xenial
[08:38] <gnuoy> jamespage, on the plus side it's nice the way workload status calls it out clearly
[08:39] <jamespage> gnuoy, yeah
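For context, upstart's respawn stanza keeps restarting a service that exits immediately, which is what masked the crash before; systemd gives up and rate-limits instead. A hedged sketch of a drop-in that approximates the old behaviour (unit name taken from the bug above; values are illustrative, not the charm fix):

    # Make systemd retry nova-api-metadata when it exits straight away,
    # roughly like upstart's respawn.
    sudo mkdir -p /etc/systemd/system/nova-api-metadata.service.d
    sudo tee /etc/systemd/system/nova-api-metadata.service.d/respawn.conf <<'EOF'
    [Service]
    Restart=on-failure
    RestartSec=2
    EOF
    sudo systemctl daemon-reload
    sudo systemctl restart nova-api-metadata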
[08:58] <jamespage> gnuoy, ok fixes up for nova-cc and neutron-gateway
[08:58] <gnuoy> jamespage, kk, thanks
[09:02] <gnuoy> jamespage, it looks like you're not gating the restart on paused status for neutron-gateway
[09:02] <jamespage> gnuoy, oh good point
[09:17] <jacekn_> hello. Can somebody check why my charm is still not recommended in the charm store? It was meant to happen 2 weeks ago. I know ingestion was broken but I'm pretty sure it's fixed now. https://bugs.launchpad.net/charms/+bug/1538573
[09:17] <mup> Bug #1538573: New collectd subordinate charm <Juju Charms Collection:Fix Released> <https://launchpad.net/bugs/1538573>
[09:21] <jamespage> gnuoy, that restart-service-on-nonce-change is a pattern we should model
[09:21] <jamespage> but later...
[09:59] <jamespage> gnuoy, ok both of the reviews for https://bugs.launchpad.net/charms/+source/neutron-gateway/+bug/1547122 are good
[09:59] <mup> Bug #1547122: xenial: nova-api-metadata not running post deployment <openstack> <xenial> <neutron-gateway (Juju Charms Collection):In Progress by james-page> <nova-cloud-controller (Juju Charms Collection):In Progress by james-page> <https://launchpad.net/bugs/1547122>
[09:59] <jamespage> not proposing a full recheck as the change is not specific to xenial
[10:09] <jamespage> gnuoy, just running smoke testing on staging now so we can promote through to updates...
[10:09] <jamespage> coreycb, hit a haproxy problem last night which has now been resolved..
[10:25] <jamespage> dosaboy_, rebased https://review.openstack.org/#/c/300164/ for you
[10:25] <jamespage> turns out we can all contribute to each other's changes!
[10:25] <jamespage> gnuoy, ^^
[10:27] <gnuoy> kk
[10:49] <dosaboy> jamespage: you are too kind
[10:54] <tinwood> gnuoy, how long should simple_os_checks.py take?  I think I've got a 'stuck' test?
[10:55] <gnuoy> tinwood, typically 2 or 3 mins
[10:55] <tinwood> oh.
[10:55] <tinwood> gnuoy, that's not good then.  I'll have to see where it got stuck.
[10:56] <gnuoy> tinwood, give me a shout if you want a hand
[10:57] <tinwood> gnuoy, will do.
[11:04] <dosaboy> gnuoy: might wanna squeeze https://bugs.launchpad.net/charms/+source/swift-proxy/+bug/1570314 into 16.04
[11:04] <mup> Bug #1570314: can't set min-part-hours back to zero <swift-proxy (Juju Charms Collection):In Progress by hopem> <https://launchpad.net/bugs/1570314>
[11:04] <dosaboy> jamespage: ^^
[11:09] <urulama> jacekn: which is your collectd charm? ingestion fix doesn't touch that part (just makes it fast again and deals with disk space)
[11:10] <jacekn> urulama: https://jujucharms.com/u/jacekn/collectd/trusty/0 . Marked as "Fix Released" with +1 from marcoceppi here: https://bugs.launchpad.net/charms/+bug/1538573
[11:10] <mup> Bug #1538573: New collectd subordinate charm <Juju Charms Collection:Fix Released> <https://launchpad.net/bugs/1538573>
[11:14] <urulama> jacekn: the logs see revision 0, but not a newer one
[11:15] <urulama> jacekn: i'll put it on the list. if you want, you can always push it directly with the new charm command from the juju/devel ppa
[11:48] <icey> beisner: no idea why that won't merge
[11:48] <icey> oh yeah I do, it depended on the abandoned C-H sync commit, let me re-make the commit on master
[11:50] <lazyPower> urulama - is there a way that I can flip the necessary bits in launchpad to enable things like bug reporting on a package that doesn't exist in the old launchpad structure? as an example try filing a bug here and watch it complain - https://bugs.launchpad.net/charms/+source/nexentaedge-swift-gw/+filebug
[11:51] <icey> beisner: https://review.openstack.org/#/c/305780/
[11:51] <icey> jamespage: this replicates the abandoned merge's functionality
[11:53] <urulama> lazyPower: not sure i understand correctly. you'd like to have just bug reporting done in LP, but the code would be somewhere else?
[12:44] <lazyPower> urulama - correct, but i think i'm going to move out of the charms collection to do this, as there are some restrictions in there that just don't make sense for this application.
[12:56] <beisner> jamespage, gnuoy - i believe ceilometer is legitimately failing to pause/resume @ wily-liberty (re: https://review.openstack.org/#/c/304188/)
[12:57] <jamespage> beisner, \o/
[12:57] <jamespage> craple
[12:57] <gnuoy> beisner, jamespage, I can grab that and check it out
[12:58] <beisner> i'd say let's land that amulet test update, because it is identical to ceilometer-agent, except for the pause/resume, and i've not touched that.
[12:58] <jamespage> gnuoy, I have my test in odl atm
[12:58] <jamespage> test/head
[12:58] <jamespage> you can see what its doing to me...
[12:58] <gnuoy> beisner, +1
[12:58] <beisner> glad your head is in test, jamespage
[12:58] <beisner> ;-)
[12:58] <gnuoy> beisner, would you mind filing a bug for that and I'll grab it?
[12:59] <beisner> gnuoy, yep sec
[12:59] <jamespage> beisner, btw I have a ch branch that switches our amulet to github
[12:59] <jamespage> however
[12:59] <jamespage> amulet borks on it atm
[12:59] <jamespage> I tried working the feature for an hour this morning and got tangled - hoping marcoceppi might repay my upload work for him with amulet features...
[13:00] <beisner> jamespage, :) saw that wip.  fyi, already had a card in backlog, added your name to it and a link to that review, to revisit in case we don't push that now.
[13:00] <beisner> it looks like it's close
[13:00] <jamespage> beisner, what package provides juju-test ?
[13:00] <beisner> charm-tools i believe
[13:00] <jamespage> beisner, oh
[13:00] <jamespage> ok
[13:03] <beisner> gnuoy, https://bugs.launchpad.net/charms/+source/ceilometer/+bug/1570375
[13:03] <mup> Bug #1570375: pause/resume failing for wily-liberty (blocked: apache2 still running) <uosci> <ceilometer (Juju Charms Collection):New> <https://launchpad.net/bugs/1570375>
[13:03] <beisner> she's baaaack.  the blessed apache2
[13:03] <gnuoy> beisner, ta
[13:04] <beisner> yw gnuoy thx for looking
[13:07] <beisner> icey, ha!  good catch, i totally missed that review dependency.
[13:08] <icey> heh yeah beisner, new change up also updates the C-H in tests but should be passing shortly :)
[13:20] <marcoceppi> jamespage: let me know what features you need, you got it.
[13:21] <jamespage> marcoceppi, https://github.com/juju/amulet/issues/127
[13:21] <jamespage> marcoceppi, just uploaded charm 2.1.1 btw
[13:21] <marcoceppi> jamespage: <3 thank you so much. That's it for packaging from me. I'll just turn everyone else down from this point forward
[13:22] <marcoceppi> jamespage: where would this show up?
[13:22] <marcoceppi> jamespage: in like the Deployment.add()?
[13:22] <jamespage> marcoceppi, yah so
[13:22]  * jamespage looks for references
[13:23] <jamespage> right now we use the bzr branches on lp
[13:23] <jamespage> marcoceppi, I want to do https://review.openstack.org/#/c/304477/1/tests/charmhelpers/contrib/openstack/amulet/deployment.py
[13:24] <jamespage> marcoceppi, the location field gets handed to amulet here
[13:24] <jamespage> https://review.openstack.org/gitweb?p=openstack/charm-neutron-api.git;a=blob;f=tests/charmhelpers/contrib/amulet/deployment.py;h=d451698d344942d957a922529d7caf352e31f1ec;hb=80108561c5b7dba5b7c62811c66a2d5b69e772f0#l68
[13:24] <marcoceppi> jamespage: gotchya, I'll see if we can get a patch to amulet this week
[13:26] <marcoceppi> jamespage: this has ramifications on deployer, which we use to underpin amulet, but I'm sure we can make this happen
[13:26] <jamespage> marcoceppi, deployer already supports this format :-)
[13:26] <marcoceppi> jamespage: then this will be easy
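The location field in question is the per-service charm source that charm-helpers hands through to amulet/deployer. A rough sketch of what a deployer-style service entry looks like with a git URL instead of a bzr branch (the repo URL and deployment name are illustrative, not the actual review content):

    # bundle.yaml, deployer v3-style (illustrative)
    cat > bundle.yaml <<'EOF'
    neutron-api:
      series: trusty
      services:
        neutron-api:
          # was: branch: lp:~openstack-charmers/charms/trusty/neutron-api/next
          branch: https://github.com/openstack/charm-neutron-api
          num_units: 1
    EOF
    juju-deployer -c bundle.yaml neutron-api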
[13:36] <jamespage> beisner, gnuoy: hey so are we going to do the series in metadata thing now that 1.25.5 is out?
[13:37] <jamespage> it would mean that anyone who wants to use the new charms would have to upgrade to latest 1.25 first...
[13:37] <lazyPower> jamespage - i tested that using what's in proposed and it still choked on series-in-metadata
[13:38] <jamespage> lazyPower, well that answers that question - I thought mgz_ had fixed that...
[13:38] <lazyPower> I don't want to cry wolf if this was fixed since Monday
[13:38] <rick_h_> lazyPower jamespage series in metadata choked on 1.25.5?
[13:38] <rick_h_> lazyPower: yes, 1.25.5 that went out yesterday has the fix?
[13:38] <lazyPower> rick_h_ - was that release staged in -proposed ppa?
[13:38]  * lazyPower pulls latest stable to test
[13:38] <rick_h_> yes
[13:39] <lazyPower> ill re-verify, 1 sec
[13:39] <rick_h_> it was in proposed for a bit and hit released yesterday
[13:39] <lazyPower> yeah i was using what was in proposed
[13:39] <lazyPower> and it tanked on me on monday, so lemme flex this again and turn myself into a liar - which is best case scenario here
[13:39] <rick_h_> lazyPower: k, yes please let me know what the metadata looks like and verify version
[13:39] <rick_h_> lazyPower: because we specifically held onto 1.25.5 to get that fixed
[13:39] <lazyPower> ubuntu@charmbox:~$ juju --version
[13:39] <lazyPower> 1.25.5-trusty-amd64
[13:42] <lazyPower> rick_h_ - i'm full of lies, looks like it works. Apologies for the noise :)
[13:42] <lazyPower> jamespage ^
[13:42] <jamespage> lazyPower, lol
[13:42] <rick_h_> lazyPower: ok, phew. I'll not go nuts then :)
[13:44] <marcoceppi> lazyPower: I meant to mention it yesterday at standup, but we tested it at bigdata sprint and it works
[13:44] <lazyPower> marcoceppi - it's weird tho, the proposed ppa was choking on series in metadata so we reverted it. we must have been running on -stable and not -proposed like we thought we were or something
[13:44]  * lazyPower shrugs
[13:44] <lazyPower> i'm just happy to see that i was wrong :D
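For anyone following along, "series in metadata" is the multi-series charm feature: the supported series move out of the store URL and into metadata.yaml, which is why the new charms need juju 1.25.5 or later. A minimal sketch (charm name and text illustrative):

    # metadata.yaml for a multi-series charm (juju >= 1.25.5);
    # the first series listed is the default at deploy time.
    cat > metadata.yaml <<'EOF'
    name: my-charm
    summary: example multi-series charm
    description: supports both trusty and xenial from one charm.
    series:
      - trusty
      - xenial
    EOF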
[13:44] <rick_h_> lazyPower: all good, it happens to the best of us. I mean, there was that one time a few years ago it happened to me :P
[13:45] <lazyPower> rick_h_ - that's pretty much the bar i set for every week. Find one thing i've been complaining about that's been fixed. I'll gladly be the guy that's always wrong if you keep fixing my bugs :D
[13:46] <lazyPower> #thingsilearnedfromjorge
[13:46] <rick_h_> hah
[13:46]  * rick_h_ sees lazyPower turning that into a coffee table book
[13:50] <lazyPower> rick_h_ - i thought you were on vacation this week?
[13:50] <rick_h_> lazyPower: starts tomorrow
[13:50] <rick_h_> lazyPower: so one more day to bug you all before I go :P
[13:51]  * marcoceppi gets my long list of last min asks for rick_h_
[13:51] <lazyPower> rick_h_ i want a pony, can you stuff a pony in before release?
[13:51]  * aisrael adds to the list
[13:51]  * rick_h_ notes all lists must be mailed in triplicate and delivered by USPS...no chance of it getting here before I'm on a plane now!
[13:57] <bbaqar> Anyone got a working openstack bundle for xenial?
[13:59] <jamespage> bbaqar, yeah
[13:59] <jamespage> one second
[13:59] <jamespage> bbaqar, I do - I'll try to push it to lp today (have a couple of meetings to do)
[14:00] <jamespage> bbaqar, it will appear here - https://jujucharms.com/u/openstack-charmers-next/openstack-base
[14:02] <BrunoR> Hi. The submission to the charmstore is described using bazaar, how can I use git instead?
[14:02] <jcastro> cmars: stokachu: hey guys, any of your layered charms worth polishing off and pushing into review for the store?
[14:02] <bbaqar> james, much appreciated
[14:02] <cmars> jcastro, possibly. does that mean i have to use LP?
[14:02] <jcastro> hah no
[14:02] <cmars> jcastro, how do i submit a charm for review straight out of CS?
[14:02] <jcastro> https://jujucharms.com/docs/devel/authors-charm-store
[14:02] <cmars> sweet
[14:03] <jcastro> ^^ this will cover you too BrunoR
[14:03] <cmars> jcastro, cs:~cmars/gogs is close
[14:03] <cmars> jcastro, i've also got mattermost in devel
[14:03] <jcastro> I see it, that's awesome.
[14:03] <jcastro> I'd like to tell those projects what you're up to, but ideally it'd be something solid and testable, something they could be proud of, if you know what I mean
[14:04] <jcastro> BrunoR: the new store doesn't care about vcs, you just use the charm command to publish to the store from a working directory
[14:04] <jcastro> but you will need the latest version of the charm tools from the ppa
[14:08] <BrunoR> jcastro: ok, thanks. that means I can publish the charms on github and do charm push (with my launchpad account) from my working directory?
[14:11] <jcastro> BrunoR: yep.
[14:11] <jcastro> BrunoR: from there you can basically publish into the devel and stable channels as you see fit
[14:12] <marcoceppi> cmars jcastro that won't get it submitted for review, that will just get it into the charm store
[14:12] <jcastro> right
[14:12] <jcastro> I was just about to get to the review queue portion
[14:13] <jcastro> so we're working on a new review queue where you'll just submit whatever version you just published in your stable channel for review
[14:13] <jcastro> and then at some point jujucharms.com/foo will point to your version of the charm
[14:14] <jcastro> or you can leave it in your personal namespace, depending on what you want.
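A sketch of the vcs-agnostic workflow being described, assuming charm-tools 2.x command names (the namespace and charm name are placeholders):

    # From your working directory (a git checkout is fine):
    charm login
    charm push . cs:~brunor/my-charm          # uploads and prints the new revision
    charm publish cs:~brunor/my-charm-0 --channel development
    charm publish cs:~brunor/my-charm-0 --channel stable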
[14:22] <tinwood> jamespage, beisner: may have found a bug in the dpdk code in neutron-openvswitch whilst testing dvr (mojo): https://bugs.launchpad.net/charms/+source/neutron-openvswitch/+bug/1570411
[14:22] <mup> Bug #1570411: custom add_bridge_port(...) function doesn't bring up interface <neutron-openvswitch (Juju Charms Collection):New> <https://launchpad.net/bugs/1570411>
[14:28] <beisner> tinwood, woot :) appreciate the test spike on that.
[14:29] <tinwood> beisner, kk :) I'm trying out a fix -- let you know how it goes.
[14:45] <icey> beisner: you've got 2 community approves on https://code.launchpad.net/~xfactor973/charm-helpers/ceph-pool-replicas/+merge/291827 :)
[14:46] <beisner> icey, cholcombe - ok thanks guys.  will merge shortly.  will both ceph and ceph-mon need resync'd?
[14:46] <icey> I believe so, yes
[14:46] <icey> and thanks beisner!
[14:47] <icey> also, I'll see about writing a couple of tests to cover this case in the future
[14:59] <jamespage> tinwood, ugh, that's possible
[14:59] <tinwood> jamespage, I'm trialling a fix at the moment.
[15:17] <beisner> cholcombe, c-h change landed.  clear to propose re-syncs.  (fyi jhobbs )
[15:17] <cholcombe> beisner, thanks
[15:17] <beisner> tyvm cholcombe jhobbs
[15:19] <cholcombe> beisner, so how many charms do you think are affected?  i think ceph, ceph-mon, cinder, glance, radosgw.  anything else?
[15:19] <beisner> cholcombe, let's get jamespage 's assessment on that.  i'm not sure.
[15:19] <cholcombe> ok
[15:29] <beisner> jamespage, fyi bug 1565120 ^
[15:29] <mup> Bug #1565120: incorrect replica count in a single-unit ceph deployment <oil> <ceph (Juju Charms Collection):In Progress by xfactor973> <glance (Juju Charms Collection):Invalid> <https://launchpad.net/bugs/1565120>
[15:30] <jamespage> beisner, cholcombe: the pool creation happens in the ceph and ceph-mon charms, right?
[15:30] <cholcombe> jamespage, i believe it does
[15:30] <cholcombe> i'm going to resync both of them
[15:30] <jamespage> cholcombe, I think that scope makes sense to me
[15:31] <jamespage> that was the intent of the broker - shove everything serverside in ceph itself...
[15:31] <cholcombe> so i can skip cinder/glance/radosgw
[15:48] <jamespage> beisner, ok I switched the xenial test for odl to use BE - point testing that before I do a recheck-all
[15:48] <jamespage> full rather
[15:49] <beisner> ack jamespage thx
[15:50] <jamespage> beisner, graaddhdhdhhdhsafdchjk cvsdjhn]#
[15:50] <jamespage> 14:43:20 Permission denied (publickey).
[15:50] <jamespage> 14:43:20 ConnectionReset reading response for 'BzrDir.open_2.1', retrying
[15:50] <jamespage> 14:43:20 Permission denied (publickey).
[15:50] <jamespage> on a recheck-full on mitaka-xenial
[15:50] <jamespage> that's like the last test...
[15:50] <beisner> wee.   i suspect the IS-outage is affecting us.
[15:50] <jamespage> ggrrrreeat!
[15:50] <beisner> also hit rockstar's lxd test
[15:51] <rockstar> Yup. I was waiting for that to get sorted before charm-recheck
[15:51] <jamespage> is it just me or has everything been working against us this week....
[15:54] <rockstar> jamespage: it's not just you, but I don't have the luxury of drinking a beer at the end of the day. :)
[15:54] <beisner> http://i.imgur.com/KZyNequ.gif
[16:07] <c0s> guys, any references to the nodejs layer source code? I can't find anything on github ;(
[16:08] <lazyPower> stokachu - you did the node layer didn't you?
[16:08] <lazyPower> c0s https://github.com/battlemidget/juju-layer-node
[16:08] <stokachu> yea
[16:08] <c0s> yep, thanks lazyPower - just found it too ;)
[16:09] <stokachu> c0s: interfaces.juju.solutions points to the upstream git repo too
[16:09] <c0s> k!
[16:09] <c0s> thanks stokachu
[16:09] <stokachu> np, patches very welcome too ;)
[16:11] <c0s> Sure, but I won't touch nodejs :)
[16:11] <c0s> I want to take a look at it so I can do puppet layer ;)
[16:11] <c0s> total noob with charms
[16:31] <bdx> c0s: what's your strategy on the puppet layer?
[16:32] <bdx> c0s: heres what I have going on so far for a puppet agent layer -> https://github.com/jamesbeedy/layer-puppet-agent
[16:35] <jamespage> beisner, oh chicken and egg
[16:35] <jamespage> I need https://review.openstack.org/#/c/305121/ to land before I can get the xenial amulet tests to pass again...
[16:36] <jamespage> for odl-controller
[16:54] <c0s> bdx I am doing a master-less puppet layer, so we can get a fixed version of puppet and (mostly) hiera on trusty
[16:55] <c0s> this will be a pretty dumb-ass one: just installing packages from the correct puppetlabs repo
[16:56] <beisner> jamespage, cholcombe, rockstar - i've squashed the check that has been intermittently causing "ERROR:root:One or more units are not ok (machine 0 ssh check fails)" .. you'll need a recheck if you saw that.   the underlying issue is unidentified, but the bootstrap node is clearly not accessible at the moment we were checking.
[16:57] <beisner> will add card to revisit and t-shoot as a potential juju issue
[16:57] <cholcombe> beisner, yeah i saw a bunch of failures that seemed wrong because they passed tests locally
[16:57] <rockstar> zul: if you please - https://review.openstack.org/#/c/305896/
[16:58] <beisner> fyi that is re: the charm single test.   compounding that, it looks like several tests were false-failed by an internet/infra hiccup (lp bzr ssh fails)
[16:58] <beisner> recheck ftw
[16:58] <zul> rockstar: done
[17:14] <bdx> c0s: look at my layer
[17:14] <bdx> c0s: that is exactly what it does
[17:15] <bdx> c0s: if you `charm build` layer-puppet-agent, you will get a charm that does what you are talking about
[17:15] <bdx> c0s: I made layer-puppet-agent for that purpose alone
[17:34] <bbaqar> Getting an ERROR cannot resolve charm URL "cs:xenial/ubuntu": charm not found when I deploy the ubuntu charm on xenial .. where should I pull it from .. should I just use the trusty one?
[17:52] <marcoceppi> bbaqar: just use the trusty one
[17:52] <marcoceppi> bbaqar: why do you need the ubuntu charm anyways?
[17:52]  * marcoceppi is curious
[17:58] <bbaqar_> What is the user/pass for logging into mysql?
[18:01] <sparkiegeek> bbaqar: https://jujucharms.com/u/landscape/ubuntu/xenial/0 if you need it :)
[18:09] <marcoceppi> bbaqar: that's created when you create a relation, the root password is available on the unit in /var/lib/mysql/mysql.passwd
[18:10] <marcoceppi> lazyPower: 2.1.1 backported to juju/stable should be built in a few mins
[18:13] <lazyPower> \nn/,
[18:15] <bbaqar_> sparkiegeek: thanks :)
[18:21] <jhobbs> hey beisner - just curious when this charmhelpers fix will be synced to the -next ceph charm https://code.launchpad.net/~xfactor973/charm-helpers/ceph-pool-replicas/+merge/291827
[18:21] <jhobbs> any ideas?
[18:22] <beisner> jhobbs, cholcombe has the syncs proposed and pending CI/functional tests now
[18:22] <beisner> hoping to have results and land tonight
[18:22] <jhobbs> beisner: excellent, thanks
[18:23] <bbaqar_> marcoceppi: there is no /var/lib/mysql/mysql.passwd .. i am on cs:trusty/percona-cluster-32
[18:24] <marcoceppi> bbaqar: I'm not sure where percona-cluster puts it
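Putting the two answers together as a quick sketch (the password file path is the one marcoceppi gives for the mysql charm; percona-cluster evidently keeps it elsewhere):

    # Deploy the xenial ubuntu charm from the ~landscape namespace:
    juju deploy cs:~landscape/ubuntu/xenial/0
    # Read the mysql charm's generated root password from the unit:
    juju ssh mysql/0 sudo cat /var/lib/mysql/mysql.passwd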
[18:27] <beisner> jhobbs, yw & thx for raising that.  fyi we've got a card to implement that scenario as a regression test before 16.07.
[18:28] <jhobbs> beisner: np, glad we decided to start testing on -next. regression test sounds good, that will ease my mind about keeping that scenario working
[18:36] <beisner> jamespage, looks like your neutron-gateway change is tripping over bug 1570032 (raised yesterday, tldr:  all precise is borky on this)
[18:36] <mup> Bug #1570032: Precise-Icehouse:  publicURL endpoint for network service not found <uosci> <neutron-gateway (Juju Charms Collection):New> <https://launchpad.net/bugs/1570032>
[18:36] <beisner> thedac, did you have any cycles to poke at that ?^
[18:37] <thedac> beisner: I have not yet. I'll try and look this afternoon
[18:37] <beisner> much appreciated thedac
[18:38] <bbaqar_> in HA why doesn't horizon ask for provider type and segmentation id, when creating a network from admin?
[18:38] <bbaqar_> Is there any config that needs to be done for it?
[18:39] <bbaqar_> I can use the CLI to make an external network with provider type and segmentation id but why can't I do that in horizon
[18:53] <c0s> hmm... interestingly it seems that the apt layer doesn't work ;(
[18:54] <marcoceppi> c0s: how so?
[18:54] <marcoceppi> I use it frequently and it works quite well
[18:55] <marcoceppi> beisner coreycb do you have an answer for bbaqar's question? ^^
[18:56] <c0s> after the apt layer is done there are no sources added to /etc/apt/sources.list.d/
[18:56] <c0s> as a result, the wrong version of the package gets installed
[18:58] <c0s> marcoceppi: I am using it for 2 install sources, not sure if that causes the problem
[18:58] <beisner> hi bbaqar_, marcoceppi - i've only done that sort of admin network create/wiring foo via nova and neutron cli.   if the behavior is unexpected, please raise a bug and be sure to include details of what is deployed (openstack version, ubuntu version, bundle if you have it, etc).
[18:58] <c0s> https://github.com/c0s/juju-layer-puppet/blob/master/config.yaml
[18:59] <marcoceppi> c0s: remove the quotes?
[18:59] <marcoceppi> c0s: https://git.launchpad.net/layer-apt/tree/README.md#n28
[18:59] <c0s> ok, lemme try
[18:59] <c0s> yeah, I read it...
[18:59] <tvansteenburgh> options: apt: ....
[19:00] <tvansteenburgh> aren't you missing the apt key?
[19:01] <marcoceppi> tvansteenburgh: these are config.yaml not layer.yaml
[19:01] <c0s> it's right there in the config
[19:02] <c0s> what marcoceppi said
[19:05] <rockstar> beisner: could I get a Workflow +2 on this? https://review.openstack.org/#/c/305896/
[19:08] <c0s> nope, marcoceppi - removing quotes doesn't have any effect
[19:16] <beisner> rockstar, we'd need to update bundles in o-c-t (and possibly the charm store <-- jamespage ) and i want to make sure we have that all queued up to land along side this.  otherwise deploy tests will start to fail if we land this alone.
[19:16] <rockstar> beisner: are those gate tests?
[19:17] <beisner> rockstar, no, but they run every day, and are part of the release process
[19:17] <rockstar> beisner: do you want me to do that?
[19:19] <beisner> rockstar, yes please.   lp:openstack-charm-testing    ...the *next* files in the bundles/lxd and bundles/multi-hypervisor dirs will need MPs.   then after release, the *default* files in the same dirs will need MPs.
[19:21] <beisner> rockstar, fwiw, your gate would be ++many-hours if we did all that on commits ;-)   hence the scheduled runs.
[19:24] <rockstar> beisner: cool. Branch coming.
[19:59] <c0s> marcoceppi: is there a need to call configure_sources from the apt layer to make these sources available somehow?
[19:59] <marcoceppi> c0s: shouldn't
[19:59] <marcoceppi> c0s: stub wrote the layer, he might be able to help
[20:04] <c0s> thanks marcoceppi. stub - if you can help figure out why the apt layer doesn't configure sources in this commit
[20:04] <c0s>   https://github.com/c0s/juju-layer-puppet/commit/59bdb701af0dc65054bc33e935c993fd97cbb2ab
[20:04] <c0s> I would be real grateful ;)
[20:12] <marcoceppi> c0s: layer is off to a good start
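For reference, layer:apt expects install_sources and install_keys as string config options, each holding a YAML list with no extra quoting around the entries, which is what marcoceppi's "remove the quotes" points at. A hedged sketch of the shape (the source lines and key are placeholders, not c0s's exact config):

    # config.yaml for a charm built on layer:apt (illustrative)
    cat > config.yaml <<'EOF'
    options:
      install_sources:
        type: string
        description: YAML list of apt sources, one entry per source.
        default: |
          - ppa:my/ppa
          - deb http://apt.puppetlabs.com trusty PC1
      install_keys:
        type: string
        description: YAML list of keys matching install_sources (null = none).
        default: |
          - null
          - my-key-id
    EOF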
[20:18] <jcastro> hey so marcoceppi
[20:18] <jcastro> https://jujucharms.com/docs/devel/getting-started
[20:18] <jcastro> so if get-started ends  up going here I have some concerns
[20:18] <jcastro> so first off, this is a kickass local/zfs setup, which is fine
[20:18] <marcoceppi> jcastro: some is an understatement
[20:19] <jcastro> but it feels a little full on if someone just wants to try it
[20:19] <marcoceppi> jcastro: yes, but it's not terrible
[20:19] <jcastro> right
[20:19] <jcastro> I am talking from a zero-to-workload aspect
[20:20] <jcastro> obviously if someone is using juju for more than just trying it we'd want to give them this experience
[20:20] <jcastro> I am just concerned that it's like "want to try juju? step one, ZFS IN YOUR FACE."
[20:20] <bdx> c0s: you are using the layer apt to install the sources, but not the packages
[20:20] <jcastro> it's like dang all I wanted to do was fire up mysql
[20:21] <bdx> c0s: see -> https://github.com/jamesbeedy/layer-kibana
[20:22] <marcoceppi> jcastro: lets talk Monday about it
[20:22] <jcastro> ack
[20:32] <c0s> thanks bdx - checking
[20:33] <c0s> bdx - I am actually installing the packages in the layer
[20:34] <c0s> bdx, the call is right here https://github.com/c0s/juju-layer-puppet/blob/master/reactive/puppet.py#L48
[20:45] <c0s> bdx, I see what you're saying
[20:45] <c0s> I think you're right - let me try
[20:48] <bdx> c0s: also, you must run `apt update` after you add the puppet sources
[20:48] <bdx> that is the root of your issue
[20:48] <c0s> ok, got it - let me do just that.
[20:53] <bdx> c0s: https://github.com/c0s/juju-layer-puppet/pull/1
[20:54] <bdx> that should get you going
[21:04] <bdx> c0s: per ^ -  I said that backwards .. you are installing the packages with apt layer, but not the sources
[21:04] <bdx> c0s: which shouldn't really matter, but consistency is also nice
[21:09] <bdx> c0s: also, you will need the packages puppet + puppet-common
[21:09] <bdx> just puppet doesn't cut it unfortunately
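A sketch of where the thread ends up: declare the packages in layer.yaml so layer:apt adds the configured sources, runs apt-get update, and only then installs, instead of the charm shelling out to apt itself (option layout as I understand layer:apt; package names from the discussion above):

    # layer.yaml -- let layer:apt own the install so the update
    # happens after the puppetlabs source is added (illustrative).
    cat > layer.yaml <<'EOF'
    includes:
      - layer:basic
      - layer:apt
    options:
      apt:
        packages:
          - puppet
          - puppet-common
    EOF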