[03:01] <spaok> how does provides work in the charms? I'm looking at https://github.com/juju-solutions/layer-docker/blob/master/metadata.yaml  but I don't see any other reference to dockerhost besides it being mentioned in the provides
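No one picks this up in the log, so a rough sketch for context: the key under `provides` is the relation name, and it is usually the only place it shows up in the charm itself (values below are illustrative, not copied from layer-docker):

    # metadata.yaml sketch (names illustrative)
    provides:
      dockerhost:              # relation name other charms use with add-relation
        interface: dockerhost  # interface protocol; must match the requiring charm's side

The code that actually reacts to the relation lives in the matching interface layer, which `charm build` pulls in, so the charm's own files don't need to mention dockerhost anywhere else.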
[08:26] <caribou> Hello, are there (known) issues with amulet testing on Xenial ?
[08:26] <caribou> my tests run fine on Trusty, but on Xenial ./amulet/filesystem_data.py expects python2 to be available, which it is not
[08:27] <magicaltrout> marcoceppi is/was in europe so you might get a quick answer to that when he's around
[09:49] <marosg> I did "juju deploy ubuntu". It failed from the command line with "unknown channel candidate". I am on beta15, so that's expected; I would need beta16. However, when I did the same from the Juju GUI, it worked. Just curious: is the GUI using a different mechanism to access the charm store?
[09:51] <magicaltrout> works here on RC1 marosg
[09:51] <magicaltrout> there was a bunch of changes to channel naming
[09:51] <magicaltrout> which I suspect is what you're seeing
[09:55] <marosg> I understand why the CLI does not work, it is exactly because of those name changes. I am just surprised the GUI works.
[09:56] <magicaltrout> you might be able to fudge it with --channel stable
[09:56] <magicaltrout> or something
[09:56] <magicaltrout> instead of having it ponder which to choose
[10:19] <marosg> yes, --channel stable helps. But my original question was how come the GUI worked. Is the GUI using a different mechanism to access the charm store?
[10:21] <magicaltrout> marosg: it will just append that channel flag to the call
[10:22] <magicaltrout> whereas your out-of-date juju command-line client won't
[10:22] <magicaltrout> :)
[10:22] <magicaltrout> if you apt-get update and rebootstrap you'd see that you don't need to do that on the CLI either
[10:23] <marosg> ok, now I understand, thanks
[10:23] <magicaltrout> no probs
[12:12] <Andrew_jedi> Hello guys, I was wondering whether this bug fix was included in the openstack oslo messaging charms for Liberty? https://bugs.launchpad.net/oslo.service/+bug/1524907
[12:12] <mup> Bug #1524907: [SRU] Race condition in SIGTERM signal handler <sts> <sts-sru> <Ubuntu Cloud Archive:Fix Released> <Ubuntu Cloud Archive liberty:In Progress by hopem> <oslo.service:Fix Released> <python-oslo.service (Ubuntu):Fix Released> <python-oslo.service (Ubuntu Wily):Won't Fix>
[12:12] <mup> <python-oslo.service (Ubuntu Xenial):Fix Released> <python-oslo.service (Ubuntu Yakkety):Fix Released> <https://launchpad.net/bugs/1524907>
[12:14] <Andrew_jedi> jamespage: ^^
[12:23] <jamespage> Andrew_jedi, there is a patch on the bug report, but it's not been pulled into the SRU process for the Liberty UCA yet
[12:30] <KpuCko> hello, is there any way to do a charm search on the command line?
[12:40] <Andrew_jedi> jamespage: Thanks, so the only way for me now is to manually apply this patch. I am not sure where I should apply this patch. Is there any other workaround?
[12:41] <jamespage> not really - the patch is in the queue, just lots of other things also contending for developer time
[12:42] <Andrew_jedi> jamespage: My cinder scheduler is refusing to remain in an active state. Any pointers on what I should do in the meantime?
[12:42] <magicaltrout> KpuCko: not currently
[12:43] <KpuCko> mhm, thanks
[12:43] <KpuCko> another question: how do I add a charm from the CLI without deploying it?
[12:44] <KpuCko> I mean, I have to do some configuration before deployment?
[12:44] <magicaltrout> you can't stage them, but I think you can pass configuration options along with the deploy command
[12:45] <magicaltrout> https://jujucharms.com/docs/1.24/charms-config
[12:45] <magicaltrout> like there
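From the linked docs, the usual way to hand a charm its configuration at deploy time is a YAML file keyed by service name; a minimal sketch (service name and option are illustrative):

    # myconfig.yaml
    mysql:
      dataset-size: 50%

    juju deploy --config=myconfig.yaml mysql
    # options can also be changed after deployment with: juju set mysql dataset-size=50%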
[12:46] <jamespage> Andrew_jedi, give me an hour and I'll get it up into liberty-proposed
[12:46] <KpuCko> mhm, okey will try that
[12:46] <jamespage> it's been kicking around a few weeks and I'm not sure why
[12:46] <Andrew_jedi> jamespage: Thanks a ton :)
[12:46] <Andrew_jedi> \O/
[12:47] <jamespage> Andrew_jedi, track the bug - there will be an automatic comment telling you how to test it when it gets uploaded
[12:47] <Andrew_jedi> jamespage: Roger that!
[12:56] <bbaqar__> exit
[13:00] <MrDan> hi guys
[13:00] <MrDan> is rc2 out on ppa?
[13:00] <rick_h_> MrDan: not yet, it'll be late today.
[13:00] <MrDan> great, thanks
[13:01] <rick_h_> MrDan: we're working on getting the CI run through with the network fix for the NG thing, so the release build hasn't started yet
[13:46] <lazyPower> o/ Morning #juju
[14:05] <pitti> hello
[14:06] <pitti> I just tried to redeploy a service with juju-1.25 in Canonical's Prodstack; before it even gets to the actual charm, the agent install fails with
[14:06] <pitti> 2016-09-29 13:44:00 WARNING juju.worker.dependency engine.go:304 failed to start "leadership-tracker" manifold worker: dependency not available
[14:06] <pitti> 2016-09-29 13:48:30 ERROR juju.worker.uniter.filter filter.go:137 tomb: dying
[14:06] <pitti> (the latter repeats over and over)
[14:06] <pitti> does that ring a bell?
[14:06] <pitti> (it's a xenial instance)
[14:14] <lazyPower> pitti:  i'm not seeing anything in the issue tracker that looks relevant
[14:15] <lazyPower> pitti: can you file a bug with the controller logs and machine-agent logs, if there are any, from the unit that's failing?
[14:17] <pitti> lazyPower: yes, there are; I'll do that
[14:17] <lazyPower> thanks, sorry about the inconvenience :/
[14:19] <bdx_> lazyPower: sup
[14:19] <bdx_> lazyPower: I'm going to be on a fire mission to re-write the elasticsearch charm
[14:20] <bdx_> lazyPower: we require a client node architecture
[14:20] <lazyPower> bdx_: I'm OK with this - but I have 2 things to put out there
[14:21] <lazyPower> 1) retain all the existing relations, 2) deploy the old charm and upgrade to the new one to make sure it's a drop-in replacement for existing deployments (if you're keeping trusty as the target series)
[14:21] <lazyPower> (or multiseries)
[14:21] <bdx_> ok, xenial will be my target .... do I need to support trusty too?
[14:21] <lazyPower> well, there's a lot of elasticsearch deployments out there on our trusty charm
[14:22] <lazyPower> so, maybe once there's a straw man, poll the list and we go from there?
[14:22] <bdx_> perfect
[14:22] <lazyPower> I doubt they will want it, as ES upgrades between major versions are hairy
[14:22] <lazyPower> requires data dump + data restore in most cases
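A rough sketch of the drop-in check from point 2 above: deploy the published trusty charm, recreate the relations you care about, then switch to the locally built rewrite (names, paths, and the --switch/--path spelling depend on your juju version):

    juju deploy cs:trusty/elasticsearch
    juju add-relation elasticsearch kibana     # recreate whatever existing relations need to survive
    # once everything settles, swap in the rewritten charm and confirm relations/data are intact:
    juju upgrade-charm elasticsearch --switch local:trusty/elasticsearch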
[14:22] <lazyPower> bdx_: are you using logstash in any of your deployments?
[14:23] <bdx_> not yet
[14:23] <bdx_> why, whats up with it?
[14:24] <lazyPower> its about to get some TLC after i get ceph moving
[14:24] <lazyPower> i need to buffer beats input
[14:25] <lazyPower> I've discovered that you can tank an ES instance with Beats fairly quickly
[14:25] <bdx_> lazyPower: really?
[14:25] <bdx_> good to know
[14:25] <lazyPower> yep. I had an 8-node cluster pushing to an underpowered ES host last night that died
[14:25] <bdx_> wow
[14:25] <lazyPower> when I buffered it through Logstash I had a better guarantee of the packets coming in at a consistent rate, and it didn't tank the database
[14:25] <bdx_> that makes sense
[14:26] <lazyPower> what really happened is it was taking too long to respond to Kibana, so Kibana thought the ES adapter was dead
[14:26] <lazyPower> it's a fun set of dependencies... the log router is a more important thing than I gave it credit for
[14:26] <bdx_> I'm pretty sure I've experienced what you've described
[14:27] <bdx_> I just assumed kibana was bugging out
[14:27] <lazyPower> its a hair more complex than that
[14:27] <lazyPower> but you had it mostly right
[14:27] <bdx_> right
[14:27] <lazyPower> I feel that Kibana should be more intelligent about reporting what the issue is. Just saying "ES Adapter Failed" isn't terribly helpful when you're staring at a 502 page
[14:28] <bdx_> totally
[14:28] <lazyPower> like "query latency" or "omg load wtf"
[14:28] <bdx_> YES
[14:28] <bdx_> my plan is to create layer-elasticsearch-base
[14:28] <lazyPower> well bdx_, let me show you this
[14:29] <lazyPower> https://www.elastic.co/products/watcher
[14:29] <lazyPower> coupled with https://www.elastic.co/products/reporting
[14:29] <bdx_> oooooh
[14:29] <bdx_> thats sick
[14:29] <lazyPower> elastic flavored prometheus?
[14:29] <lazyPower> with the capacity to email reports on a daily/weekly/monthly basis of charts you define in kibana
[14:30] <bdx_> wow
[14:30] <lazyPower> i have no time to write this stack up
[14:30] <lazyPower> but seems interesting
[14:30] <bdx_> I want it
[14:30] <lazyPower> so I thought I'd put it out there
[14:30] <bdx_> thx
[14:30] <lazyPower> i'm happy to patch pilot you in if you want to contribute these
[14:30] <bdx_> I entirely do
[14:30] <lazyPower> brb, refreshing coffee
[14:31] <bdx_> I'm currently charming up a set of 10+ apps
[14:31] <pitti> lazyPower: I filed https://bugs.launchpad.net/juju/+bug/1628946
[14:31] <mup> Bug #1628946: [juju 1.25] agent fails to install on xenial node <juju:New> <https://launchpad.net/bugs/1628946>
[14:31] <bdx_> I think all but 1 uses elasticsearch
[14:33] <bdx_> 3 apps are already being deployed as charms .... but I am under high load atm ... not sure if I'll be able to start hacking at it for a minute yet
[14:33] <lazyPower> bdx_: sounds like a good litmus. you have some interfaces already written for you (a start with requires)
[14:33] <bdx_> I'm trying to finish the other 7
[14:33] <lazyPower> i'm curious to see your test structure for that bundle when its done :)
[14:33] <lazyPower> pitti: i've +1d the heat. thanks for getting that filed
[14:34] <bdx_> canonical needs to hire you an apprentice
[14:34] <bdx_> bdb
[14:34] <bdx_> brb
[14:34] <lazyPower> pfft canonical needs to hire me 3 more mentors. Mbruzek is tired of nacking my late night code ;)
[14:35] <pitti> lazyPower: cheers
[14:35] <lazyPower> I clearly haven't had the bad habits beaten out of me yet
[14:38] <magicaltrout> hobbits?
[14:44] <lazyPower> with their fuzzy feetses
[14:45] <lazyPower> what are mesos my troutness?
[14:45] <lazyPower> eh i was reaching there, disregard that last bit
[14:46] <lazyPower> jose: I haven't forgotten that I still owe you some CouchDB testing time. How's tomorrow looking for you?
[14:48] <jose> lazyPower: owncloud. Do you have time in a couple of hours? I have to do some uni stuff tomorrow
[14:48] <lazyPower> jose: i can stick around after hours. i'm pretty solid today with post release cleanup
[14:48] <lazyPower> but i can lend a hand w/ OC tests or couch tests. take your pick
[14:49] <jose> OC; couch is being checked by couch, OC is an amulet thing (old amulet)
[14:50] <jose> I should be home at around noon your time, does that work?
[14:51] <lazyPower> I'm still going to be full tilt on current duties, but I can TAL at MPs and test run results/questions
[14:51] <lazyPower> btw, again, this coffee man... :fire:
[14:51] <lazyPower> i've been spacing it out so it lasts longer
[14:51] <jose> lol I can get you some more soon
[17:57] <smgoller> hey all, so I'm using juju to deploy the openstack bundle, and I've got an external network with a VLAN tag set up. The goal is to assign VMs IP addresses directly on that network, so no floating IPs are involved. The network is up, and I can ping openstack's DHCP server instance from the outside world. However, the VM I launched connected to that network is unable to get metadata from nova. How should I configure the bundle so that can work?
[18:01] <smgoller> according to this post: http://abregman.com/2016/01/06/openstack-neutron-troubleshooting-and-solving-common-problems/ I need to set 'enable_isolated_metadata = True' in the dhcp agent configuration file. I'm not sure if that solves the problem, but is there a way from juju to add that configuration?
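For reference, the setting named in that post lives in the Neutron DHCP agent's config on the gateway node; whether the charm exposes it as a config option is exactly the open question here, so treat this as the target end state rather than the juju-side answer:

    # /etc/neutron/dhcp_agent.ini
    [DEFAULT]
    enable_isolated_metadata = True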
[18:06] <smgoller> thedac: any ideas? instances directly connected to an external VLAN aren't getting cloud-init data properly. I can ping openstack's dhcp server address from the outside world, so connectivity is good.
[18:08] <thedac> smgoller: Hi. Are you using the neutron-gateway at all? By default this is where we run nova-api-metadata. If not, it can be run on the nova-compute nodes directly
[18:09] <smgoller> there's no openstack router involved if that's what you mean
[18:09] <smgoller> thedac: the gateway is external to openstack on the vlan
[18:10] <thedac> right
[18:10] <thedac> I mean are you deploying with our charm "neutron-gateway" in the mix?
[18:10] <smgoller> yeah
[18:10] <smgoller> this is your openstack-base bundle
[18:10] <thedac> ok
[18:10]  * thedac re-reads through the details
[18:11] <smgoller> only thing that's configured in the charm is bridge-mappings and data port
[18:11] <smgoller> i've created the vlan network manually
[18:11] <thedac> Ok, does the VM get a DHCP address?
[18:11] <smgoller> according to openstack it does
[18:11] <thedac> you can figure that out from nova console-log $ID
[18:11] <smgoller> thedac: ok one sec.
[18:13] <thedac> smgoller: you are looking for something along the lines of http://pastebin.ubuntu.com/23252275/
[18:13] <thedac> in that console output
[18:14] <smgoller> definitely nothing like that
[18:14] <smgoller> ... (5min 7s / 5min 8s) [FAILED] Failed to start Raise network interfaces.
[18:14] <smgoller> See 'systemctl status networking.service' for details.
[18:14] <smgoller> [DEPEND] Dependency failed for Initial cloud... job (metadata service crawler).
[18:14] <smgoller> ack
[18:15] <thedac> Any chance I can see a pastebin of that?
[18:15] <smgoller> thedac: http://paste.ubuntu.com/23252290/
[18:15] <thedac> thanks. Let me take a look
[18:20] <thedac> smgoller: ok, the VM is definitely not getting a DHCP address. Do you know if your neutron network setup commands set GRE instead of flat network for your tenant network?
[18:21] <thedac> Let me find you the right commands to check. One sec
[18:21] <smgoller> thedac: I set up the network via horizon and set the type to vlan.
[18:21] <smgoller> but yeah, let's verify
[18:26] <thedac> sorry, struggling to find the right command. Give me a few more minutes
[18:26] <smgoller> sure
[18:28] <smgoller> thedac: thank you so much for helping me with this.
[18:28] <thedac> ah ha, I was not admin. Run 'neutron net-show $tenant_net'. What does provider:network_type say?
[18:28] <thedac> no problem
[18:30] <thedac> smgoller: couple more questions in the PM
[19:53] <spaok> whats the best way to get the IP of the unit running my charm?
[20:13] <valeech> spaok: is it in a container?
[20:18] <smgoller> anyone have any ideas why juju-list-models would hang for a very long time?
[20:18] <smgoller> adding and switching models happens instantly
[20:19] <spaok> valeech: yes, it will be
[20:20] <spaok> i was looking at unit_get('public-address')
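A minimal sketch of the charmhelpers calls being discussed, assuming a reactive/hook context (the wrapper helpers live in charmhelpers.core.hookenv):

    # sketch: reading the unit's own addresses from hook context
    from charmhelpers.core import hookenv

    public_addr = hookenv.unit_get('public-address')    # address the provider exposes externally
    private_addr = hookenv.unit_get('private-address')  # address on the model's internal network
    # hookenv.unit_public_ip() / hookenv.unit_private_ip() are shorthands for the same lookups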
[21:04] <marcoceppi> magicaltrout: I'm back in the US :)
[21:19] <magicaltrout> booooo
[21:20] <magicaltrout> i hope you're jetlagged
[21:28] <spaok> when I try to build my charm I get "build: Please add a `repo` key to your layer.yaml", but I added repo to the layer.yaml
[21:29] <spaok> oh never mind, the doc confused me
[21:30] <lazyPower> spaok: that's a known bug. The build process is linting the layer directory and not the output charm directory. https://github.com/juju/charm-tools/pull/256
[21:31] <lazyPower> but glad you got it sorted :)
[21:49] <spaok> is there a way to make config options required, like juju won't deploy a service unless you set the values?
[21:50] <magicaltrout> nope, but you could block the actual install until they are set, spaok
[21:51] <magicaltrout> then relay a status message like "waiting for configuration you moron"
[21:51] <spaok> ok, I'll try that
[21:52] <spaok> is the pythonhosted api for charmhelpers the most current?
[21:52] <lazyPower> yep
[21:52] <spaok> http://pythonhosted.org/charmhelpers/api/charmhelpers.core.host.html#charmhelpers.core.host.rsync
[21:53] <spaok> cause it just says -r as the flags
[21:53] <spaok> but when it runs it uses --delete
[21:54] <lazyPower> spaok: https://bugs.launchpad.net/charm-helpers if you would be so kind sir
[21:55] <spaok> sure, gonna see if I set flags if that changes it
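A hedged sketch of the workaround being tested here: pass the flags explicitly instead of relying on the documented default, and check the signature against the charmhelpers version actually installed, since the docs and the observed behaviour disagreed at the time:

    # sketch: making the rsync flags explicit rather than trusting the documented default
    from charmhelpers.core import host

    host.rsync('/tmp/payload/', '/srv/app/payload/', flags='-az')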
[22:07] <x58> What's the best place to ask for JuJu features that are missing and would make my life easier?
[22:07] <x58> I filed a support case with my support team too... but I can make this one public.
[22:08] <x58> marcoceppi / magicaltrout ^^
[22:09] <magicaltrout> x58: depends what component i guess
[22:09] <x58> Juju itself.
[22:10] <magicaltrout> the core platform?
[22:11] <spaok> lazyPower: I confirmed that if I set the flags it removes the delete option, I'll file a bug in a bit
[22:11] <magicaltrout> x58: for juju core I believe its: https://bugs.launchpad.net/juju/+filebug
[22:12] <magicaltrout> filing stuff there is good, and raising it on the mailing list with a usecase is a bonus
[22:12] <magicaltrout> and recommended
[22:12] <x58> magicaltrout: https://gist.github.com/bertjwregeer/919fe70e8cfc5184399d83ad11df3932
[22:12] <x58> I want to report that. Where do I report that feature request?
[22:13] <x58> Sorry, I try to be helpful, but signing up for another mailing list is not something I really want to deal with.
[22:13] <magicaltrout> yeah stick it in launchpad x58
[22:14] <x58> https://bugs.launchpad.net/juju/+bug/1629124
[22:14] <mup> Bug #1629124: JuJu should learn about customizing configuration by tags <juju:New> <https://launchpad.net/bugs/1629124>
[22:14] <magicaltrout> then prod rick_h_ about it ;)
[22:14] <x58> rick_h_: *prod* https://bugs.launchpad.net/juju/+bug/1629124
[22:15] <magicaltrout> he might know a thing or two that already exists... or just say "we'll schedule that for 2.x" ;)
[22:15] <x58> magicaltrout: It probably will through the support organisation too ;-) dparrish is our DSE
[22:15] <magicaltrout> always helps
[22:15] <lazyPower> hey x58, how's etcd treating you these days?
[22:16] <x58> lazyPower: It is working well. Sometimes you need to kick it once or twice when you spin up a new one due to some SSL cert issue
[22:16] <x58> but resolved -r seems to make it behave.
[22:17] <x58> And removing a running instance seems to fail about 30% of the time.
[22:17] <lazyPower> x58: what if I told you we're going to replace that peering certificate with a CA?
[22:17] <x58> No rhyme or reason.
[22:17] <lazyPower> hmm. is it the last one around?
[22:18] <lazyPower> I've observed cases where the leader seems to get behind in unregistering units and it tanks trying to remove a member
[22:18] <x58> Nope, not the last one around.
[22:18] <lazyPower> if you can confirm that, I think I know how to fix it, and it would be *wonderful* if you could keep an eye out for that and do a simple is-leader check on the unit
[22:19] <x58> Let's say I spin up 3 - 4 of them
[22:19] <x58> I then remove 1
[22:19] <x58> that 1 that I remove might or might not succeed in removal. Sometimes it hangs and a resolved -r kicks it.
[22:19] <lazyPower> ah ok
[22:19] <lazyPower> I'll add a scale-up/down test and try to root that out
[22:20] <x58> etcd just seems finicky.
[22:20] <lazyPower> terribly
[22:20] <lazyPower> thanks for the feedback
[22:22] <x58> lazyPower: Thanks for your work on it :-)
[22:22] <x58> So long as I don't touch how many we have, things are fine :P
[22:33] <smgoller> thedac: ok, so I've set up a second cluster. this time I'm seeing this error constantly in the /var/log/neutron/neutron-openvswitch-agent.log: 2016-09-29 21:49:14.443 127296 ERROR oslo.messaging._drivers.impl_rabbit [req-c79a053b-8c2e-45d6-9ebc-4fddab0cf279 - - - - -] AMQP server on 127.0.0.1:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 32 seconds.
[22:35] <thedac> smgoller: during deploy those messages are normal. After everything has settled and the rabbit connection info is in /etc/neutron.conf you should no longer see those.
[22:35] <smgoller> ok
[22:35] <smgoller> that makes sense
[22:38] <admcleod> how does one deploy a multiseries charm locally?
[22:39] <thedac> admcleod: juju deploy ./$CHARM --series $SERIES
[22:40] <admcleod> thedac: juju 1.25?
[22:40] <thedac> ah, for juju 1.25 you need the charm in a series-named directory: juju deploy $SERIES/$CHARM
[22:41] <admcleod> thedac: so... I've built it as multiseries and it goes into ./builds; you're saying just copy it into ../trusty/?
[22:41] <thedac> yes, that should work
[22:45] <admcleod> thedac: thanks
[22:45] <thedac> no problem
[22:52] <admcleod> thedac: (it worked)
[22:52] <thedac> \o/
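Recapping the two spellings from the exchange above for a locally built multi-series charm (paths and charm name illustrative; on 1.25 you may also need the local: repository form with --repository or $JUJU_REPOSITORY set):

    # juju 2.x: deploy the build output directly, picking the series
    juju deploy ./builds/mycharm --series trusty
    # juju 1.25: copy the build into a series-named directory first
    cp -r builds/mycharm trusty/mycharm
    juju deploy trusty/mycharm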
[23:12] <spaok> magicaltrout: do you know any charm examples doing the blocking thing you mentioned?
[23:15] <magicaltrout> https://github.com/OSBI/layer-pdi/blob/master/reactive/pdi.py#L19 spaok: something like that, but instead of the decorated class call
[23:16] <magicaltrout> check hookenv.config()['pdi_url'] or whatever
[23:16] <magicaltrout> and make sure it's set to your liking
[23:18] <spaok> so would I make a def for init and put in a call to a check method like that one, and if it passes, set my state and use a when decorator to look for the ok state?
[23:23] <magicaltrout> correct spaok
[23:24] <spaok> kk, thanks
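A minimal sketch of the pattern magicaltrout is pointing at (the state names and config key are placeholders, not taken from layer-pdi):

    # reactive sketch: refuse to install until required config is set
    from charms.reactive import when_not, set_state
    from charmhelpers.core import hookenv

    @when_not('myapp.installed')
    def install_myapp():
        config = hookenv.config()
        if not config.get('required-url'):
            # surface a blocked status; the handler re-runs on the next hook (e.g. config-changed)
            hookenv.status_set('blocked', 'waiting for required-url configuration')
            return
        # ... actual install steps go here ...
        hookenv.status_set('active', 'myapp installed')
        set_state('myapp.installed')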