/srv/irclogs.ubuntu.com/2016/09/29/#juju.txt

spaokhow does provides work in the charms? I'm looking at https://github.com/juju-solutions/layer-docker/blob/master/metadata.yaml  but I don't see any other reference to dockerhost besides it being mentioned in the provides03:01
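For reference, a provides entry in metadata.yaml only declares a relation endpoint name and the interface protocol it speaks; the endpoint shows up again when another charm relates to it, and in reactive charms the interface layer for that protocol raises states the charm's handlers respond to. A minimal sketch of the handler side, with the state name and set_host() method invented for illustration (they are not taken from layer-docker):

    # Sketch: responding to a relation declared under `provides` in metadata.yaml.
    # Assumes an endpoint named "dockerhost" whose interface layer raises a
    # "dockerhost.connected" state; the state and method names are illustrative.
    from charms.reactive import when

    @when('dockerhost.connected')
    def expose_docker_host(client):
        # `client` is the interface object for the dockerhost endpoint; publish
        # whatever connection details the remote charm expects over the relation.
        client.set_host('unix:///var/run/docker.sock')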
=== bpierre_ is now known as bpierre
=== frankban|afk is now known as frankban
caribouHello, are there (known) issues with amulet testing on Xenial ?08:26
cariboumy tests run fine on Trusty, but on Xenial ./amulet/filesystem_data.py expects python2 to be available which is not08:26
magicaltroutmarcoceppi is/was in europe so you might get a quick answer to that when he's around08:27
=== rogpeppe1 is now known as rogpeppe
marosgI did "juju deploy ubuntu". It failed from command line with "unknown channel candidate". I am on beta15, so that's ok, I would need Beta16. However, when I did the same from Juju GUI, it worked. Just curious - is GUI using different mechanism to access charm store ?09:49
magicaltroutworks here on RC1 marosg09:51
magicaltroutthere was a bunch of changes to channel naming09:51
magicaltroutwhich I suspect is what you're seeing09:51
marosgI understand why cli does not work, it is exactly because of those name changes. I am just surprised GUI works.09:55
magicaltroutyou might be able to fudge it with --channel stable09:56
magicaltroutor something09:56
magicaltroutinstead of having it ponder which to choose09:56
marosgyes, --channel stable helps. But my original question was how come GUI worked. Is GUI using different mechanism to access charmstore?10:19
magicaltroutmarosg: it will just append that channel flag to the call10:21
magicaltroutwhereas your out-of-date juju command line client won't10:22
magicaltrout:)10:22
magicaltroutif you apt-get update and rebootstrap you'd see that you don't need to do that on the CLI either10:22
marosgok, now I understand, thanks10:23
magicaltroutno probs10:23
Andrew_jediHello guys, I was wondering whether this bug fix was included in the openstack oslo messaging charms for Liberty? https://bugs.launchpad.net/oslo.service/+bug/152490712:12
mupBug #1524907: [SRU] Race condition in SIGTERM signal handler <sts> <sts-sru> <Ubuntu Cloud Archive:Fix Released> <Ubuntu Cloud Archive liberty:In Progress by hopem> <oslo.service:Fix Released> <python-oslo.service (Ubuntu):Fix Released> <python-oslo.service (Ubuntu Wily):Won't Fix>12:12
mup<python-oslo.service (Ubuntu Xenial):Fix Released> <python-oslo.service (Ubuntu Yakkety):Fix Released> <https://launchpad.net/bugs/1524907>12:12
Andrew_jedijamespage: ^^12:14
jamespageAndrew_jedi, there is a patch on the bug report, but its not been pulled into the SRU process for the Liberty UCA yet12:23
KpuCkohello, is there any way to do charm search on the command line?12:30
Andrew_jedijamespage: Thanks, so the only way for me now is to manually apply this patch. I am not sure where should i apply this patch. Is there any other workaround?12:40
jamespagenot really - the patch is in the queue, just lots of other things also contending for developer time12:41
Andrew_jedijamespage: My cinder scheduler is refusing to remain in active state. Any pointer what should i do in the meantime?12:42
magicaltroutKpuCko: not currently12:42
KpuCkomhm, thanks12:43
KpuCkoanother question, how to add charm from cli without deploying it?12:43
KpuCkoi mean i have to do some configuration before deployment?12:44
magicaltroutyou can't stage them, but I think you can pass configuration options along with the deploy command12:44
magicaltrouthttps://jujucharms.com/docs/1.24/charms-config12:45
magicaltroutlike there12:45
jamespageAndrew_jedi, give me an hour and I'll get it up into liberty-proposed12:46
KpuCkomhm, okey will try that12:46
jamespageits been kicking around a few weeks and I'm not sure why12:46
Andrew_jedijamespage: Thanks a ton :)12:46
Andrew_jedi\O/12:46
jamespageAndrew_jedi, track the bug - there will be an automatic comment telling you how to test it when it gets uploaded12:47
Andrew_jedijamespage: Roger that!12:47
bbaqar__exit12:56
MrDanhi guys13:00
MrDanis rc2 out on ppa?13:00
rick_h_MrDan: not yet, it'll be late today.13:00
MrDangreat, thanks13:00
rick_h_MrDan: we're working on getting the CI run with the network fix for the NG thing through, so the release build hasn't started yet13:01
lazyPowero/ Morning #juju13:46
pittihello14:05
pittiI just tried to redeploy a service with juju-1.25 in Canonical's Prodstack; before it even gets to the actual charm, the agent install fails with14:06
pitti2016-09-29 13:44:00 WARNING juju.worker.dependency engine.go:304 failed to start "leadership-tracker" manifold worker: dependency not available14:06
pitti2016-09-29 13:48:30 ERROR juju.worker.uniter.filter filter.go:137 tomb: dying14:06
pitti(the latter repeats over and over)14:06
pittidoes that ring a bell?14:06
pitti(it's a xenial instance)14:06
lazyPowerpitti:  i'm not seeing anything in the issue tracker that looks relevant14:14
lazyPowerpitti: can you bug that along with the controller logs and machine-agent logs if there are any on the unit thats failing?14:15
pittilazyPower: yes, there are; I'll do that14:17
lazyPowerthanks, sorry about the inconvenience :/14:17
bdx_lazyPower: sup14:19
bdx_lazyPower: I'm going to be on a fire mission to re-write the elasticsearch charm14:19
bdx_lazyPower: we require a client node architecture14:20
lazyPowerbdx_: I'm OK with this - but i have 2 things to put out there14:20
lazyPower1) retain all the existing relations, 2) deploy the old charm and upgrade to the new one to make sure it's a drop-in replacement for existing deployments (if you're keeping trusty as the target series)14:21
lazyPower(or multiseries)14:21
bdx_ok, xenial will be my target .... do I need to support trusty too?14:21
lazyPowerwell, there's a lot of elasticsearch deployments out there on our trusty charm14:21
lazyPowerso, maybe once there's a straw man poll the list and we go from there?14:22
lazyPowers/straw man/straw man,/14:22
bdx_perfect14:22
lazyPoweri doubt they will want it, as es upgrades between major versions are hairy14:22
lazyPowerrequires data dump + data restore in most cases14:22
lazyPowerbdx_: are you using logstash in any of your deployments?14:22
bdx_not yet14:23
bdx_why, whats up with it?14:23
lazyPowerits about to get some TLC after i get ceph moving14:24
lazyPoweri need to buffer beats input14:24
lazyPoweri've discovered that you can tank an eS instance with beats fairly quickly14:25
bdx_lazyPower: really?14:25
bdx_good to know14:25
lazyPoweryep. i had an 8 node cluster pushing to an underpowered ES host last night that died14:25
bdx_wow14:25
lazyPowerwhen i buffered it through logstash i had a better guarantee of the packets coming in at a consistent rate and it didn't tank the database14:25
bdx_that makes sense14:25
lazyPowerwhat really happened is it was taking too long to respond to kibana so kibana thought the es adapter was dead14:26
lazyPowerit's a fun set of dependencies... the logrouter is a more important thing than i gave it credit for14:26
bdx_I'm pretty sure I've experienced what you've described14:26
bdx_I just assumed kibana was bugging out14:27
lazyPowerits a hair more complex than that14:27
lazyPowerbut you had it mostly right14:27
bdx_right14:27
lazyPoweri feel the issue is that kibana should be more intelligent about what it's reporting. just saying ES Adapter Failed isn't terribly helpful when you're staring at a 502 page14:27
bdx_totally14:28
lazyPowerlike "query latency" or "omg load wtf"14:28
bdx_YES14:28
bdx_my plan is to create layer-elasticsearch-base14:28
lazyPowerwell bdx_, let me show you this14:28
lazyPowerhttps://www.elastic.co/products/watcher14:29
lazyPowercoupled with https://www.elastic.co/products/reporting14:29
bdx_oooooh14:29
bdx_thats sick14:29
lazyPowerelastic flavored prometheus?14:29
lazyPowerwith the capacity to email reports on a daily/weekly/monthly basis of charts you define in kibana14:29
bdx_wow14:30
lazyPoweri have no time to write this stack up14:30
lazyPowerbut seems interesting14:30
bdx_I want it14:30
lazyPowerso i thought i'd put it out there14:30
bdx_thx14:30
lazyPoweri'm happy to patch pilot you in if you want to contribute these14:30
bdx_I entirely do14:30
lazyPowerbrb refreshon coffee14:30
bdx_I'm currently charming up a set of 10+ apps14:31
pittilazyPower: I filed https://bugs.launchpad.net/juju/+bug/162894614:31
mupBug #1628946: [juju 1.25] agent fails to install on xenial node <juju:New> <https://launchpad.net/bugs/1628946>14:31
bdx_I think all but 1 uses elasticsearch14:31
bdx_3 apps are already being deployed as charms .... but I am under high load atm ... not sure if I'll be able to start hacking at it for a minute yet14:33
lazyPowerbdx_: sounds like a good litmus. you have some interfaces already written for you (a start with requires)14:33
bdx_I'm trying to finish the other 714:33
lazyPoweri'm curious to see your test structure for that bundle when its done :)14:33
lazyPowerpitti: i've +1d the heat. thanks for getting that filed14:33
bdx_canonical needs to hire you an apprentice14:34
bdx_bdb14:34
bdx_brb14:34
lazyPowerpfft canonical needs to hire me 3 more mentors. Mbruzek is tired of nacking my late night code ;)14:34
pittilazyPower: cheers14:35
lazyPoweri clearly havent had the bad habbits beaten out of me yet14:35
lazyPowers/habbits/habits/14:35
magicaltrouthobbits?14:38
lazyPowerwith their fuzzy feetses14:44
lazyPowerwhat are mesos my troutness?14:45
lazyPowereh i was reaching there, disregard that last bit14:45
lazyPowerjose: i haven't forgotten that i still owe you some couch db testing time. hows tomorrow looking for you?14:46
joselazyPower: owncloud. do you have time in a couple hours? have to do some uni stuff tomorrow14:48
lazyPowerjose: i can stick around after hours. i'm pretty solid today with post release cleanup14:48
lazyPowerbut i can lend a hand w/ OC tests or couch tests. take your pick14:48
joseOC, couch is being checked by couch, oc is an amulet thing (old amulet)14:49
joseI should be home at around noon your time, does that work?14:50
lazyPoweri'm still going to be full tilt on current duties but i can TAL and MP's and test run results/questions14:51
lazyPowers/and M/at M/14:51
lazyPowerbtw, again, this coffee man... :fire:14:51
lazyPoweri've been spacing it out so it lasts longer14:51
joselol I can get you some more soon14:51
=== lutostag_ is now known as lutostag
=== scuttle|afk is now known as scuttlemonkey
=== frankban is now known as frankban|afk
smgollerhey all, so I'm using juju to deploy the openstack bundle, and I've got an external network with a vlan tag set up. The goal is assigning VMs IP addresses directly on that network so no floating IPs involved. The network is up, and I can ping openstack's dhcp server instance from the outside world. However, the VM I launched connected to that network is unable to get metadata from nova. How should I configure the bundle so that can work?17:57
smgolleraccording to this post: http://abregman.com/2016/01/06/openstack-neutron-troubleshooting-and-solving-common-problems/ I need to set 'enable_isolated_metadata = True' in the dhcp agent configuration file. I'm not sure if that solves the problem, but is there a way from juju to add that configuration?18:01
smgollerthedac: any ideas? instances directly connected to an external VLAN aren't getting cloud-init data properly. I can ping openstack's dhcp server address from the outside world, so connectivity is good.18:06
thedacsmgoller: Hi. Are you using the neutron-gateway at all? By default this is where we run nova-api-metadata. If not it can be run on the nova-compute nodes directly18:08
smgollerthere's no openstack router involved if that's what you mean18:09
smgollerthedac: the gateway is external to openstack on the vlan18:09
thedacright18:10
thedacI mean are you deploying with our charm "neutron-gateway" in the mix?18:10
smgolleryeah18:10
smgollerthis is your openstack-base bundle18:10
thedacok18:10
* thedac re-reads through the details18:10
smgolleronly thing that's configured in the charm is bridge-mappings and data port18:11
smgolleri've created the vlan network manually18:11
thedacOk, does the VM get a DHCP address?18:11
smgolleraccording to openstack it does18:11
thedacyou can figure that out from nova console-log $ID18:11
smgollerthedac: ok one sec.18:11
thedacsmgoller: you are looking for something along the lines of http://pastebin.ubuntu.com/23252275/18:13
thedacin that console output18:13
smgollerdefinitely nothing like that18:14
smgoller...running for Raise network interfaces (5min 7s / 5min 8s) [FAILED] Failed to start Raise network interfaces.18:14
smgollerSee 'systemctl status networking.service' for details.18:14
smgoller[DEPEND] Dependency failed for Initial cloud... job (metadata service crawler).18:14
smgollerack18:14
thedacAny chance I can see a pastebin of that?18:15
smgollerthedac: http://paste.ubuntu.com/23252290/18:15
thedacthanks. Let me take a look18:15
thedacsmgoller: ok, the VM is definitely not getting a DHCP address. Do you know if your neutron network setup commands set GRE instead of flat network for your tenant network?18:20
thedacLet me find you the right commands to check. One sec18:21
smgollerthedac: I set up the network via horizon and set the type to vlan.18:21
smgollerbut yeah, let's verify18:21
thedacsorry, struggling to find the right command. Give me a few more minutes18:26
smgollersure18:26
smgollerthedac: thank you so much for helping me with this.18:28
thedacah, ha. I was not admin. neutron net-show $tenant_net   What does provider:network_type say?18:28
thedacno problem18:28
thedacsmgoller: couple more questions in the PM18:30
=== frankban|afk is now known as frankban
=== frankban is now known as frankban|afk
spaokwhats the best way to get the IP of the unit running my charm?19:53
=== mup_ is now known as mup
=== mup_ is now known as mup
=== mup_ is now known as mup
valeechspaok: is it in a container?20:13
=== mup_ is now known as mup
smgolleranyone have any ideas why juju-list-models would hang for a very long time?20:18
smgolleradding and switching models happens instantly20:18
=== mup_ is now known as mup
spaokvaleech: yes, it will be20:19
spaoki was looking at unit_get('public-address')20:20
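For reference, a minimal sketch of the charmhelpers calls being discussed, assuming charmhelpers.core.hookenv (which wraps the unit-get hook tool); for a unit inside a container the private address is usually the one other units can actually reach:

    # Sketch: reading the unit's addresses from inside a charm hook.
    from charmhelpers.core import hookenv

    public_ip = hookenv.unit_get('public-address')    # same data as `unit-get public-address`
    private_ip = hookenv.unit_get('private-address')
    # hookenv.unit_public_ip() / hookenv.unit_private_ip() are thin wrappers
    # around the same calls.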
=== mup_ is now known as mup
marcoceppimagicaltrout: I'm back in the US :)21:04
magicaltroutbooooo21:19
magicaltrouti hope you're jetlagged21:20
spaokwhen I try to build my charm I get "build: Please add a `repo` key to your layer.yaml", but I added repo to the layer.yaml21:28
spaokoh nevermind, doc confusered me21:29
lazyPowerspaok: thats a known bug. the build process is linting the layer directory and not the output charm directory. https://github.com/juju/charm-tools/pull/25621:30
lazyPowerbut glad you got it sorted :)21:31
=== mup_ is now known as mup
spaokis there a way to make config options required, like juju won't deploy a service unless you set the values?21:49
magicaltroutnope, but you could block up the actual install until they are met spaok21:50
magicaltroutthen relay a status message like "waiting for configuration you moron"21:51
spaokok, I'll try that21:51
spaokis the pythonhosted api for charmhelpers the most current?21:52
lazyPoweryep21:52
spaokhttp://pythonhosted.org/charmhelpers/api/charmhelpers.core.host.html#charmhelpers.core.host.rsync21:52
spaokcause it just says -r as the flags21:53
spaokbut when it runs it uses --delete21:53
lazyPowerspaok: https://bugs.launchpad.net/charm-helpers if you would be so kind sir21:54
spaoksure, gonna see if I set flags if that changes it21:55
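A sketch of sidestepping the surprise by passing the rsync arguments explicitly, as spaok goes on to confirm below; the paths here are made up, and the exact defaults at the time were the subject of the bug being filed:

    # Sketch: calling charmhelpers' rsync helper with explicit flags so the
    # implicit --delete behaviour described above is not used. Paths are made up.
    from charmhelpers.core.host import rsync

    rsync('/opt/payload/', '/srv/app/', flags='-r')  # recursive copy only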
=== mup_ is now known as mup
=== mup_ is now known as mup
=== mup_ is now known as mup
x58What's the best place to ask for JuJu features that are missing and would make my life easier?22:07
x58I filed a support case with my support team too... but I can make this one public.22:07
x58marcoceppi / magicaltrout ^^22:08
=== mup_ is now known as mup
magicaltroutx58: depends what component i guess22:09
x58Juju itself.22:09
magicaltroutthe core platform?22:10
spaoklazyPower: I confirmed that if I set the flags it removes the delete option, I'll file a bug in a bit22:11
magicaltroutx58: for juju core I believe its: https://bugs.launchpad.net/juju/+filebug22:11
magicaltroutfiling stuff there is good, and raising it on the mailing list with a usecase is a bonus22:12
magicaltroutand recommended22:12
x58magicaltrout: https://gist.github.com/bertjwregeer/919fe70e8cfc5184399d83ad11df393222:12
x58I want to report that. Where do I report that feature request?22:12
x58Sorry, I try to be helpful, but signing up for another mailing list is not something I really want to deal with.22:13
magicaltroutyeah stick it in launchpad x5822:13
x58https://bugs.launchpad.net/juju/+bug/162912422:14
mupBug #1629124: JuJu should learn about customizing configuration by tags <juju:New> <https://launchpad.net/bugs/1629124>22:14
magicaltroutthen prod rick_h_ about it ;)22:14
x58rick_h_: *prod* https://bugs.launchpad.net/juju/+bug/162912422:14
magicaltrouthe might know a thing or two that already exist... or just say "we'll schedule that for 2.x " ;)22:15
x58magicaltrout: It probably will go through the support organisation too ;-) dparrish is our DSE22:15
magicaltroutalways helps22:15
lazyPowerhey x58, how's etcd treating you these days?22:15
x58lazyPower: It is working well. Sometimes you need to kick it once or twice when you spin up a new one due to some SSL cert issue22:16
x58but resolved -r seems to make it behave.22:16
x58And removing a running instance seems to fail about 30% of the time.22:17
lazyPowerx58: what if i told you, we're going to replace that peering certificate with a ca22:17
x58No rhyme or reason.22:17
lazyPowerhmm. is it the last one around?22:17
lazyPoweri've observed where the leader seems to get behind in unregistering units and it tanks trying to remove a member22:18
x58Nope, not the last one around.22:18
lazyPowerif you can confirm that i think i know how to fix it, and it would be *wonderful* if you could keep an eye out for that and do a simple is-leader check on the unit22:18
x58Let's say I spin up 3 - 4 of them22:19
x58I then remove 122:19
x58that 1 that I remove might or might not succeed in removal. Sometimes it hangs and a resolved -r kicks it.22:19
lazyPowerah ok22:19
lazyPoweri'll add a scaleup/down test and try to root that out22:19
x58etcd just seems finicky.22:20
lazyPowerterribly22:20
lazyPowerthanks for the feedback22:20
=== mup_ is now known as mup
x58lazyPower: Thanks for your work on it :-)22:22
x58So long as I don't touch how many we have, things are fine :P22:22
=== mup_ is now known as mup
smgollerthedac: ok, so I've set up a second cluster. this time I'm seeing this error constantly in the /var/log/neutron/neutron-openvswitch-agent.log: 2016-09-29 21:49:14.443 127296 ERROR oslo.messaging._drivers.impl_rabbit [req-c79a053b-8c2e-45d6-9ebc-4fddab0cf279 - - - - -] AMQP server on 127.0.0.1:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 32 seconds.22:33
thedacsmgoller: during deploy those messages are normal. After everything has settled and the rabbit connection info is in /etc/neutron.conf you should no longer see those.22:35
smgollerok22:35
smgollerthat makes sense22:35
admcleodhow does one deploy a multiseries charm locally?22:38
thedacadmcleod: juju deploy ./$CHARM --series $SERIES22:39
admcleodthedac: juju 1.25?22:40
thedacah, for juju 1.25 you need the charm in a series named directory. juju deploy $SERIES/$CHARM22:40
admcleodthedac: so... i've built it as multiseries, and it goes into ./builds, you're saying just copy it into ../trusty/ ?22:41
thedacyes, that should work22:41
=== mup_ is now known as mup
admcleodthedac: thanks22:45
thedacno problem22:45
admcleodthedac: (it worked)22:52
thedac\o/22:52
spaokmagicaltrout: do you know any charm examples doing the blocking thing you mentioned?23:12
magicaltrouthttps://github.com/OSBI/layer-pdi/blob/master/reactive/pdi.py#L19 spaok something like that but instead of the decorated class call23:15
magicaltrout(hookenv.config()['pdi_url'] or whatever23:16
magicaltroutand make sure its set to your liking23:16
spaokso would I make a def for init and put in a call to a check method like that one, and if it passes set my state and use a when decorator to look for the ok state?23:18
magicaltroutcorrect spaok23:23
spaokkk, thanks23:24
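A minimal sketch of the pattern described above, following the layer-pdi example magicaltrout linked; the option name required_url and the state name myapp.installed are made up for illustration:

    # Sketch: stay blocked in juju status until required config is set, then install.
    from charms.reactive import when_not, set_state
    from charmhelpers.core import hookenv

    @when_not('myapp.installed')
    def check_config_and_install():
        if not hookenv.config().get('required_url'):
            hookenv.status_set('blocked', 'waiting for required_url configuration')
            return
        # ... perform the actual install here ...
        set_state('myapp.installed')
        hookenv.status_set('active', 'ready')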
=== scuttlemonkey is now known as scuttle|afk
