[03:01] how does provides work in the charms? I'm looking at https://github.com/juju-solutions/layer-docker/blob/master/metadata.yaml but I don't see any other reference to dockerhost besides it being mentioned in the provides
=== bpierre_ is now known as bpierre
=== frankban|afk is now known as frankban
[08:26] Hello, are there (known) issues with amulet testing on Xenial?
[08:26] my tests run fine on Trusty, but on Xenial ./amulet/filesystem_data.py expects python2 to be available, which is not
[08:27] marcoceppi is/was in europe so you might get a quick answer to that when he's around
=== rogpeppe1 is now known as rogpeppe
[09:49] I did "juju deploy ubuntu". It failed from the command line with "unknown channel candidate". I am on beta15, so that's ok, I would need beta16. However, when I did the same from the Juju GUI, it worked. Just curious - is the GUI using a different mechanism to access the charm store?
[09:51] works here on RC1 marosg
[09:51] there was a bunch of changes to channel naming
[09:51] which I suspect is what you're seeing
[09:55] I understand why the cli does not work, it is exactly because of those name changes. I am just surprised the GUI works.
[09:56] you might be able to fudge it with --channel stable
[09:56] or something
[09:56] instead of having it ponder which to choose
[10:19] yes, --channel stable helps. But my original question was how come the GUI worked. Is the GUI using a different mechanism to access the charmstore?
[10:21] marosg: it will just append that channel flag to the call
[10:22] whereas your out of date juju command line client won't
[10:22] :)
[10:22] if you apt-get update and rebootstrap you'd see that you don't need to do that on the CLI either
[10:23] ok, now I understand, thanks
[10:23] no probs
[12:12] Hello guys, I was wondering whether this bug fix was included in the openstack oslo messaging charms for Liberty? https://bugs.launchpad.net/oslo.service/+bug/1524907
[12:12] Bug #1524907: [SRU] Race condition in SIGTERM signal handler
[12:14] jamespage: ^^
[12:23] Andrew_jedi, there is a patch on the bug report, but it's not been pulled into the SRU process for the Liberty UCA yet
[12:30] hello, is there any way to do a charm search on the command line?
[12:40] jamespage: Thanks, so the only way for me now is to manually apply this patch. I am not sure where I should apply this patch. Is there any other workaround?
[12:41] not really - the patch is in the queue, just lots of other things also contending for developer time
[12:42] jamespage: My cinder scheduler is refusing to remain in active state. Any pointer what I should do in the meantime?
[12:42] KpuCko: not currently
[12:43] mhm, thanks
[12:43] another question, how to add a charm from the cli without deploying it?
[12:44] i mean, i have to do some configuration before deployment?
[12:44] you can't stage them, but I think you can pass configuration options along with the deploy command
[12:45] https://jujucharms.com/docs/1.24/charms-config
[12:45] like there
[12:46] Andrew_jedi, give me an hour and I'll get it up into liberty-proposed
[12:46] mhm, okay, will try that
[12:46] it's been kicking around a few weeks and I'm not sure why
[12:46] jamespage: Thanks a ton :)
[12:46] \O/
[12:47] Andrew_jedi, track the bug - there will be an automatic comment telling you how to test it when it gets uploaded
[12:47] jamespage: Roger that!
[12:56] exit
[13:00] hi guys
[13:00] is rc2 out on the ppa?
[13:00] MrDan: not yet, it'll be late today.
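[note: the [03:01] provides question never gets answered in this log. In short, an endpoint listed under provides in metadata.yaml just names a relation endpoint and its interface; the charm's own code only mentions it again if a handler reacts to that relation, usually via states set by the matching interface layer. A rough, hypothetical sketch of the usual reactive shape - the state and method names below are made up for illustration, not taken from layer-docker or its interface layer:]

```python
# Hypothetical sketch: "dockerhost" is the endpoint name from the provides:
# section of metadata.yaml. The interface layer declared there (interface: docker)
# turns relation events into reactive states; this handler just reacts to one.
# The state 'dockerhost.available' and the method send_connection_details()
# are illustrative -- check the interface layer's README for the real names.
from charms.reactive import when


@when('dockerhost.available')
def publish_docker_endpoint(dockerhost):
    # Publish whatever the interface expects over the relation,
    # e.g. where the docker daemon can be reached.
    dockerhost.send_connection_details(host='0.0.0.0', port=2375)
```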
[13:00] great, thanks
[13:01] MrDan: we're working on getting the CI run with the network fix for the NG thing through, so it hasn't started to build the release yet
[13:46] o/ Morning #juju
[14:05] hello
[14:06] I just tried to redeploy a service with juju-1.25 in Canonical's Prodstack; before it even gets to the actual charm, the agent install fails with
[14:06] 2016-09-29 13:44:00 WARNING juju.worker.dependency engine.go:304 failed to start "leadership-tracker" manifold worker: dependency not available
[14:06] 2016-09-29 13:48:30 ERROR juju.worker.uniter.filter filter.go:137 tomb: dying
[14:06] (the latter repeats over and over)
[14:06] does that ring a bell?
[14:06] (it's a xenial instance)
[14:14] pitti: i'm not seeing anything in the issue tracker that looks relevant
[14:15] pitti: can you bug that along with the controller logs and machine-agent logs, if there are any on the unit that's failing?
[14:17] lazyPower: yes, there are; I'll do that
[14:17] thanks, sorry about the inconvenience :/
[14:19] lazyPower: sup
[14:19] lazyPower: I'm going to be on a fire mission to re-write the elasticsearch charm
[14:20] lazyPower: we require a client node architecture
[14:20] bdx_: I'm OK with this - but I have 2 things to put out there
[14:21] 1) retain all the existing relations, 2) deploy the old charm and upgrade to the new one to make sure it's a drop-in replacement for existing deployments (if you're keeping trusty as the target series)
[14:21] (or multiseries)
[14:21] ok, xenial will be my target .... do I need to support trusty too?
[14:21] well, there's a lot of elasticsearch deployments out there on our trusty charm
[14:22] so, maybe once there's a straw man poll the list and we go from there?
[14:22] s/straw man/straw man,/
[14:22] perfect
[14:22] i doubt they will want it, as es upgrades between major versions are hairy
[14:22] requires data dump + data restore in most cases
[14:22] bdx_: are you using logstash in any of your deployments?
[14:23] not yet
[14:23] why, what's up with it?
[14:24] it's about to get some TLC after i get ceph moving
[14:24] i need to buffer beats input
[14:25] i've discovered that you can tank an ES instance with beats fairly quickly
[14:25] lazyPower: really?
[14:25] good to know
[14:25] yep. i had an 8 node cluster pushing to an underpowered ES host last night that died
[14:25] wow
[14:25] when i buffered it through logstash i had a better guarantee of the packets coming in at a consistent rate and it didn't tank the database
[14:25] that makes sense
[14:26] what really happened is it was taking too long to respond to kibana, so kibana thought the es adapter was dead
[14:26] it's a fun set of dependencies... the logrouter is a more important thing than i gave it credit for
[14:26] I'm pretty sure I've experienced what you've described
[14:27] I just assumed kibana was bugging out
[14:27] it's a hair more complex than that
[14:27] but you had it mostly right
[14:27] right
[14:27] i feel that kibana should be more intelligent about what it reports as the issue. just saying ES Adapter Failed isn't terribly helpful when you're staring at a 502 page
[14:28] totally
[14:28] like "query latency" or "omg load wtf"
[14:28] YES
[14:28] my plan is to create layer-elasticsearch-base
[14:28] well bdx_, let me show you this
[14:29] https://www.elastic.co/products/watcher
[14:29] coupled with https://www.elastic.co/products/reporting
[14:29] oooooh
[14:29] that's sick
[14:29] elastic flavored prometheus?
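[note: a rough sketch of what the layer-elasticsearch-base idea floated at [14:28] could look like as a reactive layer. Everything here - the layer name, state names, and the apt-repo config option - is hypothetical, not taken from an existing charm:]

```python
# Hypothetical skeleton for a base layer that installs elasticsearch and
# leaves cluster/client-node specifics to layers built on top of it.
from charms.reactive import when_not, set_state
from charmhelpers.core import hookenv
from charmhelpers.fetch import add_source, apt_update, apt_install


@when_not('elasticsearch.installed')
def install_elasticsearch():
    hookenv.status_set('maintenance', 'installing elasticsearch')
    # 'apt-repo' is an invented config option for pointing at the
    # elastic.co package repository; adjust to however you ship packages.
    repo = hookenv.config().get('apt-repo')
    if repo:
        add_source(repo)
        apt_update()
    apt_install(['elasticsearch'])
    set_state('elasticsearch.installed')
    hookenv.status_set('active', 'elasticsearch installed')
```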
[14:29] with the capacity to email reports on a daily/weekly/monthly basis of charts you define in kibana
[14:30] wow
[14:30] i have no time to write this stack up
[14:30] but seems interesting
[14:30] I want it
[14:30] so i thought i'd put it out there
[14:30] thx
[14:30] i'm happy to patch pilot you in if you want to contribute these
[14:30] I entirely do
[14:30] brb, refreshing coffee
[14:31] I'm currently charming up a set of 10+ apps
[14:31] lazyPower: I filed https://bugs.launchpad.net/juju/+bug/1628946
[14:31] Bug #1628946: [juju 1.25] agent fails to install on xenial node
[14:31] I think all but 1 use elasticsearch
[14:33] 3 apps are already being deployed as charms .... but I am under high load atm ... not sure if I'll be able to start hacking at it for a minute yet
[14:33] bdx_: sounds like a good litmus. you have some interfaces already written for you (a start with requires)
[14:33] I'm trying to finish the other 7
[14:33] i'm curious to see your test structure for that bundle when it's done :)
[14:33] pitti: i've +1d the heat. thanks for getting that filed
[14:34] canonical needs to hire you an apprentice
[14:34] bdb
[14:34] brb
[14:34] pfft canonical needs to hire me 3 more mentors. Mbruzek is tired of nacking my late night code ;)
[14:35] lazyPower: cheers
[14:35] i clearly haven't had the bad habbits beaten out of me yet
[14:35] s/habbits/habits/
[14:38] hobbits?
[14:44] with their fuzzy feetses
[14:45] what are mesos my troutness?
[14:45] eh i was reaching there, disregard that last bit
[14:46] jose: i haven't forgotten that i still owe you some couch db testing time. how's tomorrow looking for you?
[14:48] lazyPower: owncloud. do you have time in a couple hours? have to do some uni stuff tomorrow
[14:48] jose: i can stick around after hours. i'm pretty solid today with post release cleanup
[14:48] but i can lend a hand w/ OC tests or couch tests. take your pick
[14:49] OC, couch is being checked by couch, oc is an amulet thing (old amulet)
[14:50] I should be home at around noon your time, does that work?
[14:51] i'm still going to be full tilt on current duties but i can TAL and MP's and test run results/questions
[14:51] s/and M/at M/
[14:51] btw, again, this coffee man... :fire:
[14:51] i've been spacing it out so it lasts longer
[14:51] lol I can get you some more soon
=== lutostag_ is now known as lutostag
=== scuttle|afk is now known as scuttlemonkey
=== frankban is now known as frankban|afk
[17:57] hey all, so I'm using juju to deploy the openstack bundle, and I've got an external network with a vlan tag set up. The goal is assigning VMs IP addresses directly on that network, so no floating IPs involved. The network is up, and I can ping openstack's dhcp server instance from the outside world. However, the VM I launched connected to that network is unable to get metadata from nova. How should I configure the bundle so that can work?
[18:01] according to this post: http://abregman.com/2016/01/06/openstack-neutron-troubleshooting-and-solving-common-problems/ I need to set 'enable_isolated_metadata = True' in the dhcp agent configuration file. I'm not sure if that solves the problem, but is there a way from juju to add that configuration?
[18:06] thedac: any ideas? instances directly connected to an external VLAN aren't getting cloud-init data properly. I can ping openstack's dhcp server address from the outside world, so connectivity is good.
[18:08] smgoller: Hi. Are you using the neutron-gateway at all? By default this is where we run nova-api-metadata. If not, it can be run on the nova-compute nodes directly
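[note: the [18:01] question - whether juju can set enable_isolated_metadata - doesn't get a direct answer here, and the thread later turns out to be a DHCP problem anyway. For completeness, a hand-applied sketch of the setting itself, assuming the stock /etc/neutron/dhcp_agent.ini path and the neutron-dhcp-agent service on the neutron-gateway unit; treat it as a temporary workaround, since a later config-changed hook may rewrite the file:]

```python
# Hedged sketch: enable isolated-network metadata on a neutron-gateway unit
# (run via `juju ssh` on that unit). Paths and service names are the stock
# Ubuntu ones -- verify them on your deployment before using this.
import configparser
import subprocess

DHCP_AGENT_INI = '/etc/neutron/dhcp_agent.ini'

cfg = configparser.ConfigParser()
cfg.read(DHCP_AGENT_INI)
cfg['DEFAULT']['enable_isolated_metadata'] = 'True'

with open(DHCP_AGENT_INI, 'w') as f:
    cfg.write(f)   # note: ConfigParser does not preserve comments in the file

# Restart the DHCP agent so the new setting takes effect.
subprocess.check_call(['service', 'neutron-dhcp-agent', 'restart'])
```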
[18:09] there's no openstack router involved if that's what you mean
[18:09] thedac: the gateway is external to openstack on the vlan
[18:10] right
[18:10] I mean are you deploying with our charm "neutron-gateway" in the mix?
[18:10] yeah
[18:10] this is your openstack-base bundle
[18:10] ok
[18:10] * thedac re-reads through the details
[18:11] only thing that's configured in the charm is bridge-mappings and data port
[18:11] i've created the vlan network manually
[18:11] Ok, does the VM get a DHCP address?
[18:11] according to openstack it does
[18:11] you can figure that out from nova console-log $ID
[18:11] thedac: ok one sec.
[18:13] smgoller: you are looking for something along the lines of http://pastebin.ubuntu.com/23252275/
[18:13] in that console output
[18:14] definitely nothing like that
[18:14] [FAILED] Failed to start Raise network interfaces.
[18:14] See 'systemctl status networking.service' for details.
[18:14] [DEPEND] Dependency failed for Initial cloud... job (metadata service crawler).
[18:14] ack
[18:15] Any chance I can see a pastebin of that?
[18:15] thedac: http://paste.ubuntu.com/23252290/
[18:15] thanks. Let me take a look
[18:20] smgoller: ok, the VM is definitely not getting a DHCP address. Do you know if your neutron network setup commands set GRE instead of a flat network for your tenant network?
[18:21] Let me find you the right commands to check. One sec
[18:21] thedac: I set up the network via horizon and set the type to vlan.
[18:21] but yeah, let's verify
[18:26] sorry, struggling to find the right command. Give me a few more minutes
[18:26] sure
[18:28] thedac: thank you so much for helping me with this.
[18:28] ah, ha. I was not admin. neutron net-show $tenant_net - what does provider:network_type say?
[18:28] no problem
[18:30] smgoller: couple more questions in the PM
=== frankban|afk is now known as frankban
=== frankban is now known as frankban|afk
[19:53] what's the best way to get the IP of the unit running my charm?
=== mup_ is now known as mup
=== mup_ is now known as mup
=== mup_ is now known as mup
[20:13] spaok: is it in a container?
=== mup_ is now known as mup
[20:18] anyone have any ideas why juju-list-models would hang for a very long time?
[20:18] adding and switching models happens instantly
=== mup_ is now known as mup
[20:19] valeech: yes, it will be
[20:20] i was looking at unit_get('public-address')
=== mup_ is now known as mup
[21:04] magicaltrout: I'm back in the US :)
[21:19] booooo
[21:20] i hope you're jetlagged
[21:28] when I try to build my charm I get "build: Please add a `repo` key to your layer.yaml", but I added repo to the layer.yaml
[21:29] oh nevermind, doc confusered me
[21:30] spaok: that's a known bug. the build process is linting the layer directory and not the output charm directory. https://github.com/juju/charm-tools/pull/256
[21:31] but glad you got it sorted :)
[21:49] is there a way to make config options required, like juju won't deploy a service unless you set the values?
[21:50] nope, but you could block up the actual install until they are met spaok
[21:51] then relay a status message like "waiting for configuration you moron"
[21:51] ok, I'll try that
[21:52] is the pythonhosted api for charmhelpers the most current?
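[note: on the [19:53] question, the usual answer from inside a hook or reactive handler is the hookenv helpers shown below; unit_get('public-address') is what the conversation lands on at [20:20]. The "is it in a container?" follow-up matters because inside a container the address returned is the container's, which may not be the one you want:]

```python
# Getting the unit's addresses from inside a charm hook with charmhelpers.
from charmhelpers.core.hookenv import unit_get, unit_public_ip, unit_private_ip

public = unit_get('public-address')    # same value `unit-get public-address` returns
private = unit_get('private-address')

# Convenience wrappers for the same lookups:
public = unit_public_ip()
private = unit_private_ip()
```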
[21:52] yep
[21:52] http://pythonhosted.org/charmhelpers/api/charmhelpers.core.host.html#charmhelpers.core.host.rsync
[21:53] cause it just says -r as the flags
[21:53] but when it runs it uses --delete
[21:54] spaok: https://bugs.launchpad.net/charm-helpers if you would be so kind sir
[21:55] sure, gonna see if I set flags if that changes it
=== mup_ is now known as mup
=== mup_ is now known as mup
=== mup_ is now known as mup
[22:07] What's the best place to ask for JuJu features that are missing and would make my life easier?
[22:07] I filed a support case with my support team too... but I can make this one public.
[22:08] marcoceppi / magicaltrout ^^
=== mup_ is now known as mup
[22:09] x58: depends what component i guess
[22:09] Juju itself.
[22:10] the core platform?
[22:11] lazyPower: I confirmed that if I set the flags it removes the delete option, I'll file a bug in a bit
[22:11] x58: for juju core I believe its: https://bugs.launchpad.net/juju/+filebug
[22:12] filing stuff there is good, and raising it on the mailing list with a usecase is a bonus
[22:12] and recommended
[22:12] magicaltrout: https://gist.github.com/bertjwregeer/919fe70e8cfc5184399d83ad11df3932
[22:12] I want to report that. Where do I report that feature request?
[22:13] Sorry, I try to be helpful, but signing up for another mailing list is not something I really want to deal with.
[22:13] yeah stick it in launchpad x58
[22:14] https://bugs.launchpad.net/juju/+bug/1629124
[22:14] Bug #1629124: JuJu should learn about customizing configuration by tags
[22:14] then prod rick_h_ about it ;)
[22:14] rick_h_: *prod* https://bugs.launchpad.net/juju/+bug/1629124
[22:15] he might know a thing or two that already exist... or just say "we'll schedule that for 2.x" ;)
[22:15] magicaltrout: It probably will through the support organisation too ;-) dparrish is our DSE
[22:15] always helps
[22:15] hey x58, how's etcd treating you these days?
[22:16] lazyPower: It is working well. Sometimes you need to kick it once or twice when you spin up a new one due to some SSL cert issue
[22:16] but resolved -r seems to make it behave.
[22:17] And removing a running instance seems to fail about 30% of the time.
[22:17] x58: what if i told you, we're going to replace that peering certificate with a ca
[22:17] No rhyme or reason.
[22:17] hmm. is it the last one around?
[22:18] i've observed where the leader seems to get behind in unregistering units and it tanks trying to remove a member
[22:18] Nope, not the last one around.
[22:18] if you can confirm that i think i know how to fix it, and it would be *wonderful* if you could keep an eye out for that and do a simple is-leader check on the unit
[22:19] Let's say I spin up 3 - 4 of them
[22:19] I then remove 1
[22:19] that 1 that I remove might or might not succeed in removal. Sometimes it hangs and a resolved -r kicks it.
[22:19] ah ok
[22:19] i'll add a scaleup/down test and try to root that out
[22:20] etcd just seems finicky.
[22:20] terribly
[22:20] thanks for the feedback
=== mup_ is now known as mup
[22:22] lazyPower: Thanks for your work on it :-)
[22:22] So long as I don't touch how many we have, things are fine :P
=== mup_ is now known as mup
[22:33] thedac: ok, so I've set up a second cluster. this time I'm seeing this error constantly in the /var/log/neutron/neutron-openvswitch-agent.log: 2016-09-29 21:49:14.443 127296 ERROR oslo.messaging._drivers.impl_rabbit [req-c79a053b-8c2e-45d6-9ebc-4fddab0cf279 - - - - -] AMQP server on 127.0.0.1:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 32 seconds.
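[note: on the rsync surprise at [21:53]/[22:11] - in charmhelpers of that era, charmhelpers.core.host.rsync takes flags='-r' plus a separate options argument, and options defaults to ['--delete', '--executability'] when left unset, which is why --delete shows up even though the docs only mention -r. A small sketch, worth verifying against your charmhelpers version:]

```python
# charmhelpers.core.host.rsync(from_path, to_path, flags='-r', options=None)
# fills in options with ['--delete', '--executability'] when you pass nothing,
# so --delete sneaks in via `options`, not via `flags`.
from charmhelpers.core.host import rsync

# Default behaviour: recursive copy *with* --delete and --executability.
rsync('/tmp/payload/', '/srv/app/')

# Passing options explicitly drops --delete (paths here are just examples).
rsync('/tmp/payload/', '/srv/app/', options=['--executability'])
```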
[22:35] smgoller: during deploy those messages are normal. After everything has settled and the rabbit connection info is in /etc/neutron.conf you should no longer see those.
[22:35] ok
[22:35] that makes sense
[22:38] how does one deploy a multiseries charm locally?
[22:39] admcleod: juju deploy ./$CHARM --series $SERIES
[22:40] thedac: juju 1.25?
[22:40] ah, for juju 1.25 you need the charm in a series named directory. juju deploy $SERIES/$CHARM
[22:41] thedac: so... ive built it as multiseries, and it goes into ./builds, you saying just copy it into ../trusty/ ?
[22:41] yes, that should work
=== mup_ is now known as mup
[22:45] thedac: thanks
[22:45] no problem
[22:52] thedac: (it worked)
[22:52] \o/
[23:12] magicaltrout: do you know any charm examples doing the blocking thing you mentioned?
[23:15] https://github.com/OSBI/layer-pdi/blob/master/reactive/pdi.py#L19 spaok something like that but instead of the decorated class call
[23:16] (hookenv.config()['pdi_url'] or whatever
[23:16] and make sure its set to your liking
[23:18] so would I make a def for init and put in a call to check method like that one, and if it passes set my state and use a when decorator to look for the ok state?
[23:23] correct spaok
[23:24] kk, thanks
=== scuttlemonkey is now known as scuttle|afk
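[note: a minimal sketch of the config-blocking pattern spaok and magicaltrout settle on above, loosely modelled on the linked layer-pdi handler. The option names, state name, and install body are hypothetical:]

```python
# Hypothetical reactive handler: refuse to install until required config
# options are set, surfacing a blocked status in `juju status` instead.
from charms.reactive import when_not, set_state
from charmhelpers.core.hookenv import config, status_set

REQUIRED_OPTIONS = ('download_url', 'license_key')  # invented option names


@when_not('myapp.installed')
def check_config_then_install():
    cfg = config()
    missing = [opt for opt in REQUIRED_OPTIONS if not cfg.get(opt)]
    if missing:
        # Shows up in `juju status`; the handler runs again after config-changed.
        status_set('blocked', 'waiting for configuration: ' + ', '.join(missing))
        return
    status_set('maintenance', 'installing')
    # ... actual install steps go here ...
    set_state('myapp.installed')
    status_set('active', 'ready')
```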