[03:00] <Gil> can anyone share a link with steps to install juju-gui charm to openstack liberty? thanks!
[04:32] <lazyPower> stokachu usually that was a symptom of something being sick in my env, like the security group not opening the port to the mongo instance on the model controller, or maybe my maas bootstrap timeout wasn't long enough
[04:33] <lazyPower> Gil - hey there, not really sure what you're asking. Do you have an openstack liberty provider available to you that you would like to consume in juju, and additionally deploy the juju gui?
[04:33] <lazyPower> s/consume/model your applications/
[06:14] <narindergupta> marcoceppi: hey marco
[09:32] <gennadiy> hi everybody, may i use git launchpad repo to publish charms?
[09:33] <bloodearnest> yo lazyPower, just looking over
[09:33] <bloodearnest> https://docs.google.com/presentation/d/1a5l1bKX8dPwx21LkMQmp-zVjzOsgoTE_iQ2urD9znxk/edit?pref=2&pli=1#slide=id.g70d4533c6_2_140
[09:33] <bloodearnest> and have some questions when you are about
[10:00] <kjackal> hi everyone. I am manually provisioning an IPv6-only VM. Deployment of charms there fails with: cannot get archive: Get https://api.jujucharms.com/charmstore/v4/trusty/mysql-35/archive: dial tcp 162.213.33.121:443: network is unreachable
[10:00] <kjackal> Does juju handle this case or is it all up to the admin?
[12:02] <pitti> hello
[12:03] <pitti> what could be the cause of
[12:03] <pitti> $ juju deploy --repository deployment/charms  local:trusty/langpack-o-matic
[12:03] <pitti> ERROR charm not found in "/home/martin/ubuntu/langpack-o-matic/deployment/charms": local:trusty/langpack-o-matic
[12:03] <pitti> the charm definitively exists:
[12:03] <pitti> $ ls /home/martin/ubuntu/langpack-o-matic/deployment/charms/trusty/langpack-o-matic/
[12:03] <pitti> config.yaml  hooks  metadata.yaml  README.md
[12:03] <pitti> (juju 1.25 in xenial)
[12:04] <pitti> I have another charm in that dir ("bootstrap-node"), and deploying that works fine
[12:04] <pitti> $ ls /home/martin/ubuntu/langpack-o-matic/deployment/charms/trusty/bootstrap-node/
[12:04] <pitti> hooks  metadata.yaml
[12:07] <pitti> ah, nevermind! typo in "name:" in metadata.yaml
[12:07] <pitti> typical "you have to ask, and then you'll figure it out" situation
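pitti's fix above amounts to this constraint: with a local repository, `juju deploy local:trusty/langpack-o-matic` resolves the charm by directory name, and the `name` field in metadata.yaml must match that directory exactly. A minimal sketch (field values other than `name` are illustrative, not pitti's actual charm):

```yaml
# deployment/charms/trusty/langpack-o-matic/metadata.yaml
name: langpack-o-matic   # must match the charm's directory name exactly
summary: Example summary line
description: |
  Illustrative metadata only; a mismatch between this name and the
  directory produces "ERROR charm not found" as seen above.
```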
[14:06] <lazyPower> yo bloodearnest  o/
[14:07] <bloodearnest> lazyPower, hey
[14:07] <lazyPower> hows things?
[14:08] <bloodearnest> good, sprint next week, so lots of prep
[14:08] <bloodearnest> and trying to understand this new systemd world...
[14:08] <lazyPower> gennadiy - (super late reply) - you can warehouse the code in git, however bzr is still the only ingestion method right now. There's a new feature coming that decouples your charms from DVCS which in turn provides instant publishing
[14:09] <lazyPower> bloodearnest - i hear ya man!
[14:09] <lazyPower> i upgraded to xenial on my primary workhorse and its been slow going getting the system stood back up. i need another weekend on it
[14:09] <bloodearnest> so, tls layer
[14:09] <lazyPower> still totally a thing :)
[14:09] <bloodearnest> yeah
[14:10] <lazyPower> anything specific i can help with?
[14:10] <bloodearnest> my usage is very different though
[14:10] <gennadiy> i have found bzr-sync module. it will sync my code from git to bzr
[14:10] <bloodearnest> so, some thoughts:
[14:10] <gennadiy> thanks for your response
[14:10] <lazyPower> gennadiy - its a nice solution for a short term problem :)
[14:10] <lazyPower> gennadiy also o/ great to see you in here
[14:10] <bloodearnest> 1) a standard layer for generating certs would be useful
[14:11] <bloodearnest> for us to use easyrsa, it would need to be installed by system packages
[14:11] <bloodearnest> but we could use it
[14:11] <lazyPower> yeah?
[14:12] <lazyPower> so, we need to put easy-rsa in a PPA or is that still a red-flag?
[14:12] <bloodearnest> 2) there are 2 distinct uses of tls certs here: intra service comms (your layer/interface), and public service comms
[14:14] <jrwren> kjackal: most charms have no support for ipv6. It's not that they can't support it, but they've never been tested there, so they often do things that break on ipv6.
[14:14] <lazyPower> bloodearnest - yeah, matt and I have talked about this, the public facing ssl bits. we dont have a path forward with any time allotted to get that done
[14:14] <mbruzek> heyo
[14:14] <lazyPower> but we've been kicking around ideas. we started with an idea to wrap lets-encrypt as a layer
[14:14] <jrwren> kjackal: e.g. take an ip address from juju and curl http://thatip/   which doesn't work because it needs to be wrapped in [] to be a valid ipv6 url.
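jrwren's bracketing point can be sketched as follows (the address and variable names are illustrative, not from the log):

```shell
ip6="2001:db8::10"           # example IPv6 address (documentation range)
url="http://[${ip6}]:80/"    # per RFC 3986, IPv6 literals in URLs must be bracketed
echo "$url"                  # a curl against the unbracketed form fails to parse
```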
[14:15] <lazyPower> bloodearnest - if you were going to put public facing ssl infrastructure in your modeling language. what would be your preferred method to do so?
[14:15] <bloodearnest> lazyPower, ppa won't work, but we could add a package to our archives, perhaps. I'm also not sure how much control it provides. Can it do SubjectAlternativeName for DNS *and* IP?
[14:16] <lazyPower> yeah
[14:16] <lazyPower> it already does add SAN for DNS and IP
[14:16] <bloodearnest> k
[14:16] <lazyPower> i think we can tune the config to include a config option for additional SAN
[14:16] <lazyPower> right now we're kind of lazy about what we stuff in the SAN, ip and hostname
[14:16] <lazyPower> but it supports both entry styles
[14:16] <bloodearnest> lazyPower, so, the 2 uses are different enough to warrant different approaches, and different interfaces, I suspect
[14:17] <lazyPower> oh for sure
[14:17] <lazyPower> self signed certs vs ca signed sergs
[14:17] <lazyPower> s/sergs/certs/
[14:17] <kjackal> jrwren: True. So to have juju on an ipv6 setup we need a translation service to ipv4
[14:17] <lazyPower> mbruzek o/ morning
[14:18] <mbruzek> heyo
[14:18] <lazyPower> mbruzek we're talking about our baby
[14:18] <lazyPower> > re: layer-tls
[14:18] <mbruzek> is my baby ugly?
[14:18] <lazyPower> its our ugly babby
[14:18] <mbruzek> say it isn't so!
[14:18] <lazyPower> :P nah
[14:18] <lazyPower> bloodearnest was just riffing about how we can make it more useful to more ppl
[14:18] <bloodearnest> http://bazaar.launchpad.net/~bloodearnest/charms/trusty/x509-cert/trunk/view/head:/lib/selfsigned.py
[14:19] <bloodearnest> is what we want, in terms of self signed cert
[14:19] <lazyPower> they need a deb package of easyrsa. apparently fetching it from where we're grabbing it is basically out of sorts
[14:19] <bloodearnest> the DNS/IP thing is an openssl/gotls thing
[14:19] <lazyPower> bloodearnest we can do these alt_names no prob
[14:20] <bloodearnest> gotls is stricter and wants proper SANs
[14:20] <bloodearnest> cool
[14:20] <lazyPower> right on
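The SAN behaviour discussed above (DNS *and* IP entries in one cert) can be sketched with the plain openssl CLI, which is one way to avoid vendoring easy-rsa at all. Filenames and names below are illustrative, and `-addext` requires OpenSSL 1.1.1 or later:

```shell
# Self-signed cert carrying SubjectAlternativeName entries for both DNS and IP
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout server.key -out server.crt \
  -subj "/CN=myservice.internal" \
  -addext "subjectAltName=DNS:myservice.internal,IP:10.0.0.5"

# Inspect the SANs that ended up in the cert
openssl x509 -in server.crt -noout -text | grep -A1 "Subject Alternative Name"
```

gotls's strictness mentioned below is exactly about these SAN entries: it ignores the CN and matches only against the SAN list.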
[14:21] <lazyPower> so, matt and i are spiking on k8s this week
[14:21] <lazyPower> if you want to file some bugs @ the repo for layer-tls we can start planning and try to get it on our board
[14:21] <bloodearnest> so, I think what we want is a subordinate charm, so we can configure multiple certs for one service (e.g. apache, haproxy)
[14:21] <bloodearnest> lazyPower, ack
[14:22] <mbruzek> we talked about a subordinate charm that is a good idea, I wonder if both can be built from the same layer so we don't have to maintain two different codebases
[14:23] <bloodearnest> mbruzek, you can build 2 charms from 1 layer?
[14:24] <mbruzek> yes
[14:24] <mbruzek> bloodearnest: I would see a subordinate layer that imports tls, and just has metadata that makes it a subordinate
[14:24] <mbruzek> Then add the functionality you and lazypower were discussing to the tls-layer
[14:24] <bloodearnest> I suspect the interface types will be different (1 for peer negotiation, 1 for simple path communication)
[14:25] <bloodearnest> right
[14:25] <lazyPower> path of least resistance
[14:26] <mbruzek> bloodearnest: The subordinate could have a tls provides relation and or requires, and you would have to extend our tls interface which *only* deals with the peer relation at this point
[14:26] <bloodearnest> right
[14:26] <mbruzek> Again those could be done in the reusable tls layer and interface
[14:26] <mbruzek> Your subordinate layer would be extremely small, just making it a subordinate and using the provided functionality
[14:27] <bloodearnest> so they'd be differentiated on relation type (provides, peer) rather than name?
[14:27] <mbruzek> bloodearnest: yes
[14:27] <bloodearnest> wfm
[14:27] <mbruzek> the beauty of layers!
[14:27] <mbruzek> reusable components
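mbruzek's thin-subordinate idea sketched as a metadata.yaml fragment (the layer name and relation names are hypothetical, not an actual charm): the layer reuses the tls layer's code and only flips the subordinate bit, plus the container-scoped relation subordinates require.

```yaml
# metadata.yaml of the hypothetical thin subordinate layer
name: tls-subordinate
summary: TLS certificate management as a subordinate
subordinate: true
provides:
  certificates:
    interface: tls
requires:
  host:
    interface: juju-info
    scope: container   # a subordinate needs at least one container-scoped relation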
[14:28] <bloodearnest> so, the issue is, how best to generate certs, preferably using just system packages
[14:28] <bloodearnest> or python deps
[14:28] <bloodearnest> the python version above works fine in xenial, fwiw, python-cryptography is in main
[14:29] <bloodearnest> but not trusty :(
[14:29] <mbruzek> well the current tls layer uses easyrsa (as lazypower pointed out), if that is not sufficient you can suggest alternate methods.
[14:30] <mbruzek> In Juju 2.0 you can in the metadata.yaml specify what release your charm supports.
[14:30] <bloodearnest> any method is fine, as long as it's a) vendored or b) system packaged
[14:30] <bloodearnest> grabbing from git is a no-no
[14:31] <lazyPower> bloodearnest how about resources?
[14:31] <bloodearnest> that would work also
[14:31] <bloodearnest> but require some manual prepping
[14:31] <lazyPower> what if easyrsa were exposed as a charm resource? you, the deployment engineer, stuff the resources in your model-controller, and when you deploy, boom, it's all offline.
[14:31] <bloodearnest> nicer if it could work OOTB
[14:32] <bloodearnest> the self signed stuff is really only for devel
[14:33] <bloodearnest> plus, we are a ways away from being on 2.0
[14:33] <mbruzek> bloodearnest: I used easyrsa from github because it had some bug fixes I needed. if you can get the repo one working submit a pull request for that.
[14:33] <bloodearnest> mbruzek, ok, I will try that
[14:35] <mbruzek> easy-rsa latest github release 3.0.1, vs the latest easy-rsa in Xenial is 2.2.2
[14:36] <mbruzek> I know you have rules against github, but I question if our rules move at the speed of modern software
[14:36] <bloodearnest> mbruzek, not my rules
[14:37] <mbruzek> bloodearnest: I know
[14:38] <mbruzek> my point is you are 4 releases behind upstream, and the rules are supposed to be for "security", I would want the latest release if I were doing it for myself.
[14:38] <mbruzek> It would be great if we could create a snap of the latest release and put that in the charm, if snaps can be trusted like the archive
[14:39] <lazyPower> i have this appliance docker image i use to print some certs sometimes
[14:39] <bloodearnest> mbruzek, can't we just vendor it in some other form
[14:39] <lazyPower> caveat: you install docker on every host you want to generate certs on
[14:39] <bloodearnest> it's just a cli wrapper around openssl cli,
[14:40] <bloodearnest> right?
[14:40] <lazyPower> pretty much
[14:40] <lazyPower> but its based on busybox so its stupid small
[14:40] <lazyPower> thats the only saving grace here
[14:45] <bloodearnest> lazyPower, mbruzek: easy-rsa 3.0.1 is 140k of text files (40k of docs)
[14:45] <bloodearnest> seems reasonable to vendor into the layer?
[14:45] <mbruzek> kilobytes?
[14:45] <bloodearnest> yep
[14:46] <mbruzek> bloodearnest: yeah I think we are OK there, I would get worried about gb
[14:46] <mbruzek> bloodearnest: you were going to change it to the package manager anyway... I am sure that one is less kilobytes right?
[14:47] <bloodearnest> probably about the same
[14:48] <mbruzek> bloodearnest: I get it, our current layer does it wrong by grabbing from github.
[14:48] <bloodearnest> I think w/o docs and extras, you're talking about 60kb
[14:49] <cory_fu> kjackal: I don't know if we'd need a translation layer really, we just need to audit charms and start testing them in IPv6 environments to ensure they're coded to be IPv6 aware.  That said, your error seems like Juju couldn't connect to the charm store via IPv6 which seems even more fundamental than charms supporting IPv6.
[14:49] <bloodearnest> mbruzek, not objectively wrong, perhaps, but wrong for us :)
[14:49] <cory_fu> I wonder if rick_h_ can chime in on whether there are issues between Juju and the charm store in IPv6
[14:49] <mbruzek> bloodearnest: so help us fix it so it is _more_ useful
[14:49] <bloodearnest> mbruzek, on it
[14:49] <rick_h_> cory_fu: yes! :)
[14:50] <cory_fu> Yes you can chime in, or yes there are issues?  ;)
[14:50] <rick_h_> cory_fu: was just talking with kjackal about this in another channel and asked him to kick off an email because we expect problems with the store and the charms in them to be honest
[14:50] <cory_fu> Ah, I see
[14:50] <rick_h_> cory_fu: the charmstore is fronted by apache2 with SSL termination and only works on IPV4, we'll have to work with IS on how to add IPV6 support
[14:50] <rick_h_> cory_fu: but there's a bigger issue as to what/how charms would work?
[14:51] <rick_h_> can wordpress, exposed, work with IPV6 ootb behind haproxy?
[14:51] <Gil> hi lazyPower thanks.  never had any problems when deploying juju-gui on trusty.  When I try deploy on liberty, I get this:  "ERROR cannot resolve charm URL "cs:wily/juju-gui": charm not found".  in my environments.yaml I have "default-series: wily" in the maas section.  I needed wily and liberty to deploy this successfully:  https://jujucharms.com/u/openstack-charmers-next/openstack-lxd
[14:51] <rick_h_> same with all services that can be exposed, how many don't support binding to an IPV6 addr
[14:51] <cory_fu> True
[14:51] <rick_h_> Gil: do you need the GUI running on wily?
[14:51] <rick_h_> Gil: are you colocating it with another wily service or something?
[14:52] <jrwren> rick_h_: and the inverse too, a cloud may be ipv4 on CGN or private IP, but support ipv6 public addresses
[14:52] <rick_h_> jrwren: yea, there's a whole can of worms here we've not worked through to my knowledge
[14:53] <bloodearnest> mbruzek, so, the bugfixes you needed are not in 3.0.1, correct?
[14:54] <mbruzek> bloodearnest: I don't recall what version was needed, but the github version fixed the error I was getting
[14:54] <bloodearnest> right
[14:54] <bloodearnest> I will attempt a PR to use a vendored version
[14:55] <Gil> rick_h my main goal is to work with the nova lxd which is why I'm deploying that bundle. yes, i'd like to run the juju-gui on the liberty deploy if it's possible.
[14:57] <rick_h_> Gil: right, but you only need wily GUI if it's on a wily host. You can deploy the trusty juju-gui onto liberty without a problem
[14:57] <rick_h_> Gil: the things you deploy don't all have to be on the same series.
[14:59] <Gil> ok gtk.  when I deployed the bundle: https://jujucharms.com/u/openstack-charmers-next/openstack-lxd it errored out and complained about the charms not matching the series so I went to "all -wily".  So what I need to do then is change in environments.yaml back to "default-series: trusty" I guess which I will try now.
[15:01] <lazyPower> Gil - that or juju deploy trusty/juju-gui
[15:05] <lazyPower> tvansteenburgh - got a moment for a quick review? https://github.com/juju-solutions/jujubox/pull/2
[15:05] <tvansteenburgh> lazyPower: yeah gimme a min and i'll take a look
[15:06] <Gil> juju deploy --to 0 cs:trusty/juju-gui; gives:  Added charm "cs:trusty/juju-gui-48" to the environment. + ERROR cannot assign unit "juju-gui/1" to machine 0: series does not match
[15:07] <lazyPower> ah thats because the state-server is wily, gotchya.
[15:07] <lazyPower> i didnt think you were colocating
[15:08] <tvansteenburgh> lazyPower: won't this break 1.25 users? maybe we should put these changes in a 2.0 branch?
[15:08] <lazyPower> tvansteenburgh - its for :devel flavored
[15:08] <lazyPower> this doesn't change :latest
[15:08] <tvansteenburgh> right, but this will eventually become latest i expect
[15:08] <lazyPower> once 2.0 lands
[15:08] <tvansteenburgh> and then we'll have nothing for 1.25
[15:09] <lazyPower> current :latest: will move to a tag for 1.25
[15:09] <tvansteenburgh> roger
[15:09] <lazyPower> and :dev will supplant :latest, and :dev moves to whatever is in the :devel ppa
[15:09] <tvansteenburgh> lgtm then
[15:10] <lazyPower> sweet \o/ progressss
[15:10] <tvansteenburgh> lazyPower: yeah thanks for doing that
[15:10] <lazyPower> https://hub.docker.com/r/jujusolutions/jujubox/builds/bke9qasy38rcy4s98ve2c9o/
[15:11] <lazyPower> we were solid with no modifications for 8 months
[15:11] <lazyPower> thats kind of impressive man. it wasn't until beta-1 landed that i had to dig in here and change some things
[15:14] <lazyPower> hey rick_h_ - when i'm bootstrapping with 2.0 beta-1,  i get that env vars make it simple but is there an option for me to pass --config=path/to/aws.yaml to get my cloud keys?
[15:14] <lazyPower> the cloud credentials file i use for create-model dont seem to work for bootstrap :|
[15:15] <rick_h_> lazyPower: yea, you have to write out a .local/share/juju/credentials.yaml file with named credentials in it
[15:15] <rick_h_> lazyPower: will get you an example in a sec
[15:15] <lazyPower> ta
[15:16] <Gil> "In order to deploy a cs: Trusty charm to an alternate series machine, the charm must be locally branched to a <series>/<charm-name> directory, then juju deployed from that local repo." from link https://github.com/Ubuntu-Solutions-Engineering/openstack-installer/issues/791
[15:17] <Gil> is that what I would need to do at this point?
[15:17] <lazyPower> thats an option, or re-bootstrap with a different default-series
[15:45] <Gil> lazyPower changed environments.yaml to "default-series: trusty" then bootstrapped and successfully (as expected) deployed juju-gui.  But then when the bundle "juju-deployer -c https://api.jujucharms.com/charmstore/v4/~openstack-charmers-next/bundle/openstack-lxd-50/archive/bundle.yaml -S -d" is deployed get "Added charm "cs:~openstack-charmers-next/wily/ceph-osd-15" to the environment. + ERROR cannot assign unit "ceph-osd/0" to machine 0:
[15:46] <lazyPower> Gil you'll have to modify the bundle to change the placement of ceph-osd
[15:46] <Gil> some solutions would be 1 extra machine for juju-gui or just use the local repo method
[15:46] <lazyPower> you'll need to co-locate it with another wily based service
[15:46] <Gil> ah
[15:48] <beisner> jamespage, fyi - updated the syncs and gh repos yesterday and they're syncing ok.   https://github.com/openstack-charmers/migration-tools/blob/master/charms.txt    +lxd +ceph-osd +percona-cluster
[15:49] <jamespage> beisner, ack - have the git review ready to push again
[15:49] <beisner> sweet
[15:50] <beisner> +ceph-mon that is
[15:50] <beisner> osd was already good
[15:50] <jamespage> yah
[16:21] <ChrisHolcombe> can any unit do a leader set or can only the leader perform that?
[16:29] <roadmr> ChrisHolcombe: "Only the leader can write to the bucket"
[16:29] <ChrisHolcombe> roadmr, darn i was hoping you weren't going to say that haha
[16:30] <roadmr> ChrisHolcombe: sorry :) straight from the docs: https://jujucharms.com/docs/1.25/authors-charm-leadership
[16:31] <ChrisHolcombe> roadmr, ah yeah i missed that line.  thanks :)
[18:02] <jamespage> beisner, hey - could you take a look at https://code.launchpad.net/~james-page/charms/trusty/neutron-gateway/tox8-test-refactor/+merge/286933
[18:02] <jamespage> needed for migration - also some prep for my neutron explosion branches
[18:05] <jamespage> beisner, there was some fairly nasty lack of isolation between unit tests...
[18:05] <jamespage> mainly due to massaging of CONFIG_FILES directly - moved to deepcopy + modification now
[18:07] <beisner> jamespage, yah i've been had by copies of dicts in py too.  deepcopy ftw.
[18:16] <beisner> jamespage, want to flip `make test` to the tox method on that?
[18:16] <beisner> and lint
[18:53] <thedac> bdx: cargonza tells me you have an active issue that needs looking at?
[18:53] <bdx> thedac: hey, yeah, do you mind?
[18:53] <thedac> no, what's up?
[18:54] <bdx> I have been experiencing an issue where my nova-metadata-api seems to become unavailable after service restart ...
[18:54] <bdx> my instances can talk to 169.254.169.254 initially after creating tenant networks
[18:55] <bdx> following that, if I restart the api-metadata service or reboot the node 169.254.169.254 becomes unavailable to the instances
[18:56] <thedac> hmm, Could be MTU. Do these hosts have jumbo frames on?
[18:56] <thedac> Espectially after a reboot
[18:56] <thedac> especially
[18:57] <bdx> ok, I'm not using jumbo frames. The issue presents itself w/o instance reboot
[18:58] <thedac> So that is my first suggestion. We definitely see problems with metadata when using default MTU of 1500. If at all possible set it on the neutron-gateway and the nova-compute nodes, restart the nova-api-metadata service and check
[18:58] <bdx> thedac: after some introspection, I'm not seeing the 169.254.169.254 address or interface on my compute nodes ...
[18:58] <bdx> ip netns list shows only qrouter-283684d1-6e4d-4704-a72e-6fe6acc8e9a6
[18:59] <thedac> bdx: they don't it is a special address space
[18:59] <thedac> It uses multicast
[19:00] <bdx> ok, gotcha. Do you know of ways to test for its existence from outside of the instance?
[19:01] <thedac> The best test is on the instance. But verify nova-api-metadata is listening on 8775
[19:01] <thedac> let me double check that port. That is off the top of my head
[19:03] <bdx> thedac: also, I'm not seeing the metadata api service show itself here http://paste.ubuntu.com/15182540/
[19:05] <thedac> Yeah, it does not show up in the service list. So check it on neutron-gateway or if you are doing metadata on the compute nodes check there. 8775 is correct
[19:06] <bdx> sudo service nova-api-metadata status
[19:06] <bdx> nova-api-metadata start/running, process 432790
[19:06] <thedac> ok
[19:06] <thedac> bdx: And this shows up in console-logs as failed access to metadata correct?
[19:07] <bdx> yes.
[19:08] <bdx> thedac: `ps aux | grep metadata` -> http://paste.ubuntu.com/15182600/
[19:09] <thedac> ok, so I am still thinking MTU
[19:09] <bdx> it seems there is a wealth of metadata processes running
[19:09] <bdx> ok
[19:09] <thedac> You can test this by running ping with larger and larger packet sizes on the qrouter netns
[19:09] <bdx> entirely, ok
[19:09] <thedac> ping -s 1472 I think
[19:10] <bdx> `sudo ip netns exec q-router<#> ping -s 1472 169.254.169.254` ?
[19:11] <thedac> yes
[19:11] <thedac> and then with 1473 or higher
[19:11] <thedac> oh, sorry
[19:11] <thedac> no the IP of the instance
[19:11] <thedac> not the 169.254 address
[19:12] <bdx> ohh.. ping an instance?
[19:12] <thedac> yes
[19:12] <thedac> bdx: and regardless our best practice advice is to use jumbo frames in all openstack deploys
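thedac's MTU probe can be sketched as a small helper (the target address is illustrative; when probing tenant traffic, run it inside the qrouter namespace with `sudo ip netns exec qrouter-<id> ...` as discussed above):

```shell
# Probe whether a full 1500-byte frame survives the path unfragmented.
# 1472 bytes of ICMP payload + 28 bytes of IP/ICMP headers = 1500 bytes.
mtu_probe() {
    local host=$1 payload=$2
    # -M do forbids fragmentation, so an oversized packet fails instead of fragmenting
    if ping -c 1 -W 2 -M do -s "$payload" "$host" >/dev/null 2>&1; then
        echo pass
    else
        echo fail
    fi
}

mtu_probe 10.0.20.14 1472   # expect pass on a healthy 1500-MTU path
mtu_probe 10.0.20.14 1473   # fail here means the path MTU is exactly 1500
```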
[19:15] <bdx> thedac: good to know
[19:16] <bdx> thedac: my pings are successful
[19:16] <thedac> with > 1472?
[19:16] <bdx> yea
[19:16] <thedac> ok let's check /var/log/nova/nova-api-metadata.log for any tracebacks
[19:17] <bdx> I'll check again ... my logs have been clean though
[19:17] <bdx> ok
[19:17] <bdx> so
[18:02] <bdx> ERROR oslo.messaging._drivers.impl_rabbit [req-85055fc4-7de0-46c9-8cc2-119c0eda3430 - - - - -] AMQP server on  is unreachable
[19:19] <bdx> rabbit was actually my initial suspect because I am seeing stale notifications in the rabbit queue
[19:19] <thedac> It is always rabbit ;)
[19:19] <bdx> but hadn't seen any errors yet ..
[19:20] <thedac> lovely, ok so sounds like a rabbitmq problem. From a networking perspective can you nc -vz $RABBIT_IP 5672 from the nova-api-metadata host?
[19:20] <thedac> Then we can check rabbitmq-server logs
[19:20] <bdx> nc -vz 10.16.100.59 5672
[19:20] <bdx> Connection to 10.16.100.59 5672 port [tcp/amqp] succeeded!
[19:21] <thedac> ok, let's hope on the rabbit instance and check logs
[19:21] <thedac> s/hope/hop but also hope
[19:23] <bdx> ok, just launched an instance, that failed communicating with 169.254.169.254, rabbit logs show -> accepting AMQP connection <0.3547.1> (10.16.100.133:39614 -> 10.16.100.59:5672)
[19:24] <thedac> Is rabbit clustered or singleton?
[19:24] <bdx> singleton
[19:24] <thedac> ok
[19:25] <thedac> You might keep the tail on the rabbit log and restart the nova-api-metadata service and neutron-metadata service and see what we get
[19:25] <bdx> ok
[19:25] <bdx> omp
[19:25] <thedac> It could have been temporary failure
[19:27] <bdx> yea, I got a bunch of warning reports for about a second
[19:28] <bdx> rabbit seems to be talkin to both services
[19:28] <thedac> What were the warning messages
[19:28] <thedac> ?
[19:28] <bdx> =WARNING REPORT=
[19:28] <bdx> closing AMQP connection <0.19995.0> (10.16.100.157:42079 -> 10.16.100.59:5672):
[19:28] <bdx> connection_closed_abruptly
[19:29] <thedac> That could have been the stop of the metadata service depending on timing
[19:29] <thedac> so you might test another instance deploy
[19:29] <thedac> And watch the nova-api-metadata log as well as the rabbit log
[19:29] <bdx> it was ... rabbit logs got spammed with that at the time I restarted
[19:29] <bdx> on it
[19:33] <bdx> thedac: yea, no errors in any logs
[19:33] <thedac> ok, fingers crossed for the instance
[19:34] <bdx> neutron-api, neutron-gateway, nova-cloud-controller, nova-compute
[19:34] <bdx> all show no errors
[19:34] <bdx> instance gets stuck reaching out for metadata while booting
[19:35] <thedac> ok, so I am going to keep pushing the MTU issue. metadata is susceptible to it.
[19:35] <bdx> I can use the config drive as a workaround to get my user-data on to my instances for the time being ... I just feel this is really fragile though
[19:36] <bdx> thedac: so .... If I create two new tenant networks, the instances get metadata just fine
[19:36] <bdx> that are deployed to the new nets
[19:36] <thedac> oh?
[19:36] <bdx> yea
[19:36] <bdx> or
[19:36] <bdx> If I neutron net-delete
[19:36] <bdx> and recreate the tenant networks that are affected, metadata works again
[19:37] <thedac> hmm, ok, that is interesting
[19:37] <bdx> right
[19:37] <thedac> do they subsequently stop working or work indefinitely?
[19:38] <thedac> after a re-create?
[19:38] <bdx> thedac: metadata works until I restart the respective services, then stops working until I destroy and recreate again
[19:39] <bdx> thedac: I'm suspicious this might be a permissions thing ...
[19:39] <thedac> ok, and is this liberty?
[19:39] <bdx> yea
[19:39] <bdx> is the 169.254.169.254 a unix socket?
[19:39] <thedac> I'll see if I can recreate this and get back to you
[19:40] <bdx> thanks
[21:25] <cory_fu> kjackal, lazyPower: https://github.com/juju-solutions/layer-basic/pull/37
[21:26] <lazyPower> cory_fu - how can i not include some of the project meta files from base like Makefile and such?
[21:26] <lazyPower> i can override that in layer.yaml right?
[21:27]  * lazyPower makes a note to look at the builder readme
[21:28] <cory_fu> lazyPower: I don't think there's a way to say "remove this file" (bcsaller might be able to correct me), other than just overriding it with an empty file (which isn't the same as deleting it)
[21:28] <lazyPower> ah, ok.
[21:28] <cory_fu> lazyPower: Maybe we need an "excludes" section in layer.yaml?
[21:28] <lazyPower> I think thats a swell idea
[21:29] <cory_fu> What is your use-case for that, though?
[21:29] <lazyPower> just so i can say "excludes: readme, makefile, hacking.md" - things like that, so when i assemble my charm, if i dont have a hacking.md file, i dont have one floating around from one of the runtime layers
[21:30] <cory_fu> Why would you not want those files, though?
[21:31] <cory_fu> You definitely need a README
[21:31] <cory_fu> And I can't see why you wouldn't also want a HACKING.md and Makefile
[21:31] <magicaltrout> i sorta agree with that, i built a charm the other day and committed a load of shit for no reason other than i didn't notice it was in the output
[21:32] <magicaltrout> admittedly I used bzr ignore, but still, it seems like a valid usecase, or maybe bzr ignore is the way to go! :)
[21:33] <cory_fu> Actually, it looks like layer.yaml might already support an "ignore" list
[21:34] <cory_fu> Yeah, you can give an ignore list that will do what you want
[21:34] <cory_fu> lazyPower: ^
[21:34] <lazyPower> cory_fu - ballin. I guess you found that in charm-tools docs?
[21:35] <cory_fu> And it looks like it will work per-layer, so each layer can ignore things from the layer below (if you stack more than once)
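A sketch of what that `ignore` list might look like in a charm layer's layer.yaml (the layer and file names are illustrative; the option name comes from the charm-tools source as cory_fu describes):

```yaml
# layer.yaml of a charm layer built on layer:basic
includes:
  - layer:basic
ignore:            # files from lower layers to leave out of the assembled charm
  - Makefile
  - HACKING.md
```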
[21:35] <cory_fu> lazyPower: If by "docs" you mean "source"
[21:35] <lazyPower> ah, right
[21:35] <magicaltrout> lol
[21:35] <lazyPower> one and the same, the tome of charm keeper knowledge
[21:35] <cory_fu> Yeah, that should be documented, for sure
[21:36] <lazyPower> ya know cory_fu - i just realized we didnt put in a reference guide to any of the layer stuff
[21:37] <cory_fu> Yeah, that would be a good place for this
[21:37] <cory_fu> There's a lot of functionality built in to layer.yaml alone
[21:37] <lazyPower> can we card this and bring it up later this week?
[21:37]  * lazyPower has k8s stuff thats stale and needs cooked
[21:38] <cory_fu> Where would that card go?
[21:38] <lazyPower> i'll take it and put your face on it
[21:38] <cory_fu> ha
[21:38] <lazyPower> that..sounded way creepier than i intended
[21:38] <cory_fu> Indeed
[21:38] <lazyPower> anywho, incoming notice
[22:51] <lazyPower> so, word to the wise. series in metadata will make the -stable tooling (bundletester, proof) quite angry as i just found out.
[23:07] <bdx> it seems when I deploy my same stack in kilo, then in liberty, my nova-api-metadata changes its location from the dhcp port (kilo), to the network:router_interface_distributed port (liberty). Was this intended? Do you know about it?
[23:07] <bdx> thedac, openstack-charmers:^
[23:07] <thedac> bdx: hey
[23:07] <bdx> thedac: whats up
[23:07] <thedac> I just had a liberty deploy up and was testing. I saw no change.
[23:07] <thedac> bdx: are you doing DVR for this?
[23:07] <bdx> thedac: yeah
[23:07] <bdx> thedac: and local dhcp/metadata
[23:07] <thedac> Ok, that is what I need to test next. I could not re-create the failure. So I will stand up a DVR deploy and keep trying
[23:07] <thedac> right
[23:07] <bdx> thedac: so yea, whats going on here is this -> I deploy my same stack in kilo, then in liberty, nova-api-metadata changes its location from the dhcp port (kilo), to the network:router_interface_distributed port (liberty)
[23:07] <thedac> ok
[23:08] <bdx> So in my spinup bash script for creating tenant networks, I did not know/had not updated the network params to reflect the change of nova-api-metadata
[23:08] <bdx> i.e for kilo -> neutron subnet-update vlan110-subnet \
[23:08] <bdx>   --host_routes type=dict list=true \
[23:08] <bdx>   destination=10.0.20.0/24,nexthop=10.16.110.1 \
[23:08] <bdx>   destination=10.15.0.0/16,nexthop=10.16.110.1 \
[23:08] <bdx>   destination=10.10.0.0/16,nexthop=10.16.110.1 \
[23:08] <bdx>   destination=10.16.100.0/24,nexthop=10.16.110.99 \
[23:08] <bdx>   destination=10.16.111.0/24,nexthop=10.16.110.99 \
[23:08] <bdx>   destination=10.16.112.0/24,nexthop=10.16.110.99 \
[23:08] <bdx>   destination=169.254.169.254/32,nexthop=10.16.110.101
[23:09] <bdx> but for liberty --> neutron subnet-update vlan110-subnet \
[23:09] <bdx>   --host_routes type=dict list=true \
[23:09] <bdx>   destination=10.0.20.0/24,nexthop=10.16.110.1 \
[23:09] <bdx>   destination=10.15.0.0/16,nexthop=10.16.110.1 \
[23:09] <bdx>   destination=10.10.0.0/16,nexthop=10.16.110.1 \
[23:09] <bdx>   destination=10.16.100.0/24,nexthop=10.16.110.99 \
[23:09] <bdx>   destination=10.16.111.0/24,nexthop=10.16.110.99 \
[23:09] <bdx>   destination=10.16.112.0/24,nexthop=10.16.110.99 \
[23:09] <bdx>   destination=169.254.169.254/32,nexthop=10.16.110.99
[23:10] <thedac> bdx: ok, so is that working for you when you changed the route nexthop?
[23:10] <bdx> thedac: yea
[23:11] <thedac> ok, great. I'll figure out why things changed.
[23:12] <bdx> thedac: an interesting difference --> in kilo, when you update your subnet, you must include the destination,nexthop for the 169.254 address because the host route for metadata is not automatically appended to the list of new host routes
[23:13] <thedac> I was going to ask. We do not add the 169.254 route in our testing.
[23:13] <bdx> so in kilo, I can update my subnet, and lose my 169.254 static route if I do not add it to the update
[23:14] <bdx> in liberty, the nexthop/destination for metadata is appended to static routes automatically
[23:15] <bdx> unless you override it by specifying a user defined route for 169.254 as I was
[23:15] <bdx> wow
[23:15] <thedac> got it so, may be a bug in kilo
[23:16] <bdx> totally
[23:17] <bdx> thanks for your help on this
[23:18] <thedac> no problem