[13:07] <pmatulis> morning
[13:31] <Icey> howdy
[13:42] <Icey> jamespage do you know if I can get juju + aws to automatically attach ebs drives for ceph to use as a block device?
[13:42] <jamespage> Icey, probably
[13:42] <jamespage> Icey, andrew is the right person to ask
[13:44] <marcoceppi> Icey: you'll want to use the experimental storage stuff axw has been working on
[13:51] <lathiat> Hi Folks.. can anyone help me with what's happening here? Hit this twice today in a setup that was previously working fine.. destroyed and re-created my environment (MAAS), deploying first service juju-gui to the same machine i bootstrapped onto (which is libvirt) .. stuck on "filter.go:137 tomb: dying -> leadership failure: leadership manager stopped" - there are some earlier errors about i/o errors/losing communication - full log; http://lathi.at/fil
[13:52] <lathiat> not entirely sure which of the errors is more related to the actual issue and what that is
[13:59] <marcoceppi> lathiat: didn't get the full log link, could you supply it again?
[13:59] <lathiat> full log; http://lathi.at/files/juju-leadership.txt
[14:00] <lathiat> i am guessing the real issue is the EOF stuff much earlier but i'm a bit lost with it from there
[14:00] <lathiat> maybe related https://bugs.launchpad.net/juju-core/+bug/1493123
[14:00] <mup> Bug #1493123: Upgrade in progress reported, but panic happening behind scenes <landscape> <landscape-release-29> <upgrade-juju> <juju-core:Fix Released by ericsnowcurrently>
[14:00] <mup> <juju-core 1.24:Fix Released by ericsnowcurrently> <juju-core 1.25:Fix Released by ericsnowcurrently> <https://launchpad.net/bugs/1493123>
[14:01] <marcoceppi> lathiat: that's odd, I've not seen that
[14:01] <lathiat> thing is i ran into this issue just before, but i was also getting an error about machine 7 not existing, which i had previously force destroyed; assumed i had corrupted something.. decided to try and start over and am hitting the same thing now.
[16:34] <bdx> beisner, lazyPower: deploying trusty-kilo-ha with next charms eliminated 99% of the problems I was experiencing :-)
[16:35] <beisner> hi bdx good to hear - we're feverishly working on the release process this wk.
[16:36] <bdx> beisner, lazyPower: Good to hear! There is one remaining issue I can't seem to get around that still exists out of the issues I was experiencing when deploying kilo-trusty-ha with trunk charms
[16:36] <bdx> that is, I can't query the keystone vip api endpoint....
[16:36] <beisner> bdx  grain of salt:  next charms not recommended for production as they are generally in active dev
[16:37] <beisner> bdx well that would be problematic ;-)
[16:37] <bdx> beisner: totally.
[16:37] <beisner> bdx, enabling keystone ssl by chance?
[16:38] <bdx> beisner: I'm totally down, what reasoning have you, if any?
[16:38] <bdx> besides security
[16:38] <bdx> ha
[16:39] <beisner> bdx, just wondering as the client connections get trickier.  if not specifically needed, it's less hassle (and less secure of course) to use the default non-ssl.
[16:39] <beisner> so nvm me
[16:39] <bdx> totally, ok
[16:40] <bdx> you have never experienced this?
[16:40] <beisner> bdx, gotta run.  if you'd like input, i'd start with inspecting your sanitized novarc / openstackrc file or env vars + keystone --debug catalog  + keystone --debug token-get.
[16:40] <beisner> that info might give insight
[16:41] <beisner> + juju stat --format tabular   :)
[16:42] <beisner> also might do a sanity check that the services are all running, and that the ips are in place in each unit's nic.
[16:42] <beisner> o/
[16:42] <bdx> beisner: charmconf.yaml <- http://paste.ubuntu.com/12886818/
[16:42] <bdx> juju status --format tabular <- http://paste.ubuntu.com/12886819/
[16:42] <bdx> im not worried about the dashboard relation, as I am currently troubleshooting it and it depends on the keystone api as well
[16:46] <bdx> which is why I think it's giving me grief and status shows "Incomplete relations: identity "
[16:47] <bdx> beisner: oooh just saw ^^
[16:47] <bdx> nice, thanks
[16:47] <bdx> later
[16:55] <beisner> bdx yeah, that's the new workload status.  gives a lot better feedback as to what is going on throughout the deployment steps, and through managing the thing longer term.
[17:28] <jog> mgz, I made updates to https://code.launchpad.net/~jog/juju-ci-tools/centos_deploy_stack/+merge/275135
[17:31] <mgz> jog: land it
[17:31] <jog> mgz, thanks
[18:44] <bdx> jamespage, marcoceppi, gnuoy, beisner, lazyPower: After toying with failing openha deploys, most of the issues I was experiencing have been resolved. I have found the primary issue(s) that still exist in next charms concerning ha deploys....the issue is that service charms do not get the keystone vip in their .conf files, and keystone endpoints get created for non-vip service endpoints
[18:44] <bdx> ^^ resolved in next branches*
[18:46] <bdx> After manually making the needed modifications to the endpoints in the keystone.endpoints table and correcting each of the charms configs to include the keystone vip endpoint, I have a working ha stack
[18:46] <bdx> !
[18:46] <bdx> yea!
[18:48] <thedac> bdx: none of the service charms get the keystone vip? Or is there a specific charm? Also do you have the bundle you are deploying from so I can see?
[18:48] <bdx> I'll file bugs on these issues. It would be nice to see these things fixed in the 15.10 release so that those of us looking for HA stacks have a somewhat stable answer, instead of heading into the 15.10 release with HA still borked
[18:48] <thedac> bdx: fantastic. Let me know when those bugs are filed
[18:48] <bdx> thedac: juju status --format tabular <- http://paste.ubuntu.com/12887639/
[18:49] <bdx> thedac: deployer.yaml <- http://paste.ubuntu.com/12887642/
[18:49] <thedac> thanks
[18:50] <bdx> thedac: thats correct, none of the service charms get the keystone vip
[18:50] <bdx> also keystone.endpoints has all non vip entries
[18:51] <thedac> ok, I'll take a look today
[18:51] <bdx> thedac: awesome! thanks!
[19:05] <bdx> thedac, openstack-charmers, core: https://bugs.launchpad.net/charms/+source/keystone/+bug/1508575
[19:05] <mup> Bug #1508575: Keystone DB gets all non vip endpoints + openstack service conf files get keystone non vip <ha> <keystone> <openstack> <server> <keystone (Juju Charms Collection):New> <https://launchpad.net/bugs/1508575>
[19:05] <bdx> boom
[19:12] <thedac> bdx: thanks
[19:13] <bdx> thedac: NP, thanks for looking into this!
[19:29] <jamespage> hello bdx
[19:29] <jamespage> bdx, we're a bit late for the 15.10 release for any more bugs today (as it's tomorrow)
[19:30] <jamespage> bdx, I am keen to understand the problems you're having - as thedac and others have noted, we've run our own internal QA cloud for 1.5 years in an HA deployment through three openstack series upgrades
[19:45] <jamespage> bdx, I added a comment to that bug - really need to see the output of "sudo crm status" on any of the haclustered services
[20:10] <bdx> jamespage: heres the output of "sudo crm status" on a keystone node -> http://paste.ubuntu.com/12888152/
[20:11] <jamespage> bdx, ok - that looks fine
[20:11]  * jamespage thinks
[20:12] <jamespage> bdx, could you do the following and pastebin the output
[20:12] <bdx> of course :-)
[20:13] <jamespage> bdx, juju run --unit keystone/0 "relation-ids ha"
[20:13] <jamespage> and then
[20:13] <bdx> jamespage, ha:81
[20:13] <jamespage> juju run --unit keystone/0 "relation-get -r <id from previous command> - keystone-hacluster/0"
[20:14] <jamespage> bdx, for that next one you'll need to use ha:81 and the unit name for the paired hacluster unit
[20:14] <bdx> clustered: "yes"
[20:14] <bdx> private-address: 10.16.100.72
[20:15] <jamespage> bdx, well again that looks ok - next link...
[20:15] <jamespage> clustered = yes is the good bit there
[20:16] <bdx> jamespage: let me note, a) this is a repeated issue across 15x deploys of trunk and next, and b) on every deploy, the service clusters form without error for each service
[20:17] <jamespage> bdx, yeah - just puzzled as to why thats not propagating out correctly
[20:18] <jamespage> bdx, the clustered=true triggers a re-run of relation hooks where things need to be changed, and the code that determines endpoint resolution should detect the same thing and start using the VIP's
[20:19] <bdx> jamespage: ok, good, how does this happen "the code that determines endpoint resolution should detect the same thing and start using the VIP" --> the vip isn't the same as private-address: 10.16.100.72
[20:19] <jamespage> bdx, lets see what keystone is propagating
[20:20] <jamespage> bdx, there is an endpoint resolver in charmhelpers that figures that out consistently taking into account cluster status and configuration
[20:20] <bdx> I have modified all of my endpoints and .conf files to resolve the issue, and also as a proof of concept
[20:20] <jamespage> suffice to say with split networks, it gets quite hairy, but your deployment is not doing that
[20:20] <jamespage> bdx, you should categorically not need to do that
[20:21] <bdx> jamespage: where is the "endpoint resolver in charmhelpers"? if you don't mind?
[20:22] <jamespage> bdx, http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/view/head:/charmhelpers/contrib/openstack/ip.py#L106
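A rough sketch of the resolution order jamespage points at in ip.py (a hypothetical simplification for illustration; the names, signature, and config keys below are stand-ins, not the actual charm-helpers API): with no os-XXX-network option set, a clustered unit falls through to publishing its VIP.

```python
# Illustrative sketch of the resolve_address behaviour being discussed
# (hypothetical simplification, not the real charm-helpers code): when
# os-XXX-network is unset, net_addr is None, so the non-split-network
# branch runs and a clustered unit advertises the configured VIP.
def resolve_address(config, is_clustered):
    net_addr = config.get("os-public-network")  # no default -> None when unset
    vip = config.get("vip")
    if not net_addr:  # the common, non-split-network case
        if is_clustered and vip:
            return vip  # clustered: advertise the VIP
        return config["private-address"]  # not clustered: unit address
    # split-network handling (matching a VIP per network) omitted here
    return config["private-address"]
```

So with a bundle that never sets os-XXX-network, clustered keystone units would be expected to advertise the VIP.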
[20:23] <jamespage> bdx, can you also do: juju run --unit keystone/1 'relation-ids identity-service'
[20:23] <jamespage> bdx, the log data from /var/log/juju/unit-keystone-1.log would also be useful
[20:23] <bdx> identity-service:41
[20:23] <bdx> identity-service:45
[20:23] <bdx> identity-service:57
[20:23] <bdx> identity-service:59
[20:23] <bdx> identity-service:70
[20:23] <bdx> identity-service:89
[20:23] <jamespage> bdx, ok and now
[20:24] <jamespage>  juju run --unit keystone/1 'relation-get -r identity-service:89 - keystone/1'
[20:24] <bdx> jamespage: /var/log/juju/unit-keystone-4.log <- http://paste.ubuntu.com/12888244/
[20:25] <bdx> private-address: 10.16.100.72
[20:27] <jamespage> bdx, that's very light
[20:27] <jamespage> bdx, ok can I see this (need to know which unit is leader)
[20:27] <bdx> jamespage: I don't see how resolve_address could return the vip .....
[20:27] <jamespage> bdx, L132 should be in path
[20:29] <jamespage> bdx, juju run --service keystone 'is-leader'
[20:29] <bdx> jamespage: and also, keep in mind that I have destroyed two keystone units and re-added units ....in case you are wondering why the log is light, and also the extra ids for past units...also I don't have debug or verbose logging on....grrr ...my bad
[20:29] <bdx> jamespage: if not net_addr: will never execute ....
[20:29] <jamespage> bdx, oh the relation data was light, not the log
[20:29] <bdx> oh
[20:30] <jamespage> bdx, it will - if config is unset, None gets returned
[20:31] <jamespage> so not net_addr will equal True in that case
[20:31] <bdx> jamespage: what config must be unset?
[20:32] <jamespage> bdx, os-XXX-network
[20:32] <bdx> omg
[20:32] <jamespage> it has no default
[20:32] <bdx> jesus
[20:33] <jamespage> where XXX in public, internal, admin
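As a quick check of the unset-config behaviour jamespage describes (a plain-Python demonstration, not charm code):

```python
# Demonstration: the os-XXX-network options have no default, so an
# unset option comes back as None, and `not None` evaluates truthy --
# meaning the `if not net_addr:` branch in resolve_address executes.
config = {}  # nothing set, as in a deployment that never touched os-XXX-network
net_addr = config.get("os-public-network")
print(net_addr)      # None
print(not net_addr)  # True
```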
[20:34] <bdx> this is totally my bad... I should have read into that weeks ago
[20:36] <bdx> jamespage: MAJOR revision to any and all docs concerning HA deploy to include ^^
[20:37] <bdx> I should have gotten to the bottom of this earlier on my own, by investigating, but ...thank god
[20:38] <bdx> jamespage: thanks for your help getting to the bottom of this
[20:39] <jamespage> bdx, have we?
[20:39] <jamespage> bdx, got to the bottom of this?
[20:39] <jamespage> just eating dinner as well biab
[20:39] <bdx> jamespage: yes! You must leave the os-xxx-network unset for vip endpoints to get set anywhere!!!!
[20:40] <bdx> jamespage, openstack-charmers: that is the missing piece! you all have been keeping secretssssss!
[20:41] <bdx> not really though....I could have found it :-/
[20:41] <bdx> :-)
[20:49] <beisner> woot!
[20:50] <bdx> jamespage, beisner: thanks for your help concerning this
[20:51] <beisner> actually thedac just uses aliases:  jamespage and beisner
[20:51] <beisner> jusssst kidding.
[20:51] <thedac> heh
[20:51] <bdx> thedac: ^^
[20:51] <thedac> bdx: fwiw, I just ran a test over lunch that proves this point. Services do get keystone's vip
[20:52] <jamespage> bdx, erm that's not quite true
[20:52] <bdx> jamespage: which?
[20:53] <jamespage> bdx, you can use vips with configurations that also use os-XXX-network
[20:53] <jamespage> configuration options
[20:53] <jamespage> bdx, vip can be a single VIP or a space delimited list, if you are splitting endpoints across networks
[20:53] <bdx> jamespage: concerning the resolve_address function, I don't see how that could happen....?
[20:53] <jamespage> bdx, L134
[20:54] <jamespage> for vip in vips:
[20:54] <jamespage>     check if in network for endpoint type
[20:54] <jamespage>     if it is, use this one
[20:54] <jamespage> basically
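jamespage's pseudocode above could look roughly like this (a hedged sketch using the stdlib ipaddress module; `select_vip` is an illustrative name, not the charm-helpers function):

```python
import ipaddress

# Sketch of the split-network branch: with os-XXX-network set, each VIP
# in the space-delimited `vip` option is tested for membership in the
# network configured for that endpoint type; the first match wins.
def select_vip(vips, network):
    net = ipaddress.ip_network(network)
    for vip in vips.split():
        if ipaddress.ip_address(vip) in net:
            return vip  # this VIP belongs to the endpoint's network
    return None  # no VIP matched this network
```

e.g. `select_vip("192.168.1.5 10.16.100.34", "10.16.100.0/24")` picks `10.16.100.34`.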
[20:54] <bdx> oooh I see
[20:54] <jamespage> bdx, looking at http://paste.ubuntu.com/12887642/
[20:55] <jamespage> I can't see that you are setting the os-XXX-network config options for the keystone charm
[20:56] <bdx> jamespage: shoot...you're right....
[20:58] <bdx> jamespage: I must disclose ....initially 1 of the keystone endpoints was set to the vip in the database on my last deploy  .... the keystone admin endpoint of http://10.16.100.34:35357/v2.0
[20:59] <bdx> jamespage: every other endpoint was not set to the vip including the other keystone endpoints
[21:06] <jamespage> bdx, the charm would have blindly configured that anyway
[21:09] <jamespage> bdx, did you figure out which is the lead keystone unit? I really want to see the juju log file from that one
[21:15] <bdx> jamespage: here is the keystone log from the leader: http://paste.ubuntu.com/12888588/
[21:18] <bdx> unit-keystone-1.log*
[21:39] <bdx> jamespage, beisner, thedac: I propose I redeploy, and this time I will give the services 30mins to settle and ensure clusters form before I add any relations.... this could rule out any possibility of timing issues with clusters not being fully formed when relations are made.
[21:40] <bdx> thedac: Did you use juju-deployer in a once through to deploy all services and relations sequentially in your test?
[21:40] <thedac> bdx: yes
[21:41] <thedac> and I am testing one of our ha oneshot bundles right now. I'll let you know
[21:41] <bdx> thedac: sweeet!
[21:45] <bdx> thedac: are you deploying ha services on containers?
[21:47] <bdx> to containers*
[21:47] <thedac> let me confirm.
[21:48] <thedac> ah, no actually, so that may not be a valid test
[21:48] <thedac> I'll test with your bundle.
[21:50] <bdx> thedac: nice
[22:12] <bdx> thedac, jamespage, beisner: So...I haven't set the param 'vip_iface', hence I am assuming the default of 'eth0'. Seeing as I am deploying these services to containers, the primary interface is not 'eth0', but 'juju-br0'. This is a red herring to me, and looks like the ha-relation-joined hook could be affected.
[22:13] <bdx> http://bazaar.launchpad.net/~openstack-charmers/charms/trusty/keystone/next/view/head:/hooks/keystone_hooks.py#L515
[22:15] <thedac>   vip_iface:
[22:15] <thedac>     type: string
[22:15] <thedac>     default: eth0
[22:15] <thedac>     description: |
[22:15] <thedac>       Default network interface to use for HA vip when it cannot be
[22:15] <thedac>       automatically determined.
[22:15] <thedac> so you may be on to something there
[22:19] <bdx> thedac, jamespage, beisner: WAAALAAA
[22:19] <bdx> thedac, jamespage, beisner: >>> import netifaces
[22:19] <bdx> >>> netifaces.interfaces()
[22:19] <bdx> ['lo', 'eth0', 'lxcbr0']
[22:20] <bdx> no juju-br0!!!!!
[22:21] <bdx> http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/view/head:/charmhelpers/contrib/network/ip.py#L156
[22:23] <thedac> bdx: so you might test with vip_iface set to juju-br0
[22:26] <bdx> thedac: entirely....what I'm pointing out....is that netifaces.interfaces() does not recognize the juju-br0!
[22:27] <bdx> which would implicate the call to netifaces.interfaces() in network/ip.py as the culprit
[22:31] <thedac> http://bazaar.launchpad.net/~openstack-charmers/charms/trusty/keystone/next/view/head:/hooks/keystone_hooks.py#L515  get_iface_for_address *OR* vip_iface. I think setting vip_iface and vip_cidr will fix this.
[22:36] <bdx> thedac: as example if "get_one()" returns "one" and "get_two()" returns "two"
[22:37] <bdx> thedac: and you have "one_or_two = (get_one() or get_two())"
[22:37] <bdx> thedac: one_or_two == "one"
[22:38] <bdx> thedac: so even if the 'vip_iface' is set it would still not return the correct iface
[22:38] <thedac> sorry, I am trying desperately to get something up and running to actually validate this. But if get_iface_for_address does not return an address because juju-br0 is not in netifaces.interfaces(), then the or would work. If it does return one, you are right.
[22:41] <bdx> thedac: entirely
[22:43] <thedac> Looking at lines 145-183 looks like it will return None http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/view/head:/charmhelpers/contrib/network/ip.py#L156
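thedac's point about the `or` can be shown directly (a toy model: `get_iface_for_address` below is a stand-in that mimics the None return being discussed, not the real charm-helpers signature):

```python
# Toy model of the keystone hook's fallback: get_iface_for_address
# finds no interface for the VIP because juju-br0 is absent from
# netifaces.interfaces() inside the container, so it returns None and
# Python's short-circuit `or` falls through to the configured vip_iface.
def get_iface_for_address(address, interfaces=("lo", "eth0", "lxcbr0")):
    # juju-br0 never appears in the interface list, so nothing matches
    return None

vip_iface = "juju-br0"  # the value bdx is about to set in charm config
iface = get_iface_for_address("10.16.100.34") or vip_iface
print(iface)  # juju-br0
```

This is why setting `vip_iface` (and `vip_cidr`) works even though the helper itself can't see the bridge: `None or "juju-br0"` evaluates to `"juju-br0"`.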
[23:08] <bdx> thedac: totally
[23:08] <bdx> thedac: I'm redeploying with 'vip_iface' configured to juju-br0
[23:08] <thedac> great, fyi, you may also need vip_cidr set