[14:32] <jcastro> heya lazyPower
[14:32] <jcastro> https://bugs.launchpad.net/charms/+bug/1353535
[14:32] <jcastro> can you look at this when you get a chance?
[15:31] <lazyPower> jcastro: on it
[15:33] <jcastro> lazyPower, <3
[15:42] <jcastro> marcoceppi, lazyPower: also, can you guys rope in adeuring with that vagrant port discussion?
[15:42] <jcastro> sinzui tells me he can help us fix
[15:43] <lazyPower> adeuring: have a look at https://github.com/juju/juju/issues/470 when you've got time.
[15:45] <adeuring> lazyPower: already working on it. At least a simple fix should be ready tomorrow
[15:45] <lazyPower> nice
[15:45] <lazyPower> adeuring: you're the man. hi5
[15:47] <jcastro> adeuring, thanks man, that's awesome!
[15:48] <adeuring> lazyPower: thanks :) But right now I've just changed the port used for the gui. But vagrant also supports a kind of "port collision detection and resolution". It would be nice to use that too -- but I have no clue how the guest machine could see what its port config is. And that would be needed if we want to enable the collision detection
[15:48] <lazyPower> adeuring: I'm somewhat familiar with what you're talking about - I'm pretty sure it's a mapping. You give it options to use on the host, the port on the guest is always static.
[15:49] <lazyPower> so the host could be, say, 8000, 8001, or 9090 - you pass it that array and it attempts each. if it fails it cycles to the next port. once it's exhausted all options it bails out.
[15:50] <lazyPower> the idea is you don't know what the host environment has occupied, but the guest is always expecting to use the same port.
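[Editor's note: the "collision detection and resolution" adeuring mentions corresponds to Vagrant's `auto_correct` option on forwarded ports, combined with `usable_port_range`. A minimal sketch of such a Vagrantfile fragment, with the 8001/6080 ports taken from the discussion above:]

```ruby
# Sketch of a Vagrantfile using Vagrant's built-in port collision handling.
# The guest always serves the GUI on 8001; with auto_correct enabled,
# Vagrant remaps only the HOST side to a free port when 8001 is taken.
Vagrant.configure("2") do |config|
  config.vm.network "forwarded_port", guest: 8001, host: 8001,
    auto_correct: true

  # Optionally restrict which host ports Vagrant may fall back to.
  config.vm.usable_port_range = 8000..9090
end
```

[This is exactly the wrinkle discussed below: Vagrant resolves the collision on the host side, but nothing tells the guest which host port was actually chosen.]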
[15:50] <adeuring> lazyPower: the problem is: the server on port 6080 redirects to 8001 (or some other port in the future). And this redirecting server needs to know what the currently used port is
[15:51] <lazyPower> this sounds like a job for configuration management, and a tuneable config option.
[15:52] <adeuring> lazyPower: perhaps. My main problem: As I understand it, vagrant can select a new port on each run of "vagrant up". So the redirecting server needs to know what the currently selected port for the main gui server is
[15:53] <lazyPower> this is all guest based configuration. The HOST is the only port that's likely to change in this scenario right?
[15:53] <adeuring> in other words: some process on the guest needs get information how is was configured by the host
[15:54] <lazyPower> hmm... maybe i'm not looking at this correctly
[15:54] <lazyPower> adeuring: i'll wait for your patch submission - you said likely tomorrow?
[15:55] <lazyPower> sub me to the MP please. I'd love to look this over so I know what's changing.
[15:55] <adeuring> lazyPower: right. Two patches actually: One for lp:jujuredirector/quickstart, the other for the build scripts
[15:55] <lazyPower> ok. Thanks for the heads up adeuring. I'll keep an eye out for the e-mails when it happens.
[16:05] <rbasak> marcoceppi: could you review my answer in http://askubuntu.com/questions/506647/juju-and-openstack-manual-provisioning/507690#507690 please? I think it's accurate, but I want to make sure.
[16:11] <frobware> I keep running into issues with keystone and agent-state-info: 'hook failed: "shared-db-relation-changed"'. Further debug-log stuff: http://pastebin.ubuntu.com/7971540/  Any clues as to why that now fails? I say now because for some hours this afternoon everything has been fine.
[16:13] <mfa298> frobware: I've had similar issues when deploying a HA cluster but doing a juju resolved --retry keystone/<instance> on the failed one after the primary has finished all its changes seems to fix it.
[16:14] <mfa298> My understanding is when setting up the relationships only one instance can do some of the work (setting up the DB). If the other instances try and get that config before those parts are setup you get an error
[16:14] <frobware> mfa298: I can juju ssh keystone/0 and run the command in the output without error
[16:15] <frobware> mfa298: so maybe some impatience in my script: I have all the deploy and add-relation commands run without any intervening sleeps...
[16:15] <mfa298> even with pauses you might get errors.
[16:16] <frobware> mfa298: so repeatedly run 'resolved' should resolve this?
[16:19] <mfa298> If you've deployed 2 instances of keystone, added HA cluster, then added the relation to mysql, you'll find that only one keystone instance creates the DB etc (I think that's the oldest instance) but both will try and get the DB config. If the 2nd instance tries to get the config before the DB is created (very likely to happen) then it gets an error
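[Editor's note: the retry-until-the-leader-finishes workflow mfa298 describes could be scripted roughly as below. This is a hypothetical helper, not a documented juju tool; the unit name and attempt/sleep values are placeholders:]

```shell
# Hypothetical helper: keep retrying a failed unit until `juju status`
# reports it as started, giving up after a fixed number of attempts.
retry_until_started() {
    unit="$1"
    tries="${2:-5}"
    for i in $(seq "$tries"); do
        # Re-run the failed hook; ignore the error if juju refuses.
        juju resolved --retry "$unit" || true
        if juju status "$unit" | grep -q "agent-state: started"; then
            return 0
        fi
        sleep 30   # give the leading unit time to finish setting up the DB
    done
    return 1
}

# e.g.: retry_until_started keystone/0
```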
[16:19] <frobware> mfa298: no HA here. just single instances.
[16:20] <mfa298> ah ok, the shared-db error seemed to imply you had some sort of HA. But I'm by no means an expert (I've been trying to get it working with HA).
[16:20] <frobware> mfa298: I was curious as to why ssh'ing and running the commands always succeeds, yet any invocation of --retry yields the error in the pastebin
[16:22] <mfa298> I'm not sure about that. I've only seen errors like that with the HA bits
[16:22] <frobware> mfa298, this is with: 1.18.1-trusty-armhf.  I could try 1.20.
[16:26] <frobware> mfa298: ah well, so it's resolved. I've been banging on the --resolved action for about 30 mins prior to talking here. Seems a long time but 'status' reports 'started'. Thanks anyway.
[16:28] <frobware> mfa298: heh, so pretty much all services now report the same error. I'm confused...
[16:29] <frobware> mfa298: Once you enter a 'agent-state: error' do you have to resolve manually or does juju retry automatically?
[16:30] <mfa298> I don't think juju does anything else until something is prodded - but I don't know enough to be sure what actions can lead to things being fixed.
[16:31] <mfa298> I've mostly just used resolved --retry and if that fails try and work out what config is missing, destroy the environment and start again.
[16:32] <mgz> you can also destroy-machine --force as a next step after resolved which is less of a big hammer than taking down the whole env
[16:34] <mfa298> for me at this point it's a good test of the documentation and how repeatable things are.
[16:41] <mfa298> as a side note what's the best way to report bits missing in the documentation (e.g. needing to set secret for openstack-dashboard when doing HA)
[16:42] <frobware> mgz, mfa298: for some of the other things that are failing I get "Access denied for user 'glance'@'192.168.2.251' (using password: YES)".
[16:43] <frobware> And I wonder how relevant this still is: http://www.tikalk.com/alm/solution-mysql-error-1045-access-denied-userlocalhost-breaks-openstack
[18:31] <marcoceppi> rbasak: great answer!
[18:42] <rbasak> marcoceppi: thanks! Just wanted to check it was accurate. I'm not really a charmer - you know about the experimental stuff I've been thinking about though.
[18:42] <rbasak> Not had time to look at that again recently :-/
[18:43] <marcoceppi> rbasak: yeah, I gave it a look and had an email drafted a while ago. Got really busy and forgot to reply. In short: I love it
[18:44] <rbasak> marcoceppi: I made some more progress since then. I really want to sort it out when I can.
[18:44] <rbasak> I've convinced myself that it'll work, so I need to do a more refined PoC next.
[18:57] <jose> did anything happen to the channel topic?
[20:13] <lazyPower> jose: it was 'jorged'
[20:13] <jose> well, expected