=== iceyec_ is now known as iceyec
[07:52] Not sure if this is the right place to ask, but hopefully it's a good start. Is anyone able to help with problems I'm having with metadata services and shared_secret_keys not being passed around properly? Or can someone point me somewhere more appropriate to ask?
=== anthonyf is now known as Guest39525
[10:13] If I use juju set keystone admin-password= once my OpenStack deployment is up, should I expect Horizon to automatically pick this change up?
[10:13] Because it doesn't seem to
[10:56] I am seeing in the Juju 'all-machines.log' several periods where there are lines like: "machine-0: message repeated 46502 times: [2015-08-25 09:49:53 WARNING juju.lease lease.go:301 A notification timed out after 1m0s.]" Where should I start looking? That seems to be causing problems...
[12:39] jcastro: hey, are you submitting a talk for fossetcon?
[13:00] no, I have devops days I am submitting to
[13:00] do you want to submit this one?
[13:06] gnuoy, can you review/land the mongodb c-h sync re: Unsupported cloud: source option trusty-liberty/proposed?: https://code.launchpad.net/~1chb1n/charms/trusty/mongodb/sync-fetch-helpers-liberty/+merge/268413
[13:09] beisner, done
[13:09] gnuoy, thanks. fyi, pxc is in the same boat. working on an mp now.
[13:52] Is anyone else seeing odd failures with juju-deployer? It seems to sometimes 'forget' to deploy a service, and then fails when it tries to add relations to the service it neglected to deploy.
[13:53] Maybe it's receiving out-of-date information when introspecting the environment?
[13:53] so let's say I got persnickety and turned off all of my EC2 VMs associated with my juju environment last night. When I powered them on this morning, fetched the new bootstrap node IP, and updated my .jenv file with the proper bootstrap IP address, does anyone have suggestions as to why juju 1.24.4 cannot seem to get a response? the daemons are running on the state server
[13:53] juju just seems to not be able to reach the node, and I suspect that the provider state file in S3 is the culprit....
[13:54] stub: is this a local charm declaration or a charmstore service?
[13:54] can you telnet on the port?
[13:54] g3naro: let me try that, 1 sec
[13:54] lazyPower: These are local charms
[13:54] if there's a response, then maybe the port is up but the service is not responsive; try restarting the agent on one box if possible
[13:55] g3naro: yeah, the service is up but not responding, and there's a relevant security group rule
[13:55] i'm calling shenanigans
[13:55] hmm
[13:55] stub: I'm not sure why that would be the case. does juju deploy the service just fine manually when you tell it to deploy local:series/foo?
[13:57] lazyPower: It's buried three miles deep in the test suite, so it's hard to tell. I think it might be seeing a dying service that is taking its time to disappear. I might have to wait harder.
[13:58] stub: it would be nice if it had the option to force destroy services, eh?
[13:58] force destroy env, time-wait until the service disappears - so we know it's not an issue.
[14:01] lazyPower: You can force destroy the machines, then you should always be able to destroy the service. But juju is async, so you still need to wait. I just didn't realize I needed to wait for that too.
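A minimal sketch of that "wait harder" sequence, assuming the juju 1.x CLI; the service name foo and the machine numbers are hypothetical. Destruction is asynchronous, so the loop polls status until the service record has really gone:

    # Force-destroy the machines backing the stuck service first
    # (machine numbers are hypothetical).
    juju destroy-machine --force 3 4
    juju destroy-service foo
    # Removal is async: poll until 'foo' no longer appears in status.
    while juju status foo --format=yaml | grep -q 'foo:'; do
        sleep 5
    done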
[14:02] (assuming my guess is right)
[14:03] right
[14:03] i can see that - getting into a scenario where the service is still present in the env but has no units backing it, so you get this weird transient no-units-but-still-a-service situation
[14:19] jcastro: yeah, I'm checking my schedule and it looks like I can go. I think I'll submit a talk for then
[14:20] ack.
[14:20] jcastro: also, is it fine if I book now? later today the agent will be EOD (belgium based)
[14:21] sure, go for it!
[14:21] jose: seb has visa issues so he won't make it. *sadface*
[14:21] jcastro: are you sure? he said he could get one fine... I'll double check in Spanish with him, maybe he meant something else
[14:21] I'll let you know how that goes
[14:21] no, there's like a 20 day minimum
[14:22] next time we will book the hotel like 3 months in advance
[14:22] it's more than 20 days, and afaik they give you the appointment for the next business day
[14:22] oh welp
[14:22] btw, if you need me to bring something from peru lmk
[14:22] bro i would love 2 hot peru girls
[14:23] g3naro: don't think I can take those to the summit :P
[14:23] hehehe worth a shot
[14:26] so on a master node of a Juju setup of a dozen nodes I am getting a few dozen thousand to a few million warnings saying "WARNING juju.lease lease.go:301 A notification timed out after 1m0s."
[14:28] per day... sometimes thousands of those messages per second. And the Juju MongoDB has, perhaps relatedly, grown to ~800 files and 200GB. Where do I start looking?
[15:05] hi gnuoy, pxc is ready for review/land. cleaned up, ch-sync'd, passing. https://code.launchpad.net/~1chb1n/charms/trusty/percona-cluster/liberty-prep/+merge/269210
[15:09] beisner, done
[15:09] gnuoy, ta!
[15:09] np
[15:11] gnuoy, coreycb - with today's mongodb and percona-cluster merges, that completes the Liberty uca ch-sync bit for all of the next charms in all of our tests. Now we're completely clear to find other issues. ;-)
[15:14] beisner, nice, step by step..
[15:14] coreycb, yep.
[15:14] thanks again guys for the effort on all fronts.
[15:18] hi coreycb, are you in a spot where you can review/land this puppy? https://code.launchpad.net/~ack/charms/trusty/keystone/pause-and-resume/+merge/267931
[15:19] beisner, I took a quick glance, but seeing as it's actions, james may want to take a scan
[15:21] coreycb, fyi it's parallel to what we've already landed in swift-storage (https://code.launchpad.net/~adam-collard/charms/trusty/swift-storage/add-pause-resume-actions/+merge/268233), so I think it's good to go.
[15:22] beisner, ah, in that case... I'll take a closer look
[15:22] beisner, I'll look today, tied up in liberty packaging/mentoring atm
[15:22] coreycb, much appreciated
[15:26] lazyPower, any update on https://github.com/juju/juju/issues/470?
[15:28] ah, i haven't looked at that in ages O_O
[15:29] mattyw: I have no updates, but this looks to be something we just need to tune in the vagrant config?
[15:29] swap the port and adjust the python script that's setting up the GUI on boot
[15:29] aisrael: issue 470 linked above is in your fiefdom if you have a moment to take a look and weigh in
[15:30] aisrael: and i swear i'm not passing the buck, just want your opinion as the current liaison/maintainer of the vagrant workflow
[15:31] lazyPower: btw /topic could do with an update ("Office Hours, here July 30'th")
=== lazyPower changed the topic of #juju to: Welcome to Juju! || Juju Charmer Summit Sept. 18-19 US Washington DC - Ask us about details || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP
=== lazyPower changed the topic of #juju to: Welcome to Juju! || Juju Charmer Summit Sept. 18-19 US Washington DC - http://ubunt.eu/KorUSN || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP
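On the lease-timeout flood asked about at 10:56 and 14:26: a reasonable first step is to quantify it on the state server before digging into causes. A rough sketch, assuming a juju 1.x machine 0 with the usual log and database locations (/var/log/juju/all-machines.log and /var/lib/juju/db):

    # Bucket the warnings by hour to see when the flood started.
    grep 'juju.lease lease.go:301' /var/log/juju/all-machines.log \
        | awk '{print $2, substr($3, 1, 2)}' | sort | uniq -c | tail
    # See how much disk Juju's own MongoDB (the juju-db service) holds;
    # ~800 files at 200GB suggests it has never been compacted.
    sudo du -sh /var/lib/juju/db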
[15:33] sparkiegeek: thanks :)
[15:45] lazyPower: reading
[15:56] lazyPower: I'm not sure if that error is still happening. Seeing if I can recreate it now.
[15:56] mattyw: Have you re-tested lately?
[16:00] aisrael, I haven't - I'm the on-call reviewer today, so I'm just going through the issues on github and chasing things that might need to be chased
[16:01] mattyw: Gotcha. I'll follow up soon, thanks!
[16:13] mattyw: commented, but I don't have perms to close. I'm unable to recreate. I believe this has been fixed within the vagrant image since the bug was opened.
=== mrjazzcat is now known as mrjazzcat-afk
=== lifeless1 is now known as lifeless
=== natefinch is now known as natefinch-afk
[21:05] coreycb, upgrade action mp complete and all tests pass: https://code.launchpad.net/~ddellav/charms/trusty/cinder/upgrade-action/+merge/269247
[21:05] ddellav, cool, I'll take a look tomorrow
[21:06] :)
[21:15] niedbalski: ping, mind a quick PM?
=== scuttlemonkey is now known as scuttle|afk
[21:16] also, marcoceppi, http://review.juju.solutions/review/2196 needs clearing from the revq
=== blr_ is now known as blr
=== scuttle|afk is now known as scuttlemonkey
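For reference, the keystone pause/resume and cinder upgrade merge proposals above all ship as charm actions; with juju 1.2x those are driven through the actions CLI, roughly as below. The unit name is hypothetical, and the action id placeholder is whatever the 'do' command prints:

    # Queue an action on a unit; juju prints an action id.
    juju action do keystone/0 pause
    # Fetch the result once the hook has run.
    juju action fetch <action-id>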