[06:09] <trevorj> Hey guys, how's the status on the HA juju charms for grizzly openstack going? Last time I tested it out, there were a couple things I had to change to get grizzly a-workin for a few packages, but they worked very well ;)
[07:43] <AskUbuntu> juju, dnsmasq and `.localdomain` | http://askubuntu.com/q/281628
[08:29] <AskUbuntu> Juju not seeing the MaaS slaves... at least not after some time? | http://askubuntu.com/q/281640
[11:13] <kirminas> Hey, maybe some of you could help. Thanks in advance http://askubuntu.com/questions/281640/juju-not-seeing-the-maas-slaves-at-least-not-after-some-time
[13:27] <sidnei> hazmat: environment snapshot alternative?
[14:03] <wedgwood> is the openstack provider expected to be working in juju-core?
[14:10] <wedgwood> I ask because I get an error when I try to bootstrap: 'error: secret-key: expected nothing, got "<the key>"'
[16:21] <wedgwood> mramm: is the openstack provider working in juju-core?
[16:22] <mramm> wedgwood: The basic openstack support is in, so you can deploy charms, add relationships, etc
[16:23] <wedgwood> mramm: when I try to bootstrap: 'error: secret-key: expected nothing, got "<the key>"'
[16:23] <wedgwood> I've copied over my (working) config from pyju
[16:23] <mramm> the config file has changed somewhat
[16:23] <mramm> and go juju does not accept some of the python keys
[16:24] <wedgwood> ah ok. I spit out the example, but I'm not sure I see where to plug in some of the values
[16:25] <mramm> ok
[16:25] <mramm> did you spit it out with juju generate-config -w
[16:25] <mgz> wedgwood: for this particular case, your issue is probably that keypair auth isn't in yet
[16:25] <mgz> you can use userpass auth for now
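[Editor's note: the userpass configuration mgz suggests would go in `environments.yaml`. The field names below are a sketch from memory of juju-core's openstack provider at the time (`auth-mode`, `auth-url`, etc.) and all values are illustrative placeholders, not authoritative documentation. Note that the pyjuju `access-key`/`secret-key` fields are not accepted here, which matches the error wedgwood reports above.]

```yaml
environments:
  my-openstack:
    type: openstack
    # keypair auth was not yet supported in juju-core, so use userpass
    auth-mode: userpass
    username: your-user            # placeholder
    password: your-password        # placeholder
    tenant-name: your-tenant       # placeholder
    auth-url: https://keystone.example.com:5000/v2.0/
    region: your-region            # placeholder
    control-bucket: juju-some-unique-name
```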
[16:26] <wedgwood> mgz: do you know if juju-core has been used against canonistack?
[16:27] <mramm> wedgwood: yes, it has
[16:27] <mramm> that was the first testing platform
[16:27] <mramm> there are some issues with not having available public IP addresses there
[16:28] <sidnei> wedgwood: last i heard it still requires a public ip address per machine, which makes it unusable with canonistack
[16:28] <mramm> there is a workaround
[16:28] <evilnickveitch> wedgwood, there is a funny glitch in the parsing that will give you an error for supplying the key value, even if you choose "userpass"
[16:29] <wedgwood> mramm: perhaps avoiding expose?
[16:29] <mramm> it is more than that, since juju core actually communicates between the agent and the server on the public ip addresses
[16:30] <wedgwood> anyone have notes? I'd like to give it a try.
[16:31] <wedgwood> if AWS is a path of less resistance, I can get started there, but I'd like to start trying things out in openstack
[17:33] <Erik_> Trying to deploy wordpress.  /var/lib/juju/units/wordpress-1/charm/hooks/config-changed failed with exit code 1.  The output of hook scripts get logged anywhere?
[17:40] <Erik_> I think `unit-get private-address` is failing.  Anyone know how I can look up the CLIENT_ID so I can run `unit-get` from within the container?
[17:41] <sarnold> Erik_: check /var/log/juju -- iirc, there are logs in there..
[18:46] <ev> is there a known issue with pyjuju where it forever sits with "instance-id: pending" without actually creating an instance in openstack (lcy01)?
[18:46] <ev> It seems no matter how many times I try to create a three node cassandra cluster, this happens for the third node. This wasn't always the case, though.
[18:47] <ev> and there it is: ProviderInteractionError: Unexpected 413: '{"overLimit": {"message": "SecurityGroupLimitExceeded: Quota exceeded, too many security groups.", "code": 413}}'
[18:47] <ev> apols, clearly that's on my end
[19:02] <sidnei> ev: indeed, but i think the failure mode could be improved. i also had similar issues where i had api quota exceeded or some other error that caused the instance to end up in ERROR state but show as pending in pyjuju.
[19:03] <ev> sidnei: *nods*
[19:03] <sidnei> by api quota exceeded i mean openstack's api rate limiting, where if you do too many api calls in sequence it forces you to back off
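[Editor's note: the back-off sidnei describes is normally handled client-side by retrying throttled calls with increasing delays. The sketch below is a generic, hypothetical illustration of exponential back-off with jitter, not juju's or novaclient's actual retry code; `RateLimited` is a stand-in for OpenStack's 413 `overLimit` response.]

```python
import random
import time


class RateLimited(Exception):
    """Stand-in for a 413 overLimit error from the OpenStack API."""


def call_with_backoff(request, max_retries=5, base_delay=1.0):
    """Retry `request` with exponential back-off when it signals throttling.

    `request` is any zero-argument callable that raises RateLimited
    when the API asks the client to back off.
    """
    for attempt in range(max_retries):
        try:
            return request()
        except RateLimited:
            if attempt == max_retries - 1:
                raise  # give up after the last attempt
            # Sleep 1s, 2s, 4s, ... plus jitter so bursts don't resynchronise.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
```

With a wrapper like this, an `add-unit -n 15`-style burst degrades into slower progress instead of hard failures.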
[19:05] <ev> oh interesting. I wasn't even aware that existed.
[19:06] <ev> I've always associated the error state with the canonistack node running out of free memory :)
[19:08] <sidnei> if you're a light user you might not notice it, but you will if you do add-unit -n 15 or something more abusive. ;)
[19:18] <ev> heh
[19:19] <ev> canonistack tends to fall over long before I approach 15 units. It's great fun trying to run lp:error-tracker-deployment (~12 units).
[19:19] <sidnei> lol