[11:46] <johnmce> Hi, I'm upgrading my test OpenStack installation to Juno, and I'm stuck trying to get Keystone re-installed. Having failed to update the existing Keystone LXCs in place, I decided to scrap them and create a couple of replacements.
[11:46] <johnmce> In my config file I'm specifying "openstack-origin: 'cloud:trusty-juno'". Keystone fails to install in the new LXC container. The first error I dealt with was "juju-log FATAL ERROR: Could not determine OpenStack codename for version 2014.2".
[11:47] <johnmce> I worked around this with an "apt-get update; apt-get upgrade; add-apt-repository -y cloud-archive:juno; apt-get update"
[11:48] <johnmce> however, I'm now getting this error: juju-log FATAL ERROR: Invalid Cloud Archive release specified: trusty-juno
[11:48] <johnmce> I've tried googling that error, but I'm not finding any answers. Can anyone offer any advice?
[12:34] <johnmce> Further to my earlier question (using Juju and MAAS): when I add keystone, thus spawning LXC containers, how is it that the "/var/lib/juju/agents/unit-keystone-0/charm/hooks/charmhelpers/contrib/openstack/utils.py" file contains no reference to Juno whatsoever, when the same file in the charm I'm using does contain references to Juno?
[12:34] <johnmce> How is it that an older version (icehouse) of this file is being used?
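[Editor's note: both FATAL ERRORs above come from the charm's bundled charmhelpers code, which maps the installed package version to an OpenStack codename; an icehouse-era copy of that code has no entry for 2014.2 (juno), so the lookup falls through. The real code is Python, in the utils.py file johnmce mentions — the sketch below is an illustrative shell rendering of the lookup logic only, not the actual charmhelpers source.]

```shell
# Illustrative sketch of the version -> codename lookup an outdated
# charmhelpers performs. An icehouse-era table stops at 2014.1, so a
# 2014.2 package version hits the fatal-error branch.
openstack_codename() {
    case "$1" in
        2013.1*) echo grizzly ;;
        2013.2*) echo havana ;;
        2014.1*) echo icehouse ;;
        2014.2*) echo juno ;;      # the entry an icehouse-era charm lacks
        *)
            echo "FATAL ERROR: Could not determine OpenStack codename for version $1" >&2
            return 1
            ;;
    esac
}

openstack_codename 2014.2   # prints "juno" with an up-to-date table
```

This is why upgrading the archive by hand only moved the failure one step: the stale charmhelpers copy on the unit still drives the codename and cloud-archive checks.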
[14:24] <jesk> hi
[14:25] <jesk> did a fresh 14.04 installation with maas and juju 1.20.11 from a ppa
[14:25] <jesk> I tried bootstrapping with juju bootstrap --upload-tools --debug
[14:25] <jesk> two things I encountered: one is that /var/lib/juju/nonce.txt is missing, so I have to manually log in on the node and create it
[14:26] <jesk> the next problem, which I couldn't fix yet, is this:
[14:26] <jesk> 2014-11-08 15:14:24 ERROR juju.cmd supercommand.go:323 exec ["start" "--system" "juju-db"]: exit status 1 (start: Unknown job: juju-db)
[14:26] <jesk> 2014-11-08 12:39:19 ERROR juju.provider.common bootstrap.go:122 bootstrap failed: subprocess encountered error code 1
[14:26] <jesk> Stopping instance...
[14:26] <jesk> any help appreciated
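[Editor's note: the manual nonce workaround jesk describes can be scripted; a rough sketch follows, with the target directory made overridable so it can be dry-run off the node (on the real machine it is /var/lib/juju, written as root). The nonce value "user-admin:bootstrap" is an assumption borrowed from juju's manual provisioning path — verify it against the agent config on your node before relying on it. The later "Unknown job: juju-db" error typically suggests the upstart job file /etc/init/juju-db.conf was never written during cloud-init, which is a separate failure.]

```shell
# Sketch of the manual workaround: recreate the nonce file the agent expects.
# JUJU_DIR defaults to a scratch dir for a dry run; on the real node it is
# /var/lib/juju (run as root). The nonce value "user-admin:bootstrap" is an
# assumption from juju's manual provisioning -- check your machine's agent
# config for the expected value.
JUJU_DIR="${JUJU_DIR:-$(mktemp -d)}"
mkdir -p "$JUJU_DIR"
printf 'user-admin:bootstrap' > "$JUJU_DIR/nonce.txt"
cat "$JUJU_DIR/nonce.txt"
```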
[15:38] <jose> anyone else having troubles with ec2?
[15:43] <jrwren> what kind of ec2 troubles?
[15:55] <jose> jrwren: I'm getting the following:     agent-state-info: 'cannot run instances: No default subnet for availability zone:
[15:55] <jose>       ''us-east-1e''. (InvalidInput)'
[16:07] <jose> jrwren: any ideas about what may be going on?
[16:38] <jrwren> jose: yes. A bug was recently closed on that.
[16:38] <jrwren> jose: check your VPC
[16:38] <jrwren> jose: make sure you have a subnet for us-east-1e in your default VPC
[16:39] <jose> jrwren: there was no subnet for us-east-1e, but I created it and it's still giving me the error
[16:39] <jrwren> jose: did you remove the machine and add a new one?
[16:39] <jose> jrwren: yep. lemme destroy my env and re-bootstrap
[17:00] <tvansteenburgh> jose: axw told me you can work around that bug by deleting the default VPC in your region (assuming you don't need it); beware that it can't be undone w/o help from AWS support
[17:00] <tvansteenburgh> jose: or wait for 1.21 beta1 which will contain the fix
[17:00] <jose> tvansteenburgh: I was trying to run some tests and I can only create 3 machines (machine 0 and two services), so I guess I'll have to wait
[17:01] <jose> now, what happens if I delete my VPC? supposedly I won't be able to create machines
[17:03] <jrwren> yeah, I don't recommend deleting the default VPC
[17:04] <jrwren> I deleted mine and I couldn't create machines.
[17:04] <jrwren> I don't know how to tell juju to use a non-default VPC
[17:06] <jose> I hope 1.21-beta1 is released soon. kinda annoying
[17:07] <tvansteenburgh> supposedly coming early next week
[17:14] <jose> cool then. will hold my tests.
[17:29] <jrwren> jose: did you get the error again?
[17:29] <jose> jrwren: bleh, bundletester paused waiting for me to input my password for 00-setup and I didn't know
[17:30] <jose> it's running now, let's check...
[17:32] <jose> jrwren: same error
[17:35] <jrwren> jose: same AZ?
[17:35] <jose> jrwren: yep
[17:35] <jose> us-east-1e
[17:37] <jrwren> i moved to us-west-2
[17:38] <jrwren> but it sure would be nice to find a better workaround.
[17:42] <jrwren> jose: do you have awscli at your disposal?
[17:42] <jrwren> jose: apt-get install awscli and then aws configure and use same creds as that juju environment.
[17:43] <jose> ack!
[17:43] <jrwren> jose: if you could do that and pastebin the output of aws ec2 describe-subnets
[17:46] <jose> jrwren: http://paste.ubuntu.com/8887559/
[17:55] <jrwren> jose: well, heck if I know. It says the us-east-1e subnet has 4000+ addresses available.
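[Editor's note: the describe-subnets check above can be scripted per AZ. A rough grep-based sketch follows; it assumes the flat JSON shape of `aws ec2 describe-subnets` output with no nested arrays inside each subnet object, and the sample data is made up. Note it tests for `DefaultForAz: true` rather than the subnet merely existing — a hand-created subnet in the default VPC is not marked default, which may be why jose's newly created subnet didn't clear the "No default subnet" error.]

```shell
# Rough sketch: given `aws ec2 describe-subnets` JSON on stdin, check whether
# an AZ has a *default* subnet (DefaultForAz true), not just any subnet.
# grep-based, so it assumes flat subnet objects (no nested Tags arrays).
has_default_subnet() {
    az="$1"
    tr -d ' \n' |
        grep -o '{[^{}]*}' |
        grep "\"AvailabilityZone\":\"$az\"" |
        grep -q '"DefaultForAz":true'
}

# Made-up sample standing in for real describe-subnets output:
sample='{"Subnets":[
  {"AvailabilityZone":"us-east-1a","DefaultForAz":true,"SubnetId":"subnet-aaaa"},
  {"AvailabilityZone":"us-east-1e","DefaultForAz":false,"SubnetId":"subnet-eeee"}]}'

echo "$sample" | has_default_subnet us-east-1a && echo "us-east-1a has a default subnet"
echo "$sample" | has_default_subnet us-east-1e || echo "us-east-1e: no default subnet"
```

Against live credentials the same check would be `aws ec2 describe-subnets | has_default_subnet us-east-1e`.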
[17:55] <jose> yeah, I'll have to wait until 1.21-beta1 :P
[17:55] <jose> thanks for your help, though! :)
[18:33] <johnmce> Further to my earlier question (using Juju and MAAS): when I add keystone, thus spawning LXC containers, how is it that the "/var/lib/juju/agents/unit-keystone-0/charm/hooks/charmhelpers/contrib/openstack/utils.py" file contains no reference to Juno whatsoever, when the same file in the charm I'm using does contain references to Juno?
[18:36] <johnmce> Can anyone point me in the direction of any documentation that explains how my target LXC machine receives the charm, so that I can work out where on earth it's getting this outdated charm from?