[00:00] <wallyworld> then you copy the generated files up to the public bucket
[00:00] <wallyworld> hatch: i just tried juju --version and it works for me, what version of juju do you think you have installed?
[00:00] <lamont> error: flag provided but not defined: --version
[00:00] <lamont> 1.12.0-0ubuntu1~ubuntu12.04.1~juju1
[00:00] <hatch> wallyworld: I JUST did it this morning, following the 'install guide'
[00:00] <lamont> anyway, afk for a couple of hours
[00:00] <hatch> trying to see what the typical user would see
[00:01] <hatch> lamont: thanks for confirming :)
[00:01] <wallyworld> lamont: ok, ping us in #juju-dev if you want any more input
[00:01] <wallyworld> hatch: i'm running from source, but "juju --version" worked for me
[00:02] <sarnold> hatch: the python version and the go version have different --version vs version behavior. that's almost a decent way to tell which one you have already.... :)
[00:02] <sarnold> wallyworld: (oh, has that been fixed?)
[00:02] <wallyworld> i didn't realise there was a difference
[00:02] <wallyworld> i've only ever run go juju
[00:02] <hatch> oh that's odd :)
[00:02] <sarnold> ah :))
[00:03] <wallyworld> ian@wallyworld:~$ juju --version
[00:03] <wallyworld> 1.15.0-raring-amd64
[00:03] <hatch> sarnold: thanks for clearing that up
[00:03] <sarnold> wallyworld: neat! yay :)
[00:03] <hatch> 1.12.0-precise-amd64
[00:03] <hatch> here
[00:03] <hatch> on stable
[00:03] <wallyworld> ah ok. there's been lots of fixes post 1.12 i think
[00:03] <wallyworld> we are looking to release 1.14 next week
[00:04] <hatch> well then....someone should update stable :P
[00:04] <wallyworld> for now, there's a 1.13.3
[00:04]  * hatch stokes fire
[00:04] <wallyworld> hatch: agree, i think that will happen
[00:04] <wallyworld> each even number release is considered stable
[00:04] <hatch> but really though thanks - it looks like the go version doesn't handle the -- on 1.12.0
[00:04] <wallyworld> appears so, sorry
[00:04] <hatch> but if it works fine on your version then I won't file a bug
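A hedged reconstruction of the behavior difference discussed above, assembled from the outputs pasted in this log (exact strings vary by release; the bare `version` subcommand shown for 1.12.0 is an assumption based on sarnold's remark):

```console
# go-based juju 1.12.0 rejects the flag form:
$ juju --version
error: flag provided but not defined: --version
$ juju version
1.12.0-precise-amd64

# later go releases (1.15.0 here) accept the flag form:
$ juju --version
1.15.0-raring-amd64
```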
[00:05] <wallyworld> ok
[00:06] <hatch> this happened when running through the Local Configuration setup docs
[00:06] <hatch> it says `juju generate-config --show` which failed
[00:06] <hatch> just fyi
[00:07] <wallyworld> hmmm. seems the docs may need some love then
[00:08] <hatch> I'm guessing that it was just done with a more recent version
[00:08] <hatch> unless even yours doesn't do --show :)
[00:45] <marcoceppi_> lamont: still having issues with private cloud?
[01:14] <wallyworld> marcoceppi_: i think he's afk for a bit, but i got his public bucket url sorted, now he needs to generate image metadata
[01:15] <marcoceppi_> wallyworld: cool
[01:41] <AskUbuntu> Juju and MAAS get error in apt | http://askubuntu.com/q/344064
[06:39] <kenn> Question about the bridge node. Due to budget constraints I can only run a single instance in the cloud. Are there any reasons why I wouldn't want to deploy my services to that first machine 0, or anything I should be aware of when doing that? I've done it locally so it's possible to do.
[07:37] <AskUbuntu> Can I deploy juju on Eucalyptus | http://askubuntu.com/q/344137
[08:08] <fwereade_> jcastro, marcoceppi_: if either of you are around, I'd appreciate advice re resolving doc conflicts caused (at least partly) by header/footer changes
[08:09] <fwereade_> jcastro, marcoceppi_: eg "take your tree, copy in $files from trunk, run tools/build.py, commit, merge trunk in"
[09:35] <AskUbuntu> help: bootstrap error | http://askubuntu.com/q/344168
[13:17] <jcastro> fwereade_: evilnickveitch is your guy there
[13:18] <jcastro> fwereade_: I am pretty sure he strips all that out and regens the footer/header anyway
[13:18] <fwereade_> jcastro, thanks, I chatted to him
[14:09] <jcastro> marcoceppi_: also ... review queue!
[14:09] <marcoceppi_> jcastro: yup :\
[15:02] <evilnickveitch> jcastro, fwereade_ I just posted to the list about the new, super-easy way of creating pages I just sorted out this morning...
[15:03] <jcastro> k
[15:13] <fwereade_> evilnickveitch, cool, thanks
[15:20] <bloodearnest> heya all - am having some problems with juju-deployer (0.2.3) hanging after deploying all services, but before adding relations
[15:21] <bloodearnest> Ctrl-C'ing gives me a tb: https://pastebin.canonical.com/97357/
[15:21] <bloodearnest> same tb every time
[15:23] <bloodearnest> had this happen on 2 different raring machines, on both lxc and openstack (canonistack) envs
[15:23] <bloodearnest> any pointers to fix?
[15:31] <jcastro> http://pad.ubuntu.com/7mf2jvKXNa
[15:31] <jcastro> T minus 30 minutes until the Charm Call!
[15:51] <jcastro> 10 minutes until the juju charm call!
[16:09] <mattyw> jcastro, is there a link to just watch the hangout?
[16:10] <marcoceppi_> mattyw: ubuntu-on-air.com
[16:10] <marcoceppi_> mattyw: http://ubuntuonair.com/
[16:11] <mattyw> marcoceppi_, of course, thanks
[16:48] <Chor> hi there
[17:15] <kurt_> Can you guys tell me if its correct that a deployment fails when there is an existing configuration file for a charm?  I'm seeing this when having destroyed keystone, then trying to redeploy it.  Maybe an adjunct question is when a service such as keystone is destroyed, should it clean up its configuration files cleanly?
[17:24] <marcoceppi> kurt_: if you destroy a service, then try to deploy it again, I believe you get an error about service already deployed
[17:24] <marcoceppi> kurt_: is that the error you're getting?
[17:25] <kurt_> marcoceppi: no, the problem, specifically with keystone was that the /etc/keystone/keystone.conf is left behind
[17:26] <kentb> does anyone have a *fairly current* and recommended set of steps for deploying openstack charms with maas & juju, especially with quantum-gateway.  For whatever reason, with quantum-gateway in the mix, keystone authentication with quantum gets hosed. every. freaking. time.
[17:26] <marcoceppi> kurt_: that's a different issue
[17:26] <sarnold> early charms might expect the unit to be destroyed when the service is unconfigured / terminated / etc. I'm not surprised it didn't do a great job of cleaning up
[17:27] <kurt_> kentb: I've been working on this for some time.  I hope to have something out in the near future
[17:28] <kurt_> there are several guides out there, but you have to be patient and figure it out
[17:29] <kentb> kurt_: yeah, that's the part that's about burned up (patience).  I'll also take whatever you have in the meantime. The quantum-gateway piece is the one that I just can't seem to crack.
[17:30] <kurt_> marcoceppi: this appears to be some conflict in removing 2013.1.2-0ubuntu2~cloud0 and its need for /etc/keystone/keystone.conf
[17:30] <kurt_> kentb: yes, I've run into this too.  you have to plan your deployment topology very carefully
[17:31] <kurt_> kentb: have you seen jamespage's excellent guide? https://wiki.ubuntu.com/ServerTeam/OpenStackHA
[17:33] <kentb> kurt_: yep. I've worked off of that many times.  I don't have enough nodes for the HA part, so, I've tried to make do with 6 physical machines, each with at least two nics.
[17:34] <kurt_> kentb: I'm doing mine completely on VMs
[17:35] <kurt_> kentb: and you are running in to one of the challenges I am facing as well.  I'm consolidating a lot of services to fewer nodes.
[17:35] <kentb> kurt_: yeah, I'm wondering if putting too much on one machine might be hurting me
[17:36] <kentb> (with juju-core)
[17:36] <kurt_> these guys test with a small number of nodes, so they have managed to figure it out
[17:36] <kurt_> if there was one thing I wish were out there (hint hint jcastro)
[17:36] <kurt_> it would be a minimal install blueprint
[17:37] <kurt_> I hope to figure that out on my own with what knowledge I have
[17:37] <kentb> me too!
[17:37] <kurt_> and eventually I am going to create a guide with all of this info - but I'm out in the wild right now
[17:38] <kentb> join the club :)  I feel like I'm really close.
[17:38] <kurt_> keep checking in to this channel.  Since we are on the same track, we should share ideas
[17:39] <kentb> will do!
[17:39] <kentb> and I agree
[17:39] <kurt_> these guys do appreciate the bugs you find, so keep the info flowing
[17:39] <kentb> definitely!
[17:40] <jcastro> kurt_: you mean for openstack?
[17:40] <kurt_> yes sir
[17:40] <jcastro> yeah
[17:40] <kentb> yep
[17:40] <jcastro> adam_g: I'm supposed to talk to you about an openstack bundle actually
[17:40] <kentb> quantum-gateway is kicking my butt
[17:40] <kurt_> that is my numero uno goal right now
[17:41] <kurt_> to get a working openstack deployment on VMs with the minimum install
[17:41] <kurt_> but not the "virtual" method
[17:42] <kurt_> I would love to have a "scalable" blueprint working with maas, juju, juju-gui and openstack
[17:42] <marcoceppi> kurt_: There are a few deployer configs out there for that
[17:43] <kurt_> marcoceppi: yup, but I've not been successful yet in making one work
[17:43] <kurt_> I've gotten close, but no cigar
[17:43] <kentb> same here
[17:43] <jcastro> yeah, so arosales told me this morning that adam_g's been working on a working bundle
[17:43] <marcoceppi> kurt_: kentb thanks for sticking with it. We're working on making this a very strong story in the near future
[17:44] <kurt_> marcoceppi: I know you guys are.  I believe you feel this is an important and compelling use case and story
[17:44] <kurt_> that's why I've been working hard on this
[17:44] <kentb> marcoceppi: my pleasure. I'm learning a *ton*  I'm also working with a big OEM on a whitepaper on how to do this on their hardware.
[17:45] <kentb> and I drew the openstack straw :)
[17:47] <kurt_> marcoceppi: can you share those deployer configs you spoke of to see if there is anything I don't know about?
[17:48] <marcoceppi> adam_g:  where's the most recent version of the openstack deployers? they still in the deployer repo?
[17:48]  * marcoceppi knows you're working with jcastro on this as well
[17:48] <jcastro> I only found out today
[17:48] <jcastro> but I am keen on getting my hands on whatever he has, heh
[17:48]  * marcoceppi thinks we all are ;)
[17:48] <adam_g> marcoceppi, in lp:openstack-ubuntu-testing, juju-deployer -c etc/deployer/deployments.cfg -l
[17:49] <kurt_> are these similar to devstack type things?
[17:49] <adam_g> those are all sort of specific to our lab, using custom charm branches and some lab-specific config
[17:49] <adam_g> gimme a minute and ill put together a vanilla one for a simple deployment
[17:49] <kurt_> adam_g: nice one, thanks mate
[17:50] <kentb> woohoo
[17:50] <marcoceppi> kurt_: so juju-deployer, if you're not aware, is a means to stand up complicated juju environments
[17:50] <jcastro> adam_g: hey so, I am thinking make a simple one, post it on the list, and ask for feedback
[17:50] <marcoceppi> it's just a yaml file that can be used with juju-deployer to deploy, configure, and relate services
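A minimal sketch of such a deployer file; the deployment name, services, and options here are illustrative, not adam_g's lab config:

```yaml
# deployment.yaml -- hypothetical juju-deployer stanza
wordpress-demo:
  series: precise
  services:
    wordpress:
      charm: cs:precise/wordpress
      num_units: 1
    mysql:
      charm: cs:precise/mysql
      options:
        tuning-level: safest
  relations:
    - [wordpress, mysql]
```

Deployed with something like `juju-deployer -c deployment.yaml wordpress-demo`: the tool deploys both services, applies the config options, and adds the relation.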
[17:50] <adam_g> jcastro, yeah, i need to put up a wiki that documents this. i have a WI for it and hope to get to it soon.
[17:50] <kurt_> marcoceppi: I saw that information for the first time at the bottom of jamespage's manifesto
[17:52] <kurt_> marcoceppi: is the juju-deployer discussed at length anywhere?
[17:54] <jcastro> hazmat: maybe it's time to post on the list about deployer as well
[17:54] <kurt_> I would love to read about what it does and how it does it
[17:54] <adam_g> jcastro, http://paste.ubuntu.com/6093510/
[17:55] <adam_g> jcastro, this is a vanilla openstack + ceph. ceph node is single node (with no redundancy), swift is single storage node as well
[17:55] <jcastro> is this that deployments.cfg or is this a new thing?
[17:56] <kurt_> adam_g: how many nodes is this?  is it single-node?
[17:56] <adam_g> kurt_, every service in its own machine
[17:56] <jcastro> ok so you could add-unit to this?
[17:57] <kurt_> how about the quantum-networking - is there a local.yaml or something it is referencing?
[17:58] <jcastro> 2013-09-11 13:57:55 Deployment name must be specified. available: openstack-services', 'precise-grizzly', 'raring-grizzly')
[17:58] <jcastro> which one do I use?
[17:58] <kurt_> I think both kentb and myself have managed to get pretty far, but have not been able to get a complete working set up because of the networking
[17:58] <kentb> yeah, that backfires almost every time...there's something screwy with keystone in the end product that I'm not sure what broke
[17:59] <jcastro> adam_g: where do you want the wiki page to be? I can start documenting this
[17:59] <kurt_> kentb: I don't think keystone is the problem
[17:59] <kurt_> at least in my set ups it wasn't
[18:00] <kurt_> it was the ability to assign an IP and spin up VMs from horizon
[18:00] <kentb> kurt_: really?  What were you hitting?  For me, I was always getting a 401 error if I tried to do anything with quantum.
[18:00] <kurt_> which was probably due to my networking being incorrect
[18:00] <adam_g> jcastro, im not sure. somewhere near the current openstack HA wiki page? dont have URL handy
[18:01] <jcastro> jujuclient.EnvError: <Env Error - Details:
[18:01] <jcastro>  {   u'Error': u'invalid entity name or password',
[18:01] <jcastro>     u'ErrorCode': u'unauthorized access',
[18:01] <adam_g> jcastro, https://help.ubuntu.com/community/UbuntuCloudInfrastructure
[18:01] <jcastro> I get this kind of stuff when I try to run deployer on that pastebin'ed bundle
[18:02] <jcastro> https://help.ubuntu.com/community/UbuntuCloudInfrastructure/JujuBundle for now
[18:04] <jcastro> adam_g: ok, so now, next step is what to do wrt. openstack-service, precise-grizzly, or raring-grizzly
[18:05] <jcastro> I have an environment up and running
[18:06] <jcastro> juju-deployer -c openstack.cfg  -elocal precise-grizzly
[18:06] <jcastro> this seems right? Looks like it's working
[18:06] <kurt_> jcastro: are you able to access horizon and spin up VMs?
[18:06] <jcastro> it's firing up right now
[18:06] <jcastro> gimme like 5 minutes
[18:06] <kurt_> kk
[18:07] <jcastro> juju-deployer -v -c openstack.cfg  -elocal precise-grizzly
[18:07] <adam_g> jcastro, good luck using local provider :)
[18:08] <jcastro> adam_g: I just wanted to get the syntax for the command down, etc.
[18:08] <adam_g> ah, right
[18:08] <adam_g> jcastro, openstack-services is the base deployment, precise-grizzly just inherits and sets series and the config to install grizzly
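A hedged sketch of the inheritance layout adam_g describes; the stanza names come from the deployer output pasted above, but the keys and values shown are illustrative:

```yaml
# base deployment: the full service/relation graph, series-agnostic
openstack-services:
  services:
    keystone:
      charm: keystone
    nova-compute:
      charm: nova-compute
    # ... remaining openstack services and their relations ...

# child deployment: inherits the graph, pins the series and
# sets the config needed to install grizzly
precise-grizzly:
  inherits: openstack-services
  series: precise
  overrides:
    openstack-origin: cloud:precise-grizzly
```

This matches the invocation jcastro settles on below: you pass the child stanza name (`precise-grizzly`) to juju-deployer, and it resolves the base via `inherits`.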
[18:11] <jcastro> adam_g: can you add some info here? https://help.ubuntu.com/community/UbuntuCloudInfrastructure/JujuBundle
[18:11] <adam_g> jcastro,  on a call atm. i will, cant promise its going to happen this week tho
[18:11] <jcastro> k
[18:11] <jcastro> do you think this will fire up on like hpcloud or something?
[18:12] <jcastro> I'd like to see it work at least once!
[18:12] <adam_g> not sure
[18:13] <jcastro> but it works on MAAS?
[18:13] <kurt_> adam_g: if you could include some info around how the networking is handled (i.e. IP/CIDRs, interfaces, public ranges, etc) , I would be grateful
[18:17] <kurt_> jcastro: did VM spin up?
[18:17] <jcastro> a bunch of containers spun up
[18:17] <jcastro> fails on swift-storage-z1
[18:18] <kurt_> anything interesting in debug-log that's an easy fix?
[18:19] <jcastro> http://imgur.com/Bh114KN
[18:19] <jcastro> this is the 2nd time I'm trying it, I'll check debug log
[18:19] <jcastro> failing on the install hook
[18:21] <kurt_> it looks like most of them are stuck deploying
[18:21] <jcastro> yeah, a bunch of them are turning green now
[18:21] <kurt_> nice
[18:22] <kurt_> can you pastebin your install hook error for swift?
[18:22] <jcastro> ok so deployer showed an error
[18:23] <jcastro> but the unit came up just fine
[18:23] <jcastro> and I think I found a bug in nova-compute though
[18:23] <jcastro> adam_g: on the nova-compute charm: http://pastebin.ubuntu.com/6093626/
[18:24] <jcastro> should I report that as a bug?
[18:24] <kentb> ah! that might be what's killing me too...my nova-compute instance was DOA and libvirt was all messed up as one of the symptoms
[18:24] <adam_g> jcastro, full log?  i believe thats one of the many issues you'll hit doing this in containers
[18:26] <jcastro> http://paste.ubuntu.com/6093637/
[18:35] <kurt_> adam_g: wasn't the log-filling issue fixed in 1.13?
[18:35] <kurt_> (juju 1.13)
[18:35] <kentb> kurt_: yep...hasn't come back for me since updating
[18:36] <kentb> bug killed my bootstrap node within a few hours
[18:36] <kurt_> yes, mine too in 1.12
[20:06] <kentb> ok. so if an instance is stuck in 'dying' state is there a good way to nuke it?  I can't terminate the machine b/c we're indefinitely stuck there. Please tell me I don't have to destroy-environment and start over (using juju-core 1.13.3-1-1737).
[20:06] <kentb> the agent-state is 'error' with a hook-failure during config-changed
[20:13] <kentb> nm I ran juju resolved ceph and then that allowed me to kill it
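The recovery sequence kentb landed on, sketched with illustrative unit and machine names (depending on the juju version, `resolved` may want a unit name like `ceph/0` rather than the bare service name):

```console
# a unit with a failed hook blocks removal; mark the hook resolved first
$ juju resolved ceph/0

# now the unit and its machine can be torn down
$ juju destroy-unit ceph/0
$ juju terminate-machine 2
```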
[20:25] <thumper> jcastro: ping
[20:26] <jcastro> yo!
[20:26] <thumper> jcastro: got a few minutes for a hangout?
[20:27] <jcastro> yeah let me finish something
[20:27] <jcastro> ~10 min?
[20:27] <thumper> sure
[20:44] <kurt_> kentb: juju resolved <service>
[20:45] <kurt_> kentb:  juju resolved <service>
[20:45] <kurt_> I just went over this with marcoceppi yesterday
[20:46] <marcoceppi> kurt_: yes, we need to update the documentation for this
[20:46] <kurt_> Can anyone tell me if there is default username/password for console only access for nodes?  I don't have ssh access
[20:47] <marcoceppi> kurt_: everything is done via ssh
[20:47] <kurt_> I'm hosed them
[20:47] <marcoceppi> kurt_: you can try ubuntu with the password ubuntu
[20:47] <kurt_> then
[20:47] <kurt_> that doesn't work
[20:47] <marcoceppi> then I don't think so
[20:48] <marcoceppi> We try to avoid default passwords and users, because that's a vulnerability
[20:48] <kurt_> marcoceppi: what if I am trying to add an interface to a node after the fact?  I keep messing up my routing when I do it
[20:48] <kurt_> sorry, that wasn't clear
[20:48] <kurt_> is there an easy way to add an interface to a node once it's been deployed?
[20:49] <kurt_> I appear to keep screwing up my routing
[20:49] <kurt_> this is on maas btw
[20:49] <kurt_> not that that matters
[20:52] <kentb> kurt_: yep, that unclogged it. thanks!
[20:53] <kurt_> kentb: good stuff
[20:55] <jcastro> hey thumper
[20:56] <marcoceppi> kurt_: no, you can't add interfaces after the fact unless you use upgrade-charm
[20:56] <kurt_> upgrade-charm?
[20:57] <marcoceppi> kurt_: yes. So if you're developing a charm locally, and you deploy using --repository and local:, and you later ADD a relation/interface to the metadata.yaml, you can run `juju upgrade-charm --repository ... <service>` to upgrade the charm and register the new relation/interface
[20:58] <marcoceppi> kurt_: changing interfaces/relations or removing them can be quite dangerous if you don't first remove all relations
[20:59]  * marcoceppi is working on the upgrade-charm docs atm
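A hedged sketch of the local-charm workflow marcoceppi describes; the repository path, charm, and service names are illustrative:

```console
# initial deploy from a local charm repository
$ juju deploy --repository ~/charms local:precise/mycharm

# edit ~/charms/precise/mycharm/metadata.yaml to add the new
# relation/interface, then push the change to the running service:
$ juju upgrade-charm --repository ~/charms mycharm
```

After the upgrade, the new relation/interface is registered and can be related with `juju add-relation` as usual.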
[20:59] <kurt_> so, do I have to entertain the idea of completely statically assigning my maas installation to make openstack work as in jamespage's doc?
[21:02] <kurt_> I am sure I could avoid using the vip parameter, but does that only apply to HA situations?  Or is it a real virtual IP that can be assigned on top of juju's administrative IP?
[21:11] <kurt_> Looks like this is going to be a problem too:  https://bugs.launchpad.net/juju/+bug/1188126
[21:11] <_mup_> Bug #1188126: Juju unable to interact consistently with an openstack deployment where tenant has multiple networks configured <canonistack> <openstack> <serverstack> <juju:New> <juju-core:Triaged> <https://launchpad.net/bugs/1188126>
[23:34] <freeflying> how many constraints from juju python have been implemented in juju-core?