[00:00] then you copy the generated files up to the public bucket
[00:00] hatch: i just tried juju --version and it works for me, what version of juju do you think you have installed?
[00:00] error: flag provided but not defined: --version
[00:00] 1.12.0-0ubuntu1~ubuntu12.04.1~juju1
[00:00] wallyworld: I JUST did it this morning, following the 'install guide'
[00:00] anyway, afk for a couple of hours
[00:00] trying to see what the typical user would see
[00:01] lamont: thanks for confirming :)
[00:01] lamont: ok, ping us in #juju-dev if you want any more input
[00:01] hatch: i'm running from source, but "juju --version" worked for me
[00:02] hatch: the python version and the go version have different --version vs version behavior. that's almost a decent way to tell which one you have already.... :)
[00:02] wallyworld: (oh, has that been fixed?)
[00:02] i didn't realise there was a difference
[00:02] i've only ever run go juju
[00:02] oh that's odd :)
[00:02] ah :))
[00:03] ian@wallyworld:~$ juju --version
[00:03] 1.15.0-raring-amd64
[00:03] sarnold: thanks for clearing that up
[00:03] wallyworld: neat! yay :)
[00:03] 1.12.0-precise-amd64
[00:03] here
[00:03] on stable
[00:03] ah ok. there's been lots of fixes post 1.12 i think
[00:03] we are looking to release 1.14 next week
[00:04] well then....someone should update stable :P
[00:04] for now, there's a 1.13.3
[00:04] * hatch stokes fire
[00:04] hatch: agree, i think that will happen
[00:04] each even-number release is considered stable
[00:04] but really though thanks - it looks like the go version doesn't handle the -- on 1.12.0
[00:04] appears so, sorry
[00:04] but if it works fine on your version then I won't file a bug
[00:05] ok
[00:06] this happened when running through the Local Configuration setup docs
[00:06] it says `juju generate-config --show` which failed
[00:06] just fyi
[00:07] hmmm. seems the docs may need some love then
[00:08] I'm guessing that it was just done with a more recent version
[00:08] unless even yours doesn't do --show :)
[00:45] lamont: still having issues with private cloud?
[01:14] marcoceppi_: i think he's afk for a bit, but i got his public bucket url sorted, now he needs to generate image metadata
[01:15] wallyworld: cool
[01:41] Juju and MAAS get error in apt | http://askubuntu.com/q/344064
=== medberry is now known as med_
=== freeflying is now known as freeflying_away
=== freeflying_away is now known as freeflying
=== CyberJacob|Away is now known as CyberJacob
[06:39] Question about the bridge node. Due to budget constraints I can only run a single instance in the cloud. Are there any reasons why I wouldn't want to deploy my services to that first machine 0, or anything I should be aware of when doing that? I've done it locally so it's possible to do.
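A rough sketch of the checks discussed above, assuming a juju-core install from the Ubuntu packages and the default ~/.juju location; flag support varies by release (1.12 rejects --version, later releases accept it), and older builds may not accept --show on generate-config at all:

    # go juju 1.12 only accepts the subcommand form; newer releases also take --version
    juju version || juju --version
    # write a boilerplate environments file if one doesn't exist yet, then inspect it
    # by hand instead of relying on --show
    juju generate-config
    cat ~/.juju/environments.yaml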
=== rogpeppe1 is now known as rogpeppe
[07:37] Can I deploy juju on Eucalyptus | http://askubuntu.com/q/344137
[08:08] jcastro, marcoceppi_: if either of you are around, I'd appreciate advice re resolving doc conflicts caused (at least partly) by header/footer changes
[08:09] jcastro, marcoceppi_: eg "take your tree, copy in $files from trunk, run tools/build.py, commit, merge trunk in"
=== defunctzombie is now known as defunctzombie_zz
=== freeflying is now known as freeflying_away
=== freeflying_away is now known as freeflying
[09:35] help: bootstrap error | http://askubuntu.com/q/344168
=== freeflying is now known as freeflying_away
=== freeflying_away is now known as freeflying
=== kentb-out is now known as kentb
[13:17] fwereade_: evilnickveitch is your guy there
[13:18] fwereade_: I am pretty sure he strips all that out and regens the footer/header anyway
[13:18] jcastro, thanks, I chatted to him
[14:09] marcoceppi_: also ... review queue!
[14:09] jcastro: yup :\
=== freeflying is now known as freeflying_away
[15:02] jcastro, fwereade_ I just posted to the list about the new, super-easy way of creating pages I just sorted out this morning...
[15:03] k
[15:13] evilnickveitch, cool, thanks
[15:20] heya all - am having some problems with juju-deployer (0.2.3) hanging after deploying all services, but before adding relations
[15:21] Ctrl-C'ing gives me a tb: https://pastebin.canonical.com/97357/
[15:21] same tb every time
[15:23] had this happen on 2 different raring machines, on both lxc and openstack (canonistack) envs
[15:23] any pointers to fix?
[15:31] http://pad.ubuntu.com/7mf2jvKXNa
[15:31] T minus 30 minutes until the Charm Call!
[15:51] 10 minutes until the juju charm call!
=== BradCrittenden is now known as bac
[16:09] jcastro, is there a link to just watch the hangout?
[16:10] mattyw: ubuntu-on-air.com
[16:10] mattyw: http://ubuntuonair.com/
[16:11] marcoceppi_, of course, thanks
=== marcoceppi_ is now known as marcoceppi
[16:48] hi there
[17:15] Can you guys tell me if it's correct that a deployment fails when there is an existing configuration file for a charm? I'm seeing this when having destroyed keystone, then trying to redeploy it. Maybe an adjunct question is: when a service such as keystone is destroyed, should it clean up its configuration files cleanly?
[17:24] kurt_: if you destroy a service, then try to deploy it again, I believe you get an error about service already deployed
[17:24] kurt_: is that the error you're getting?
[17:25] marcoceppi: no, the problem, specifically with keystone was that the /etc/keystone/keystone.conf is left behind
[17:26] does anyone have a *fairly current* and recommended set of steps for deploying openstack charms with maas & juju, especially with quantum-gateway. For whatever reason, with quantum-gateway in the mix, keystone authentication with quantum gets hosed. every. freaking. time.
[17:26] kurt_: that's a different issue
[17:26] early charms might expect the unit to be destroyed when the service is unconfigured / terminated / etc. I'm not surprised it didn't do a great job of cleaning up
[17:27] kentb: I've been working on this for some time. I hope to have something out in the near future
[17:28] there are several guides out there, but you have to be patient and figure it out
[17:29] kurt_: yeah, that's the part that's about burned up (patience). I'll also take whatever you have in the meantime. The quantum-gateway piece is the one that I just can't seem to crack.
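The merge recipe fwereade_ quotes at [08:09] roughly expands to something like the following. This is a sketch only: the branch locations and file names below are placeholders, and only the copy / build / commit / merge ordering comes from the chat:

    # placeholder branch paths -- substitute the actual docs trunk and your own branch
    bzr branch lp:<docs-trunk> trunk
    cd my-docs-branch
    cp ../trunk/htmldocs/*.html htmldocs/   # copy in the regenerated files from trunk
    python tools/build.py                   # regenerate the shared headers/footers locally
    bzr commit -m "regenerate headers/footers"
    bzr merge ../trunk                      # trunk should now merge without header/footer noise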
[17:30] marcoceppi: this appears to be some conflict in removing 2013.1.2-0ubuntu2~cloud0 and its need for /etc/keystone/keystone.conf
[17:30] kentb: yes, I've run into this too. you have to plan your deployment topology very carefully
[17:31] kentb: have you seen jamespage's excellent guide? https://wiki.ubuntu.com/ServerTeam/OpenStackHA
=== CyberJacob is now known as CyberJacob|Away
[17:33] kurt_: yep. I've worked off of that many times. I don't have enough nodes for the HA part, so, I've tried to make do with 6 physical machines, each with at least two nics.
[17:34] kentb: I'm doing mine completely on VMs
[17:35] kentb: and you are running into one of the challenges I am facing as well. I'm consolidating a lot of services to fewer nodes.
[17:35] kurt_: yeah, I'm wondering if putting too much on one machine might be hurting me
[17:36] (with juju-core)
[17:36] these guys test with a small number of nodes, so they have managed to figure it out
[17:36] if there was one thing I wish were out there (hint hint jcastro)
[17:36] it would be a minimal install blueprint
[17:37] I hope to figure that out on my own with what knowledge I have
[17:37] me too!
[17:37] and eventually I am going to create a guide with all of this info - but I'm out in the wild right now
[17:38] join the club :) I feel like I'm really close.
[17:38] keep checking in to this channel. Since we are on the same track, we should share ideas
[17:39] will do!
[17:39] and I agree
[17:39] these guys do appreciate the bugs you find, so keep the info flowing
[17:39] definitely!
[17:40] kurt_: you mean for openstack?
[17:40] yes sir
[17:40] yeah
[17:40] yep
[17:40] adam_g: I'm supposed to talk to you about an openstack bundle actually
[17:40] quantum-gateway is kicking my butt
[17:40] that is my numero uno goal right now
[17:41] to get a working openstack deployment on VMs with the minimum install
[17:41] but not the "virtual" method
[17:42] I would love to have a "scalable" blueprint working with maas, juju, juju-gui and openstack
[17:42] kurt_: There are a few deployer configs out there for that
[17:43] marcoceppi: yup, but I've not been successful yet in making one work
[17:43] I've gotten close, but no cigar
[17:43] same here
=== CyberJacob|Away is now known as CyberJacob
[17:43] yeah, so arosales told me this morning that adam_g's been working on a working bundle
[17:43] kurt_: kentb thanks for sticking with it. We're working on making this a very strong story in the near future
[17:44] marcoceppi: I know you guys are. I believe you feel this is an important and compelling use case and story
[17:44] that's why I've been working hard on this
[17:44] marcoceppi: my pleasure. I'm learning a *ton* I'm also working with a big OEM on a whitepaper on how to do this on their hardware.
[17:45] and I drew the openstack straw :)
[17:47] marcoceppi: can you share those deployer configs you spoke of to see if there is anything I don't know about?
[17:48] adam_g: where's the most recent version of the openstack deployers? they still in the deployer repo?
[17:48] * marcoceppi knows you're working with jcastro on this as well
[17:48] I only found out today
[17:48] but I am keen on getting my hands on whatever he has, heh
[17:48] * marcoceppi thinks we all are ;)
[17:48] marcoceppi, in lp:openstack-ubuntu-testing, juju-deployer -c etc/deployer/deployments.cfg -l
[17:49] are these similar to devstack type things?
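For anyone following along, adam_g's pointer above translates to roughly this sketch, assuming juju-deployer 0.2.x is installed and that the config path is relative to the branch root; the -c and -l flags are the ones shown in the chat:

    bzr branch lp:openstack-ubuntu-testing
    cd openstack-ubuntu-testing
    # -c points at the deployer config, -l lists the deployment names it defines
    juju-deployer -c etc/deployer/deployments.cfg -l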
[17:49] those are all sort of specific to our lab, using custom charm branches and some lab-specific config
[17:49] gimme a minute and I'll put together a vanilla one for a simple deployment
[17:49] adam_g: nice one, thanks mate
[17:50] woohoo
[17:50] kurt_: so juju-deployer, if you're not aware, is a means to stand up complicated juju environments
[17:50] adam_g: hey so, I am thinking make a simple one, post it on the list, and ask for feedback
[17:50] it's just a yaml file that can be used with juju-deployer to deploy, configure, and relate services
[17:50] jcastro, yeah, i need to put up a wiki that documents this. i have a WI for it and hope to get to it soon.
[17:50] marcoceppi: I saw that information for the first time at the bottom of jamespage's manifesto
[17:52] marcoceppi: is the juju-deployer discussed at length anywhere?
[17:54] hazmat: maybe it's time to post on the list about deployer as well
[17:54] I would love to read about what it does and how it does it
[17:54] jcastro, http://paste.ubuntu.com/6093510/
[17:55] jcastro, this is a vanilla openstack + ceph. ceph node is single node (with no redundancy), swift is single storage node as well
[17:55] is this that deployments.cfg or is this a new thing?
[17:56] adam_g: how many nodes is this? is it single-node?
[17:56] kurt_, every service in its own machine
[17:56] ok so you could add-unit to this?
[17:57] how about the quantum-networking - is there a local.yaml or something it is referencing?
[17:58] 2013-09-11 13:57:55 Deployment name must be specified. available: openstack-services', 'precise-grizzly', 'raring-grizzly')
[17:58] which one do I use?
[17:58] I think both kentb and myself have managed to get pretty far, but have not been able to get a complete working set up because of the networking
[17:58] yeah, that backfires almost every time...there's something screwy with keystone in the end product that I'm not sure what broke
[17:59] adam_g: where do you want the wiki page to be? I can start documenting this
[17:59] kentb: I don't think keystone is the problem
[17:59] at least in my setups it wasn't
[18:00] it was the ability to assign an IP and spin up VMs from horizon
[18:00] kurt_: really? What were you hitting? For me, I was always getting a 401 error if I tried to do anything with quantum.
[18:00] which was probably due to my networking being incorrect
[18:00] jcastro, I'm not sure. somewhere near the current openstack HA wiki page? don't have the URL handy
[18:01] jujuclient.EnvError: { u'Error': u'invalid entity name or password',
[18:01] u'ErrorCode': u'unauthorized access',
[18:01] jcastro, https://help.ubuntu.com/community/UbuntuCloudInfrastructure
[18:01] I get this kind of stuff when I try to run deployer on that pastebin'ed bundle
[18:02] https://help.ubuntu.com/community/UbuntuCloudInfrastructure/JujuBundle for now
[18:04] adam_g: ok, so now, next step is what to do wrt. openstack-services, precise-grizzly, or raring-grizzly
[18:05] I have an environment up and running
[18:06] juju-deployer -c openstack.cfg -elocal precise-grizzly
[18:06] this seems right? Looks like it's working
[18:06] jcastro: are you able to access horizon and spin up VMs?
[18:06] it's firing up right now
[18:06] gimme like 5 minutes
[18:06] kk
[18:07] juju-deployer -v -c openstack.cfg -elocal precise-grizzly
[18:07] jcastro, good luck using local provider :)
[18:08] adam_g: I just wanted to get the syntax for the command down, etc.
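To make "it's just a yaml file" concrete, here is a minimal, purely illustrative deployer config and the matching invocation. This is not the contents of adam_g's pastebin; the service names, the keystone admin-password option, and the shared-db relation endpoints are assumptions for the sake of the example:

    # tiny illustrative config: one named deployment with two services and a relation
    cat > demo.cfg <<'EOF'
    demo-stack:
      series: precise
      services:
        mysql:
          charm: cs:precise/mysql
          num_units: 1
        keystone:
          charm: cs:precise/keystone
          options:
            admin-password: openstack
      relations:
        - ["keystone:shared-db", "mysql:shared-db"]
    EOF
    # name the deployment target explicitly, otherwise deployer stops with
    # "Deployment name must be specified"; -e picks the juju environment to use
    juju-deployer -c demo.cfg -e local demo-stack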
[18:08] ah, right
[18:08] jcastro, openstack-services is the base deployment, precise-grizzly just inherits and sets series and the config to install grizzly
[18:11] adam_g: can you add some info here? https://help.ubuntu.com/community/UbuntuCloudInfrastructure/JujuBundle
[18:11] jcastro, on a call atm. i will, can't promise it's going to happen this week tho
[18:11] k
[18:11] do you think this will fire up on like hpcloud or something?
[18:12] I'd like to see it work at least once!
[18:12] not sure
[18:13] but it works on MAAS?
[18:13] adam_g: if you could include some info around how the networking is handled (i.e. IP/CIDRs, interfaces, public ranges, etc.), I would be grateful
[18:17] jcastro: did VM spin up?
[18:17] a bunch of containers spun up
[18:17] fails on swift-storage-z1
[18:18] anything interesting in debug-log that's an easy fix?
[18:19] http://imgur.com/Bh114KN
[18:19] this is the 2nd time I'm trying it, I'll check debug log
[18:19] failing on the install hook
[18:21] it looks like most of them are stuck deploying
[18:21] yeah, a bunch of them are turning green now
[18:21] nice
[18:22] can you pastebin your install hook error for swift?
[18:22] ok so deployer showed an error
[18:23] but the unit came up just fine
[18:23] and I think I found a bug in nova-compute though
[18:23] adam_g: on the nova-compute charm: http://pastebin.ubuntu.com/6093626/
[18:24] should I report that as a bug?
[18:24] ah! that might be what's killing me too...my nova-compute instance was DOA and libvirt was all messed up as one of the symptoms
[18:24] jcastro, full log? i believe that's one of the many issues you'll hit doing this in containers
[18:26] http://paste.ubuntu.com/6093637/
[18:35] adam_g: wasn't the log-filling issue fixed in 1.13?
[18:35] (juju 1.13)
[18:35] kurt_: yep...hasn't come back for me since updating
[18:36] bug killed my bootstrap node within a few hours
[18:36] yes, mine too in 1.12
=== defunctzombie_zz is now known as defunctzombie
[20:06] ok. so if an instance is stuck in 'dying' state is there a good way to nuke it? I can't terminate the machine b/c we're indefinitely stuck there. Please tell me I don't have to destroy-environment and start over (using juju-core 1.13.3-1-1737).
[20:06] the agent-state is 'error' with a hook-failure during config-changed
[20:13] nm I ran juju resolved ceph and then that allowed me to kill it
[20:25] jcastro: ping
[20:26] yo!
[20:26] jcastro: got a few minutes for a hangout?
[20:27] yeah let me finish something
[20:27] ~10 min?
[20:27] sure
[20:44] kentb: juju resolved
[20:45] kentb: juju resolved
[20:45] I just went over this with marcoceppi yesterday
[20:46] kurt_: yes, we need to update the documentation for this
[20:46] Can anyone tell me if there is a default username/password for console-only access for nodes? I don't have ssh access
[20:47] kurt_: everything is done via ssh
[20:47] I'm hosed then
[20:47] kurt_: you can try ubuntu with the password ubuntu
[20:47] then
[20:47] that doesn't work
[20:47] then I don't think so
[20:48] We try to avoid default passwords and users, because that's a vulnerability
[20:48] marcoceppi: if I am trying to add an interface after the fact to a node? I keep messing up my routing when I do it
[20:48] sorry, that wasn't clear
[20:48] is there an easy way to add an interface to a node once it's been deployed?
[20:49] I appear to keep screwing up my routing
[20:49] this is on maas btw
[20:49] not that that matters
[20:52] kurt_: yep, that unclogged it. thanks!
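A sketch of the recovery path kentb lands on above, spelled out; the unit and machine names are examples and the command names are the juju-core 1.13-era ones:

    juju status                  # a failed config-changed hook leaves the unit in agent-state: error
    juju resolved ceph/0         # mark the hook resolved (add --retry to re-run it instead of skipping)
    juju destroy-machine 4       # once the unit finishes dying, the machine can be terminated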
[20:53] kentb: good stuff
[20:55] hey thumper
[20:56] kurt_: no, you can't add interfaces after the fact unless you use upgrade-charm
[20:56] upgrade-charm?
[20:57] kurt_: yes. So if you're developing a charm locally, and you deploy using --repository and local:, and you later ADD a relation/interface to the metadata.yaml, you can run `juju upgrade-charm --repository ...` to upgrade the charm and register the new relation/interface
[20:58] kurt_: changing interfaces/relations or removing them can be quite dangerous if you don't first remove all relations
[20:59] * marcoceppi is working on the upgrade-charm docs atm
[20:59] so, do I have to entertain the idea of completely statically assigning my maas installation to make openstack work as in jamespage's doc?
[21:02] I am sure I could avoid using the vip parameter, but does that only apply to HA situations? Or is it a real virtual IP that can be assigned on top of juju's administrative IP?
[21:11] Looks like this is going to be a problem too: https://bugs.launchpad.net/juju/+bug/1188126
[21:11] <_mup_> Bug #1188126: Juju unable to interact consistently with an openstack deployment where tenant has multiple networks configured
=== CyberJacob is now known as CyberJacob|Away
=== CyberJacob|Away is now known as CyberJacob
=== kentb is now known as kentb-out
=== freeflying_away is now known as freeflying
=== CyberJacob is now known as CyberJacob|Away
=== defunctzombie is now known as defunctzombie_zz
=== defunctzombie_zz is now known as defunctzombie
=== defunctzombie is now known as defunctzombie_zz
[23:34] how many constraints from juju python have been implemented in juju-core?
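And a sketch of the upgrade-charm flow marcoceppi describes at [20:57]; the repository path, charm name, and service names below are illustrative:

    # initial deploy from a local charm repository
    juju deploy --repository ~/charms local:precise/mycharm
    # ...edit ~/charms/precise/mycharm/metadata.yaml to add the new relation/interface...
    # push the updated charm to the running service so juju learns about the new interface
    juju upgrade-charm --repository ~/charms mycharm
    # relations on the new interface can be added afterwards; remove existing relations
    # first if you are changing or dropping interfaces, as warned above
    juju add-relation mycharm otherservice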