=== CyberJacob|Away is now known as CyberJacob
=== CyberJacob is now known as CyberJacob|Away
=== thumper is now known as thumper-afk
[05:20] hola, when i bootstrap (openstack provider) i get the following:
[05:20] juju.errors.ProviderInteractionError: Unexpected 404: '{"error": {"message": "The resource could not be found.", "code": 404, "title": "Not Found"}}'
[05:20] how can i know which resources it cannot find?
=== koolhead11|away is now known as koolhead11
=== CyberJacob|Away is now known as CyberJacob
=== CyberJacob is now known as CyberJacob|Away
=== CyberJacob|Away is now known as CyberJacob
=== CyberJacob is now known as CyberJacob|Away
=== axw_ is now known as axw
=== defunctzombie is now known as defunctzombie_zz
=== thumper-afk is now known as thumper
[10:42] melmoth_: can you access nova's api?
[10:54] freeflying, it's fixed, there was a missing / in my keystone endpoint
[10:54] thanks though :)
[11:10] melmoth_: np :)
[13:10] hazmat: you around?
[13:32] marcoceppi, yup
[13:33] hey hazmat, could you create a ppa in ~juju called "tools"? I think we're going to deprecate the pkgs ppa in favor of a generic tools ppa for charm-tools, juju-deployer, etc
[13:36] marcoceppi, sounds good
[13:36] marcoceppi: did you guys get around to fixing up memcached?
[13:37] I think, once that's moved over, we can start a discussion on the list about removing all the other ppas that are no longer needed. I think stable, devel, and tools would be what's left over ultimately
[13:37] jcastro: who are the other guys in that sentence?
[13:38] weren't you talking to pavel or something about it?
[13:38] you mentioned it in passing at the g+ hangout
[13:38] marcoceppi, done
[13:38] jcastro: ah, he has some merge requests to memcached and haproxy I think
[13:38] I believe they were merged
[13:39] hazmat: thanks!
[13:39] marcoceppi: oh ok, so as far as you know it works with wordpress now?
[13:40] jcastro: that's another thing, that's not uploaded yet. I'm still wrapping up Amulet for the release this week
[13:40] marcoceppi, we should get a copy of ahasenack's build recipes for jujuclient/jujudeployer to populate
[13:40] hazmat: ack, for sure
[13:41] hazmat: I can switch the target ppa in the recipes, or you can create new recipes, I'm ok with either
[13:42] ahasenack, if you're up for switching the target that sounds good for now.. do the recipe packaging branches need bumping on version increments of the underlying packages? if so i'd like to move them to a group branch account.
[13:43] hazmat: they do need that, in the debian/changelog file
[13:43] marcoceppi: looks like your comment to the VPN endpoint bumped it to the end of the queue line. :-/
[13:43] jcastro: I know, I have a few charms on my short list for review this week
[13:43] ahasenack: you're experienced with juju now, you should consider joining ~charmers and helping us review incoming charms!
[13:43] that being the top of the list
[13:44] hazmat: I think it's best you copy the recipe and the packaging branch, I'm not in ~juju and can't upload to that ppa, and it sounds better to have the branch and recipe owned by a group, not a person
[13:44] jcastro: I'm not so sure, I don't even have a charm of my own yet
[13:45] ok
[13:45] ahasenack: let me know when you start one. :)
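For anyone hitting the same bootstrap 404 melmoth_ reports at the top of the log: a sanity check along these lines would have surfaced the endpoint typo. A sketch only — the openrc filename and CLI names assume a stock OpenStack client install:

    # source the OpenStack credentials for the environment in question
    . openrc
    # the identity endpoint juju uses must match auth-url: in
    # ~/.juju/environments.yaml character for character -- a missing or
    # stray '/' (e.g. :5000/v2.0 vs :5000/v2.0/) is enough for a 404
    env | grep OS_AUTH_URL
    grep auth-url ~/.juju/environments.yaml
    # list every endpoint keystone advertises, to compare against
    keystone catalog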
[13:45] https://code.launchpad.net/~ahasenack/+recipe/python-jujuclient-daily and https://code.launchpad.net/~ahasenack/+recipe/juju-deployer-daily
[13:45] hazmat: ^^^
[13:45] packaging branches are in the recipes
[13:45] jcastro: :)
[13:46] ahasenack, will do, thanks again for packaging these up
[13:46] the websocket client has no recipe, as I would have to first mirror it in LP, by creating an LP project, and then have the recipe
[13:47] i've been tempted to just include it in the jujuclient .. it's a single module.
[14:05] marcoceppi: I'm not being successful in asking juju to use my custom image, like those two askubuntu questions from yesterday also weren't
[14:05] marcoceppi: I sent an email to the list
[14:06] ahasenack: saw that, thanks. I'll keep an eye on the list in hopes of answering those two questions
[14:07] marcoceppi: it's also blocking me in further openstack tests, due to bug #1188126
[14:07] <_mup_> Bug #1188126: Juju unable to interact consistently with an openstack deployment where tenant has multiple networks configured
[14:07] my custom image has a workaround, but I can't launch it with juju
[14:07] so, kaput
[14:08] on to something else
[14:08] ahasenack: So, I wonder if it's a recent bug. With 1.11.1 I was able to upload custom image metadata to az3 of hp cloud, which doesn't have any juju-dist information
[14:09] marcoceppi: I don't even know where to download it
[14:09] marcoceppi: I tried juju-dist/, and streams/ directly
[14:09] so, I have juju-dist/streams/v1/stuff
[14:09] and streams/v1/stuff
[14:09] completely ignored
[14:10] maybe the product-streams service from the keystone catalog is overriding that, and that's the bug
[14:10] ahasenack: So with az3 I've got a juju-dist bucket I created with a streams/v1/... directory
[14:10] ahasenack: but I don't think hp cloud has product-streams, that's something I can't confirm.
[14:10] marcoceppi: "keystone catalog" doesn't work against it?
[14:10] ahasenack: I haven't tried
[14:10] ok
[14:12] I wish I had more time to play with this problem this week. If you don't get an answer by Monday I may try to poke at it for a bit
[14:13] https://bugs.launchpad.net/juju-core/+bug/1185143 would help if it were fixed
[14:13] <_mup_> Bug #1185143: bootstrap -v needs to show the swift/s3 action
[15:01] marcoceppi, so some discussion and the desire seems to be to push the tools directly into the core ppas, ie (devel and stable)
[15:02] hazmat: my concern with that is we don't have a very clear release cadence on most of the tools. So they're basically all "devel"
[15:03] heya all - is there any way to make 'juju ssh / "command"' return output from the command?
[15:03] marcoceppi, true, but the key distinction might be that it works with the juju in the same ppa
[15:03] seems to swallow it by default
[15:05] hey marcoceppi
[15:05] so brandon put the wrong jorgecastro in the github group but you appear to be in there, we need to redirect jujutools.github.com to the right place
[15:06] jcastro: I have no idea how to do that. Let me just see if i can add you
[15:07] jcastro: what's your gh username?
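The bucket layout marcoceppi describes for the az3 workaround, roughly (a sketch — the juju-dist name and streams/v1 paths are what juju-core of this era looks for; the tools tarball name and the swift upload invocation are illustrative and assume the json files are laid out the same way locally):

    # juju-dist/                               public bucket
    #   tools/juju-1.10.0-precise-amd64.tgz    agent tools tarballs
    #   streams/v1/index.json                  simplestreams index
    #   streams/v1/imagemetadata.json          per-region image ids
    swift upload juju-dist streams/v1/index.json streams/v1/imagemetadata.json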
[15:07] castrojo
[15:07] If you can add me I can handle it
[15:08] jcastro: added
[15:09] ta
[16:08] jcastro: ping
[16:08] pong
[16:08] jcastro: just working on juju set --default to simply set an option to its default value
[16:09] jcastro: background is to also use juju set option= to set an empty string
[16:09] jcastro: but this would also lead to a change of the charm configs
[16:10] so are you asking if we should do that or if we're doing that already?
[16:10] jcastro: today empty strings remove a value, so no empty strings
[16:10] jcastro: yep, you got it ;)
[16:10] jcastro: I don't wanna break compatibility
[16:11] I think that's list material there
[16:11] see what other charmers think
[16:12] jcastro: ok, will do
[16:12] jcastro: thx
[16:12] TheMue: I think quite a few charms would probably break if you could "unset" a configuration value
[16:13] marcoceppi: yeah, I feared that
[16:15] marcoceppi: unsetting a config value should cause it to revert to its default value from the charm hook's point of view
[16:16] rogpeppe: I don't think I grasp the scope of this change then. I look forward to the list post for clarification
[16:20] marcoceppi: set option= today leads to a reset to the default. with the introduction of set --default option it isn't needed anymore, and set option= can be used to set option to an empty string (not possible today)
[16:21] TheMue: good point
[16:21] marcoceppi: you can already unset a configuration value
[16:21] TheMue: Oh, interesting. I don't think that'll have much of an impact actually. Since, and I may be wrong, I was under the impression that set option= in pyjuju set it to an empty string and not the default
[16:21] marcoceppi: but if there's a string config value with a default that's non-empty, you can't currently set it to empty
[16:22] marcoceppi: ha ha
[16:22] marcoceppi: that's what we want to make it do!
[16:22] rogpeppe: yep
[16:22] thinking on uptime of our services, I see juju upgrade-juju in juju-core. I'm about to test a deployment from ppa -> trunk... is a simple juju upgrade-juju all I need for a seamless upgrade that will leave my running units intact?
[16:22] marcoceppi: i think that it's possible that *was* the py juju behaviour, but i'm not sure
[16:22] rogpeppe: right! TheMue I'd email the list just for general awareness, but I don't see many people putting up a stink about this :)
[16:23] rogpeppe: I could pull out pyjuju and test, but I don't need any more pain for today ;)
[16:23] marcoceppi: ;)
[16:23] blackboxsw: i'd upgrade to 1.12 before upgrading to trunk
[16:23] blackboxsw: what version are you using currently?
[16:23] * rogpeppe needs to write an email to juju-dev about that
[16:23] 1.11-4
[16:23] * rogpeppe goes off to do that
[16:24] * TheMue too
[16:24] blackboxsw: i advise downloading https://launchpad.net/juju-core/1.12/1.12.0/+download/juju-core_1.12.0-1.tar.gz
[16:24] 1.11.4 more or less is 1.12. I'm not sure of the nuances for upgrading juju in place to trunk with upload tools, etc
[16:24] blackboxsw: then building that and upgrade-juju to that (using --upload-tools)
[16:25] rogpeppe: so you need to run upgrade-juju --upload-tools?
[16:25] * marcoceppi adds this to the list of things we need to document
[16:25] marcoceppi: hmm, actually, perhaps it's easier than that
[16:25] will do. I saw https://code.launchpad.net/~fwereade/juju-core/fix-upgrade-carnage/+merge/173972 which looks like it addresses a similar upgrade path issue
[16:25] but I think that was 1.10 that was a problem
[16:26] okay in either case. I'll give both a whirl as it's a dev deployment anyway... will report on the success of 1.11.4-1514~raring too
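rogpeppe's recommended path, condensed into commands (grounded in what he spells out just below; the version numbers are the ones under discussion):

    # step through 1.12 first; works against a public tools bucket
    juju upgrade-juju --version 1.12.0
    # wait until every machine and unit reports 1.12.0 as its version
    juju status
    # then, from a locally built trunk checkout, push and upgrade to it
    juju upgrade-juju --upload-tools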
[16:26] blackboxsw: according to the release notes, minor version increments should work, 1.11.1 -> 1.11.2, etc
[16:26] blackboxsw: Yeah, I'd be interested in your experience with the upgrade process
[16:27] blackboxsw: if you're using an environment with a public tools bucket, you should be able to do juju upgrade-juju --version 1.12.0
[16:27] blackboxsw: then wait for all the units and machines to report 1.12.0 as their version
[16:28] blackboxsw: then juju upgrade-juju to a later version
[16:28] blackboxsw: e.g. current trunk
[16:28] ahh, and if I use --upload-tools?
[16:28] blackboxsw: that should be ok *after* you've upgraded to 1.12
[16:28] makes sense
[16:28] thx
[16:28] blackboxsw: because 1.12 has some specific code (hacks) in it that propagates some information that 1.10 didn't propagate
[16:29] blackboxsw: i've just removed those hacks from trunk because they were making things hard
[16:30] blackboxsw: which means that any upgrade path from 1.10 needs to go through 1.12 to make things work ok
[16:30] blackboxsw: i've just tested that it works ok
[16:30] ahh got it ok
[16:36] jcastro: I feel a bit silly
[16:36] wrt the subordinate discussion
[16:37] jcastro: so, the implicit relation works pretty straightforwardly. If no previous interfaces match, and there's a juju-info interface with scope: container, it'll deploy the subordinate to the other matching service in the add-relation command
[16:38] juju add-relation wordpress subordinate should "just work" unless there's another matching relation, in which case `juju add-relation wordpress subordinate:juju-info` should suffice
[16:49] is there any way to find out from a juju bootstrap node why machines are stuck in pending status?
=== natefinch is now known as natefinch-lunch
[16:56] dreverri: which provider are you using?
[16:56] ec2
[16:57] I am using the OS X client if that matters
[16:57] dreverri: probably unlikely to be the case, but did you check with the ec2 api that the machines have been started successfully?
[16:57] they have not been started
[16:57] I only see the bootstrap node
[16:58] dreverri, probably best is to login to the bootstrap node and inspect the provisioning agent log
[16:58] in /var/log/juju
[16:58] can juju tell me the public address of the bootstrap node?
[16:59] or just grab it from the aws console?
[16:59] dreverri, juju status should have it
[16:59] juju status is only showing the deployed machines in pending
[16:59] perhaps I broke something
[16:59] in my config
[16:59] dreverri, machine 0 should be running
[17:00] dreverri, else juju status wouldn't work
[17:00] machine 0 refers to the first unit of the deployed service
[17:00] dreverri, the bootstrap node is provisioned from the client, subsequent ones are done by code running on that bootstrap node
[17:00] dreverri, can you pastebin your juju status output
[17:01] http://pastebin.com/sxqvPs71
[17:05] @hazmat any thoughts?
[17:06] dreverri, that's quite strange
[17:07] rogpeppe, ^ is that status even possible.. what's the client even connecting to
[17:07] dreverri, yeah. get the addr from the console, i'm very curious to see the machine/juju log from that machine
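What hazmat is suggesting for stuck-pending machines, spelled out (a sketch — the exact log file names under /var/log/juju vary between juju versions):

    # machine 0's dns-name in the status output is the bootstrap node
    juju status
    # log in and watch the provisioning agent's log for the failure
    juju ssh 0                        # or plain ssh ubuntu@<machine-0-address>
    tail -f /var/log/juju/*.log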
[17:08] ok; I'll grab that in a sec
[17:08] thank you
[17:08] it looks like status reports based on the db state only, not provider queries, and in this case the db doesn't have the normal provider machine state stored
=== natefinch-lunch is now known as natefinch
=== defunctzombie_zz is now known as defunctzombie
=== CyberJacob|Away is now known as CyberJacob
[17:46] Hey, can anyone help me figure out why juju is telling me that it can't find the precise image?
[17:48] bryanmoyles: something with simplestreams data I suspect, is it pyjuju or juju-core? Also, which cloud?
[17:48] juju-core, on a private openstack installation. I've set up the proper access via the swift ACL to allow public access, but I still get the same error
[17:49] I have the images in juju-dist/tools/IMAGE, I'm going to try to move it to the root level of the container, I just feel like my structure is off somewhere
[17:49] hm, yeah, that's not going to work like that
[17:49] are they supposed to be called "juju-1.10.0-precise-amd64.tgz"?
[17:49] Why wouldn't it work?
[17:49] there are two things you need, tools and simplestreams
[17:50] juju-dist/tools is for the tools, those tarballs, not images
[17:50] Oh okay, I don't have simplestreams, is that a juju init command?
[17:50] no, it's way more complicated than that, I'm also fighting it at the moment with a private cloud
[17:50] bryanmoyles: it's what juju uses to look up the image id
[17:50] bryanmoyles: see if you have the juju image-metadata command
[17:51] I actually do have that, I have that set up in /streams, not simplestreams, sorry
[17:51] I do have an image-metadata file as well
[17:51] ok, so the theory is that you have to upload those two json files it creates to swift
[17:52] now, I'm not sure about where exactly. I *think* to juju-dist/
[17:52] http://collabedit.com/9tg7m
[17:52] so you would have juju-dist/streams/v1/
[17:52] right, that's exactly where I have them, but where do the actual images belong?
[17:52] I did that and it's not working for me, but it might be because my cloud does publish product-streams in the keystone catalog and a bug is preventing me from overriding that
[17:52] bryanmoyles: the images are in openstack proper, glance
[17:53] bryanmoyles: you supposedly did a glance image-list to get the id of the image you want to use
[17:53] ohhh
[17:53] juju image-metadata -a amd64 -e http://10.103.8.1:5000/v2.0 -i d7e2ea12-cb50-4687-b5e1-d90f0656164a -n openstack -r RegionOne -s precise
[17:53] right
[17:53] that's the command I ran, so I need to have first created an image and put that image's ID in place of the d7*?
[17:54] well, yes, what is d7e2ea12-... if you didn't do that?
[17:54] straight from a blurb on the web, I made a very large assumption there haha, one sec, let me try that with the image id for the os I uploaded
[17:55] yeah :)
[17:58] hem, I'm still getting the same error, why does it complain about "no "precise" images" when the metadata has given it an image id? Is there something I need to do on the image itself to identify that it's a "precise" version of ubuntu?
[17:59] not that I know of, but of course, if it's not precise things might break as soon as it's launched
[18:00] but it should be found and an attempt made to launch it
[18:00] so you regenerated those two json files and uploaded them again to juju-dist/?
[18:00] do you also have public-bucket-url set in environments.yaml for this env?
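The lookup chain ahasenack describes, as commands (a sketch; the image id is a placeholder, and -e can be dropped when OS_AUTH_URL is already sourced, as comes up below):

    # find the id of the precise image registered in glance
    glance image-list
    # generate the simplestreams json files (written into ~/.juju/) for that id
    juju image-metadata -a amd64 -r RegionOne -s precise -i <image-id-from-glance>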
[18:00] well here's the question, as an image I uploaded my own version of an ubuntu 12.04 iso, how would I use their 2MB .tgz files as "images"? Yes to both of your questions :)
[18:01] I have a glance command line to import images
[18:01] download a file like this: ubuntu-12.04-server-cloudimg-amd64-disk1.img
[18:02] glance image-create \
[18:02]     --container-format bare \
[18:02]     --disk-format qcow2 \
[18:02]     --is-public True \
[18:02]     --name ${name%.*} \
[18:02]     --file "$image_file" > /dev/null
[18:02] file is the .img one, name is whatever you want
[18:03] http://uec-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img should be suitable, right?
[18:03] yes
[18:03] k one sec, downloading and trying your command
[18:03] then get the image id with glance image-list
[18:03] and use that in the metadata command
[18:04] I didn't need to specify -e, it grabbed that from the environment (I had that openrc.sh sourced before)
[18:04] hm, try not using -n
[18:04] that will prefix the files with that string, I don't think it's right
[18:05] -n openstack, I mean, in the metadata command
[18:05] the files should be index.json and imagemetadata.json
[18:05] so try dropping -e AND -n? or just -n?
[18:07] mostly n
[18:07] if you have openrc sourced, -e shouldn't be necessary either
[18:07] maybe you got the wrong value for it, for example
[18:08] is that the keystone endpoint?
[18:08] looks like it
=== defunctzombie is now known as defunctzombie_zz
[18:10] Yeah it's the keystone url, just got the image uploaded, trying the image-metadata again
[18:13] 2013-08-01 18:13:18 INFO juju tools.go:52 environs: filtering tools by series: precise
[18:13] 2013-08-01 18:13:18 INFO juju tools.go:75 environs: picked newest version: 1.10.0
[18:13] 2013-08-01 18:13:19 ERROR juju supercommand.go:234 command failed: cannot start bootstrap instance: no "precise" images in RegionOne with arches [amd64 i386]
[18:14] That error confuses me, when I'm explicitly telling it what image to use
[18:14] Is it possible that this is because juju's tools are for 11.10 and I'm using a 12.04 image?
[18:16] no, I don't think it's about tools
[18:16] can you paste the two json files that were generated?
[18:16] 11.10? really? that's been out of support for almost three months..
[18:17] http://collabedit.com/9tg7m
[18:17] I have both files pasted into there
[18:17] bryanmoyles: that one still has the openstack prefix from -n
[18:17] line 19
[18:18] was the index file named index.json, or openstack-index.json?
[18:18] juju image-metadata -a amd64 -e http://10.103.8.1:5000/v2.0 -i 97967ab3-9312-493e-8487-e78c2d822ac9 -r RegionOne -s precise
[18:18] I don't think it knows how to look up anything other than index.json
[18:18] oh goodness
[18:18] lol, I just uploaded the old files, I never realized new ones were created alongside, one second lol
[18:18] just rm -f .juju/*.json
[18:18] thank you for dummy-proofing me :-P
[18:19] :)
[18:19] new-host-4:~ bryanmoyles$ rm -rf ~/.juju/*.json
[18:19] new-host-4:~ bryanmoyles$ juju image-metadata -a amd64 -e http://10.103.8.1:5000/v2.0 -i 97967ab3-9312-493e-8487-e78c2d822ac9 -r RegionOne -s precise
[18:19] uploading now
=== defunctzombie_zz is now known as defunctzombie
[18:20] barge, same error, pasting the new file contents
[18:21] ok
[18:21] is this because I don't have a "release" name?
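ahasenack's import recipe from above, stitched into one runnable sketch (the image URL is the one agreed on in the chat; the variable names are just for illustration):

    # fetch the official precise cloud image and register it with glance
    image_file=precise-server-cloudimg-amd64-disk1.img
    wget http://uec-images.ubuntu.com/precise/current/$image_file
    glance image-create \
        --container-format bare \
        --disk-format qcow2 \
        --is-public True \
        --name "${image_file%.*}" \
        --file "$image_file" > /dev/null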
[18:21] hm, release is empty
[18:21] I wonder
[18:22] try editing the file before the upload, put precise in there
[18:22] mine is also empty
[18:22] * ahasenack tries
[18:22] I tried that, still didn't work, I wish juju -v was more verbose
[18:23] me too, I wanted to see from where it is fetching the simplestreams data
[18:23] maybe there is a silly 404 happening in there
[18:23] I tried tcpdumping the traffic, and in my case it was actually peeking at my index.json file
[18:24] but it gave up for some reason, never loaded the other file, which is what has the image id
[18:25] so did you also get stumped at the stage that I'm at?
[18:26] wait, what the heck is this ip? 10.103.8.1
[18:26] I never put that there, is juju making a bad assumption?
[18:26] or is that just the region's ip once juju is on that device
[18:32] sorry, was on the phone
[18:33] bryanmoyles: that ip is your -e parameter
[18:33] darn it, another terrible assumption lol
[18:34] these stream json files definitely go in juju-dist (public bucket), not the control bucket, right?
[18:35] right, juju-dist
[18:35] well, I don't know about "definitely"
[18:35] is this where you ultimately got stuck, or were you able to get past this error?
[18:36] not past the error yet, but I just got a tip I'm trying
[18:38] bryanmoyles: got it to work!
[18:38] wow! how?
[18:38] bryanmoyles: so did you fix -e?
[18:38] I used localhost instead of 10., not sure if I should have that be the local or external ip
[18:39] otherwise, I would need to change it to 192.168.1.201
[18:39] bryanmoyles: in my case, the url from -e and the one in environments.yaml had a tiny difference
[18:39] bryanmoyles: a trailing slash (/)
[18:39] :5000/v2/ versus :5000/v2
[18:39] should it have one or be without it?
[18:39] doesn't matter, it has to be the same
[18:39] so should the -e IP be localhost, or the IP from the machine hosting juju?
[18:39] it has to be the same in .juju/environments.yaml, in the index.json file and in the OS_AUTH_URL shell environment variable
[18:40] bryanmoyles: it's the keystone auth url from your openstack cloud
[18:40] bryanmoyles: do you have an openrc.sh file or something that you source so you can run nova, glance, etc, commands?
[18:40] kk, btw I can confirm that mine are also different
[18:40] I believe I do on the machine ". openrc"
[18:40] bryanmoyles: ok, so source that file, don't specify -e in image-metadata
[18:41] bryanmoyles: and check that the one from openrc is identical to the one in environments.yaml
[18:41] bryanmoyles: the image-metadata command will grab the one from the environment if you don't specify -e
[18:41] so I need to install juju-core on the openstack machine?
[18:41] no
[18:42] bryanmoyles: do env | grep OS_AUTH_URL
[18:42] oh k one sec
[18:44] looks like I don't have an openrc.sh, could have sworn I did
[18:44] OS_AUTH_URL=http://localhost:5000/v2.0
[18:45] that doesn't look right
[18:45] go to horizon, login, grab the openrc file from there (api credentials)
=== defunctzombie is now known as defunctzombie_zz
[18:46] That's in the Admin panel?
[18:47] in the project one iirc
[18:47] on the left
[18:47] you should use a regular user, not admin
[18:47] is it okay to use admin for now just to get the hang of things?
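The rule ahasenack states just above, as a quick check (a sketch; the grep on 5000 is only a heuristic, 5000 being keystone's default port — the url must be byte-identical, trailing slash included, in all three places):

    grep auth-url ~/.juju/environments.yaml   # 1. the juju environment config
    grep -h 5000 ~/.juju/*.json               # 2. the generated simplestreams json
    env | grep OS_AUTH_URL                    # 3. the sourced openrc credentials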
[18:47] I sound like a sudo (ab)user
[18:48] probably
[18:48] kk downloaded the file
[18:48] see what it has for OS_AUTH_URL, just to check it's not localhost
[18:48] export OS_AUTH_URL=http://192.168.1.201:5000/v2.0
[18:48] ok, that looks better
[18:49] is that what you have in .juju/environments.yaml too? as auth-url:?
[18:49] new-host-4:~ bryanmoyles$ cat ~/.juju/environments.yaml | grep auth
[18:49] auth-url: http://192.168.1.201:5000/v2.0
[18:49] ok
[18:49] do you have public-bucket-url in environments.yaml too?
[18:50] public-bucket-url: http://192.168.1.201:8080/v1/AUTH_67de617c62d0475eb23d82f5c021f866/juju-dist/
=== defunctzombie_zz is now known as defunctzombie
[18:50] drop juju-dist from that
[18:50] does that look right? Should I have juju-dist in there?
[18:50] ok
[18:50] bryanmoyles: do this
[18:50] bryanmoyles: keystone catalog | less
[18:50] bryanmoyles: look for Service: object-store
[18:51] bryanmoyles: grab its publicURL
[18:51] bryanmoyles: and use that as public-bucket-url
[18:51] mind the slashes
[18:51] http://192.168.1.201:8080/v1/AUTH_67de617c62d0475eb23d82f5c021f866
[18:51] exactly like that?
[18:51] yes
[18:51] bryanmoyles: is that in keystone catalog like that?
[18:51] yeah, in the block for object-store
[18:51] ok
[18:51] BY GOLLY!
[18:52] 2013-08-01 18:51:54 INFO juju provider.go:781 environs/openstack: started instance "4cf7253b-b06f-404d-ab77-e1cc925d69dc"
[18:52] 2013-08-01 18:51:56 INFO juju supercommand.go:236 command finished
[18:52] yay
[18:52] the impossible happened
[18:52] wow man, how would you rate yourself 1 - 10 on openstack?
[18:52] hm
[18:52] 6
[18:52] many things I don't know about it
[18:54] Tears to my eyes to see an instance running!
[18:56] so when it launches these instances, can I ssh right in (granted I have a security group established)? ie. do the juju cloud instances fully bootstrap?
[18:58] bryanmoyles: the bootstrap instance has no deployed service per se, you shouldn't need to ssh into it
[18:59] bryanmoyles: the fun begins now with juju deploy commands
[18:59] oh!
[18:59] duh! so just try "juju deploy wordpress" per se?
[18:59] bryanmoyles: bootstrap is the coordinator
[18:59] yes
[19:00] let me try this
[19:00] should I be able to curl 10.11.12.2 and see a wordpress page from the openstack machine?
[19:01] not yet, wordpress needs a database, mysql
[19:01] then you need to relate them (juju add-relation wordpress mysql)
[19:01] and then you can hit the wordpress ip after all that has happened
[19:01] juju deploy wordpress takes quite a while, should that be the case?
[19:01] (no expose?)
[19:01] you might need expose too, yes
[19:01] okay, let me find that walkthrough guide for the hello wordpress example
[19:01] bryanmoyles: it will download stuff from the internet, if it's taking too long maybe internet access is blocked?
[19:02] hmm, it should be fine from that machine
[19:02] bryanmoyles: you can ssh into the wordpress unit after deploy and debug things
[19:02] wordpress unit?
[19:02] the launched juju instance?
[19:02] I think my deploys on amazon ec2 took ~five minutes?
[19:02] bryanmoyles: juju deploy deploys a service and one copy of it, which we call a unit
[19:02] bryanmoyles: that becomes wordpress/0
[19:02] bryanmoyles: that will get its own cloud instance
[19:03] so what was the point of juju-openstack-machine-0, just to make sure it worked?
[19:03] bryanmoyles: with its own ip. You can ssh into it and look around
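For reference, the two settings that unblocked bryanmoyles, side by side (taken from the session above; the AUTH_ tenant hash and addresses are of course specific to that private cloud):

    # in ~/.juju/environments.yaml, matching keystone's advertised endpoints:
    #   auth-url: http://192.168.1.201:5000/v2.0
    #   public-bucket-url: http://192.168.1.201:8080/v1/AUTH_67de617c62d0475eb23d82f5c021f866
    # the latter is the object-store publicURL straight out of:
    keystone catalog | less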
[19:03] bryanmoyles: the machine 0 is the bootstrap node, it's needed to coordinate the deployments
[19:03] bryanmoyles: it's also the api endpoint that your juju commands use
[19:03] ah alright, is it a scary thing to CTRL-C the juju deploy?
[19:03] no, but just the deploy command is quick
[19:04] it's a request, when the command returns it doesn't mean the deployment is complete
[19:04] bryanmoyles: run juju status to check things
[19:04] 2013-08-01 19:04:25 INFO juju provider.go:117 environs/openstack: opening environment "openstack"
[19:04] 2013-08-01 19:04:26 INFO juju open.go:68 state: opening state; mongo addresses: ["10.11.12.2:37017"]; entity ""
[19:04] just stalling there
[19:04] bryanmoyles: you might have a network problem, you need to be able to reach the instances that you bring up
[19:05] so I should add a route on my machine to proxy to the openstack instance?
[19:05] I don't know how your cloud was deployed, sorry
[19:05] try sshing into the nova compute node and reach that address from there, or into quantum-gateway (if using quantum networking), and try from there
[19:06] 2013-08-01 19:05:42 ERROR juju open.go:88 state: connection failed, will retry: dial tcp 10.11.12.2:37017: operation timed out
[19:06] or the cloud controller actually, i think the net is reachable from there
[19:06] you can just telnet into that address and port to see if it connects
[19:06] pinging from the controller node returns a "No route to host" even though the subnet is masked to br100 properly
[19:06] or plain ssh on port 22
[19:09] when I connect to the console via openstack, it's just a black screen, perhaps the instance is stalled?
[19:09] [ 303.551013] [ 209] 0 209 3288 135 0 -17 -1000 udevd
[19:09] [ 303.551737] Kernel panic - not syncing: Out of memory and no killable processes...
[19:09] oh man
[19:10] how do you specify a flavor for the bootstrap node on juju?
[19:10] owww
[19:10] it's running nano right now, must be too small
[19:11] I don't see anything in imagemetadata.json specifying a flavor
[19:13] default-instance-type: m1.small
[19:13] is in the sample for juju's docs page, but juju -v shows that it's deprecated
[19:30] bryanmoyles: try a constraint
[19:30] bryanmoyles: after bootstrap, juju set-constraints mem=2048M
[19:30] then deploy again
[19:30] or at deploy time, see help docs for juju deploy, it takes parameters
[19:30] I just resized the instance manually, but now I think I'm running into hardware limitations, but you guys have gotten me to the point where I can safely try our huge machines instead of my 5-year-old laptop as the compute & controller lol
[19:31] bryanmoyles: now, i'm not sure how juju finds out about the instance sizes, I suppose it uses the openstack api
[19:32] you can also specify the constraint during bootstrap
[19:32] but that's usually a waste, since the bootstrap node doesn't need a lot of resources. I usually bootstrap with the smallest instance size, then set constraints after that so the newly deployed services get a bigger machine
[19:47] noodles775, any chance you have a charm you can share that makes use of ansible?
=== BradCrittenden is now known as bac
=== CyberJacob is now known as CyberJacob|Away
=== gary_poster is now known as gary_poster|away
=== gary_poster|away is now known as gary_poster
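Putting the whole wordpress walkthrough from this session in one place (every command appears above; the 2048M figure is the one suggested for dodging the out-of-memory panic):

    # make newly deployed units land on machines big enough to avoid the OOM panic
    juju set-constraints mem=2048M
    juju deploy wordpress
    juju deploy mysql
    juju add-relation wordpress mysql
    juju expose wordpress
    # deploy returns immediately; poll until wordpress/0 reports started
    juju status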