[05:20] <melmoth> hello, when i bootstrap (openstack provider) i get the following:
[05:20] <melmoth> juju.errors.ProviderInteractionError: Unexpected 404: '{"error": {"message": "The resource could not be found.", "code": 404, "title": "Not Found"}}'
[05:20] <melmoth> how can i know which resources it cannot find?
[10:42] <freeflying> melmoth_: can you access to nova's api
[10:54] <melmoth_> freeflying, it's fixed, there was a missing / in my keystone endpoint
[10:54] <melmoth_> thanks though :)
[11:10] <freeflying> melmoth_: np :)
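[Editor's note: the missing trailing slash melmoth found is a classic URL-joining pitfall. A minimal Python sketch (hypothetical hostname) of why the slash matters when a client appends request paths to a configured endpoint; this illustrates the failure mode, not juju's exact internals.]

```python
from urllib.parse import urljoin

# Without a trailing slash, urljoin replaces the last path segment,
# so requests land on the wrong URL and the server answers 404.
base_bad = "http://keystone.example.com:5000/v2.0"    # hypothetical endpoint
base_good = "http://keystone.example.com:5000/v2.0/"

print(urljoin(base_bad, "tokens"))   # -> http://keystone.example.com:5000/tokens
print(urljoin(base_good, "tokens"))  # -> http://keystone.example.com:5000/v2.0/tokens
```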
[13:10] <marcoceppi> hazmat: you around?
[13:32] <hazmat> marcoceppi, yup
[13:33] <marcoceppi> hey hazmat, could you create a ppa in ~juju called "tools", I think we're going to deprecate the pkgs ppa in favor of a generic tools ppa for charm-tools, juju-deployer, etc
[13:36] <hazmat> marcoceppi, sounds good
[13:36] <jcastro> marcoceppi: did you guys get around to fixing up memcached?
[13:37] <marcoceppi> I think, once that's moved over, we can start a discussion on the list about removing all the other ppas that are no longer needed. I think stable, devel, and tools would be what's left over ultimately
[13:37] <marcoceppi> jcastro: who are the other guys in that sentence?
[13:38] <jcastro> weren't you talking to pavel or something about it?
[13:38] <jcastro> you mentioned it in passing at the  g+ hangout
[13:38] <hazmat> marcoceppi, done
[13:38] <marcoceppi> jcastro: ah, he has some merge requests to memcached and haproxy I think
[13:38] <marcoceppi> I believe they were merged
[13:39] <marcoceppi> hazmat: thanks!
[13:39] <jcastro> marcoceppi: oh ok, so as far as you know it works with wordpress now?
[13:40] <marcoceppi> jcastro: that's another thing, that's not uploaded yet. I'm still wrapping up Amulet for the release this week
[13:40] <hazmat> marcoceppi, we should get a copy of ahasenack's build recipes for jujuclient/jujudeployer to populate
[13:40] <marcoceppi> hazmat: ack, for sure
[13:41] <ahasenack> hazmat: I can switch the target ppa in the recipes, or you can create new recipes, I'm ok with either
[13:42] <hazmat> ahasenack, if you're up for switching the target that sounds good for now.. do the recipe packaging branches need bumping on version increments of the underlying packages? if so i'd like to move them to a group branch account.
[13:43] <ahasenack> hazmat: they do need that, in the debian/changelog file
[13:43] <jcastro> marcoceppi: looks like your comment to the VPN endpoint bumped it to the end of the review queue. :-/
[13:43] <marcoceppi> jcastro: I know, I have a few charms on my short list for review this week
[13:43] <jcastro> ahasenack: you're experienced with juju now, you should consider joining ~charmers and helping us review incoming charms!
[13:43] <marcoceppi> that being the top of the list
[13:44] <ahasenack> hazmat: I think it's best you copy the recipe and the packaging branch, I'm not in ~juju and can't upload to that ppa, and it sounds better to have the branch and recipe owned by a group, not a person
[13:44] <ahasenack> jcastro: I'm not so sure, I don't even have a charm of my own yet
[13:45] <jcastro> ok
[13:45] <jcastro> ahasenack: let me know when you start one. :)
[13:45] <ahasenack> https://code.launchpad.net/~ahasenack/+recipe/python-jujuclient-daily and https://code.launchpad.net/~ahasenack/+recipe/juju-deployer-daily
[13:45] <ahasenack> hazmat: ^^^
[13:45] <ahasenack> packaging branches are in the recipes
[13:45] <ahasenack> jcastro: :)
[13:46] <hazmat> ahasenack, will do, thanks again for packaging these up
[13:46] <ahasenack> the websocket client has no recipe, as I would have to first mirror it in LP, by creating a LP project, and then have the recipe
[13:47] <hazmat> i've been tempted to just include it in the jujuclient .. it's a single module.
[14:05] <ahasenack> marcoceppi: I'm not being successful in asking juju to use my custom image, like those two askubuntu questions from yesterday also weren't
[14:05] <ahasenack> marcoceppi: I sent an email to the list
[14:06] <marcoceppi> ahasenack: saw that, thanks. I'll keep an eye on the list in hopes of answering those two questions
[14:07] <ahasenack> marcoceppi: it's also blocking me in further openstack tests, due to bug #1188126
[14:07] <_mup_> Bug #1188126: Juju unable to interact consistently with an openstack deployment where tenant has multiple networks configured <canonistack> <serverstack> <juju:New> <juju-core:Triaged> <https://launchpad.net/bugs/1188126>
[14:07] <ahasenack> my custom image has a workaround, but I can't launch it with juju
[14:07] <ahasenack> so, kaput
[14:08] <ahasenack> on to something else
[14:08] <marcoceppi> ahasenack: So, I wonder if it's a recent bug. With 1.11.1 I was able to upload custom image metadata to az3 of hp cloud which doesn't have any juju-dist information
[14:09] <ahasenack> marcoceppi: I don't even know where to download it
[14:09] <ahasenack> marcoceppi: I tried juju-dist/, and streams/ directly
[14:09] <ahasenack> so, I have juju-dist/streams/v1/stuff
[14:09] <ahasenack> and streams/v1/stuff
[14:09] <ahasenack> completely ignored
[14:10] <ahasenack> maybe the product-streams service from keystone catalog is overriding that, and that's the bug
[14:10] <marcoceppi> ahasenack: So with az3 I've got a juju-dist bucket I created with a streams/v1/... directory
[14:10] <marcoceppi> ahasenack: but I don't think hp cloud has product-streams, that's something I can't confirm.
[14:10] <ahasenack> marcoceppi: "keystone catalog" doesn't work against it?
[14:10] <marcoceppi> ahasenack: I haven't tried
[14:10] <ahasenack> ok
[14:12] <marcoceppi> I wish I had more time to play with this problem this week. If you don't get an answer by Monday I may try to poke at it for a bit
[14:13] <ahasenack> https://bugs.launchpad.net/juju-core/+bug/1185143 would help if it were fixed
[14:13] <_mup_> Bug #1185143: bootstrap -v needs to show the swift/s3 action <debug> <ui> <juju-core:Confirmed> <https://launchpad.net/bugs/1185143>
[15:01] <hazmat> marcoceppi, so some discussion and the desire seems to be push the tools directly into the core ppas ie (devel and stable)
[15:02] <marcoceppi> hazmat: my concern with that is we don't have a very clear release cadence on most of the tools. So they're basically all "devel"
[15:03] <bloodearnest> heya all - is there any way to make 'juju ssh <service>/<unit> "command"' return output from the command?
[15:03] <hazmat> marcoceppi, true, but the key distinction might be that  it works with the juju in the same ppa
[15:03] <bloodearnest> seems to swallow it by default
[15:05] <jcastro> hey marcoceppi
[15:05] <jcastro> so brandon put the wrong jorgecastro in the github group but you appear to be in there, we need to redirect jujutools.github.com to the right place
[15:06] <marcoceppi> jcastro: I have no idea how to do that. Let me just see if i can add you
[15:07] <marcoceppi> jcastro: what's your gh username?
[15:07] <jcastro> castrojo
[15:07] <jcastro> If you can add me I can handle it
[15:08] <marcoceppi> jcastro: added
[15:09] <jcastro> ta
[16:08] <TheMue> jcastro: ping
[16:08] <jcastro> pong
[16:08] <TheMue> jcastro: just working on juju set --default to simply set an option to its default value
[16:09] <TheMue> jcastro: background is to also use juju set <svc> option= to set an empty string
[16:09] <TheMue> jcastro: but this would also lead to a change of the charm configs
[16:10] <jcastro> so are you asking if we should do that or if we're doing that already?
[16:10] <TheMue> jcastro: today empty strings remove a value, so no empty strings
[16:10] <TheMue> jcastro: yep, you got it ;)
[16:10] <TheMue> jcastro: I don't wanna break compatibility
[16:11] <jcastro> I think that's list material there
[16:11] <jcastro> see what other charmers think
[16:12] <TheMue> jcastro: ok, will do
[16:12] <TheMue> jcastro: thx
[16:12] <marcoceppi> TheMue: I think quite a few charms would probably break if you could "unset" a configuration value
[16:13] <TheMue> marcoceppi: yeah, I  feared that
[16:15] <rogpeppe> marcoceppi: unsetting a config value should cause it to revert to its default value from the charm hook's point of view
[16:16] <marcoceppi> rogpeppe: I don't think I grasp the scope of this change then. I look forward to the list post for clarification
[16:20] <TheMue> marcoceppi: set option= today leads to a reset to the default. with the introduction of set --default option it isn't needed anymore, and set option= can be used to set the option to an empty string (not possible today)
[16:21] <rogpeppe>  TheMue: good point
[16:21] <rogpeppe> marcoceppi: you can already unset a configuration value
[16:21] <marcoceppi> TheMue: Oh, interesting. I don't think that'll have much of an impact actually. Since, and I may be wrong, I was under the impression that set option= in pyjuju set it to an empty string and not the default
[16:21] <rogpeppe> marcoceppi: but if there's a string config value with a default that's non-empty, you can't currently set it to empty
[16:22] <rogpeppe> marcoceppi: ha ha
[16:22] <rogpeppe> marcoceppi: that's what we want to make it do!
[16:22] <TheMue> rogpeppe: yep
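[Editor's note: TheMue's proposal in a nutshell, as a hedged sketch. The option name is hypothetical, and the --default flag was under discussion here, not yet released.]

```shell
# Behaviour at the time of this discussion: an empty value resets to the charm default.
juju set mysql query-cache-size=          # today: reverts to the default from config.yaml

# Proposed behaviour: an explicit flag for resetting, freeing up the empty form.
juju set mysql --default query-cache-size # proposed: reset to the charm default
juju set mysql query-cache-size=          # proposed: set the option to the empty string
```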
[16:22] <blackboxsw> thinking on uptime of our services, I see juju upgrade-juju in juju-core. I'm about to test a deployment from ppa -> trunk...  is a simple juju-core upgrade-juju all I need for a seamless upgrade that will leave my running units intact?
[16:22] <rogpeppe> marcoceppi: i think that it's possible that *was* the py juju behaviour, but i'm not sure
[16:22] <marcoceppi> rogpeppe: right! TheMue I'd email the list just for general awareness but I don't see many people putting up a stink about this :)
[16:23] <marcoceppi> rogpeppe: I could pull out pyjuju and test, but I don't need anymore pain for today ;)
[16:23] <TheMue> marcoceppi: ;)
[16:23] <rogpeppe> blackboxsw: i'd upgrade to 1.12 before upgrading to trunk
[16:23] <marcoceppi> blackboxsw: what version are you using currently?
[16:23]  * rogpeppe needs to write an email to juju-dev about that
[16:23] <blackboxsw> 1.11-4
[16:23]  * rogpeppe goes off to do that
[16:24]  * TheMue too
[16:24] <rogpeppe> blackboxsw: i advise downloading https://launchpad.net/juju-core/1.12/1.12.0/+download/juju-core_1.12.0-1.tar.gz
[16:24] <marcoceppi> 1.11.4 more or less is 1.12. I'm not sure of the nuances for upgrading juju in place to trunk with upload tools, etc
[16:24] <rogpeppe> blackboxsw: then building that and upgrade-juju to that (using --upload-tools)
[16:25] <marcoceppi> rogpeppe: so you need to run upgrade-juju --upload-tools?
[16:25]  * marcoceppi adds this to list of things we need to document
[16:25] <rogpeppe> marcoceppi: hmm, actually, perhaps it's easier than that
[16:25] <blackboxsw> will do I saw https://code.launchpad.net/~fwereade/juju-core/fix-upgrade-carnage/+merge/173972 which looks like it addresses a similar upgrade path issue
[16:25] <blackboxsw> but I think that was 1.10 that was a problem
[16:26] <blackboxsw> okay in either case. I'll give both a whirl as it's a dev deployment anyway... will report on the success of 1.11.4-1514~raring too
[16:26] <marcoceppi> blackboxsw: according to the release notes, minor version increments should work 1.11.1 -> 1.11.2, etc
[16:26] <marcoceppi> blackboxsw: Yeah, I'd be interested in your experience with the upgrade process
[16:27] <rogpeppe> blackboxsw: if you're using an environment with a public tools bucket, you should be able to do juju upgrade-juju --version 1.12.0
[16:27] <rogpeppe> blackboxsw: then wait for all the units and machines to report 1.12.0 as their version
[16:28] <rogpeppe> blackboxsw: then juju upgrade-juju to a later version
[16:28] <rogpeppe> blackboxsw: e.g. current trunk
[16:28] <blackboxsw> ahh, and if I use --upload-tools?
[16:28] <rogpeppe> blackboxsw: that should be ok *after* you've upgraded to 1.12
[16:28] <blackboxsw> makes sense
[16:28] <blackboxsw> thx
[16:28] <rogpeppe> blackboxsw: because 1.12 has some specific code (hacks) in it that propagates some information that 1.10 didn't propagate
[16:29] <rogpeppe> blackboxsw: i've just removed those hacks from trunk because they were making things hard
[16:30] <rogpeppe> blackboxsw: which means that any upgrade path from 1.10 needs to go through 1.12 to make things work ok
[16:30] <rogpeppe> blackboxsw: i've just tested that it works ok
[16:30] <blackboxsw> ahh got it ok
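[Editor's note: the upgrade path rogpeppe describes, as a sketch. Flags are the ones named above; exact status output varies by version.]

```shell
# Step 1: from 1.10/1.11, upgrade the running environment to 1.12 first
juju upgrade-juju --version 1.12.0

# Step 2: wait until every machine and unit reports agent-version 1.12.0
juju status

# Step 3: only then move past 1.12, e.g. to a locally built trunk
juju upgrade-juju --upload-tools
```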
[16:36] <marcoceppi> jcastro: I feel a bit silly
[16:36] <marcoceppi> wrt the subordinate discussion
[16:37] <marcoceppi> jcastro: so, the implicit relation works pretty straightforwardly. If no previous interfaces match, and there's a juju-info interface with a scope:container it'll deploy the subordinate to the other matching service in the add-relation command
[16:38] <marcoceppi> juju add-relation wordpress subordinate should "just work" unless there's another matching relation, in which case `juju add-relation wordpress subordinate:juju-info` should suffice
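[Editor's note: a sketch of the implicit juju-info relation marcoceppi describes. "subordinate" stands in for any subordinate charm name, mirroring the log.]

```shell
juju deploy wordpress
juju deploy subordinate            # any charm with a container-scoped juju-info interface

# If no other interface matches, the implicit juju-info relation is used:
juju add-relation wordpress subordinate

# If another interface also matches, name the endpoint explicitly:
juju add-relation wordpress subordinate:juju-info
```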
[16:49] <dreverri> is there any way to find out from a juju bootstrap node why machines are stuck in pending status?
[16:56] <sidnei> dreverri: which provider are you using?
[16:56] <dreverri> ec2
[16:57] <dreverri> I am using the OS X client if that matters
[16:57] <sidnei> dreverri: probably unlikely to be the case, but did you check with the ec2 api that the machines have been started successfully?
[16:57] <dreverri> they have not been started
[16:57] <dreverri> I only see the bootstrap node
[16:58] <hazmat> dreverri, probably best is to login to the bootstrap node and inspect the provisioning agent log
[16:58] <hazmat> in /var/log/juju
[16:58] <dreverri> can juju tell me the public address of the bootstrap node?
[16:59] <dreverri> or just grab it from aws console?
[16:59] <hazmat> dreverri, juju status should have it
[16:59] <dreverri> juju status is only showing the deployed machines in pending
[16:59] <dreverri> perhaps I broke something
[16:59] <dreverri> in my config
[16:59] <hazmat> dreverri, machine 0 should be running
[17:00] <hazmat> dreverri, else juju status wouldn't work
[17:00] <dreverri> machine 0 refers to the first unit of the deployed service
[17:00] <hazmat> dreverri, the bootstrap node is provisioned from the client, subsequent ones are done by code running on that bootstrap node
[17:00] <hazmat> dreverri, can you pastebin your juju status output
[17:01] <dreverri> http://pastebin.com/sxqvPs71
[17:05] <dreverri> @hazmat any thoughts?
[17:06] <hazmat> dreverri, that's quite strange
[17:07] <hazmat> rogpeppe, ^ is that status even possible.. what's the client even connecting to
[17:07] <hazmat> dreverri, yeah. get the addr from the console, i'm very curious to see the machine/juju log from that machine
[17:08] <dreverri> ok; I'll grab that in a sec
[17:08] <dreverri> thank you
[17:08] <hazmat> it looks like status reports based on the db state only, not provider queries, and in this case the db doesn't have normal provider machine state stored
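[Editor's note: hazmat's debugging advice as a sketch. The exact log filename under /var/log/juju varies by juju version, so the glob is an assumption.]

```shell
# Get the bootstrap node's public address (or read it from the provider's console)
juju status

# Log in and inspect the provisioning agent's log for errors starting new machines
ssh ubuntu@<bootstrap-node-address>
less /var/log/juju/*.log    # provisioning agent log lives here; name varies by version
```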
[17:46] <bryanmoyles> Hey, can anyone help me figure out why juju is telling me that it can't find the precise image?
[17:48] <ahasenack> bryanmoyles: something with simplestreams data I suspect, is it pyjuju or juju-core? Also, which cloud?
[17:48] <bryanmoyles> juju-core, on a private openstack installation. I've setup the proper access via the swift ACL to allow public access but I still get the same error
[17:49] <bryanmoyles> I have the images in juju-dist/tools/IMAGE, I'm going to try to move it to the root level of the container, I just feel like my structure is off somewhere
[17:49] <ahasenack> hm, yeah, that's not going to work like that
[17:49] <bryanmoyles> are they supposed to be called "juju-1.10.0-precise-amd64.tgz"?
[17:49] <bryanmoyles> Why wouldn't it work?
[17:49] <ahasenack> there are two things you need, tools and simplestreams
[17:50] <ahasenack> juju-dist/tools is for the tools, those tarballs, not images
[17:50] <bryanmoyles> Oh okay, I don't have simple streams, is that a juju init command?
[17:50] <ahasenack> no, it's way more complicated than that, I'm also fighting it at the moment with a private cloud
[17:50] <ahasenack> bryanmoyles: it's what juju uses to lookup the image id
[17:50] <ahasenack> bryanmoyles: see if you have juju image-metadata command
[17:51] <bryanmoyles> I actually do have that, I have that setup in /streams, not simplestreams, sorry
[17:51] <bryanmoyles> I do have an image-metadata file as well
[17:51] <ahasenack> ok, so the theory is that you have to upload those two json files it creates to swift
[17:52] <ahasenack> now, I'm not sure about where exactly. I *think* to juju-dist/
[17:52] <bryanmoyles> http://collabedit.com/9tg7m
[17:52] <ahasenack> so you would have juju-dist/streams/v1/<json files>
[17:52] <bryanmoyles> right, that's exactly where I have them, but where do the actual images belong?
[17:52] <ahasenack> I did that and it's not working for me, but it might be because my cloud does publish product-streams in the keystone catalog and that a bug is preventing me from overriding that
[17:52] <ahasenack> bryanmoyles: the images are in openstack proper, glance
[17:53] <ahasenack> bryanmoyles: you supposedly did a glance image-list to get the id of the image you want to use
[17:53] <bryanmoyles> ohhh
[17:53] <bryanmoyles> juju image-metadata -a amd64 -e http://10.103.8.1:5000/v2.0 -i d7e2ea12-cb50-4687-b5e1-d90f0656164a -n openstack -r RegionOne -s precise
[17:53] <ahasenack> right
[17:53] <bryanmoyles> that's the command I ran, so I need to have first created an image and put that image's ID in place of the d7* ?
[17:54] <ahasenack> well, yes, what is d7e2ea12-... if you didn't do that?
[17:54] <bryanmoyles> straight from a blurb on the web, I made a very large assumption there haha, one sec let me try that with the image id for the os I uploaded
[17:55] <ahasenack> yeah :)
[17:58] <bryanmoyles> hmm, I'm still getting the same error, why does it complain about "no "precise" images" when the metadata has given it an image id? Is there something I need to do on the image itself to identify that it's a "precise" version of ubuntu?
[17:59] <ahasenack> not that I know of, but of course, if it's not precise things might break as soon as it's launched
[18:00] <ahasenack> but it should be found and attempted to launch
[18:00] <ahasenack> so you regenerated those two json files and uploaded them again to juju-dist/?
[18:00] <ahasenack> do you also have public-bucket-url set in environments.yaml for this env?
[18:00] <bryanmoyles> well here's the question, as an image I uploaded my own version of a ubuntu12.04 iso, how would I use their 2MB .tgz files as "images"? Yes to both of your questions :)
[18:01] <ahasenack> I have a glance command line to import images
[18:01] <ahasenack> download a file like this: ubuntu-12.04-server-cloudimg-amd64-disk1.img
[18:02] <ahasenack>         glance image-create \
[18:02] <ahasenack>             --container-format bare \
[18:02] <ahasenack>             --disk-format qcow2 \
[18:02] <ahasenack>             --is-public True \
[18:02] <ahasenack>             --name ${name%.*} \
[18:02] <ahasenack>             --file "$image_file" > /dev/null
[18:02] <ahasenack> file is the .img one, name is whatever you want
[18:03] <bryanmoyles> http://uec-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img should be suitable, right?
[18:03] <ahasenack> yes
[18:03] <bryanmoyles> k one sec, downloading and trying your command
[18:03] <ahasenack> then get the image id with glance image-list
[18:03] <ahasenack> and use that in the metadata command
[18:04] <ahasenack> I didn't need to specify -e, it grabbed that from the environment (I had that openrc.sh sourced before)
[18:04] <ahasenack> hm, try not using -n
[18:04] <ahasenack> that will prefix the files with that string, I don't think it's right
[18:05] <ahasenack> -n openstack, I mean, in the metadata command
[18:05] <ahasenack> the files should be index.json and imagemetadata.json
[18:05] <bryanmoyles> so try dropping -e AND -n? or just -n?
[18:07] <ahasenack> mostly -n
[18:07] <ahasenack> if you have openrc sourced, -e shouldn't be necessary either
[18:07] <ahasenack> maybe you got the wrong value for it, for example
[18:08] <ahasenack> is that the keystone endpoint?
[18:08] <ahasenack> looks like it
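[Editor's note: the full image-metadata flow assembled from the advice above, as a sketch. The swift upload path is an assumption based on the juju-dist/streams/v1/ layout discussed earlier; image id and filenames are placeholders.]

```shell
# 1. Import a precise cloud image into glance (ahasenack's snippet above)
glance image-create --container-format bare --disk-format qcow2 \
    --is-public True --name precise-server-cloudimg-amd64 \
    --file precise-server-cloudimg-amd64-disk1.img

# 2. Note the image id
glance image-list

# 3. Generate simplestreams metadata; with openrc sourced, -e can be omitted,
#    and omitting -n keeps the expected index.json / imagemetadata.json names
juju image-metadata -a amd64 -i <image-id> -r RegionOne -s precise

# 4. Upload the generated json files to the public bucket in swift
swift upload juju-dist streams/v1/index.json streams/v1/imagemetadata.json
```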
[18:10] <bryanmoyles> Yeah it's the keystone url, just got the image uploaded, trying the image-metadata again
[18:13] <bryanmoyles> 2013-08-01 18:13:18 INFO juju tools.go:52 environs: filtering tools by series: precise
[18:13] <bryanmoyles> 2013-08-01 18:13:18 INFO juju tools.go:75 environs: picked newest version: 1.10.0
[18:13] <bryanmoyles> 2013-08-01 18:13:19 ERROR juju supercommand.go:234 command failed: cannot start bootstrap instance: no "precise" images in RegionOne with arches [amd64 i386]
[18:14] <bryanmoyles> That error confuses me, when I'm explicitly telling it what image to use
[18:14] <bryanmoyles> Is it possible that this is because juju's tools are for 11.10 and I'm using a 12.04 image?
[18:16] <ahasenack> no, I don't think it's about tools
[18:16] <ahasenack> can you paste the two json files that were generated?
[18:16] <sarnold> 11.10? really? that's been out of support for almost three months..
[18:17] <bryanmoyles> http://collabedit.com/9tg7m
[18:17] <bryanmoyles> I have both files pasted into there
[18:17] <ahasenack> bryanmoyles: that one still has the openstack prefix from -n
[18:17] <ahasenack> line 19
[18:18] <ahasenack> was the index file named index.json, or openstack-index.json?
[18:18] <bryanmoyles> juju image-metadata -a amd64 -e http://10.103.8.1:5000/v2.0 -i 97967ab3-9312-493e-8487-e78c2d822ac9 -r RegionOne -s precise
[18:18] <ahasenack> I don't think it knows how to lookup anything other than index.json
[18:18] <bryanmoyles> oh goodness
[18:18] <bryanmoyles> lol, I just uploaded the old files, I never realized new ones were created alongside, one second lol
[18:18] <ahasenack> just rm -f .juju/*.json
[18:18] <bryanmoyles> thank you for dummy proofing me :-P
[18:19] <ahasenack> :)
[18:19] <bryanmoyles> new-host-4:~ bryanmoyles$ rm -rf ~/.juju/*.json
[18:19] <bryanmoyles> new-host-4:~ bryanmoyles$ juju image-metadata -a amd64 -e http://10.103.8.1:5000/v2.0 -i 97967ab3-9312-493e-8487-e78c2d822ac9 -r RegionOne -s precise
[18:19] <bryanmoyles> uploading now
[18:20] <bryanmoyles> barge, same error, pasting the new file contents
[18:21] <ahasenack> ok
[18:21] <bryanmoyles> is this because I don't have a "release" name?
[18:21] <ahasenack> hm, release is empty
[18:21] <ahasenack> I wonder
[18:22] <ahasenack> try editing the file before the upload, put precise in there
[18:22] <ahasenack> mine is also empty
[18:22]  * ahasenack tries
[18:22] <bryanmoyles> I tried that, still didn't work, I wish juju -v was more verbose
[18:23] <ahasenack> me too, I wanted to see from where it is fetching the simplestreams data
[18:23] <ahasenack> maybe there is a silly 404 happening in there
[18:23] <ahasenack> I tried tcpdumping the traffic, and in my case it was actually peeking at my index.json file
[18:24] <ahasenack> but it gave up for some reason, never loaded the other file which is what has the image id
[18:25] <bryanmoyles> so did you also get stumped at the stage that I'm at?
[18:26] <bryanmoyles> wait, what the heck is this ip? 10.103.8.1
[18:26] <bryanmoyles> I never put that there, is juju making a bad assumption?
[18:26] <bryanmoyles> or is that just the region's ip once juju is on that device
[18:32] <ahasenack> sorry, was on the phone
[18:33] <ahasenack> bryanmoyles: that ip is your -e parameter
[18:33] <bryanmoyles> darn it, another terrible assumption lol
[18:34] <bryanmoyles> these stream json files definitely go in juju-dist (public bucket), not the control bucket right?
[18:35] <ahasenack> right, juju-dist
[18:35] <ahasenack> well, I don't know about "definitely"
[18:35] <bryanmoyles> is this where you ultimately got stuck, or were you able to get past this error?
[18:36] <ahasenack> not past the error yet, but I just got a tip I'm trying
[18:38] <ahasenack> bryanmoyles: got it to work!
[18:38] <bryanmoyles> wow! how?
[18:38] <ahasenack> bryanmoyles: so did you fix -e?
[18:38] <bryanmoyles> I used localhost instead of 10., not sure if I should have that be the local or external ip
[18:39] <bryanmoyles> otherwise, I would need to change it with 192.168.1.201
[18:39] <ahasenack> bryanmoyles: in my case, the url from -e and the one in environments.yaml had a tiny difference
[18:39] <ahasenack> bryanmoyles: a trailing slash (/)
[18:39] <ahasenack> :5000/v2/ versus :5000/v2
[18:39] <bryanmoyles> should it have one or be without it?
[18:39] <ahasenack> doesn't matter, it has to be the same
[18:39] <bryanmoyles> so should the  -e IP be localhost, or the IP from the machine hosting juju?
[18:39] <ahasenack> it has to be the same in .juju/environments.yaml, in the index.json file and in the OS_AUTH_URL shell environment variable
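[Editor's note: ahasenack's rule, that the auth URL must be byte-for-byte identical everywhere, including the trailing slash, as a small checker sketch. The three source names and the example URLs are hypothetical stand-ins.]

```python
# The keystone auth URL must match exactly across openrc, environments.yaml,
# and the simplestreams index.json -- a trailing slash counts as a difference.
urls = {
    "OS_AUTH_URL (openrc)": "http://192.0.2.10:5000/v2.0",
    "environments.yaml auth-url": "http://192.0.2.10:5000/v2.0",
    "index.json endpoint": "http://192.0.2.10:5000/v2.0/",  # note the trailing slash
}

def mismatches(urls):
    """Return the sources whose URL differs from the first one listed."""
    reference = next(iter(urls.values()))
    return [src for src, url in urls.items() if url != reference]

print(mismatches(urls))  # -> ['index.json endpoint']
```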
[18:40] <ahasenack> bryanmoyles: it's the keystone auth url from your openstack cloud
[18:40] <ahasenack> bryanmoyles: do you have a openrc.sh file or something that you source so you can run nova, glance, etc, commands
[18:40] <ahasenack> ?
[18:40] <bryanmoyles> kk, btw I can confirm that mine are also different
[18:40] <bryanmoyles> I believe I do on the machine ". openrc"
[18:40] <ahasenack> bryanmoyles: ok, so source that file, don't specify -e in image-metadata
[18:41] <ahasenack> bryanmoyles: and check that the one from openrc is identical to the one in environments.yaml
[18:41] <ahasenack> bryanmoyles: the image-metadata command will grab the one from the environment if you don't specify -e
[18:41] <bryanmoyles> so I need to install juju-core on the openstack machine?
[18:41] <ahasenack> no
[18:42] <ahasenack> bryanmoyles: do env | grep OS_AUTH_URL
[18:42] <bryanmoyles> oh k one sec
[18:44] <bryanmoyles> looks like I don't have an openrc.sh, could have sworn I did
[18:44] <bryanmoyles> OS_AUTH_URL=http://localhost:5000/v2.0
[18:45] <ahasenack> that doesn't look right
[18:45] <ahasenack> go to horizon, login, grab the openrc file from there (api credentials)
[18:46] <bryanmoyles> That's in the Admin panel?
[18:47] <ahasenack> in the project one iirc
[18:47] <ahasenack> on the left
[18:47] <ahasenack> you should use a regular user, not admin
[18:47] <bryanmoyles> is it okay to use admin for now just to get the hang of things?
[18:47] <bryanmoyles> I sound like a sudo (ab)user
[18:48] <ahasenack> probably
[18:48] <bryanmoyles> kk downloaded the file
[18:48] <ahasenack> see what it has for OS_AUTH_URL, just to check it's not localhost
[18:48] <bryanmoyles> export OS_AUTH_URL=http://192.168.1.201:5000/v2.0
[18:48] <ahasenack> ok, that looks better
[18:49] <ahasenack> is that what you have in .juju/environments.yaml too? as auth-url:?
[18:49] <bryanmoyles> new-host-4:~ bryanmoyles$ cat ~/.juju/environments.yaml | grep auth
[18:49] <bryanmoyles>     auth-url: http://192.168.1.201:5000/v2.0
[18:49] <ahasenack> ok
[18:49] <ahasenack> do you have public-bucket-url in environments.yaml too?
[18:50] <bryanmoyles>     public-bucket-url: http://192.168.1.201:8080/v1/AUTH_67de617c62d0475eb23d82f5c021f866/juju-dist/
[18:50] <ahasenack> drop juju-dist from that
[18:50] <bryanmoyles> does that look right? Should I have juju-dist in there?
[18:50] <bryanmoyles> ok
[18:50] <ahasenack> bryanmoyles: do this
[18:50] <ahasenack> bryanmoyles: keystone catalog | less
[18:50] <ahasenack> bryanmoyles: look for Service: object-store
[18:51] <ahasenack> bryanmoyles: grab its publicURL
[18:51] <ahasenack> bryanmoyles: and use that as public-bucket-url
[18:51] <ahasenack> mind the slashes
[18:51] <bryanmoyles> http://192.168.1.201:8080/v1/AUTH_67de617c62d0475eb23d82f5c021f866
[18:51] <bryanmoyles> exactly like that?
[18:51] <ahasenack> yes
[18:51] <ahasenack> bryanmoyles: is that in keystone catalog like that?
[18:51] <bryanmoyles> yeah, in the block for object-store
[18:51] <ahasenack> ok
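[Editor's note: the public-bucket-url fix as a sketch. The --service filter was available in the keystone CLI of this era, but verify against your client version; the URL below is the example from this conversation.]

```shell
# Find the object-store endpoint in the keystone catalog
keystone catalog --service object-store

# Copy its publicURL verbatim (no container/bucket suffix) into
# ~/.juju/environments.yaml, e.g.:
#   public-bucket-url: http://192.168.1.201:8080/v1/AUTH_<tenant-id>
```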
[18:51] <bryanmoyles> BY GOLLY!
[18:52] <bryanmoyles> 2013-08-01 18:51:54 INFO juju provider.go:781 environs/openstack: started instance "4cf7253b-b06f-404d-ab77-e1cc925d69dc"
[18:52] <bryanmoyles> 2013-08-01 18:51:56 INFO juju supercommand.go:236 command finished
[18:52] <ahasenack> yay
[18:52] <ahasenack> the impossible happened
[18:52] <bryanmoyles> wow man, how would you rate yourself 1 - 10 on openstack?
[18:52] <ahasenack> hm
[18:52] <ahasenack> 6
[18:52] <ahasenack> many things I don't know about it
[18:54] <bryanmoyles> Tears to my eyes to see an instance running!
[18:56] <bryanmoyles> so when it launches these instances, can I ssh right in (granted I have a security group established)? ie. do the juju cloud instances fully bootstrap?
[18:58] <ahasenack> bryanmoyles: the bootstrap instance has no deployed service per se, you shouldn't need to ssh into it
[18:59] <ahasenack> bryanmoyles: the fun begins now with juju deploy commands
[18:59] <bryanmoyles> oh!
[18:59] <bryanmoyles> duh! so just try "juju deploy wordpress" per se?
[18:59] <ahasenack> bryanmoyles: bootstrap is the coordinator
[18:59] <ahasenack> yes
[19:00] <bryanmoyles> let me try this
[19:00] <bryanmoyles> should I be able to curl 10.11.12.2 and see a wordpress page from the openstack machine?
[19:01] <ahasenack> not yet, wordpress needs a database, mysql
[19:01] <ahasenack> then you need to relate them (juju add-relation wordpress mysql)
[19:01] <ahasenack> and then you can hit the wordpress ip after all that happened
[19:01] <bryanmoyles> juju deploy wordpress takes quite a while, should that be the case?
[19:01] <sarnold> (no expose?)
[19:01] <ahasenack> you might need expose too, yes
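[Editor's note: the standard wordpress walkthrough assembled from the advice above, as a sketch.]

```shell
juju deploy wordpress
juju deploy mysql
juju add-relation wordpress mysql   # wordpress needs a database before it serves pages
juju expose wordpress               # open the firewall so the site is reachable

# Watch until the units leave "pending" and report "started"
juju status
```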
[19:01] <bryanmoyles> okay, let me find that walkthrough guide for the hello wordpress example
[19:01] <ahasenack> bryanmoyles: it will download stuff from the internet, if it's taking too long maybe internet access is blocked?
[19:02] <bryanmoyles> hmm, should be from that machine
[19:02] <ahasenack> bryanmoyles: you can ssh into the wordpress unit after deploy and debug things
[19:02] <bryanmoyles> wordpress unit?
[19:02] <bryanmoyles> the launched juju instance?
[19:02] <sarnold> I think my deploys on amazon ec2 took ~five minutes?
[19:02] <ahasenack> bryanmoyles: juju deploy deploys a service and one copy of it, which we call a unit
[19:02] <ahasenack> bryanmoyles: that becomes wordpress/0
[19:02] <ahasenack> bryanmoyles: that will get its own cloud instance
[19:03] <bryanmoyles> so what was the point of juju-openstack-machine-0, just to make sure it worked?
[19:03] <ahasenack> bryanmoyles: with its own ip. You can ssh into it and look around
[19:03] <ahasenack> bryanmoyles: the machine 0 is the bootstrap node, it's needed to coordinate the deployments
[19:03] <ahasenack> bryanmoyles: it's also the api endpoint that your juju commands use
[19:03] <bryanmoyles> ah alright, is it a scary thing to CTRL C the juju deploy?
[19:03] <ahasenack> no, but just the deploy command is quick
[19:04] <ahasenack> it's a request, when the command returns it doesn't mean the deployment is complete
[19:04] <ahasenack> bryanmoyles: run juju status to check things
[19:04] <bryanmoyles> 2013-08-01 19:04:25 INFO juju provider.go:117 environs/openstack: opening environment "openstack"
[19:04] <bryanmoyles> 2013-08-01 19:04:26 INFO juju open.go:68 state: opening state; mongo addresses: ["10.11.12.2:37017"]; entity ""
[19:04] <bryanmoyles> just stalling there
[19:04] <ahasenack> bryanmoyles: you might have a network problem, you need to be able to reach the instances that you bring up
[19:05] <bryanmoyles> so I should add a route on my machine to proxy to the openstack instance?
[19:05] <ahasenack> I don't know how your cloud was deployed, sorry
[19:05] <ahasenack> try sshing into the nova compute node and reach that address from there, or into quantum-gateway (if using quantum networking), and try from there
[19:06] <bryanmoyles> 2013-08-01 19:05:42 ERROR juju open.go:88 state: connection failed, will retry: dial tcp 10.11.12.2:37017: operation timed out
[19:06] <ahasenack> or the cloud controller actually, i think the net is reachable from there
[19:06] <ahasenack> you can just telnet into that address and port to see if it connects
[19:06] <bryanmoyles> pinging from the controller node returns a "No route to host" even though the subnet is masked to br100 properly
[19:06] <ahasenack> or plain ssh on port 22
[19:09] <bryanmoyles> when I connect to the console via openstack, it's just a black screen, perhaps the instance is stalled?
[19:09] <bryanmoyles> [  303.551013] [  209]     0   209     3288      135   0     -17         -1000 udevd
[19:09] <bryanmoyles> [  303.551737] Kernel panic - not syncing: Out of memory and no killable processes...
[19:09] <bryanmoyles> oh man
[19:10] <bryanmoyles> how do you specify a flavor for the bootstrap node on juju?
[19:10] <sarnold> owww
[19:10] <bryanmoyles> it's running nano right now, must be too small
[19:11] <bryanmoyles> I don't see anything in the imagemetadata.json specifying a flavor
[19:13] <bryanmoyles> default-instance-type: m1.small
[19:13] <bryanmoyles> is in the sample for juju's docs page, but juju -v shows that it's deprecated
[19:30] <ahasenack> bryanmoyles: try a constraint
[19:30] <ahasenack> bryanmoyles: after bootstrap, juju set-constraint mem=2048M
[19:30] <ahasenack> or set-constraints, I don't remember
[19:30] <ahasenack> then deploy again
[19:30] <ahasenack> or at deploy time, see help docs for juju deploy, it takes parameters
[19:30] <bryanmoyles> I just resized the instance manually, but now I think I'm running into hardware limitations, but you guys have gotten me to the point where I can safely try our huge machines instead of my 5 year old laptop as the compute & controller lol
[19:31] <ahasenack> bryanmoyles: now, i'm not sure how juju finds out about the instance sizes, I suppose it uses the openstack api
[19:32] <ahasenack> you can also specify the constraint during bootstrap
[19:32] <ahasenack> but that's usually a waste, since the bootstrap doesn't need a lot of resources. I usually bootstrap with the smallest instance size, then set constraints after that so the newly deployed services get a bigger machine
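[Editor's note: the constraints approach ahasenack suggests, as a sketch using the juju-core syntax of this era (the command is set-constraints, as guessed above).]

```shell
# Environment-wide default for machines provisioned from now on:
juju set-constraints mem=2048M

# Or per-service at deploy time:
juju deploy --constraints "mem=2048M" wordpress

# The bootstrap node itself can stay small:
juju bootstrap --constraints "mem=512M"
```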
[19:47] <adam_g> noodles775, any chance you have a charm you can share that makes use of ansible?