[02:23] <dalek49> why does juju require a local charm to be under a "precise" directory?  I'm running raring, and when I put it under "raring" it complains that it can't find anything in "precise"
[02:29] <sarnold> dalek49: does your environment say to deploy on precise or raring?
[03:54] <dalek49> sarnold: it's not specified, does it default to precise?
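The series juju deploys (and hence the charm directory it searches, e.g. `precise/` vs `raring/`) follows the environment's `default-series` setting in `environments.yaml`. A minimal sketch of such a stanza, written to a scratch path; the local-provider layout and values are illustrative, not a verified config for any particular juju version:

```shell
# Illustrative environments.yaml fragment (written to /tmp, not ~/.juju,
# so nothing real is touched). "default-series" controls which series
# directory juju searches for local charms.
cat > /tmp/environments.yaml <<'EOF'
default: local
environments:
  local:
    type: local
    default-series: raring   # without this, juju may default to precise
EOF
grep default-series /tmp/environments.yaml
```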
[09:22] <bladernr`> davecheney: sorry...
[09:23] <bladernr`> davecheney: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html
[09:24] <bladernr`> I have two different keypairs… 1 is stored locally and allows me to ssh from my machine to an instance I created and associated with that key.
[09:26] <bladernr`> the other I use when creating other instances and allows me to ssh from my main ec2 instance that runs juju to the newly created instance…  IOW, key1 allows me to ssh from my workstation to ec2_instance_1.  I create a new instance manually and set it to use key2. Key 2 allows me to ssh from ec2 instance_1 to ec2_instance_2
[09:27] <bladernr`> manually, it works fine… what I'm trying to find, now, is the key to set in environments.yaml to tell juju "When you create an instance, associate it with ec2 keypair 'key2'"
[09:28] <bladernr`> davecheney: you mentioned AWS environment variables earlier, and I can find ones to specify the aws API keys so juju can spawn instances, but I can't find one to specify those ec2 keypairs as described in the link I posted
[09:28] <bladernr`> davecheney: sorry, I'm really not trying to be obtuse or intentionally confusing...
[09:28] <bladernr`> it's just working out that way
[09:34] <freeflying> does juju core support exporting a whole deployed environment?
[12:32] <aethelrick> hi all, I've made a change to the postgresql charm that allows you to set an admin_ip configuration option which causes this IP to be added to the pg_hba.conf file... anyone want to check it out for sanity?
[12:33] <aethelrick> I'm not a python programmer... but it seems to work ok for me :)
[12:35] <mthaddon> aethelrick: do you have a merge proposal?
[12:36] <aethelrick> mthaddon, wastha? sorry, I'm new here... I started playing with juju yesterday for the first time
[12:36] <marcoceppi_> aethelrick: Do you have a launchpad account>
[12:36] <aethelrick> marcoceppi_, not yet... I'm sure I can make one though...
[12:37] <mthaddon> aethelrick: if you want to get a change made to the upstream charm, you'll need to create a merge proposal on launchpad and it'll then show up on http://manage.jujucharms.com/review-queue
[12:37] <marcoceppi_> aethelrick: that'd be the first place to start. Once you have a (free) account, you can push your version of the code to launchpad and people can review it
[12:38] <aethelrick> mthaddon, ok, thanks... will figure that out and get back to you when merge proposal is submitted
[12:38] <aethelrick> marcoceppi_, thanks :)
[12:38] <mthaddon> k
[12:57] <kenn> Hi guys, also just starting out with juju. Really like it so far, have experience with Chef on RightScale, this seems more ... sane
[12:58] <kenn> have a question though, I deployed a service which failed during install. I then ran destroy-service so I could deploy it to the machine again, but it's not going away. Is there a way to force the destruction of a service?
[12:59] <aethelrick> ok, I have made a merge request for postgresql
[13:00] <aethelrick> this is the branch I made... https://code.launchpad.net/~richard-asbridge/charms/precise/postgresql/postgresql-admin-ip
[13:01] <marcoceppi_> aethelrick: you were really close, you need to reverse the order
[13:02] <marcoceppi_> aethelrick: you want to merge lp:~richard-asbridge/charms/precise/postgresql/postgresql-admin-ip in to lp:charms/postgresql :)
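The merge direction marcoceppi describes can be sketched as follows. The branch names come from the conversation above; `bzr lp-propose-merge` is assumed here to be the Launchpad-plugin command for creating the proposal (the command is echoed rather than executed):

```shell
# Your branch is the SOURCE; the official charm branch is the TARGET
# you propose merging *into* -- not the other way around.
SOURCE="lp:~richard-asbridge/charms/precise/postgresql/postgresql-admin-ip"
TARGET="lp:charms/postgresql"
# From a local checkout of $SOURCE you would run:
echo "bzr lp-propose-merge $TARGET"
```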
[13:02] <marcoceppi_> kenn: welcome!
[13:02] <kenn> ah there we go, juju resolved did the trick
[13:02] <marcoceppi_> kenn: when a service is in an error state all future events (including destroy) are queued
[13:03] <kenn> thanks marcoceppi_
[13:03] <marcoceppi_> kenn: you have to resolve the error before continuing
[13:03] <marcoceppi_> kenn: ah, you got it, cool
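The recovery flow kenn stumbled onto, sketched with a hypothetical unit name (commands are echoed rather than executed, since they require a live juju environment):

```shell
UNIT="mysql/0"     # hypothetical unit stuck in an error state
SERVICE="mysql"
# 1. clear the error; juju then resumes processing queued events:
echo "juju resolved $UNIT"
# 2. the queued destroy can now proceed (or be issued afterwards):
echo "juju destroy-service $SERVICE"
```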
[13:04] <kenn> marcoceppi, actually just watched the charm schools on Local/LXC Provider, thought your name looked familiar. Great video thanks for that, it really cleared up a lot of small random things for me
[13:05] <marcoceppi> kenn: glad that worked out for you, there were a lot of odd things that popped up during that charm school, so I'm happy you found it helpful
[13:07] <marcoceppi> aethelrick: the merge looks good, I'd just change the default from 'None' to an empty string
[13:51] <Kab> hi, I'm trying to deploy to a VPS I have running ubuntu 12.04
[13:52] <jcastro> jamespage: ooh, tell me about this percona xtradb you've got going on
[14:06] <Kab> the urls in juju init are wrong
[14:08] <marcoceppi> Kab: what do you mean the URLs are wrong?
[14:09] <Kab> type in juju init
[14:09] <marcoceppi> Kab: right, I'm familiar with this, which provider are you trying to use?
[14:09] <Kab> they all give https://juju.ubuntu.com/get-started/ url
[14:09] <Kab> local
[14:09] <marcoceppi> jcastro: ^
[14:10] <marcoceppi> Kab: we just had a new website released, looks like they didn't properly 301 redirect the URLs
[14:10] <marcoceppi> Kab: https://juju.ubuntu.com/docs/config-local.html
[14:11] <jcastro> YARGH
[14:11] <jcastro> Kab: which URLs are you running into this?
[14:11] <marcoceppi> Kab: make sure you add the ppa:juju/stable before installing mongodb-server
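The install sequence marcoceppi suggests, sketched below (echoed rather than executed; whether the PPA carries a backported mongodb-server for your release is an assumption to verify):

```shell
PPA="ppa:juju/stable"
# Add the PPA *before* installing, so a compatible mongodb-server is picked up:
echo "sudo add-apt-repository $PPA"
echo "sudo apt-get update"
echo "sudo apt-get install juju-core mongodb-server"
```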
[14:11] <marcoceppi> jcastro: they're in the juju init output
[14:11] <marcoceppi> jcastro: and all over the internet
[14:11] <jcastro> oh
[14:12] <jcastro> just the root get-started
[14:12] <jcastro> got it
[14:12] <marcoceppi> jcastro: yeah, going to make a tool to prevent this from happening ever again
[14:12] <aethelrick> marcoceppi, hi, was away from desk, I will change the None to an empty string
[14:13] <marcoceppi> aethelrick: no worries, thanks. You'll also want to update the merge proposal, I sent instructions in your current merge request
[14:13] <aethelrick> marcoceppi, thanks :)
[14:13] <jcastro> marcoceppi: make sure it spams the planet when we break urls
[14:14] <marcoceppi> aethelrick: otherwise, the merge looks good. I'm on review this week so I might not get to it today, but it'll be looked at sometime this week
[14:14] <marcoceppi> jcastro: yeah, seriously
[14:16] <kurt_> marcoceppi: has that configuration for single provide been tested with maas?
[14:16] <jcastro> marcoceppi: they're already working on it
[14:16] <marcoceppi> kurt_: what do you mean by single provide been tested with maas?
[14:16] <jcastro> but still ... :-/
[14:17] <kurt_> marcoceppi: local rather
[14:17] <jcastro> marcoceppi: did you see the manual provisioning stuff on the list? I totally missed it until this morning
[14:17] <jcastro> marcoceppi: oh nm, I see you replied
[14:17] <marcoceppi> jcastro: I did!
[14:17] <marcoceppi> kurt_: you don't use the local provider with maas
[14:17] <marcoceppi> maas is its own provider :)
[14:18] <kurt_> but what about the case of trying to consolidate services when deploying?
[14:18] <kurt_> ie. --to
[14:18] <kurt_> I'm wondering if this is a more elegant solution for that
[14:19] <kurt_> so far I have not had a lot of success trying to consolidate services with 1.12
[14:24] <jamespage> jcastro, active/active mysql :-)
[14:24]  * jamespage disappears again
[14:24] <jcastro> whoosh!
[14:34] <gnuoy> hazmat, hi there, I'm not sure if you're the right person to ask but I have a pretty small mp to lp:juju-deployer/darwin that would be great to get landed if  you get a moment ( https://code.launchpad.net/~gnuoy/juju-deployer/darwin-fix-force-machine/+merge/183613 )
[14:39] <hazmat> gnuoy, noted
[14:39] <gnuoy> thanks
[14:39] <hazmat> gnuoy, at a conference today, but can tackle in this evening
[14:39] <gnuoy> that would be awesome, thank you
[14:41] <arosales> hazmat, should evilnickveitch take pythonhosted.org/juju-deployer for instructions on juju deployer for the juju docs?
[14:42] <hazmat> arosales, there's not much there, src for that is lp:juju-deployer && cd docs
[14:43] <jcastro> marcoceppi: redirect fixed
[14:44] <marcoceppi> jcastro: \o/
[14:47] <arosales> hazmat, ok we'll start with the basics. From there we can get your feedback,
[14:47] <arosales> hazmat, if you have an outline on what you would like the docs to look like evilnickveitch can also build from there.
[14:48] <arosales> jcastro, were you still working on the django workflow for the docs
[14:49] <jcastro> I just mailed the guys
[14:49] <jcastro> bruno is full up on work, waiting to see what patrick says
[14:49] <arosales> wedgwood, would you be interested in getting some docs to evilnickveitch for charm helpers?
[14:49] <arosales> wedgwood, it can be a rough outline and evilnickveitch can wordsmith
[14:49] <wedgwood> I'm definitely interested...
[14:50] <wedgwood> I'm afraid I won't have time to devote until tomorrow afternoon.
[14:52] <evilnickveitch> wedgwood, that would be cool
[14:52] <arosales> wedgwood, thanks, even if you have a rough outline to evilnickveitch by end of week that would be helpful
[14:53] <arosales> marcoceppi, to confirm are you still working on how to upgrade a charm for the docs?
[14:53] <wedgwood> I'll try my hardest
[14:53] <arosales> wedgwood, thanks
[14:53] <marcoceppi> arosales: yes
[14:54] <arosales> marcoceppi, thanks and also the juju plugin bits too, correct?
[14:54] <marcoceppi> arosales: correct
[14:56] <arosales> marcoceppi, thanks
[15:19] <Kab> the lxc install script is broken
[15:20] <Kab> it does not fully install
[15:37] <marcoceppi> Kab: care to elaborate?
[15:58] <kurt_> marcoceppi: I'm picking up where I left off from about a week and a half ago.  I mentioned I was having problems destroying services after a failed deployment.  I believe you said I need to resolve the error prior to destroying service - is that correct?
[15:59] <marcoceppi> kurt_: that's correct, you can run the destroy command at any time - it's not until the unit is resolved of its error that the next events will be processed
[15:59] <kurt_> macroceppi: and if I cannot actually resolve the problem? does it matter?
[16:00] <kurt_> will I still be able to successfully destroy the service?
[16:02] <marcoceppi> kurt_: so you can run `juju resolved` and that just tells juju "pretend like I've resolved this issue"
[16:02] <marcoceppi> kurt_: it doesn't actually validate whether you've resolved the issue or not
[16:03] <marcoceppi> it just marks the error as fixed and moves on to the next event
[16:04] <kurt_> marcoceppi: ah ok, so the only requirement is to successfully mark the problem as resolved (presumably in the mongodb) and then the service can be successfully destroyed.
[16:05] <kurt_> marcoceppi: a question related to this, when using the gui, does it take care of all of this in the background? or is there some similar manual intervention.
[16:06] <marcoceppi> kurt_: right, by marking it resolved juju just continues execution of queued events, which in this case is the service destroy
[16:06] <marcoceppi> kurt_: there is both a "resolved" and a "retry" button on the unit screen in the gui. The first just runs `juju resolved`, the second runs `juju resolved --retry`
[16:07] <marcoceppi> no further manual intervention is required
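The distinction between the two buttons, as CLI commands (echoed rather than executed; the unit name is hypothetical):

```shell
UNIT="postgresql/0"   # hypothetical unit in an error state
echo "juju resolved $UNIT"          # mark resolved; skip the failed hook
echo "juju resolved --retry $UNIT"  # mark resolved and re-run the failed hook
```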
[16:08] <kurt_> marcoceppi: ok, thank you.  This is a workflow change from previous versions of juju.  I think it should be documented, or perhaps commented in an error condition or something.  This was a source of confusion.
[16:09] <marcoceppi> kurt_: it's pseudo-documented. Previously you could destroy services in an error state
[16:09] <kurt_> it's not clear that you have to resolve the problem before a service can be destroyed
[16:09] <kurt_> yes, why can't that be supported?
[16:09] <kurt_> that would make life easier
[16:10] <marcoceppi> kurt_: that's no longer the case as you see. However, there is this section https://juju.ubuntu.com/docs/charms-destroy.html which says to see the troubleshooting guide
[16:10] <marcoceppi> kurt_: unfortunately that troubleshooting link is broken
[16:12] <kurt_> marcoceppi: it is broken :)
[16:12] <marcoceppi> kurt_: when it shows up again, that'll be fixed
[16:12]  * marcoceppi files a bug
[16:12] <kurt_> marcoceppi: is there a technical reason one should not be able to directly destroy service like before?
[16:13] <marcoceppi> kurt_: it's the way events are processed by newer versions of juju
[16:13] <kurt_> is it a problem with state tracking of the service and relations?
[16:13] <kurt_> yeah ok
[16:13] <marcoceppi> kurt_: it's not a problem, but a fix to another issue
[16:13] <marcoceppi> in order to destroy a service you need to fire the stop hook
[16:14] <marcoceppi> kurt_: in earlier versions of juju, when you destroyed a service it basically just removed it from the juju topology and that was that. No stop hook would fire, so it didn't matter. To fix that there's a dying state now, which says this unit is to be removed after all queued events have completed
[16:15] <marcoceppi> kurt_: So, if you're in an error state, all hook executions are stopped and queued, including the stop hook and the destroy event
[16:15] <marcoceppi> kurt_: so this "problem" is actually the way juju is supposed to work
[16:15] <marcoceppi> it's just a fix to a long standing bug :)
[16:15] <kurt_> I see.
[16:16] <kurt_> it's just an interesting condition to leave things in an unending state
[16:17] <kurt_> I guess in a particular use case
[16:17] <marcoceppi> kurt_: not so much unending, everything just pauses when there's an error
[16:18]  * kurt_ thinking
[16:19] <kurt_> marcoceppi: maybe it's the terminology "dying" that is getting to me. that implies action when the process is actually paused.
[16:19] <marcoceppi> kurt_: well it is /dying/ just how long it takes to die is up to you :)
[16:19] <kurt_> marcoceppi: right.  Maybe I'm too caught up in my own confusion.  LOL
[16:21] <kurt_> marcoceppi:  do you have a cached copy of that troubleshooting guide somewhere?
[16:26] <marcoceppi> kurt_: I don't think it ever existed. I'm looking through revision history now
[16:27] <marcoceppi> evilnickveitch: ^
[16:28] <kurt_> thanks
[16:34] <marcoceppi> evilnickveitch: also, where should the charm-tools documentation live? I don't know what section to put it under
[16:36] <evilnickveitch> marcoceppi, I am adding a new section for it and other tools
[16:37] <marcoceppi> evilnickveitch: \o/ cool. I'll just keep adding pages then to add to that section later
[16:38] <evilnickveitch> kurt_, the troubleshooting page isn't live, but marcoceppi  has given you a good summary of what it has to say on the subject
[16:45] <jamespage> jcastro, just answered http://askubuntu.com/questions/343349/installing-ceph-and-ceph-osd-charms-on-same-machine/343883
[16:45] <jamespage> we really need a way to say - "don't do this" - in a way that juju can enforce
[16:47] <kurt_> evilnickveitch: thanks. I will look forward to that working.
[16:51] <X-warrior> If I'm using the postgresql charm with volume-ephemeral-storage as false and volume-map as "{ postgresql/0: S3 volume-id }", do I need to attach the volume to the instance? Or will the charm/juju handle it for me?
[16:52] <marcoceppi> X-warrior: juju can't do any volume attaching IIRC, you'll need to do it yourself
[16:57] <marcoceppi> evilnickveitch: what would "juju plugins" go under?
[16:58] <evilnickveitch> marcoceppi, I guess the same section as charm tools and amulet?
[16:59] <marcoceppi> evilnickveitch: well this is "how to create plugins" as well as install plugins
[16:59] <marcoceppi> evilnickveitch: not sure if that changes anything
[17:00] <evilnickveitch> marcoceppi, I think for the time being we will group all those things together
[17:00] <marcoceppi> evilnickveitch: cool, what should I prefix the files with? reference-* ?
[17:01] <evilnickveitch> I think "tools-", it will be a new section
[17:15] <fwereade_> marcoceppi, evilnickveitch, jcastro: https://code.launchpad.net/~fwereade/juju-core/docs-splurge/+merge/184833 has quite a lot of new stuff
[17:15] <fwereade_> marcoceppi, evilnickveitch, jcastro: sorry it took so long
[17:16] <jcastro> better late than never!
[17:16] <evilnickveitch> fwereade_, cool! thanks for that, i will look it over tomorrow
[17:35] <X-warrior> is it possible to use a persistent storage on mysql config? Similar to postgresql? I don't think so, but maybe I'm missing something
[17:36] <marcoceppi> X-warrior: not that I'm aware of
[17:53] <X-warrior> marcoceppi: last question for the day (hopefully), is it possible to use elastic ip with juju?
[17:54] <marcoceppi> X-warrior: you can assign an elastic ip to your instances, via the EC2 control panel. It won't affect juju at all. However, there is no way to do this from within juju at this time
[17:54] <marcoceppi> * that I'm aware of
[18:13] <X-warrior> marcoceppi: Does this elastic ip control fit in juju?
[18:13] <X-warrior> I mean, this type of feature is in juju 'scope'?
[18:14] <marcoceppi> X-warrior: I'm not sure if it's something on the road map or not. I know juju can kind of work with floating-ips using OpenStack, but I'm not sure if there are plans to manage elastic ips with ec2
[18:15] <X-warrior> marcoceppi: well I'm willing to add it to juju, but before I start something I would like to check if it is something that could be used inside juju or not, because if it is not, I guess I can go with ec2 control panel
[18:15] <marcoceppi> X-warrior: check with #juju-dev that's where all the core developers hang out
[18:15] <X-warrior>  ty
[18:15] <X-warrior> :D
[18:38] <kurt_> Is 1.13.3 tested against maas yet?  I was trying to figure out where to download
[18:51] <marcoceppi> kurt_: 1.13.3 is in ppa:juju/devel
[18:53] <kurt_> thanks :)
[18:55] <marcoceppi> kurt_: as it stands now all 1.EVEN releases are in juju/stable and all 1.ODD releases are in juju/devel - the versioning follows the linux kernel where odds are devel and evens are stable
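The even/odd rule marcoceppi describes can be sketched as a tiny helper that maps a version string to the matching PPA. Purely illustrative; the PPA names come from the conversation:

```shell
# Kernel-style versioning: even minor version -> stable, odd -> devel.
channel() {
  minor=${1#*.}       # strip the leading "1."
  minor=${minor%%.*}  # keep only the minor component
  if [ $((minor % 2)) -eq 0 ]; then
    echo "ppa:juju/stable"
  else
    echo "ppa:juju/devel"
  fi
}
channel 1.12.0   # -> ppa:juju/stable
channel 1.13.3   # -> ppa:juju/devel
```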
[18:56] <kurt_> marcoceppi: thanks.  Don't particular app release levels follow the different ubuntu release levels too?  Like isn't 1.14 for raunchy rabbit or whatever it's called?
[18:57] <kurt_> rearing rhinoceros or whatever
[18:57] <sarnold> kurt_: that's only packages in the archive (it'd be juju 0.7-0ubuntu1 in the raring ringtail archive)
[18:57] <sarnold> kurt_: ppas have no rules
[18:58] <marcoceppi> kurt_: right now the only version of juju in raring, 13.04, is 1.10 and that's in backports
[18:58] <marcoceppi> kurt_: we've got 1.14 in saucy, 13.10, but the PPA will move forward regardless of the version in the archives
[18:58] <marcoceppi> well, we have 1.13.3 in saucy, but that is essentially 1.14
[18:58] <kurt_> ok, still learning the release level stuff
[18:58] <sarnold> marcoceppi: hrm, saucy still has pyjuju! https://launchpad.net/ubuntu/+source/juju
[18:59] <marcoceppi> sarnold: don't look at me!
[18:59] <sarnold> jcastro: hey can I give you the funny look? :) ^^^
[18:59] <marcoceppi> sarnold: it looks like juju points to 1.13.3, there's a juju-0.7 in saucy
[18:59] <marcoceppi> but I think that's just for backwards compat?
[19:00] <sarnold> marcoceppi: hrm... I wonder what I'm missing here.
[19:00] <jcastro> yeah
[19:00] <jcastro> 0.7 is there for people who want it
[19:00] <jcastro> but apt-get install juju does the right thing
[19:00] <sarnold> oh good :) crisis averted :D
[19:01] <sarnold> thanks guys
[19:01] <jcastro> sarnold: jamespage has handled everything, it's lovely
[19:20] <kurt_> is there a problem with sync-tools in 1.13.3?
[19:21] <kurt_> http://pastebin.ubuntu.com/6089411/
[19:21] <kurt_> s/http://pastebin.ubuntu.com/6089411//
[19:22] <kurt_> http://pastebin.ubuntu.com/6089417/
[19:23] <kurt_> sync-tools don't appear to be downloading correctly
[19:36] <kurt_> nevermind: glitch passed apparently
[20:07] <kurt_> Is there a later version (but somewhat stable version) of the juju-gui I could be testing with?
[20:08] <kurt_> currently working with charm: cs:precise/juju-gui-76
[20:08] <kurt_> or 0.9.0
[22:55] <lamont> 2013-09-10 21:51:28 ERROR juju supercommand.go:235 command failed: no tools available
[22:55] <lamont> clearly, I missed something simple.
[22:55] <lamont> jcastro: around? ^^
[22:57]  * lamont tries the "juju sync-tools" route
[22:58] <sarnold> lamont: while you're waiting for an expert to weigh in :) I believe I've read that the tools have to be in an s3/swift/etc bucket -- is that configured properly in the environments.yaml?
[22:58] <lamont> sarnold: well.
[22:58] <lamont> this a new everything
[22:58] <lamont> so, "wrong" is a rather likely scenario
[22:59] <sarnold> lamont: hehe :)
[22:59] <lamont> found 0 tools in target; 8 tools to be copied
[22:59] <sarnold> lamont: any luck with sync-tools?
[22:59] <lamont> 2 down, 6 to go
[22:59] <sarnold> promising :)
[23:07] <lamont> error: cannot start bootstrap instance: no "precise" images in co-01 with arches [amd64 i386]
[23:07] <lamont> now to figure out what the magic names are that it looks for
[23:25] <lamont> public-bucket-url: <URL TO JUJU-DIST BUCKET> <-- I would love to know how to construct that URL
[23:30] <thumper> lamont: hey there
[23:31] <thumper> lamont: luckily wallyworld is spending his time making this better
[23:31] <lamont> that will be wonderful later.
[23:31] <lamont> I'm hoping for "finally, it worked" tonight.
[23:31] <thumper> lamont: I think wallyworld may be able to help now if he is around
[23:31] <lamont> and afk for a goodly while, sadly.
[23:31] <wallyworld> lamont: you need to be given the public bucket by the cloud admin. but it is going away soon hopefully
[23:32] <wallyworld> lamont: you get the url by looking at the keystone endpoint
[23:44] <lamont> wallyworld: I am the cloud admin.  can you provide me with clue?
[23:44]  * lamont has about 10 minutes before he disappears again
[23:45] <wallyworld> lamont: the url is the public endpoint url for the world readable container which has been created in order to hold the tools tarballs
[23:46] <wallyworld> lamont: eg for canonistack, it is https://swift.canonistack.canonical.com/v1/AUTH_526ad877f3e3464589dc1145dfeaac60
[23:47] <wallyworld> does that make sense?
[23:47] <lamont> almost
[23:47] <wallyworld> is this for a new cloud?
[23:47] <lamont> there's the origin of that token, and how to create a public bucket, that remain beyond my experience and understanding
[23:47] <lamont> yes
[23:47] <lamont> private cloud
[23:48] <wallyworld> you can use the swift client to create a container
[23:48] <wallyworld> and then mark it as world readable
[23:48] <wallyworld> i'd have to look up the exact commands
[23:49] <wallyworld> once you have the juju-dist container, you then do swift post -r .r:*,.rlistings juju-dist
[23:50] <wallyworld> is there juju doc somewhere for how to set up an openstack deployment? i'm not across what our tech writer has produced
[23:51] <wallyworld> but basically you need to set up a swift account, create a juju-dist container, make it world readable, and then add tools tarballs to a tools sub-container
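The steps wallyworld outlines, consolidated into one sketch (commands are echoed rather than executed, since they need a real swift account; the tarball filename is an example):

```shell
CONTAINER="juju-dist"
TARBALL="juju-1.12.0-precise-amd64.tgz"   # example tools tarball name
echo "swift post $CONTAINER"                        # create the container
echo "swift post -r '.r:*,.rlistings' $CONTAINER"   # make it world readable
echo "swift upload $CONTAINER tools/$TARBALL"       # add a tools tarball
```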
[23:51] <lamont> I've found at least 2 docs claiming to describe at least bits of it, without actually working correctly for me.
[23:51] <wallyworld> :-(
[23:51] <lamont> swift stat juju-dist gives me the account AUTH_${string} and
[23:51] <lamont> Container: juju-dist
[23:51] <lamont>   Objects: 3
[23:51] <lamont>     Bytes: 1139
[23:51] <lamont>  Read ACL: .r:*,.rlistings
[23:52] <wallyworld> that looks ok i think
[23:52] <lamont> if I am silly enough to think that https://swift.$mumble/v1/AUTH_$string should give me more than a 401 when I smack it with wget, well, I get the 401
[23:52] <wallyworld> if you do a keystone catalog you can see the full url
[23:53] <wallyworld> that will give a 401 i think, you need to type the url of an object in the container
[23:53] <wallyworld> i think wget of the top level does give a 401
[23:54] <wallyworld> there should be files in juju-dist called "tools/juju-blah.tar.gz"
[23:54] <wallyworld> wget on those should work
[23:55] <wallyworld> for canonistack, this is wget able - https://swift.canonistack.canonical.com/v1/AUTH_526ad877f3e3464589dc1145dfeaac60/juju-dist
[23:55] <wallyworld> as are the tools therein
[23:56] <hatch> hey guys I'm getting an error with a newly installed juju-core from /stable when I try to `juju --version` it says `error: flag provided but not defined: --version`
[23:56] <hatch> it's on 12.04
[23:56] <lamont> so... if the account is AUTH_nnnnn and juju-dist is the container, and it has tools/juju-1.12.0-precise-amd64.tgz, then the url is?
[23:56] <lamont> assuming https://swift.foo.com/v1/
[23:57] <wallyworld> lamont: the public-bucket-url would be https://swift.foo.com/v1/AUTH_nnnnn
[23:57] <wallyworld> see the canonistack example above
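Putting wallyworld's pieces together (all values are placeholders from the conversation, not a real deployment):

```shell
# public-bucket-url = swift public endpoint + the tenant's AUTH_ account.
ENDPOINT="https://swift.foo.com/v1"   # from `keystone catalog`
ACCOUNT="AUTH_nnnnn"                  # the Account: field shown by `swift stat`
PUBLIC_BUCKET_URL="$ENDPOINT/$ACCOUNT"
echo "$PUBLIC_BUCKET_URL"
# An individual tools tarball is then fetched from:
echo "$PUBLIC_BUCKET_URL/juju-dist/tools/juju-1.12.0-precise-amd64.tgz"
```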
[23:58] <lamont> error: cannot start bootstrap instance: no "precise" images in co-01 with arches [amd64]
[23:58] <wallyworld> lamont: you now need to set up some image metadata so juju knows the image id to use
[23:58] <wallyworld> there is a tool for that
[23:59] <wallyworld> juju metadata generate-image i think
[23:59] <wallyworld> you need to know the image id you want to use for amd64 precise
[23:59] <wallyworld> and then you run the above tool. see the --help