[06:33] <freeflying> what is the appropriate name of the bootstrap node?
[06:34] <davecheney> freeflying: that is what we call it
[06:35] <freeflying> davecheney, bootstrap node?
[06:35] <davecheney> yup
[06:35] <davecheney> sometimes it's called the state server
[06:35] <davecheney> but that isn't very accurate
[06:35] <freeflying> davecheney, cool, no matter whether it's the python version or the go version :)
[06:36] <davecheney> freeflying: we did keep _some_ things the same :)
[06:36] <freeflying> davecheney, nice approach I'd say
[06:37] <freeflying> davecheney, especially since it makes our lives easier when writing documents :D
[11:40] <varud> Anybody have experience dealing with the following: agent-state-info: 'hook failed: "config-changed"'
[11:41] <varud> It's a chronic problem I've been experiencing after reboots on a local juju installation both on precise and raring
[12:14] <marcoceppi> varud: yes, it means that a hook was executed but it failed during execution (exited with a status > 0)
[12:15] <marcoceppi> varud: you can run `juju resolved --retry <unit>` to re-run the hook. If it continues to error then you can either ignore it or re-deploy the service (ignore it with `juju resolved <unit>` (without --retry))
[12:31] <varud> thanks, trying that out now
[14:17] <rick_h> featured for flag bearer!
[14:17] <rick_h> jcastro: ^
[14:17]  * rick_h is catching up on the video
[14:18] <marcoceppi> rick_h: thanks!
[14:18] <rick_h> marcoceppi: yea, we've got the manual feature to mark as 'featured' and they're shown at the top of the gui. Great place to put flag bearers
[14:19] <marcoceppi> rick_h: I think featured and flag bearer are slightly different
[14:19] <marcoceppi> rick_h: in the end we decided not to display flagbearer in the gui
[14:19] <marcoceppi> a flag bearer may or may not be featured
[14:19] <jcastro> I will certainly feature any charm we flagbear
[14:19] <rick_h> marcoceppi: yea, understood
[14:19] <mattgriffin1> #join #ubuntu-uds-servercloud-2
[14:41] <X-warrior> 'juju -v add-relation postgresql test-charm'... returns me 'error: no relations found'. My metadata.yaml has requires: database: interface: postgresql. What is wrong?
[14:52] <mattgriffin1> jcastro: watching video for Hangout for Flag Bearer Charms. sorry i missed it… busy morning. re: percona xtrabackup.. i'm still trying to get internal resources to assist
[14:52] <marcoceppi> X-warrior: can you pastebin your metadata.yaml file?
[14:52] <X-warrior> yes I can
[14:52] <X-warrior> just a sec
[14:52] <marcoceppi> X-warrior: np
[14:54] <X-warrior> http://pastebin.com/WUNCxJfx
[14:54] <jcastro> mattgriffin: no worries, thanks for the follow up!
[14:55] <marcoceppi> X-warrior: if you review the postgresql's metadata file, http://bazaar.launchpad.net/~charmers/charms/precise/postgresql/trunk/view/head:/metadata.yaml, it provides a pgsql interface. You'll need to make sure your interfaces match. So instead of "postgresql" as the interface, use pgsql
[14:55] <marcoceppi> X-warrior: charms can provide/require multiple relations over multiple interfaces. Interfaces are the only thing* juju cares about when matching a relation
[14:56] <marcoceppi> * unless there's an ambiguous interface match, in which case you'll need to provide the corresponding relation endpoint
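To illustrate the interface-matching rule described above, a minimal metadata.yaml for the consuming charm might look like this (the charm name, summary, and relation name here are placeholders; only the pgsql interface value comes from the postgresql charm's own metadata):

```yaml
# Hypothetical client-side metadata.yaml. The relation name ("database")
# is free-form, but the interface must match postgresql's provided "pgsql".
name: test-charm
summary: Example charm that consumes a PostgreSQL database
description: Illustration only.
requires:
  database:
    interface: pgsql
```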
[14:57] <mattgriffin> jcastro: np
[14:59] <X-warrior> are database and db the same? or should I change it in my charm to db instead of database?
[15:01] <marcoceppi> X-warrior: that naming doesn't matter so much
[15:02] <X-warrior> ok
[15:02] <marcoceppi> X-warrior: you can have it be database, db, one-thousand-suns; it's only used to name hooks within your charm and to potentially remove ambiguity from relations
[15:02] <X-warrior> and to get postgresql working, I MUST add a directory-path to it? I see the requires: persistent-storage: interface: directory-path
[15:07] <stub> X-warrior: No, in this case requires is optional :-/ The naming there isn't the best.
[15:07] <marcoceppi> X-warrior: no, requires and provides are a bit of a misnomer
[15:07] <marcoceppi> X-warrior: all relations are inherently optional
[15:08] <X-warrior> it would be nice to have a distinction between required and optional relations... you could then keep them separated, since some relations are probably mandatory for the service to work (example: mysql for wordpress)
[15:10] <marcoceppi> X-warrior: the charm should be able to operate at any time without any or all relations added
[15:10] <marcoceppi> any caveats need to be in the README
[15:15] <X-warrior> ty
[15:15] <X-warrior> :D
[16:49] <X-warrior> How could I use a private git repository on install? I saw the vanilla example, but if the repository is closed there are some problems with keys and stuff.
[16:52] <marcoceppi> X-warrior: you'd have to have config options to provide authentication methods for that repo
[16:53] <marcoceppi> X-warrior: so either an SSH key that you can provide, or a user/pass combo for the repo, etc
[17:10] <X-warrior> marcoceppi: is it possible to pass parameters on deploy? I mean, the install hook will need this user/pass information... but I don't want to hard code it on charm, so I would like to pass it as parameter. Since the service is not running yet, I can't use 'set' I guess
[17:12] <X-warrior> I can see the deploy --config option, but with that I will need to hard code the user/password on config.yaml file.
[17:19] <sidnei> X-warrior: this config.yaml you pass to deploy --config is not the config.yaml of the charm itself, but a local .yaml file with a different structure
[17:19] <X-warrior> yeap
[17:22] <mrsolo> hi how do i force destroy a machine?
[17:22] <sidnei> mrsolo: juju terminate-machine, but it must have no services on it anymore
[17:24] <mrsolo> hm destroy-unit won't do?
[17:24] <X-warrior> sidnei: how could I access this config vars inside a hook? `juju get service name`?
[17:24] <X-warrior> iirc destroy-unit is to destroy units created by add-unit
[17:29] <sidnei> mrsolo: remove-unit removes a specific unit from a service, if that was the last unit in a machine you can then terminate-machine
[17:29] <sidnei> X-warrior: config-get name
[17:30] <sidnei> X-warrior: the variable *has* to be defined in the service's config.yaml, think of that as the 'schema' for your config, which defines a default value and the type of the config key
[17:30] <mrsolo> sidnei, thanks..
[17:30] <mrsolo> just did remove service, does it take a long time for the service to be removed?
[17:31] <X-warrior> http://pastebin.com/2sqw4FL4
[17:31] <X-warrior> like this?
[17:31] <X-warrior> and then git-key=`config-get git-key`  ?
[17:32] <sidnei> X-warrior: yup
[17:33] <X-warrior> I will try
[17:33] <X-warrior> ty
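Putting the config-get advice above together, a sketch of the charm-side config.yaml "schema" for the git-key option being discussed (the description text and empty default are assumptions, not from the conversation):

```yaml
# Charm config.yaml: declares git-key as a valid string option,
# acting as the schema that config-get reads from.
options:
  git-key:
    type: string
    default: ""
    description: SSH private key used to clone the private git repository.
```

A hook could then read the value with `git_key=$(config-get git-key)`, as in the exchange above.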
[17:34] <marcoceppi> mrsolo: does it have an error in juju status?
[17:34] <marcoceppi> it doesn't take that long
[17:38] <mrsolo> MACscr, no error.. the machine is listed as dying  so i wonder if it got into the state that database entry remove is not possible
[17:38] <mrsolo> https://juju.ubuntu.com/docs/troubleshooting.html#die <- i got into this state.. and that link is broken..hah
[17:39] <mrsolo> http://nopaste.info/6faf012158.html
[17:40] <marcoceppi> mrsolo: it's in an error state, agent-state-info: '(error: hook failed: "install")'
[17:41] <mrsolo> yes how do i correct that.. that instance is totally gone
[17:41] <mrsolo> i forced wipe it from ec2
[17:41] <marcoceppi> mrsolo: run `juju resolved jenkins/0`
[17:41] <marcoceppi> mrsolo: Oh, so you took the nuclear option. Not sure if you'll be able to remove it at this point.
[17:42] <mrsolo> nice
[17:43] <mrsolo> ya i did the nuclear option :-)
[17:43] <marcoceppi> mrsolo: it doesn't hurt anyone/thing at this point, just muddies up the status output
[17:43] <mrsolo> ya something to know.. so if i want to wipe the entire lab.. do i need to regenerate environments.yaml?
[17:43] <marcoceppi> for future reference, if it's in an error and you're trying to destroy, running `juju resolved <unit>` will move the hook execution along. When an error occurs, juju stops and queues all future events (including the destroy events)
[17:44] <marcoceppi> mrsolo: if you want to remove the environment, just `juju destroy-environment`
[17:44] <marcoceppi> that will delete and tear down everything (including bootstrap)
[17:44] <marcoceppi> but you won't have to change your environments.yaml
[17:44] <marcoceppi> you can then run juju bootstrap and generate a clean environment to work with again
[17:44] <mrsolo> okay neat
[17:45]  * marcoceppi records that we need a troubleshooting guide for the docs, with how to destroy a service in error state
[18:07] <X-warrior> config.yaml
[18:07] <X-warrior> ops
[18:07] <X-warrior> sorry
[18:21] <X-warrior> Can I create my keys on config.yaml? I mean, I would like to add a git-key to it, but when I use the git-key: | option, it returns an error
[18:21] <X-warrior> and if I remove the | and use just a regular string I receive this output "error: unknown option "git-key""
[18:22] <sidnei> X-warrior: unkown option means the charm's config.yaml doesn't define that key
[18:22] <sidnei> X-warrior: it needs to know that it's a valid option and that its type is 'string'
[18:24] <marcoceppi> X-warrior: so you'll have to have git-key in your config.yaml for the charm in the charm directory, but you don't have to give it a default value, you can leave it as an empty "" for the default. Then you can create a separate configuration file (maybe call it deployment.yaml) that you can keep outside of the charm and fill in the git-key configuration option
[18:27] <marcoceppi> X-warrior: an example is the phpmyadmin charm, which requires you to set a password for the user http://bazaar.launchpad.net/~charmers/charms/precise/phpmyadmin/trunk/view/head:/config.yaml however, when I deploy it I have another yaml file in my home folder with the following: http://paste.ubuntu.com/6037542/ that I call deploy.yaml. So I can do things like `juju deploy --config ~/deploy.yaml phpmyadmin` and it'll get those three values set at deploy time
[18:29] <marcoceppi> https://juju.ubuntu.com/docs/charms-config.html for additional reference
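As a sketch of the deploy-time pattern described above (the service name and key material here are placeholders), the separate deployment file maps the service name to its option values:

```yaml
# deploy.yaml, kept outside the charm; the top-level key is the service name.
test-charm:
  git-key: |
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----
```

`juju deploy --config ~/deploy.yaml test-charm` would then set git-key at deploy time without hard-coding the secret in the charm itself.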
[19:30] <adam_g> wedgwood, sidnei any objections to merging darwin into lp:juju-deployer?
[19:34] <wedgwood> adam_g: not from me. I've been using darwin exclusively for a while. you should ask mthaddon though
[19:34] <adam_g> wedgwood, thanks
[19:35] <adam_g> mthaddon, ^^
[19:35] <wedgwood> adam_g: he's EoD, so an email might be best
[19:35] <adam_g> k
[20:13] <sidnei> adam_g: +1
[20:18] <marcoceppi> adam_g: I'd love to see juju deployer in the stable ppa too, if you're going to be making the merge
[20:18]  * marcoceppi throws 2¢ around
[20:24] <jcastro> adam_g: yes please!
[20:24] <jcastro> deployer in the stable ppa!
[20:40] <sidnei> c'mon, what's wrong with saucy? :)
[20:40] <sidnei> jcastro, marcoceppi: which distros you care about precise only or all in between?
[20:42]  * sidnei splashes some commas around
[20:42] <sidnei> i think i *can* upload to the ppa, but i need someone to tell me if i *should* do it or not
[20:44] <sidnei> maybe it should go into ppa:juju/pkgs with amulet and all that?
[20:44] <sidnei> in fact, there's a version there, it's just that it's fairly old
[20:46] <marcoceppi> sidnei: precise raring saucy
[20:46] <marcoceppi> sidnei: no, it needs to go in to ppa:juju/stable
[20:46] <marcoceppi> sidnei: per cross team discussion
[20:50] <sidnei> marcoceppi: so it's agreed on already?
[20:50] <marcoceppi> sidnei: correct
[20:51] <sidnei> marcoceppi: ok, i'll trigger a backport from saucy then
[20:51] <marcoceppi> sidnei: thanks
[20:53] <sidnei> marcoceppi: no quantal?
[20:53] <marcoceppi> sidnei: oh yeah, all current releases please :)
[20:59] <sidnei> marcoceppi: all pending build, starting soonish: https://launchpad.net/~juju/+archive/stable/+builds?build_text=&build_state=all
[20:59] <marcoceppi> sidnei: thanks
[20:59] <marcoceppi> sidnei: could you stick the python-jujuclient api thing in there too?
[20:59] <marcoceppi> so deployer works
[20:59] <sidnei> marcoceppi: i take it that you haven't looked at the url :)
[21:00] <marcoceppi> sidnei: what, click on things? nah
[21:00] <sidnei> in other words, yes, done
[21:00] <marcoceppi> sidnei: <3 thanks
[21:24] <sidnei> marcoceppi: all done
[21:24] <sidnei> well, 'Binary packages awaiting publication'
[21:25] <marcoceppi> sidnei: Thank you sir!
[23:07] <weblife> can someone help me figure out why I am getting an error with a mongo shell script on the install hook: http://paste.ubuntu.com/6038371/
[23:08] <weblife> when I ssh into it I can load the mongo shell
[23:08] <weblife> the service is up and running
[23:09] <weblife> I know the bash script works locally with the same version and install
[23:10] <weblife> The error response is below the script also
[23:10] <sarnold> weblife: sudo sudo sudo ... won't this thing run as root?
[23:12] <weblife> sarnold: I figured it wouldn't hurt just in case to have it there.  That could be the issue you think?
[23:12] <sarnold> weblife: probably not the issue, you -do- get a mongo attempt to connect after all..
[23:13] <weblife> Do I actually need to open that port you think?
[23:13] <sarnold> weblife: I think I'd throw a netstat -alp in there before the mongo << EOF ... -- see if the socket is open yet?
[23:13] <weblife> sarnold: I wouldn't think so due to it being local but I could be wrong.
[23:14] <sarnold> weblife: 'service' is going to return nearly immediately, the service may not yet be running?
[23:14] <weblife> sarnold: looks like we're on the same page.  Will try.
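One way to act on that last point, sketched here as a rough idea rather than the actual fix weblife used: since `service` returns almost immediately, the install hook could poll the TCP port (assuming mongod's default of localhost:27017) until it accepts connections before running the `mongo <<EOF` block:

```shell
#!/bin/bash
# Sketch: wait until a TCP port accepts connections, or give up.
# Host, port, and timeout values are assumptions for illustration.

wait_for_port() {
  local host=$1 port=$2 timeout=${3:-30} i=0
  # bash's /dev/tcp pseudo-device attempts a TCP connection on fd 3;
  # the subshell closes the connection again as soon as it succeeds.
  while ! (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; do
    i=$((i + 1))
    if [ "$i" -ge "$timeout" ]; then
      return 1   # gave up: port never opened within the timeout
    fi
    sleep 1
  done
  return 0
}
```

In the install hook this could run as `wait_for_port localhost 27017 60 || exit 1` just before the mongo shell script; the `netstat -alp` suggestion is still useful for one-off debugging, but polling avoids the race outright.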