[00:12] If I'm making a charm for an (HTTP) application that has a setup step that needs to run once mongo is available but before the app can be started, I assume that mongodb-relation-joined is the appropriate place to do that?
[00:15] cory_fu: I think that would be appropriate. You would need to ensure it is idempotent though
[00:16] ie,
[00:22] Not sure if I missed your reply after "ie,"
=== arosales_ is now known as arosales
[00:24] cory_fu, python-django may have a good example of a relation-joined for mongo
[00:32] cory_fu, yes.. that's the place... joined is a bit early if you depend on getting any settings from mongodb
[00:33] How do you mean?
[00:39] cory_fu, I was going to reference python-django, but I realize that is a "sym-link" charm
[00:39] "sym-link" == all the hook files point to a central file
[00:40] Yeah, I figured that out. mongodb_relation_joined_changed was still helpful. :-)
[00:42] * arosales was just looking to see if https://bazaar.launchpad.net/~charmers/charms/precise/python-django/trunk/view/head:/hooks/hooks.py#L659
[00:42] was helpful
[00:43] cory_fu, I think what hazmat is saying is that on relation-joined you can expect mongoDB to _start_ getting the DB hooked up. So if you need an action to happen before mongo is set up, then this would be a good place
[00:43] As long as you don't need mongo up and running at that time.
[00:43] cory_fu, helpful?
[00:43] But it would have the host and port settings, at least, right?
[00:43] What hook should I use if I need mongo up and running?
[00:46] depending on how much it needs to be up and running, you may be able to insert it toward the end of relation-joined; if not, you could leverage relation-changed
[00:47] * arosales looking at node.js as an example
[00:47] Also, Allura needs four mongo databases: task, activity, pyforge, and project-data. Is it ok to use those hard-coded names? The databases will need to be shared across allura instances, so I assume hard-coding them is ok, though they could conflict with other charms (especially task)
[00:47] http://bazaar.launchpad.net/~charmers/charms/precise/node-app/trunk/view/head:/hooks/mongodb-relation-changed
[00:48] ya, I don't see immediately why a user would need to name those DBs differently
[00:50] So, if relation-get private-address works, then mongo should be up and ready?
[00:52] Also, I notice that python-django uses host while the node-app one uses private-address; but I don't see either of those in the mongodb hook's relation_set
[00:55] cory_fu, should be
[00:55] re mongo being up
[00:55] * arosales looking at those code bases
[00:57] I was looking at this: http://bazaar.launchpad.net/~charmers/charms/precise/mongodb/trunk/view/head:/hooks/hooks.py#L1057
[01:05] cory_fu, I am looking around http://bazaar.launchpad.net/~charmers/charms/precise/mongodb/trunk/view/head:/hooks/hooks.py#L136 and django and node both seem to be getting the host
[01:12] Hrm. If my charm depends on mongodb, can it be assumed that the mongo command-line client is available? I'm guessing not, and I'd be better off using python
[01:17] Alternatively, is it reasonable to create some sort of flag in the installed location to track whether a needed-once setup step has been done?
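(For illustration, here is a minimal sketch of the kind of idempotent mongodb-relation-changed hook being discussed. The setup command, ini path, marker file, and "allura" service name are assumptions for the example, not taken from the actual charm.)

  #!/bin/bash
  # hooks/mongodb-relation-changed (sketch)
  set -e

  # Bail out quietly until the remote side has published the values we need;
  # -changed will fire again once they appear.
  host=$(relation-get private-address)
  port=$(relation-get port)
  if [ -z "$host" ] || [ -z "$port" ]; then
      juju-log "mongodb relation data not ready yet"
      exit 0
  fi

  # One-time, idempotent setup guard. A marker file is used here; checking
  # mongo directly for the databases would work just as well.
  MARKER=/var/local/allura/setup-done    # illustrative path
  if [ ! -f "$MARKER" ]; then
      # 'paster setup-app' and the ini path are assumptions about Allura's setup step.
      paster setup-app /var/local/allura/production.ini
      mkdir -p "$(dirname "$MARKER")"
      touch "$MARKER"
  fi

  service allura restart || service allura start    # hypothetical service name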
[01:18] cory_fu, you can specify in the install hook which packages you need to ensure are available
[01:19] Ah, yeah, of course
[01:19] cory_fu, sorry I didn't parse your last question
[01:21] :-) Instead of connecting to mongo and checking for the database's existence to know if the setup-app step has been run, I could do something like touch /var/local/allura/setup-done, but that seems hinky
[01:28] cory_fu, fyi python-django uses https://bazaar.launchpad.net/~charmers/charms/precise/python-django/trunk/view/head:/hooks/hooks.py#L660 for the install
[01:30] cory_fu, you could use mongodb-relation-changed if you want to take some action on mongodb after a certain setup call
[01:31] cory_fu, hooks should be able to be called more than once; this is what node-app does
[01:31] cory_fu, I am not sure if that information is helpful though
[01:32] https://bazaar.launchpad.net/~charmers/charms/precise/node-app/trunk/view/head:/hooks/mongodb-relation-changed#L8
[01:33] cory_fu, sorry, went afk.. but I was referencing relations being bi-directional: if mongo needs to send something like an ssl cert, joined is too early to see any settings from the remote side.. lots of charms just do changed hooks for relations and check to see if the values they need are present.
[01:33] Makes sense
[01:34] cory_fu, does allura always need this setup-app to be done before other services are connected?
[01:35] Yes. The setup-app creates the databases, collections, and indexes that the app needs
[01:35] So it needs to be done once before the app is started, when mongo is available
[01:36] ah, once mongo is available
[01:37] cory_fu, seems you would need to do that in the mongodb-relation-joined hook and check to see whether setup-app has run, and if not . . .
[01:38] this would put a dependency from the charm on mongodb; specifically, the charm is not usable until the mongodb relation is created
[01:38] that would need to be documented in the readme
[01:39] hazmat, sound reasonable? ^
[01:39] If you're interested in seeing what I have so far, I just put it up here: https://sourceforge.net/u/masterbunnyfu/allura-charm/ci/master/tree/
[01:39] seems the install hook is too early, as mongo may not be available
[01:39] arosales, sounds good
[01:40] * hazmat peeks
[01:41] cory_fu, allura is python afaicr.. i'd probably go with an upstart job for the wsgi api.. and s/paster/gunicorn
[01:42] oh.. allura made it to an Apache project.. cool
[01:42] Incubating, but we're hoping to get it graduated soon
[01:44] cory_fu, re install i'd probably do pip --use-mirrors
[01:46] looks like you're following the node example, skipping mongodb-relation-joined and checking for the db in mongodb-relation-changed
[01:46] Yeah
[01:47] hazmat: I've not tried to run Allura under gunicorn. Is it more or less drop-in?
[01:47] cory_fu, some charms capture their deps locally.. probably overkill, but with pip you can do offline installs if you $ pip install -d dist -r requirements.txt to download all the eggs and then install in the charm with $ pip install --no-index --find-links=file://tools/dist -r requirements.txt
[01:48] cory_fu, yeah.. more or less.. there's a gunicorn_paster command that behaves like an analogue of paster serve http://gunicorn-docs.readthedocs.org/en/latest/run.html
[01:48] er.. http://gunicorn-docs.readthedocs.org/en/latest/run.html#gunicorn-paster
[01:49] cory_fu, i'd go ahead and get it running with whatever you're comfortable with first.. there's always room for optimization later
[01:49] Yeah
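(Likewise, a rough sketch of the install-hook ideas mentioned above: apt packages plus pip --use-mirrors. The package list and requirements path are assumptions for the example.)

  #!/bin/bash
  # hooks/install (sketch)
  set -e

  apt-get update
  # Package names are illustrative; install whatever the charm actually needs,
  # e.g. the mongo shell if setup is done via the CLI client.
  apt-get install -y python-pip mongodb-clients git

  # Install the Python dependencies; --use-mirrors as suggested above.
  pip install --use-mirrors -r /var/local/allura/requirements.txt

  # Offline variant mentioned above: pre-download the eggs into the charm with
  #   pip install -d dist -r requirements.txt
  # and then install at deploy time with
  #   pip install --no-index --find-links=file://$PWD/dist -r requirements.txt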
[01:50] cory_fu, if you have issues with waiting for the db in mongodb-relation-changed, you may want to move the setup-app into the -joined hook
[01:51] * arosales still looking to see what mims is doing to skip the -joined hook and block until the -changed hook is fired
[01:51] Now I'm confused. Is -changed before or after -joined?
[01:52] cory_fu, sorry for the confusion
[01:52] -joined is before -changed
[01:52] joined is always called before the first changed, and always accompanied by a changed.
[01:53] -relation-joined is run once only, when that remote unit is first observed by the unit.
[01:53] ref = https://juju.ubuntu.com/docs/authors-charm-hooks.html
[01:53] once for each unit of the remote service
[01:54] so if you have a replica set.. you'll get joined multiple times.
[01:54] and each of those joins will be immediately followed by a changed hook firing for the same unit.
[01:55] hazmat, is -joined too early to call setup-app here?
[01:55] or just the right place
[01:56] cory_fu needs setup-app to be called once, only after mongodb is ready
[01:56] ready = mongodb can start creating tables
[02:00] arosales, joined feels a bit early for db initialization.. ie. a mongodb client connection might need a replicaset name that's set on the connection, and port is a config option for mongo which is conveyed along the relation (although it defaults to the std 27017)... neither would be set in -joined
[02:00] cory_fu, if all you need in setup-app is the private address and the DB names, which you know beforehand, -joined should be ok, but you can also check if it is a first run in -changed
[02:00] hazmat, good point on the replica set
[02:00] hazmat, thanks for the info
[02:01] cory_fu, hopefully that info is helpful.
[02:01] cory_fu, I am going to grab some dinner, but feel free to leave a ping here if you run into any other questions
[02:02] thanks. I'm probably going to call it a night soon (on east coast time), but I'll be on tomorrow working on this.
[02:03] cory_fu, np.. folks will be around
[02:03] Thanks
[02:04] * hazmat included
[19:13] cory_fu, so you've got the local provider in vagrant and you want to access it from the cli on the osx host
[19:13] ?
[19:13] Well, I followed this setup: https://juju.ubuntu.com/docs/config-vagrant.html
[19:14] hmm.. so there are two ports you would need forwarded from host to container
[19:14] not sure if the vagrant box does that
[19:14] So, yeah, wondering how to use that from the cli now. (Or how to add and debug my charm on it)
[19:15] cory_fu, you can use the cli from within the vagrant box
[19:15] Ah
[19:15] cory_fu, vagrant ssh
[19:16] I did do the sshuttle step, so it would be nice to take advantage of that
[19:17] cory_fu, that will be helpful for accessing services you deploy in the local provider (lxc containers in the vagrant box)
[19:18] Ah, so it doesn't really help with using the cli locally against the vagrant box?
[19:18] to use the cli on osx, you'd need to copy ~/.juju/environments/$env_name.jenv within the box to the host in the same place.. you'll also need to forward some additional ports..
[19:19] cory_fu, yeah.. it doesn't seem like it does..
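(A quick sketch of the "use the cli from within the vagrant box" workflow suggested above. The repository path and charm layout are assumptions; the local charm is assumed to live at <repo>/precise/allura.)

  # On the OS X host: get a shell inside the jujubox VM
  vagrant ssh

  # Inside the vagrant box, the juju CLI talks to the local (LXC) provider directly
  juju bootstrap                                            # if the box is not already bootstrapped
  juju deploy --repository=/home/vagrant/charms local:precise/allura
  juju status allura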
[19:19] cory_fu, not sure which version of the gui it's using, but in the latest version you can drag and drop local charm folders onto the gui to deploy them
[19:19] but not helpful for debugging
[19:19] Oh, nice
[19:20] cory_fu, i'd recommend just using the cli from within the vagrant box
[19:20] Just drag the local folder into the GUI?
[19:20] Ok
[19:22] vagrant@vagrant-ubuntu-precise-64:~/allura-charm$ juju debug-log
[19:22] Permission denied (publickey,password).
[19:29] The authorized-keys in ~/.juju/environments/local.jenv matches the id_rsa.pub. o_O
[19:30] marcoceppi, you know anything about that vagrant box setup? ^
[19:31] * hazmat notes he's not here
[19:31] cory_fu, switch to the ubuntu user
[19:32] Ah
[19:32] cory_fu, i'm guessing.. i haven't used that vagrant box before
[19:33] Can't seem to su; none of the mentioned passwords work
[19:33] Ok, su -> su ubuntu worked
[19:34] * hazmat installs virtualbox
[19:34] Nope. No .juju directory for that user
[19:34] cory_fu, and no private keys in ~/.ssh?
[19:34] It does seem like it's set up to use the vagrant user.
[19:34] for vagrant user
[19:34] yeah
[19:34] it does
[19:34] Not for ubuntu, no
[19:34] but the odd part is that it should have a private key as well
[19:35] The vagrant user does have one, and it matches what's in the local.jenv file
[19:35] But I still get the error
[19:40] sounds like a different issue then
[19:40] cory_fu, i'm in the process of downloading the jujubox img.
[19:55] cory_fu, works out of the box for me
[19:55] cory_fu, interesting, i take that back
[19:55] cory_fu, aha
[19:55] juju debug-log doesn't work.. juju ssh <unit> does
[19:56] cory_fu, you can juju ssh allura/0 and view the log at /var/log/juju/unit-allura-0.log
[19:56] cory_fu, debug-log doesn't work for the local provider in the 1.16.6 version of juju
[20:00] Ah
[20:01] This is after doing juju deploy allura from within the vagrant image?
[20:04] vagrant@vagrant-ubuntu-precise-64:~$ juju ssh allura/0
[20:04] ERROR unit "allura/0" has no public address
[20:05] cory_fu, juju status allura
[20:06] Ah, I see. It wasn't up yet
[20:24] I don't see a juju destroy command; how do I re-deploy if it failed?
[20:25] Oh, nevermind. juju help commands
[20:25] :-p
[20:30] How long should I expect destroy-service to take?
[20:43] Hrm, yeah. The status says it's in "life: dying" but it doesn't seem to be doing anything
[20:43] cory_fu, shouldn't take that long
[20:44] Is there another log besides the unit-allura-0.log?
[20:44] that I should check?
[20:44] cory_fu, for the local provider on the vagrant box there should be some logs in ~/.juju
[20:45] the interesting one is the host machine-0.log
[20:46] Yeah, it doesn't seem to do anything when I issue a destroy-service allura
[20:47] Can I more forcibly remove it with destroy-machine?
[20:48] cory_fu, yes.. juju terminate-machine --force machine_id_with_allura
[20:50] Ok, that worked
[20:50] Wonder why it didn't work with destroy-service
[20:54] not sure, i've been using the dev version 1.17 series and hitting it pretty hard.. haven't seen that
[20:55] Blargh. It didn't use my update to the install script
[20:55] *hook
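(Pulling the debugging steps used above together in one place; the unit name and machine id are examples.)

  # Check the service and get a shell on the unit
  juju status allura
  juju ssh allura/0
  # On the unit, hook output goes to /var/log/juju/unit-allura-0.log

  # Provider-side logs for the local provider live under ~/.juju on the
  # vagrant box; machine-0.log is the interesting one (exact path varies):
  find ~/.juju -name machine-0.log

  # If destroy-service sticks in "life: dying", force-remove the machine
  # (take the machine id from juju status):
  juju terminate-machine --force 2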
[20:57] I assume that once I referenced it once with --repository=/path and local:precise/allura, it's somehow cached and I need to do something to clear that?
[21:11] cory_fu, you have to deploy with -u
[21:11] Ah
[21:11] cory_fu, you can also juju upgrade-charm --force to update in place
[21:11] --force is needed to upgrade if it's currently in an error state
[21:11] Ok
[21:11] cory_fu, if you hit a hook error, you can juju resolved --retry to have it re-execute
[21:12] juju debug-hooks drops you into a tmux session where you can interactively explore the state of the hook (the hooks pop up in new tmux windows); it can be used in combination with resolved --retry..
[21:12] the caveat there is that debug-hooks always returns success for the hook being executed
[21:18] Though I can't retry the hook, since I need the copy of the hook to be updated. I assume it won't re-pull the hook from the source I gave to the original deploy command?
[21:25] cory_fu, you can juju upgrade-charm --force to put the new source in place
[21:25] cory_fu, or deploy -u, which will increment the revision file in the charm automatically
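(And a summary sketch of the update-and-retry loop described above; the service and unit names, and the repository path, are examples.)

  # Push updated hook code for a charm that is already deployed
  # (for a local charm you may need the same --repository used for deploy)
  juju upgrade-charm --force allura
  # ...or redeploy with -u, which bumps the local charm revision automatically
  juju deploy -u --repository=/home/vagrant/charms local:precise/allura

  # After fixing a failed hook, re-run it
  juju resolved --retry allura/0

  # Interactive debugging: hooks open in new tmux windows; note that
  # debug-hooks always reports success for the hook being executed
  juju debug-hooks allura/0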