[00:00] has anyone been able to get the nova-cc up and running properly? It keeps erroring out for me when I join a nova service to it === defunctzombie_zz is now known as defunctzombie [00:08] zradmin: have you gotten a relationship successfully between nova-cc and nova-cs? [00:13] kurt__: that's when nova-cc turns red [00:15] kurt__: the log only shows something about the ssh keys for the compute node and then fails the hook, http://pastebin.ubuntu.com/6117251/ [00:16] right, you are running into the same bug as we are [00:16] you have two workarounds [00:16] are you doing maas? [00:16] yes [00:17] from kentb: kurt__: yep. indeed. What I ended up doing was adding the "option domain-search 'master';" line to /etc/maas/dhcpd.conf and restarting the service so all future machines would get the new settings. [00:17] that's one way [00:18] the other way is to ensure your nodes all have /etc/hosts entries for each other [00:18] you'll have to ensure the nodes are up via maas with a valid image, then go in and update them manually [00:19] kentb's method is probably easier. I've tested neither yet [00:19] ah ok so adding to dhcpd, restart the service and reboot the affected nodes? [00:20] well, make changes to dhcpd, then just destroy and re-add nova-cc [00:20] then add relationship [00:20] you keep dropping from the room [00:20] ok I'll play around with it a bit [00:21] but the key is obviously, the nova-cc node needs to re-up via dhcp to get the info [00:22] back in a few [00:22] sounds good === defunctzombie is now known as defunctzombie_zz [00:58] FYI jamespage adam_g: this is a bug that's affecting a lot of people trying to deploy openstack on MAAS https://bugs.launchpad.net/charms/+source/nova-cloud-controller/+bug/1225160 [00:58] <_mup_> Bug #1225160: cloud-compute-relation-changed fails, getaddrinfo error [00:59] marcoceppi, can you please test with the py rewrite branches in lp:~openstack-charmers?
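For reference, the /etc/hosts workaround mentioned above would look something like this on each node; the addresses and hostnames are purely illustrative, and "master" is MAAS's default domain:

```text
# /etc/hosts on each node -- one entry per peer node (illustrative values)
10.0.0.11  node-a.master  node-a
10.0.0.12  node-b.master  node-b
```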
[01:00] adam_g: I certainly can, I'll update the bug report tomorrow with my findings [01:02] marcoceppi, can't promise it's going to work any better, actually [01:03] marcoceppi, issue is that MAAS/users need to ensure DNS works when resolving non-FQDN hostnames. not just for the charm to deploy okay, but for ssh block migration to work.. nova attempts to initiate that by pointing a libvirt migration to qemu+ssh://$HOSTNAME/system, not qemu+ssh://$FQDN/system [01:03] adam_g: that's fine, I don't mind getting you as much information as possible to fix it. I've got a 4 hour flight tomorrow so I might just try to find where in the code it's dropping the tld and patch the bash charm for now [01:04] adam_g: So this is more a bug in maas, where the workaround outlined in the bug report needs to be applied to maas and not the charm? [01:05] marcoceppi, i believe you can work around better by just ensuring /etc/resolv.conf contains 'search master' [01:05] marcoceppi, or getting maas to ensure that when it's giving out DHCP [01:05] adam_g: this is the current workaround: https://bugs.launchpad.net/charms/+source/nova-cloud-controller/+bug/1225160/comments/1 [01:05] <_mup_> Bug #1225160: cloud-compute-relation-changed fails, getaddrinfo error [01:06] marcoceppi, oh, didn't see that. but yeah, that is correct [01:06] I mean, this is a big problem for users deploying on maas, I just want to know who I have to annoy to get this fixed [01:06] so if it's not at the charm level, maybe this needs to find its way into MAAS, either during the MAAS setup or as part of the documentation [01:06] marcoceppi, i'd think it would be a reasonable default to set the domain-search in MAAS?
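The two DNS workarounds being discussed, written out as config fragments (the "master" domain is MAAS's default here; after editing dhcpd.conf the DHCP service has to be restarted and the nodes re-upped via DHCP, as described above):

```text
# /etc/maas/dhcpd.conf -- have DHCP hand out a search domain:
option domain-search "master";

# /etc/resolv.conf (per-node alternative) -- resolve bare hostnames under .master:
search master
```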
[01:07] marcoceppi, alternatively, you can just disable live-migration in nova-compute [01:07] adam_g: I agree as far as maas [01:07] if DNS is borked, that's not going to work anyway [01:07] (live migration) === defunctzombie_zz is now known as defunctzombie === defunctzombie is now known as defunctzombie_zz === defunctzombie_zz is now known as defunctzombie [01:49] hey guys, anyone around here? [01:49] jose: yes [01:52] yes [01:52] hey, I'd like to know if any of you know how I could add hook triggering to my charm [01:58] jose: what do you mean by that? [02:00] on http://bazaar.launchpad.net/~jose/charms/precise/postfix/trunk/files I have hooks such as add-ssl, and would like to know if there is a way to trigger them like an option change [02:00] maybe something like 'juju set charmname option true' [02:14] jose: yes, handle that in config-changed [02:14] davecheney: and are there any pages that can give me a clue on that? I'd like to implement it [02:15] jose: to restate your request, you would like to enable ssl with a config flag ? [02:15] is that correct ? [02:16] enable ssl and update ssl certs, which are the two hooks I have [02:16] something like [02:17] [[ $(config-get option) == "true" ]] && hooks/ssl-update [02:20] so, if that line's in there, would the person be able to do 'juju set postfix ssl-update' and get the hook running? because as far as I can understand, that line would need to have something changed in config.yaml === defunctzombie is now known as defunctzombie_zz [02:27] jose: ssl-update is not a hook name [02:28] the names of hooks are fixed [02:28] juju set will fire the config-changed hook [02:28] so you can insert logic inside that hook to check the content of $(config-get ssl-update) [02:28] oooooh, got it now!
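A minimal sketch of the config-changed pattern described above, assuming a boolean ssl-update option defined in config.yaml. The enable/disable helpers are hypothetical stand-ins for real postfix configuration; in a real hook config-get is provided by juju, and the `cfg` wrapper only exists so the sketch can run outside a hook environment:

```shell
#!/bin/sh
# cfg: use juju's config-get inside a hook; fall back to FAKE_SSL_UPDATE
# outside one (the fallback is purely for illustration, not charm code).
cfg() { config-get "$1" 2>/dev/null || echo "${FAKE_SSL_UPDATE:-false}"; }

enable_ssl()  { echo "ssl enabled"; }    # hypothetical: install certs, update postfix config
disable_ssl() { echo "ssl disabled"; }   # hypothetical: must be safe to run repeatedly

# `juju set postfix ssl-update=true` fires config-changed; branch on the value.
if [ "$(cfg ssl-update)" = "true" ]; then
    enable_ssl
else
    # false is the default: remove any existing ssl configuration rather than
    # skipping, since config-changed runs on every config change
    disable_ssl
fi
```

As noted in the discussion, both branches should be idempotent because config-changed fires on every `juju set`.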
[02:29] so basically if I have it check whether 'juju set postfix ssl-update' was run, and that is true, then it can trigger it [02:29] jose: close [02:29] you check the value of the config item, ssl-update [02:30] great, I think I now have an idea on what to do. Thanks a bunch, really! [02:30] * jose runs and fixes his code [02:31] jose: don't forget to define ssl-update in your charm's config.yaml [02:31] yep, thanks! [02:33] and also, if $(config-get ssl-update) is false, don't skip it [02:33] you need to remove any existing ssl configuration from the service [02:34] hm, bug appears [02:35] although it'd be best to add a 'remove ssl' config option, as I find it quite difficult for me to get around a solution on that === defunctzombie_zz is now known as defunctzombie [02:45] jose: hmm, i think that will be harder [02:45] all config values have a default [02:46] that is to say, there is no way that config-get will return nothing [02:46] so make it a bool param [02:46] and if inside the config-changed hook you find 'ssl-update' is false, then do whatever logic you need to disable ssl support [02:46] that logic should expect to be run multiple times [02:47] I don't understand why people would like to disable ssl, but I'll try to find a workaround [02:49] also, is it possible to end an if/elif without an else? === defunctzombie is now known as defunctzombie_zz [03:04] anyone around still? [03:07] I am, though I'm not that much of a coder === defunctzombie_zz is now known as defunctzombie [03:42] zradmin: I'm here, and so are a few Aussies [03:42] but I don't follow this all the time as I have about 30 channels open [03:45] do you know anything about setting the api ips in openstack? glance is giving me some trouble and only coming up with one of the two nodes instead of the HA IP (I'm following the ubuntu HA guide using juju for this) [03:51] hey thumper, do you by chance know if an if/elif can end without an else?
[03:51] or zradmin, ^ [03:51] sorry don't know anything about openstack [03:52] jose: in what language? [03:52] bash [03:52] forgot to mention :) [03:52] sure, just don't code the else [03:52] great, thanks! :) [03:52] thumper: no problems, I'm sure I'll figure out what's going on with it eventually [03:56] thumper: also, if you have a min, do you know a command for juju to return the charm name deployed on a node? not wordpress/1 but wordpress, as an example. [03:57] jose: command from where? [03:57] * thumper doesn't really know much about charms [03:57] such as config-get [03:57] so from the client machine [03:58] using the juju command line, or a library, or what? [03:58] juju CLI [03:59] second, let me grab a link [04:01] urgh, can't find it [04:02] https://juju.ubuntu.com/docs/authors-charm-anatomy.html here, in hook environment [04:03] something like $JUJU_UNIT_NAME [04:04] so inside the hook context? [04:05] yeah [04:05] no, it doesn't look like it [04:06] but you could do an rsplit on the slash [04:06] rsplit? /me googles [04:06] right split [04:06] so take foo-bar/2 [04:06] and split on the last / [04:06] which should be the only slash [04:06] to get foo-bar and 2 [04:06] the service name is the first bit [04:07] unfortunately I have to go === thumper is now known as thumper-afk [04:07] no worries [04:07] thanks! [04:28] marcoceppi, jcastro: http://manage.jujucharms.com/review-queue is 404 === defunctzombie is now known as defunctzombie_zz [04:29] ah === jose changed the topic of #juju to: Share your infrastructure, win a prize: https://juju.ubuntu.com/charm-championship/ || OSX users: We're in homebrew!
|| Review Calendar: http://goo.gl/uK9HD || Review Queue: http://manage.jujucharms.com/tools/review-queue || http://jujucharms.com || Reviewer: ~charmers === defunctzombie_zz is now known as defunctzombie === CyberJacob|Away is now known as CyberJacob === tasdomas_afk is now known as tasdomas === jcsackett_ is now known as jcsackett === axw_ is now known as axw [10:13] is it ok to reboot units after the install hook has run? if say you edit users and groups? [11:29] mattyw: you can reboot units, they are designed to survive restarts [11:29] marcoceppi, just doing reboot in the hook? [11:30] mattyw: I really /really/ wouldn't recommend a reboot, I don't know how it will handle during a hook execution. Why do you need to run a reboot in the first place?
[11:32] marcoceppi: mattyw it would be great if you didn't reboot a unit [11:32] davecheney: that's my gut feeling wrt this [11:40] davecheney, marcoceppi I have a feeling it's something I don't want to be doing [11:41] I don't think I really need to be doing it, but wondered if others were doing it in charms already [11:42] mattyw: no, it would be without precedent [11:42] i'm fairly sure it would raise merry hell [11:42] davecheney, my favourite kind of hell [11:43] mattyw: go on, blow up the world [11:44] marcoceppi, davecheney, I was playing around with trying to do a docker charm last night, part of the install instructions suggests creating a docker group and adding the ubuntu user to the group so you don't have to do sudo docker all the time, but it appears that it isn't working [11:44] I was going to try a last-ditch reboot to see what happens (out of desperation rather than thought) [11:45] mattyw: hooks are executed as root, but new groups won't take effect until "next login" of that user [11:46] there's a newgrp command that might help, but you shouldn't need to do anything with the ubuntu user if you do everything as root [11:47] marcoceppi, it's basically the "why sudo?" section here I was following http://docs.docker.io/en/latest/use/basics/ [11:48] mattyw: since hooks are run as root, you can just pretend like that's not an issue. Alternatively, create a new user (like docker) to run all the docker commands under [11:49] mattyw: the bbb is rather nice [11:49] blows the doors off the RPI [11:49] shame to turn it into a freebsd builder [11:53] davecheney, have you tried running ubuntu on it? [11:53] mattyw: not tried nothing, it was waiting at the door when I got home [11:53] will have a play this weekend [11:54] davecheney, so it's not building yet but will be?
[11:55] yeah [11:55] getting a stable freebsd 10 image is the next challenge [11:56] davecheney, I'm still trying to find an lcd screen to connect to my pi that doesn't suck or cost too much [11:57] mattyw: there are a few SPI based OLED ones [11:57] freetronics has one [12:03] davecheney, this is actually the sort of thing I was after https://store.tinygreenpc.com/monitors-displays/7-hdmi-touch-monitor-without-enclosure.html [12:03] davecheney, see if I can turn my pi into some budget tablet thing === tasdomas is now known as tasdomas_afk === freeflying is now known as freeflying_away === freeflying_away is now known as freeflying === tasdomas_afk is now known as tasdomas === lifeless_ is now known as lifeless [13:20] so here's a question, has anyone tried upgrading the kernel in a charm? [13:20] mattyw: yes, and no [13:21] you'll find that most hosts expect to be rebooted due to the implicit apt-get update && apt-get upgrade that cloud-init does on your behalf [13:21] mattyw: it is important to recognise that juju is a _service_ management tool, not a _host_ management tool [13:21] as such, juju expects a few things to be already done for it [13:21] ie, we expect the host to work [13:22] davecheney, yeah sure, understood [13:22] and we expect the host to have working networking [13:22] etc, etc [13:22] but if the service requires a special version of the kernel that means it's not really suitable to be charmed? [13:23] mattyw: we'd handle that with a constraint [13:23] davecheney, oh right [13:23] in the same way that you would say 'this service should be deployed on machines with a big /dev/sdb' [13:24] well, if we had working constraints, that would be what we'd use them for [13:24] davecheney, ok, understood [13:24] understood as well [13:41] hi sinzui, would it do any good for you to review my rewriterules branch or should i just wait for a webops to do it?
[13:41] I can in a few minutes [13:51] thanks sinzui, it is at https://code.launchpad.net/~bac/canonical-is-charm-configs/remove-head-redirects/+merge/185927 [13:52] bac, we are removing the HEAD redirects because charmworld supports juju-gui urls? [13:53] sinzui: yes [13:53] sinzui: revisionless urls return head [13:54] o/ [13:54] didn't juju-gui also need to understand revisionless urls to route to the details [13:54] Hi lifeless [13:54] ^ bac [13:55] sinzui: unsure, let's ask gary_poster ^^ [13:55] sinzui: no, the gui is kind of revision agnostic https://jujucharms.com/precise/mysql [13:56] rick_h_, bac: fab! [13:56] we want both revisionless and revision...-full fwiw [13:56] revisionless works now. with revision works 1/4 way [13:57] 1/4 way? [13:57] it doesn't fall over but ignores revision [13:57] will change upcoming [13:57] oh right [13:58] yea, right the gui just takes whatever it's given as the charm id and passes it to charmworld. Making sense/providing the right data comes from there. === kentb-out is now known as kentb [14:09] jcastro: are you familiar with the gunicorn charm? [14:11] jcastro: and do I need to have this charm I'm developing in a ./precise/ directory? [14:29] mhall119: yes it needs to be in a series directory if you want to deploy it [14:29] mhall119: the gunicorn charm is being deprecated I think. let me check === benji__ is now known as benji [15:12] marcoceppi: could you please have a look here: https://code.launchpad.net/~adeuring/charm-tools/check-config-yaml/+merge/186066 ? [15:13] adeuring: see my reply [15:13] marcoceppi: I have a limited amount of space on /, but a lot of it on /home/, how can I get juju's local provider to put its instance on a different partition? [15:13] marcoceppi: thanks -- and sorry for not seeing your reply ;) [15:14] adeuring: no worries!
thanks for the submission [15:14] adeuring: give me two seconds to push the latest to the python-port branch [15:14] sure [15:15] adeuring: actually, latest is up there. This is the 1.0.0 release. While I probably won't get your merge in for that, I'll have it for 1.1.0 [15:15] So it'll land in the daily ppa, and in stable in about a week [15:17] mhall119: you'll need to configure lxc to do that. I believe it's outside the reach of juju [15:17] mhall119: either way, I've not tried [15:18] mhall119: first thing that comes to mind is to just symlink the containers into your home directory, from /var/lib/lxc* [15:18] * I think that's where the containers are put [15:21] ok, I'll give that a try [15:29] marcoceppi: should I be able to run juju debug-log immediately after bootstrapping [15:29] ? [15:29] or is that only usable after deploying [15:29] mhall119: no, debug-log does not work on local provider [15:30] mhall119: you can find all the logs in ~/.juju/local/log [15:31] mhall119: that should be in the docs but is not [15:31] mhall119: I'll make sure it gets added [15:31] yeah, it just returns a 255 error code [15:31] mhall119: you also can't juju ssh 2; where 2 is the machine number, you must always use the unit-name [15:31] mhall119: because the debug-log is on the bootstrap node, and the bootstrap node is your machine. so the bootstrap node has the IP address of the lxc bridge [15:32] mhall119: you just tried to ssh into the gateway of the LXC bridge, hence the 255.
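The symlink idea suggested above can be sketched generically: move a directory onto the partition with more room and leave a symlink at the old path. For the local provider the directory in question would be /var/lib/lxc (with juju and any containers stopped first); the helper name and example paths here are illustrative, not juju or lxc tooling:

```shell
#!/bin/sh
# move_and_link: relocate a directory and symlink the old path to the new
# location, so software using the old path keeps working. Illustrative
# helper only -- run as root for system paths like /var/lib/lxc.
move_and_link() {
    src="$1"
    dest="$2"
    mkdir -p "$(dirname "$dest")"   # ensure the target's parent exists
    mv "$src" "$dest"               # move the data to the roomier partition
    ln -s "$dest" "$src"            # old path keeps working via the symlink
}

# e.g. (as root, with juju/lxc stopped):
#   move_and_link /var/lib/lxc /home/lxc
```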
[15:32] 2013-09-17 15:30:50 ERROR juju.container.lxc lxc.go:161 lxc container creation failed: container "mhall-local-machine-1" is already created [15:33] 2013-09-17 15:30:50 ERROR juju.provisioner provisioner_task.go:341 cannot start instance for machine "1": container "mhall-local-machine-1" is already created [15:33] even after juju destroy-environment -e local, I still have /var/lib/lxc/mhall-local-machine-1 [15:34] I don't think jcastro is *actually* sick, I think he just heard that I was going to be trying Juju and decided to hide from me :) [15:36] mhall119: that's odd [15:36] mhall119: about to board a flight, best of luck, but if you don't get it working I'll be back online in a few hours [15:43] marcoceppi: new attempt: https://code.launchpad.net/~adeuring/charm-tools/python-port-check-config/+merge/186080 [15:53] sinzui: did that MP look ok? i see moon did your previous reviews. is he the webop i should try to get to review it? [15:54] bac: yep [15:58] bac: sorry, I was ambiguous. moon127 or any webops can do the review. [15:58] sinzui: oh, ok [15:58] bac I don't understand the user rewrite rule. [15:59] sinzui: getting on a call. can chat in 15 minutes. [15:59] bac: http://manage.jujucharms.com/~abentley/precise/charmworld is an example of a user charm url [15:59] user urls don't have "charms" in them [16:22] sinzui: good catch. we had thought there had been URLs of that form before. i'll remove that rule. [16:47] so on the local provider, can I just go into /var/lib/lxc/{machine}/rootfs/ and see what it has? [16:48] marcoceppi: ^ [16:54] is there documentation on setting up a clean environments.yaml?
[17:15] hey mhall119 [17:15] I ran into this this weekend [17:16] I just blew away /var/lib/lxc/jorge-whatever [17:16] and rebootstrapped [17:17] jcastro: I tried that, then it complained about stuff in /etc/lxc/auto/ [17:18] jcastro: I think I need to blow away my environments.yaml, it currently has http://paste.ubuntu.com/6120281/ [17:18] what's the cleanest way to get a new one with a properly configured local [17:18] ? [17:18] hope you're starting to feel better, btw :) [17:30] move it out of the way [17:30] and do `juju init` to make a new one [17:30] it being the old environments.yaml file [17:38] thanks jcastro, seems to be getting further now [17:41] jcastro: can local instances get to external websites (like LP)? [17:44] yeah [17:44] otherwise charms wouldn't work [17:48] man, I can't type "unit" without my fingers automatically putting a "y" at the end :( [17:49] yeah, it took me 6 months to fix that [17:49] jcastro: ok, so my install hook failed, I made some changes and added a bunch of juju-log commands, how do I get rid of that instance and deploy again using the new charm? === defunctzombie_zz is now known as defunctzombie [18:18] mhall119, if your install and upgrade_charm hooks point to the same code, you can do juju upgrade-charm service --force [18:22] I have a question of my own - is it normal that juju-core leaves the instance running after a destroy-service ? [18:24] tasdomas: no [18:25] tasdomas: watch the status / debug-log to ensure the service is in fact truly getting destroyed [18:25] my guess is it is not === defunctzombie is now known as defunctzombie_zz [19:00] tasdomas: it seems mine are different, what are my options in that case?
[19:01] mhall119, 'juju resolved service/unit' - repeat until the service is in a non-error state [19:01] then do a destroy-service and redeploy [19:04] thanks tasdomas [19:05] mhall119, hope that helps [19:07] tasdomas: trying it now [19:07] we shall see [19:08] well the service is gone, but the machine is still there, is that expected? [19:08] do I need to destroy the machine before I redeploy? [19:17] where would I find the output from juju-log? [19:22] I'm not seeing *any* output in ~/.juju/local/log/ that looks like it's coming from juju-log commands [19:24] mhall119: Is this on a local deployment still? [19:24] mhall119: the machine staying behind is expected === defunctzombie_zz is now known as defunctzombie [19:27] marcoceppi: local still yes [19:27] marcoceppi: where does juju-log write to? === defunctzombie is now known as defunctzombie_zz [19:29] marcoceppi: can you look at http://paste.ubuntu.com/6120826/ [19:29] "instance-state: missing" looks suspiciously bad [19:29] mhall119: on the unit itself in /var/log/juju/unit-(service)-#.log however, they will be in .juju/(env-name)/log/unit-(service)-#.log [19:30] for local environments [19:30] mhall119: that just means it can't query state.
happens on some openstack local and MAAS deployments [19:31] marcoceppi: mhall@mhall-thinkpad:~/projects/Ubuntu/api-website/charm$ ls /var/lib/lxc/mhall-local-machine-3/rootfs/var/log/juju/ [19:31] mhall@mhall-thinkpad:~/projects/Ubuntu/api-website/charm$ ls /var/lib/lxc/mhall-local-machine-2/rootfs/var/log/juju/ [19:31] mhall@mhall-thinkpad:~/projects/Ubuntu/api-website/charm$ ls /var/lib/lxc/mhall-local-machine-1/rootfs/var/log/juju/ [19:31] none of the 3 machines I've created so far have anything in their rootfs/var/log/juju/ directory === defunctzombie_zz is now known as defunctzombie [19:31] mhall@mhall-thinkpad:~/projects/Ubuntu/api-website/charm$ ls ~/.juju/local/log/ [19:31] machine-0.log machine-1.log machine-2.log machine-3.log unit-api-website-0.log [19:32] all I have in my ~/.juju/local/log is machine logs, no unit ones [19:32] unit-api-website-0.log looks like it [19:33] mhall119: FYI, that directory is loop mounts from the VM. if you ssh to it you'll see the logs, but they are in that .juju folder so nbd [19:34] marcoceppi: ah, didn't see that in there before [19:35] mhall119: if you want to mimic debug-log just run `tail -f .juju/local/log/unit-*.log` [19:36] thanks marcoceppi, I see some useful messages in that log, I'll be back if I get stuck again [19:36] mhall119: cheers [19:40] hey guys and gals ... read about GreenQloud (https://greenqloud.com) in Iceland offering cloud instances with a fully EC2 compatible API ... shouldn't that mean that Juju could be hacked to work with GC quite easily or would it be hard to tweak it? [19:41] jaywink: you might get away with just changing the URL.. [19:43] sarnold, yeah I thought of trying but the environments.yaml has no url. Will check the code where it could be defined. Just thought if someone has any quick "NO cannot be done"'s :) [19:44] jaywink: ec2-uri: https://juju-docs.readthedocs.org/en/latest/provider-configuration-ec2.html [19:44] marcoceppi: does juju keep a cached copy of my charm?
I changed my install hook but it doesn't seem to be using the changed version [19:44] jaywink: oh, looks like you'll also need s3-uri [19:45] mhall119: iirc, juju deploy -u will help with rapid iteration of charm development [19:45] thanks sarnold, I'll give it a try [19:45] sarnold, ok thanks, will try that [19:51] are there any plans to allow deploying via github: something like juju deploy github:mattyw/charms/mycharm? [20:00] sarnold, after setting ec2-uri and s3-uri to the endpoints in greenqloud, bootstrap gives 'error: The AWS Access Key Id you provided does not exist in our records.'. I noticed GC has a separate EC2 key and separate S3 key for the API - so maybe that... will try to grep the code once I find it :) [20:05] mhall119: yes, it does === tasdomas is now known as tasdomas_afk [20:31] marcoceppi: still trying to figure out how to work around that nova-cc problem [20:31] I thought kentb had a more elegant solution than adding it into /etc/hosts [20:32] but I can't get the dhcpd.conf to accept it. Any ideas? === tasdomas_afk is now known as tasdomas === tasdomas is now known as tasdomas_afk [20:38] kurt_: check the bug report? [20:38] kurt_: https://bugs.launchpad.net/charms/+source/nova-cloud-controller/+bug/1225160 [20:38] yeah, that doesn't work [20:38] <_mup_> Bug #1225160: cloud-compute-relation-changed fails, getaddrinfo error [20:39] according to adam_g this is actually a problem with MAAS and not the charm [20:39] http://pastebin.ubuntu.com/6121109/ [20:39] kurt_: http://irclogs.ubuntu.com/2013/09/17/%23juju.html#t00:58 [20:40] marcoceppi: ok, so how do I deploy gunicorn to work with my django?
[20:40] mhall@mhall-thinkpad:~/projects/Ubuntu/api-website/charm$ juju deploy --to 9 gunicorn [20:40] error: cannot use --num-units or --to with subordinate service [20:40] kurt_: try `option domain-search "example.com";` [20:41] mhall119: right, because by nature subordinates live on other services [20:41] marcoceppi: ok, so how do I get it playing together with my "api-website" service? [20:41] so you `juju deploy gunicorn`, then `juju add-relation gunicorn api-website` [20:41] marcoceppi: maas by default sets up .master as its domain [20:41] mhall119: apparently not, given the errors you're seeing [20:42] s/example.com/.master/ [20:42] kurt_: what marcoceppi said, and afterward restart the maas-dhcp-server (or whatever it's called) service [20:42] in my example [20:42] I tried that too [20:42] still borked [20:42] kurt_, are you really planning on using ssh block live migration? if you are not, just turn it off in the nova-compute charm config, and you should not have the DNS error during the hook execution. [20:42] kurt_: I'm just echoing what's in the man page [20:42] kurt_, if you are planning on using it, you need reliable DNS [20:44] adam_g: ok. but there's a gap here that will prevent maas working with juju the way things are. so maybe it needs to be carefully captured in documentation then? [20:44] let me qualify that - in HA situations as described in jamespage's document where live migration via ssh is concerned [20:44] kurt_: I plan on re-targeting the bug for maas, it's not too much to ask to have maas setup include this step [20:45] marcoceppi: thanks and adam_g thanks for chiming in [20:49] kurt_: adam_g: re-targeted to maas, https://bugs.launchpad.net/maas/+bug/1225160, adam_g thanks for the help earlier! [20:49] <_mup_> Bug #1225160: MAAS doesn't add tld to DHCP domain-search [20:49] marcoceppi: sarnold: tasdomas_afk: thanks for all your help, I'm very close now! [20:49] mhall119: yahoo!
[20:50] marcoceppi: the add-relation between gunicorn and api-website failed because of a missing python-jinja2 package, which I assume gunicorn uses [20:51] why wouldn't gunicorn be installing that? Do I have to install any gunicorn dependencies in my charm? [20:51] mhall119: it's possibly a dependency? I've not used that charm. If it is a gunicorn dep, the gunicorn charm needs to install it [20:52] jinja2 is a templating system for django, so it might not be a gunicorn dep [20:54] well api-website doesn't use it... [20:54] mhall119: I'm simply googling around :\ what does the log for the unit say? [20:54] unless it's somehow still buried in the remnants of the charm I'm basing this off of [20:55] mhall@mhall-thinkpad:~/projects/Ubuntu/api-website/udn$ grep -i error ~/.juju/local/log/unit-gunicorn-0.log [20:55] * marcoceppi shrugs, it could [20:55] 2013-09-17 20:42:45 INFO juju.worker.uniter context.go:234 HOOK ImportError: No module named jinja2 [20:55] 2013-09-17 20:42:45 ERROR juju.worker.uniter uniter.go:356 hook failed: exit status 1 [20:56] mhall119: everything I'm reading suggests jinja2 is married to django and not gunicorn [20:57] mhall119: typically, a charm should install its own dependencies. For the sake of moving you along, I'd say have your api-website charm install the dependency [20:57] marcoceppi: hmmm, I see it in both the postgres and gunicorn logs: http://paste.ubuntu.com/6121191/ [20:57] wat. [20:58] why is postgres doing anything? [20:58] http://i.imgur.com/kUie3s2.gif [21:05] hmmm stupid question if someone can help. I'm looking to tweak locally how juju uses creds towards ec2. It imports txaws module to do the magic. But I cannot find this module on my system no matter where I look? [21:06] jaywink: what version of juju are you using?
[21:06] marcoceppi, 1.13.3-raring-amd64 [21:06] marcoceppi: well I deployed postgres and added it as a relation to api-website [21:06] not sure why it's using jinja though [21:06] mhall119: I have no idea why postgresql is installing anything [21:07] jaywink: well, 1.13.3 is written in Go-lang if you didn't know already, so juju is compiled. [21:07] marcoceppi: I'm using the IS postgres charm, if that matters [21:07] mhall119: probably, I've not looked at that one [21:08] marcoceppi, ok so I took the wrong sources from launchpad haha.... sigh.. that explains it [21:08] thanks [21:08] jaywink: yeah, juju-core is juju version > 1.0 [21:08] jaywink: just "juju" is version 0.7 and below [21:08] * marcoceppi makes a note to update the juju project page [21:08] :) [21:11] marcoceppi: yeah, the postgres charm uses jinja in its hooks [21:12] * marcoceppi tilts head === thumper-afk is now known as thumper [21:31] marcoceppi: which service do I juju expose, gunicorn or api-website? [21:39] marcoceppi: http://paste.ubuntu.com/6121351/ shouldn't that mean I should be able to go to http://10.0.3.94:8081/ and see something? [21:41] hmm, more missing deps it looks like [21:50] marcoceppi, so charm-tools still depends/recommends pyjuju [21:51] marcoceppi, http://pastebin.ubuntu.com/6121393/ [21:53] mhall119, jinja2 gets pulled in by charm-helpers [21:53] which the postgres charm might be using [21:56] juju-core doesn't seem to support ec2-uri and s3-uri (grepped the code too) - any idea if this support is coming or any branches with it included? === CyberJacob is now known as CyberJacob|Away === natefinch is now known as natefinch-afk [22:53] Deploying with juju-core on non-ec2 ec2 clouds | http://askubuntu.com/q/346860 === natefinch-afk is now known as natefinch === defunctzombie is now known as defunctzombie_zz [23:00] marcoceppi, is there a separate place for python charm-tools? [23:01] oh..
python-port branch === defunctzombie_zz is now known as defunctzombie === defunctzombie is now known as defunctzombie_zz