=== defunctzombie is now known as defunctzombie_zz
=== defunctzombie_zz is now known as defunctzombie
[00:59] Hey. I'm developing a charm and trying to debug one of the hooks. What is the best way to recover a node from an error state? According to the docs, nodes in an error state don't run upgrade hooks.
[01:03] ZonkedZebra: I think ssh to the unit, fix it up, and run juju resolved
[01:03] sarnold: ZonkedZebra
[01:04] juju resolved --retry $UNIT && juju debug-hooks $UNIT
[01:04] will retry the failed unit
[01:04] ooh! --retry :) very nice
[01:04] then immediately (before that is actioned) start debug-hooks so you can run the unit yourself
[01:04] s/unit/hook
[01:05] in debug-hooks it appears to drop me in right before the script is executed. Is there some functionality there I am missing? tailing /var/log/juju/unit-* has provided the best feedback so far
[01:06] ZonkedZebra: you get to run the hook, debug the hook
[01:06] the name of the hook will be in the $PS1
[01:06] so, if the hook that failed is config-changed
[01:06] at the prompt, type
[01:06] hooks/config-changed
[01:06] see where it breaks
[01:06] fix it
[01:06] run it again
[01:07] when you're happy that it has worked
[01:07] exit 0
[01:07] will take you to the next hook queued
[01:07] when you've processed all the hooks
[01:07] exit the final shell
[01:07] those changes will be saved on that node?
[01:08] no
[01:08] you will need to apply those changes to your charm
[01:08] then use something like
[01:08] Interesting, guess I can copy back out when done. I was just updating locally and then upgrading the charm
[01:08] ZonkedZebra: the best mental model of a charm is to expect it to evaporate at any second
[01:08] any changes made on the unit itself are not saved anywhere
[01:09] davecheney: I can figure that part out :) thanks
[01:19] ZonkedZebra: sorry mate, it wasn't my intention to talk down to you
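Condensed, the recovery workflow walked through above looks like the sketch below. The unit name and the failing hook are placeholders for illustration; substitute your own:

    # queue a retry of the failed hook, then attach before it actually runs
    juju resolved --retry mycharm/0
    juju debug-hooks mycharm/0

    # inside the debug-hooks session, at the hook prompt:
    hooks/config-changed   # run the failed hook by hand; fix and re-run until it works
    exit 0                 # marks this hook done and moves on to the next queued hook
    # when no hooks remain, exit the final shell; remember to copy any fixes back
    # into the charm itself, since changes made on the unit are not preserved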
[01:21] (very new to juju -- about 5 min into the docs) does juju set-constraints just change a configuration file?
[01:21] I'm trying to understand how all these settings you're making would be shared with your team
[01:21] I guess you keep your whole ~/.juju in version control?
[01:22] cespare: yeah, the .juju directory started simple but now contains a lot of files which all need to be on every client
[01:22] we're trying to address this
[01:22] but it won't happen on a timeframe useful to this discussion
[01:22] davecheney: ok thanks
[01:23] is there a way to use something other than .juju, perhaps with an env variable or something?
[01:23] does the ceph charm need a second disk?
[01:24] cespare: you can move the location of JUJU_HOME with that environment variable
[01:24] freeflying: yes
[01:24] davecheney, any particular reason for the second disk? shouldn't only ceph-osd need a second disk?
[01:25] cespare: davecheney's right in general, but set-constraints in particular goes into the state database
[01:25] davecheney: ok neato. So is just keeping JUJU_HOME in VC the recommended way of working as a team? The docs feel like they're addressed to a single dev or something
[01:25] axw: oh ok, that makes sense
[01:25] axw: that's mongo?
[01:25] yup
[01:26] cespare: I'd say it's a good idea
[01:26] we don't have a recommendation
[01:26] ok
[01:26] the contents of .juju are very much in flux
[01:26] we know that we keep too much state on dick
[01:26] err
[01:26] disk
[01:26] and are trying to fix it
[01:27] but it's not top priority compared to other stuff we want to get done
[01:28] davecheney: I see. Are you saying that in an ideal world, your .juju would mostly just have enough info to point at the juju configuration server and everything else would be stored there?
[01:30] cespare: bingo
[01:30] would you like a job?
[01:30] Happy with the one I've got, thanks :)
[01:30] Plus I could never use bzr for my dayjob. It wouldn't work out.
[01:31] everyone has their price
[01:31] can't argue with that
[01:33] hehe
[01:37] Is there something I can read about using juju to manage your own application? I suppose it's basically just going to involve writing a charm for it... but what about redeploying it when the code changes, monitoring, logging, etc?
[01:40] cespare: juju is configuration management
[01:40] it doesn't do monitoring or process management
[01:40] what it does do is define a framework for connecting services together
[01:40] the main driver was virtual environments like ec2
[01:40] when the names of the machines are not known ahead of time
[01:41] davecheney: ok.
[01:42] process management is always tricky
[01:42] juju doesn't want to be the process manager
[01:42] So how would juju help out if, say, I have an application that needs a database? Can I get it to just provision me some machines, and my deploy tool can ask juju what the latest set of application servers are?
[01:42] i.e., we don't want to, and in reality cannot, demand that processes do not daemonise themselves
[01:43] cespare: sort of
[01:43] but not really
[01:43] juju lets you define an environment, a collection of services
[01:43] and then I would want juju to invoke my deploy tool if I add more nodes... :\
[01:43] juju is your deploy tool
[01:43] you describe your environment, i.e.
[01:43] juju deploy wordpress
[01:43] juju deploy mysql
[01:43] juju add-relation wordpress mysql
[01:43] juju expose wordpress
[01:44] you don't describe machines, hosts, networks, firewall ports, etc.
[01:44] davecheney: that all makes sense. Does juju not help with my application servers at all?
[01:44] cespare: you'd have to be more specific about what kind of help you are looking for
[01:44] for monitoring we have the idea of subordinate charms
[01:44] which let you describe things like zenoss and nagios agents
[01:44] well, like in my example. I have an application server that I'm hacking on and want to deploy frequently
[01:45] (forget about monitoring and stuff for now)
[01:45] it connects to a db that I brought up with juju deploy mysql
[01:45] davecheney, can we deploy ceph onto a machine which only has 1 disk, and ceph-osd onto a machine that has 2 disks?
[01:45] you'd describe the process of deploying your application server as a charm
[01:45] now, maybe I need to scale up the app server by adding more nodes, or maybe move the db to a different box... does juju do those things?
[01:45] freeflying: no, ceph requires two luns
[01:46] you need to use constraints to make sure the unit is provisioned on a machine with that disk setup
[01:46] but, regrettably, we haven't implemented those constraints yet
[01:46] cespare: juju does those things
[01:46] davecheney, what about ceph-osd then
[01:46] freeflying: that will work
[01:47] davecheney, one disk for osd will be fine?
[01:47] cespare: juju add-unit $SERVICE
[01:47] freeflying: I guess so, isn't it a dashboard or something
[01:47] cespare: juju has the model of one machine per unit of a service
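To illustrate "you'd describe the process of deploying your application server as a charm" above, the charm's install hook might look something like the sketch below. The package names, repository URL, and paths are placeholders, not anything from this conversation:

    #!/bin/bash
    # hooks/install -- hypothetical install hook for an application-server charm
    set -e

    # install runtime dependencies (package list is an example only)
    apt-get update
    apt-get install -y git

    # fetch the application code; URL and target path are placeholders
    git clone https://example.com/yourteam/appserver.git /opt/appserver

    # starting and stopping the service normally belongs in the start and stop hooks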
[01:47] davecheney: ok, so what does deploying version N+1 look like? You build a new version of the charm and then...
[01:47] cespare: you have two options
[01:48] 1. juju upgrade-charm, and write a hooks/upgrade-charm hook that will git pull your code or something
[01:48] or
[01:48] 2. juju destroy-service && juju deploy $SERVICE
[01:48] or juju deploy $SERVICE $NEW_NAME
[01:48] then destroy the old name
[01:49] davecheney: do subordinate charms break from the one-machine-per-unit-of-service paradigm?
[01:49] i.e. can I run the nagios charm together with my app server?
[01:49] cespare: yes
[01:49] s/i.e/e.g
[01:49] ok thanks
[01:49] cespare: yes, subordinate charms are deployed 'into' the machine of the thing they are subordinate to
[01:49] nagios isn't a subordinate
[01:50] it's the server component
[01:50] oh, the agent, whatever
[01:50] nagios-nrpe is the agent
[01:50] yeah
[01:50] fair warning
[01:50] we understand that one machine per service unit is a sucky requirement
[01:50] and makes it hard to have 'small' juju environments
[01:50] we're working on fixing that with lxc containers
[01:50] davecheney: yeah, saw that in the docs
[01:51] but there are complex problems, mainly around networking in hostile environments like ec2 and private openstack clouds, which make the problem much harder
[01:51] davecheney: sounds good, but actually for our infrastructure we pretty much have a machine per service
[01:51] I mean (many) dedicated machines per service
[01:52] cespare: that is why I say service unit
[01:52] the service is the abstract idea
[01:52] the unit is the physical manifestation of one instance of that service
[01:52] davecheney: when a charm reacts to the upgrade-charm hook, is it supposed to transform itself to the same state as if the upgraded charm were deployed to a fresh machine?
[01:53] davecheney: good terminology
[01:53] cespare: as the charm author, we push a lot of that work onto you
[01:53] all we do is call the hook and wave our hands that it is your problem to figure out what that means
[01:53] right, I'm asking if that's what I'm supposed to do as a good citizen
[01:53] ok
[01:54] cespare: there are many ways of skinning the cat
[01:54] davecheney: pretty easy if the application is a single jar/go binary
[01:54] you could also use a config variable to define the revision you want to use
[01:54] then your upgrade could be
[01:54] juju set revision=XXX
[01:54] which would fire the hooks/config-changed hook and you could do a git pull
[01:55] leaving upgrade-charm to only change the actual code of hooks/*
[01:55] ah
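A minimal sketch of that config-variable approach, assuming a hypothetical `revision` option defined in the charm's config.yaml and an application living in /opt/appserver (both placeholders):

    #!/bin/bash
    # hooks/config-changed -- fired by "juju set revision=XXX" as described above
    set -e

    revision=$(config-get revision)   # config-get is a standard juju hook tool
    if [ -z "$revision" ]; then
        exit 0                        # no revision requested yet, nothing to do
    fi

    cd /opt/appserver
    git fetch origin
    git checkout "$revision"
    # restart the application however the charm normally manages it, e.g.
    # service appserver restart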
[01:55] cespare: at its core, juju is two things
[01:55] 1. a generic interface to various vm providers
[01:56] 2. a way of scheduling the remote execution of commands
[01:56] any assumptions above and beyond that have to belong with the charm authors
[01:56] (we do push a lot of responsibility to charm authors)
[02:02] I wish juju worked on digital ocean, that'd be cool
[02:03] cespare: we're working on a thing called manual provisioning
[02:03] which lets you supply machines via ssh
[02:03] ah ok, still totally scriptable though heh
[02:03] that would do the trick
[02:03] it's there in tip if you want to try it
[02:04] but not documented because there are a lot of rough edges
[02:05] davecheney: thanks for answering all my questions
[02:06] np
=== defunctzombie is now known as defunctzombie_zz
=== freeflying is now known as freeflying_away
=== freeflying_away is now known as freeflying
=== defunctzombie_zz is now known as defunctzombie
=== freeflying is now known as freeflying_away
=== axw_ is now known as axw
=== CyberJacob|Away is now known as CyberJacob
[08:10] marcoceppi, please ping me when you start today re charm-tools update
[08:10] marcoceppi, as it's pretty much a complete re-write I need more information before I go speak to the release team
=== defunctzombie is now known as defunctzombie_zz
[11:48] Set up nodes for Ubuntu cloud 12.04 | http://askubuntu.com/q/349867
=== freeflying_away is now known as freeflying
=== cereal_b_ is now known as cereal_bars
[13:18] Error on juju configuration for maas | http://askubuntu.com/q/349892
[13:34] jamespage: ping
[13:34] hey marcoceppi
[13:35] marcoceppi, so a few questions re charm-tools 1.0.0 if you have time
[13:35] jamespage: I've got all the time in the world for this
[13:35] marcoceppi, OK - so I pulled the packages from the PPA and merged them into the main packaging branch in ubuntu
[13:35] then restored a few files under debian/* that had got dropped
[13:36] 1.0.0 is a complete rewrite in python, right?
[13:36] jamespage: correct, the code is rewritten, the packaging is re-done, and the structure of the package changed
[13:37] marcoceppi, so the rationale is really about supportability going forwards, right?
[13:38] as the current package is a mix of bash/python and not actively developed
[13:38] jamespage: the current package, 0.3, is no longer maintained. The re-write was to make charm-tools multi-platform and bring its quality up
[13:49] marcoceppi, OK
[13:57] hey, guys, is there any way to tell juju which lxc bridge name it should be using? lxc works fine, but juju is failing with the "net device not found" error
[14:05] ehw: not that I know of, let me dig through the environments.yaml options
[14:08] marcoceppi: thanks; was looking through the source, but it hasn't got any clearer for me
[14:09] ehw: there are two places for config options, one is in the provider's code itself, then there's like this global options file that is env.yaml options for all environments
[14:13] ehw: it looks like there's a "JUJU_LXC_BRIDGE" environment variable you can set during bootstrap
[14:13] let me dig a little more
[14:13] ehw: oh, wait, that's for something different
[14:14] marcoceppi: yeah, just tried that, didn't seem to get me what I needed
[14:14] ehw: line 42 of provider/local/environ.go has it hard-coded: `const lxcBridgeName = "lxcbr0"`
[14:15] ehw: if you wanted that to be configurable, which isn't out of reason, you'd need to open a bug https://bugs.launchpad.net/juju-core
[14:15] marcoceppi: yeah, looks like I'll be doing that
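For anyone hitting the same error, a couple of quick checks follow from the discussion above. The grep assumes a juju-core source checkout in the current directory; these are illustrative commands only, not an official workaround:

    # confirm the bridge name the local provider is hard-coded to expect
    grep -n lxcBridgeName provider/local/environ.go

    # then check whether a bridge by that name actually exists on the host
    brctl show
    ip addr show lxcbr0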
[14:17] marcoceppi: could you please have a look here: https://code.launchpad.net/~adeuring/charm-tools/python-port-check-config/+merge/186080 ?
[14:18] adeuring: sure can!
[14:18] thanks!
[14:18] adeuring: while you're here
[14:19] REQUIRED_OPTION_KEYS = set(('description', )) - description is the only required key? I thought type was as well?
[14:21] marcoceppi: http://bazaar.launchpad.net/~charmers/juju/docs/view/head:/source/service-config.rst says that "str" is the default type, so I assume it does not need to be specified (well, unless you want an int or float)
[14:21] adeuring: ah, gotcha, thanks
[14:29] marcoceppi: done. pad.lv/1230306
[16:03] Charm call, http://ubuntuonair.com and http://pad.ubuntu.com/7mf2jvKXNa
[16:14] I can't see anything on ubuntuonair?
[16:16] mattyw: http://www.youtube.com/watch?v=UPUO62DQiuw&feature=youtu.be
[16:17] marcoceppi, much better, thanks :)
[16:20] jcastro, can I ask some more questions?
[16:20] I know you love it
[16:23] sure!
[16:23] keep on keeping on!
[16:28] These pages: https://jujucharms.com/fullscreen/search/~mattyw/precise/docker-3/?text=docker
[16:28] how do they get generated? I expected them to update if I updated the charm - or after some interval - but they don't
=== defunctzombie_zz is now known as defunctzombie
[16:29] my 2nd question is: this is what I'm doing with the config http://bazaar.launchpad.net/~mattyw/charms/precise/docker/trunk/view/head:/hooks/config-changed. but it's not really config. is this how the framework charms work at the moment?
=== defunctzombie is now known as defunctzombie_zz
[16:41] jcastro, I think that's all my questions actually
[16:43] the pages are generated ... nightly I think? rick_h_ do you know the interval?
[16:43] marcoceppi can answer the 2nd one
[16:46] mattyw: that hook looks fine. a lot of my charms do things like that in the conduit changed hook
[16:46] marcoceppi, ok cool, glad I get the basic idea
[16:47] config*
[16:50] jcastro: I think we did the charm call too soon?
[16:53] no, it's always been at this time
[16:53] I just moved it to the wrong spot
[16:56] fixed, thanks
=== defunctzombie_zz is now known as defunctzombie
[17:58] http://highscalability.com/blog/2013/9/25/great-open-source-solution-for-boring-ha-and-scalability-pro.html
[17:58] share/tweet/reddit/whatever please!
[19:11] Hi all!
[19:56] Is there a juju PostGIS charm out there? | http://askubuntu.com/q/350035
[20:16] hey popey
[20:20] huh
[20:20] hey guys check this out
[20:20] http://code.scenzgrid.org/index.php/p/jujucharms/
[20:21] http://code.scenzgrid.org/index.php/p/jujucharms/source/tree/f906376d2ecba34e82e15b1e558e1b9e3c4d4ea1/postgis/precise/postgis/README
=== CyberJacob is now known as CyberJacob|Away
[21:57] jcastro: used to be about every 15 min
=== freeflying is now known as freeflying_away
=== thumper is now known as thumper-dogwalk
=== thumper-dogwalk is now known as thumper