[00:16] SpamapS, I'm getting an invalid ssh key now -- I did juju status, and it told me the key had been changed (from my many reinstalls no doubt) and asked to accept or no -- I said no, meaning to cancel out and delete the known hosts file and retry, and now it's Invalid SSH key each time
[00:24] ok, I copied all my .ssh files from the client I'd been working on to the server -- seems to have gotten me past that bit
[00:30] juju status completed -- looks like my first system is in place
[00:52] jason_, woot!
[01:08] <_mup_> juju/go-store r18 committed by gustavo@niemeyer.net
[01:08] <_mup_> Implemented URL.WithRevision.
[01:11] hazmat, mysql deploy success, too...
[01:26] jason_: ho ho
[03:58] <_mup_> juju/go-store r19 committed by gustavo@niemeyer.net
[03:58] <_mup_> New store package with AddCharm and OpenCharm interface.
[03:58] <_mup_> The interface to the package is trivial, but internally it actually
[03:58] <_mup_> handles all the necessary logic for concurrent runs of the algorithm,
[03:58] <_mup_> including mongo-based atomic locks with expiration, multi-URL synchronous
[03:58] <_mup_> revision bumping as described in the charm specification, GridFS-based
[03:58] <_mup_> memory-friendly uploading for large files, and ponies too.
[03:58] <_mup_> Lacks documentation and sha256 handling, though.. but I need some sleep.
[04:11] Night all
[06:04] <_mup_> juju/expose-retry r402 committed by jim.baker@canonical.com
[06:04] <_mup_> Support retrying port mgmt ops in periodic machine check
[08:21] <_mup_> Bug #872164 was filed: [Oneiric] Cannot deply services - store.juju.ubuntu.com not found < https://launchpad.net/bugs/872164 >
[08:48] morning - I took the liberty of pointing the bug reporter for bug 872164 in the right direction and marked the bug as invalid
[08:48] <_mup_> Bug #872164: [Oneiric] Cannot deply services - store.juju.ubuntu.com not found < https://launchpad.net/bugs/872164 >
[08:52] thanks jamespage, I just saw, much better response than mine
[08:52] fwereade_, np
[09:37] I think I must be missing something: should the stop hook be called when a unit is removed from a service using remove-unit?
[11:19] where can i find documentation for txaws?
[11:19] oops, LMGTFY
[12:07] good morning
[12:09] fwereade_, the docs still look out of date.. https://juju.ubuntu.com/docs/user-tutorial.html#deploying-service-units
[12:09] i think jimbaker mentioned yesterday they weren't regenerating
[12:15] jamespage, on bug 871966 when you say local juju environment you mean a local provider?
[12:15] <_mup_> Bug #871966: FQDN written to /etc/hosts causes problems for clustering systems < https://launchpad.net/bugs/871966 >
[12:17] jamespage, the stop hook is not called
[12:18] jamespage, pretty much everything that deals with remove/destroy works one level up from the supervisor of the thing being killed
[12:18] with the notion that even if the thing is AWOL, the action will happen
[12:20] hazmat: hiya
[12:20] rog, txaws is pretty much UTSL for most questions imo
[12:22] hazmat: yeah, i discovered that. thanks.
[12:22] foundations of sand :-)
[12:24] rog, not really.. its well tested. but yeah.. its a consequence of using twisted, vs using the standard python library for aws (boto )
[12:24] uh huh
[12:25] hmm.. interesting
[12:31] hazmat: the comment on bug 871966 does refer to the local provider - but that provides an IP address for private-address anyway
[12:31] <_mup_> Bug #871966: FQDN written to /etc/hosts causes problems for clustering systems < https://launchpad.net/bugs/871966 >
[12:31] jamespage, yup and private-address==public-address there
[12:31] and it shows up in juju status
[12:32] hazmat: I now have something that works with the local provider, and on ec2 and openstack
[12:32] jamespage, nice
[12:32] jamespage, comments about the local provider probably aren't relevant on a cloud-init bug, since the local provider doesn't use cloud-init.. fwiw
[12:33] hazmat: they more referred to the fix for cassandra
[12:33] ah.. ic. its linked
[12:33] yep
[12:36] hazmat: with regards to units leaving a service/not calling stop I was trying to figure out the best way to remove a node from a cassandra cluster
[12:36] because the node does not get shutdown, it remains in the ring
=== plars-holiday is now known as plars
[12:45] rog: ping
[12:46] robbiew: pong
[12:46] rog: have you registered for UDS?
[12:46] robbiew: i think so.... but i'll just check
[12:47] robbiew: yes, i have
[12:47] rog: -> http://uds.ubuntu.com/register/ :)
[12:47] robbiew: i did it on 15th Sep...
[12:47] and flights all booked too
[12:48] rog: hmm, okay. I'll talk to our admins then, thx
[12:49] robbiew: at any rate, i've got a confirmation email from marianna
[12:49] robbiew: i'll just check the web site directly
[12:50] rog: ah, cool
[12:50] nevermind then
[12:50] :)
[12:53] robbiew: ah, maybe i didn't register on the linaro web site. i think i only did the UDS registration.
[12:55] jamespage, hmm
[12:55] jamespage, yeah.. i guess we really should be calling stop on units
[12:56] hazmat: I need to deal with two scenarios - one where its a controlled removal
[12:56] and one where the node goes AWOL
[12:56] jamespage, pls file a bug
[12:56] i can look at that today
[12:56] hazmat: ack - doing now
[12:57] for stopping a machine its almost irrelevant, since we shutdown the machine, but for a unit if we don't call stop, there isn't any thing to keep it from continuing to run
[12:57] at least till all units are containers
[12:57] and then the container is killed
[12:57] rog: UDS is all you need ;0
[12:57] ;)
[12:58] robbiew: ok, i'll ignore the FAQ then...
[12:58] but we really can't do the latter on ec2, till we figure out some magical networking solution, or stop doing dynamic port management
[13:00] unless we assume a single unit per machine in ec2 and do a targeted forward rule per exposed port
[13:04] <_mup_> Bug #872264 was filed: stop hook does not fire when units removed from service < https://launchpad.net/bugs/872264 >
[13:05] hazmat: ^^
[13:05] I tried to document the two challenges I have specifically with the cassandra charm
[13:05] jamespage, thanks
[13:06] I guess they may apply to other charms that have similar ring storage methods
[13:07] jamespage, so on 2) and 1) the other units should both detect the removal
[13:07] hazmat: yes - they do
[13:07] just realised that "canonical/linaro employee" means "(canonical AND linaro) employee" not "(canonical OR linaro) employee"...
[13:07] doh
[13:11] hazmat: and I could use the hook on the remaining nodes to deal with both situations
[13:11] I would need to write it such that only one node completes the action
[13:12] * jamespage thinks about that one
[13:15] * SpamapS awakens.. far too early
[13:17] Good morning all
[13:18] niemeyer: yo!
[13:18] jamespage: I think there's another bug asking for similar functionality..
[13:19] jamespage: bug 862422
[13:19] <_mup_> Bug #862422: Provide a way for services to protect units during dangerous operations < https://launchpad.net/bugs/862422 >
[13:20] jamespage: swift is a similar ring service and has times where adding or removing is a bad idea
[13:21] SpamapS, agreed - it looks very similar
[13:27] Does seem like the stop hook should handle this
[13:27] SpamapS: it would do for controlled removal
[13:28] jamespage: not sure I understand the AWOL case
[13:28] SpamapS, thats more of a housekeeping case
[13:29] in cassandra if you never moved entries for nodes that had gone away ('Down' status) it gets very crufty
[13:29] also you want to ensure that loadbalancing etc.. get re-adjusted as the node won't be coming back
[13:29] jamespage, but don't you get a departed event at all other nodes when one goes AWOL?
[13:29] SpamapS, yes
[13:29] sorry - I mean hazmat
[13:30] * hazmat checks the bug report
[13:30] jamespage: yeah that should be detected in the peer relations
[13:31] cassandra has a prescribed procedure for removing a dead node from the ring
[13:31] niemeyer: i'm porting the ec2 launch code and i'm not sure how goamz's AuthorizeSecurityGroup is supposed to work the way it's being used in the python code. here's a comparison: http://paste.ubuntu.com/706060/
[13:31] SpamapS, it does
[13:32] so on departed.. you would run that procedure for the departed unit
[13:32] jamespage, so in the case of 1) the desire is for the actual termination of the unit to hang till the stop (which is potentially a long running op) completes?
[13:32] and of course to execute stop as part of 1
[13:32] hazmat: ideally yes
[13:33] SpamapS: what information is provided when the -departed hook fires about the remote service unit?
[13:33] jamespage, doesn't the same problem exist in reverse when adding units.. as i recall for cassandra (might be outdated), your supposed to only add a single unit at a time
[13:33] rog: Looks like there's a protocol setting missing
[13:33] jamespage, just the unit name and that it departed
[13:33] rog: Check out the docs and the implementation
[13:34] hazmat: +1 for that, let stop be proactive about locally stored data
[13:34] SpamapS, niemeyer g'morning
[13:34] niemeyer: the python code doesn't seem to set a proto - i was just checking that it wasn't an obvious bug
[13:34] * hazmat just up the ante on his war against rodents, bring in the exterminator
[13:34] * SpamapS wishes the time would change, its pitch black here in LA at 6:30am :-P
[13:34] rog: Maybe it has a default?
[13:34] we're porting the ec2 launch code?
[13:35] hazmat, there is a restriction on adding units - N+N rather than N+1
[13:35] niemeyer: it seems to have two distinct modes of operation
[13:35] there's no obvious default in the python code
[13:35] i'll recheck though
[13:35] rog: They're both backed by the same implementation
[13:35] rog: The same API
[13:35] rog: If one of them is failing, the call is different.. just figure how it's different and you'll understand the problem
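As a rough illustration of the -departed handling discussed above (the surviving peers detect the removal and exactly one of them runs the ring cleanup), a hook could look something like the sketch below. This is an assumption-laden sketch only: it assumes the hook tools (relation-list, juju-log) and hook environment variables (JUJU_UNIT_NAME, JUJU_REMOTE_UNIT) behave as described in the juju docs of the time, and the "lowest unit name acts" rule and the nodetool step are illustrative, not the actual cassandra charm.

```python
#!/usr/bin/env python
# Sketch of a cluster-relation-departed hook for a ring service such as
# cassandra.  Hook tools and env variables are assumptions from the juju
# docs; the nodetool step is a placeholder, not the real cassandra charm.
import os
import subprocess

def run(*cmd):
    return subprocess.check_output(cmd).decode().strip()

departed = os.environ.get("JUJU_REMOTE_UNIT", "")
local = os.environ["JUJU_UNIT_NAME"]

# Units still present in the peer relation, plus ourselves.
survivors = [u for u in run("relation-list").split() if u] + [local]

# Crude deterministic "leader election": only the lowest-named survivor
# acts, so the ring cleanup runs exactly once per departure.
if local != sorted(survivors)[0]:
    run("juju-log", "%s departed; not the cleanup leader, exiting" % departed)
    raise SystemExit(0)

run("juju-log", "removing departed unit %s from the ring" % departed)
# Placeholder for the prescribed removal procedure, e.g. looking up the dead
# node's token from previously stored relation data and then running:
#   nodetool removetoken <token>
```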
[13:36] hazmat: bug 862422 has a case where swift requires that nodes wait to be added until rebalance is done
[13:36] <_mup_> Bug #862422: Provide a way for services to protect units during dangerous operations < https://launchpad.net/bugs/862422 >
[13:37] SpamapS, hazmat: Cassandra has a similar requirement
[13:37] hmm
[13:37] Its not that hard on the add-unit case though
[13:37] you can error out the joined event
[13:37] they can't really scan for a rebalance attribute since its being set by the same hook that's doing it
[13:37] and the hook values are only flushed at the end of the hook
[13:37] and admins will just have to resolve --retry
[13:38] hazmat: the services should protect themselves
[13:38] hazmat: there's somewhere that an admin has to look to see if a re-balance is going on
[13:38] thats where the hook should look
[13:38] SpamapS, there isn't any service level logic.. atm.. its got to be what the units can coordinate among themselves
[13:39] so - just to flip back to my -departed thinking
[13:39] ATM I will need to a) detect which node needs to be removed from the ring
[13:39] hazmat: yeah, I don't think preventing it is juju's problem. Handling failures gracefully should be all it needs to do.
[13:40] and b) elect which of the remaining units is going to execute the removal
[13:40] in the -departed hook
[13:40] Though this does go back to the --wait argument where as an admin I'd like to get feedback from the command's intended actions.
[13:41] jamespage, so a leader election/detection cli api for hooks
[13:41] Does anyone want to volunteer to do a juju session for ubuntu openweek? https://wiki.ubuntu.com/UbuntuOpenWeek
[13:42] niemeyer: hmm, it looks like the python code is using an undocumented feature of aws.
[13:42] hazmat, that would be nice
[13:42] as it would prevent some fragile hack in the charm hook
[13:42] rog, that api has several different spellings, they are documented
[13:43] I'm doing something similar at the moment for unit bootstrapping - which is not 100% reliable
[13:43] when units join the peer relation
[13:43] jcastro: I'm down for it.
[13:43] SpamapS: can you claim a block please?
[13:44] SpamapS: I'll do it with you if you want
[13:44] Yeah at least be there to help me with the bot. ;)
[13:44] rog, txaws is a poor reference impl to look at.. https://github.com/boto/boto/blob/master/boto/ec2/connection.py#L1917
[13:44] is much better at api coverage and docs, notice right above that impl there is support for a deprecated mechanism with slightly different spelling
[13:45] hazmat: SpamapS: got the juju macports done and working, just a versioning question, let me paste here the versions of the python packages I'm using and let me know which ones would you deem as "need upgrading"
[13:46] hazmat: the name "SourceSecurityGroupName" is used as a parameter. i'd have thought that should be documented in http://docs.amazonwebservices.com/AWSEC2/latest/APIReference/index.html?ApiReference-query-AuthorizeSecurityGroupIngress.html
[13:46] given that seems to be the entry point.
[13:47] argparse (1.2.1), zookeeper (3.3.0), python-regex (0.8.0), python-txaws (0.2), pydot (1.0.25), python-argparse (1.2.1)
[13:49] hazmat: maybe we should upgrade txaws?
[13:51] niemeyer: looks like a new entry point is warranted. perhaps the original call would be better named AuthorizeSecurityGroupIP. hmm.
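The two "spellings" of AuthorizeSecurityGroupIngress being compared above are easiest to see through boto, which exposes both through a single call. A minimal sketch, assuming standard boto 2.x behaviour; the group names and account id are made up for illustration, and this is not juju's actual provider code.

```python
# CIDR-based vs group-based security group authorization via boto.
import boto

conn = boto.connect_ec2()  # credentials from the usual AWS env variables

# 1. CIDR form: protocol + port range + source CIDR.
conn.authorize_security_group(
    group_name="juju-sample",
    ip_protocol="tcp",
    from_port=22,
    to_port=22,
    cidr_ip="0.0.0.0/0",
)

# 2. Group-to-group form (SourceSecurityGroupName in the query API): grant
#    access from members of another group.  In the older API revision this
#    form carries no protocol or ports at all.
conn.authorize_security_group(
    group_name="juju-sample",
    src_security_group_name="juju-sample-0",
    src_security_group_owner_id="123456789012",  # made-up account id
)
```

That protocol-less group-pair form appears to be why the python code "doesn't seem to set a proto" in the exchange above.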
[13:51] rog its quite possible txaws is not targeting the latest api
[13:52] rog, actually highly likely given its lack of dev
[13:52] hazmat: txaws has the call. as does boto. but the AWS documentation doesn't mention that variant AFAICS
[13:54] it looks like all the language APIs have that variant. do you know what it's actually doing? authorizing one group with the privileges of another?
[13:54] that would be my guess, but it would be nice to know for sure, so that i can choose a good name.
[13:56] rog, aws supports both because they have a versioned api, boto has separate implementations for each version one marked deprecated.
[13:56] rog, it is documented, but not under the latest version of the api docs which document the latest
[13:57] SpamapS: which slot do you want?
[13:57] hazmat: so what do you reckon :)
[13:57] hazmat: ah, so... we have to ask: what's the equivalent of that old call in the new API?
[13:58] i'll try and find the old docs
[13:58] lynxman, so txaws doesn't have a release with the openstack fixes atm
[13:59] and i should probably push out a new version of txzookeeper
[13:59] lynxman, give me a moment, i'll cut releases for both
[13:59] hazmat: cool :)
[13:59] lynxman, besides that.. what's python-regex?
[13:59] lynxman, we use the builtin re module not a third party lib
[14:00] unless a dep needs it like pydot..
[14:00] rog, it should be pretty clear from context how to translate
[14:02] hazmat: I can drop it as a dependency then, pydot has its own :)
[14:03] hazmat: perhaps. this page talks about a "user/group pair permission", but perhaps that's just code for "allow all IP access". http://docs.amazonwebservices.com/AmazonEC2/dg/2007-01-03/ApiReference-Query-AuthorizeSecurityGroupIngress.html
[14:04] lynxman, so python-txzookeeper 0.8.0 is needed as well
[14:05] lynxman, and zookeeper 3.3.3 .. there are definitely bug fixes in the py bindings we need
[14:05] hazmat: alright, I'll upgrade both then, ty
[14:06] lynxman, np.. the latest pypi release for txzookeeper looks good, off to push out a 0.2.1 txaws release
[14:06] hazmat: lovely, thanks! :D
[14:17] tcp port numbers are 16 bit even with IPv6, right?
[14:18] * niemeyer looks at rog with the eye
[14:19] ok, ok, i should know that.
[14:36] jcastro: sorry, family stuff, I'll grab one in the next 2 hrs
[14:41] niemeyer: just checking: have you already written some Go code to parse environments.yaml?
[14:41] rog: No, that was the first bit I suggested you could start with
[14:42] ok, cool
[14:42] (BTW the instance starting and group set up code is all working now)
[14:42] rog: Please follow the existing convention in the charm package
[14:42] rog: Wow, neat!
[14:42] rog: How're you testing it?
[14:43] niemeyer: it's just a stub file currently, no tests written so far
[14:43] rog: Heh
[14:43] rog: So there's nothing..
[14:43] niemeyer: just running it and going to the aws console to check
[14:43] rog: :)
[14:43] rog: Please write tests with the logic, rather than retrofitting them
[14:44] rog: We should follow a similar model to what was done with goamz itself
[14:44] rog: Rather than the mocking craziness we have in the Python side
[14:44] niemeyer: yes, tests are the next thing i'm putting in. the code isn't even in a package yet.
[14:45] rog: Ok, it's a spike then
[14:45] niemeyer: a spike?
[14:45] rog: yeah, a temporary hack to get a feeling of the problem
[14:45] niemeyer: yeah, although i've ported a lot of the logic from the original python, so it should be trivial to do it right.
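For the environments.yaml parsing mentioned a few messages up, the getting-started doc of the time describes roughly the shape shown in this sketch for an ec2 environment; a minimal loader follows. The key names are taken from those docs, the values are placeholders, and the authoritative key set lives in the Python provider code, as niemeyer suggests below.

```python
# Rough shape of an environments.yaml for the ec2 provider, plus a minimal
# loader.  Values are placeholders; keys follow the getting-started docs.
import yaml

SAMPLE = """
environments:
  sample:
    type: ec2
    access-key: YOUR-ACCESS-KEY
    secret-key: YOUR-SECRET-KEY
    control-bucket: juju-some-unique-s3-bucket-name
    admin-secret: some-long-random-secret
"""

config = yaml.safe_load(SAMPLE)
for name, env in config["environments"].items():
    print("%s -> provider type %s" % (name, env["type"]))
```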
[14:46] niemeyer: this is all i've got so far: http://paste.ubuntu.com/706139/
[14:48] rog: Nice
[14:50] lynxman, latest txaws release @ http://launchpad.net/txaws/trunk/0.2/+download/txAWS-0.2.1.tar.gz
[14:50] hazmat: lovely, thanks :)
[14:50] niemeyer: what's the best approach to testing with ec2? actually interact with ec2 directly?
[14:53] rog: No, we can follow a similar model from goamz
[14:53] ok, i'll have a look.
[14:55] niemeyer: BTW is this the only spec for the environment yaml? https://juju.ubuntu.com/docs/getting-started.html#configuring-your-environment
[14:55] rog: Please read the Python code
[14:55] ok
[15:05] hazmat: new ports submitted, contacted one of the maintainers and it's *possible* that juju will be in the archive by next week
[15:07] lynxman, sweet!
=== hazmat` is now known as hazmat
[15:26] lynxman: is there an artifact somewhere where I can test and provide positive feedback to the maintainers?
[15:27] SpamapS: I can send you my portindex branch if you want
[15:45] SpamapS, this branch should hopefully fix the problem you saw on openstack with expose failing: lp:~jimbaker/juju/expose-retry
[15:53] Hah, I love this code
[15:53] self.mocker.call(simulate_random_failure)
[15:53] :)
[15:54] jimbaker: indeed that should retry those ops. There are many others.. I think we just have to get defensive about txaws
[15:59] * hazmat lunches
[16:00] I'm off to lunch too.
[16:06] SpamapS, :). we need to be defensive about txaws because it needs work and it necessarily deals with bad stuff. in general, txaws will fail early, if it has a bad payload it can't parse
[16:07] for commands like destroy-environment that can be repeated, this may be ok. for agents, we need to do retries
[16:10] i'm pretty certain that the provisioning agent retry mechanism (ignoring that it's a SPOF for now) seems to be robust, so long as we have errbacks defined such that stuff doesn't just stop. in the case of expose, the only place where txaws can be called is that one method (open_close_ports_on_machine), so trapping there and then using the existing resync mechanism for retries would seem to suffice
[16:11] Are there any operations that the provisioning agent does w/ txaws where it shouldn't retry on error?
[16:12] expose/unexpose was just the most common fail we had
[16:12] there were others
[16:12] any time listing instances returned empty ... things were likely to just grind to a halt
[16:14] SpamapS, i suspect the problem with that is seen here: http://pastebin.ubuntu.com/706206/, specifically lines 17-21
[16:15] i need to check that get_machines will always raise a ProviderError if it fails
[16:16] SpamapS, no, it only catches EC2Error, but txaws will raise other errors
[16:17] jimbaker: yeah seems like we should be able to trust our internal libraries to always raise only ProviderError. :)
[16:18] SpamapS, that's definitely not the convention we have
[16:18] no catchalls
[16:18] seems like catchalls at external libraries would be a good idea, but not for internal ones.
[16:18] except perhaps in some twisted code where we use an errback setup, and then that does catch everything
[16:20] SpamapS, yeah, i don't know. i think i can defend the existing mechanism by stating that for nonagent code, it's better to failfast, so any unknown errors bubbling up is fine
[16:22] SpamapS, but if i look at periodic_machine_check, it does the right thing: it always reschedules itself, even if there's an error (equiv to inlineCallbacks with a finally)
[16:23] SpamapS, so it should be resilient. and of course, if txaws is bad here, vs just getting an occasional bad payload, there's nothing that can be done anyway except to repeatedly log the problem
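The "always reschedules itself, even if there's an error" behaviour jimbaker describes for periodic_machine_check corresponds to a pattern like the one below. A Twisted sketch only, with made-up names and interval, not the actual provisioning agent code.

```python
# Periodic check that reschedules itself unconditionally, i.e. the
# "inlineCallbacks with a finally" pattern described above.
from twisted.internet import defer, reactor
from twisted.python import log

@defer.inlineCallbacks
def periodic_check(check, interval=60):
    """Run a deferred-returning `check`, then reschedule no matter what."""
    try:
        yield check()
    except Exception:
        log.err()  # record the failure, but never let it stop the loop
    finally:
        reactor.callLater(interval, periodic_check, check, interval)
```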
[16:26] jimbaker: thats really what I'm wondering.. I don't know of any action the provisioning agent takes that shouldn't just be retried over and over. I will say that we need a better way than debug-log to track provisioning operations.
[16:28] SpamapS, i think this would be helpful, bug 769120
[16:28] <_mup_> Bug #769120: Ensemble status shouldn't report dead units based soley on state, but also on presence. < https://launchpad.net/bugs/769120 >
[16:32] niemeyer, the doc builds on juju docs have been broken for a while.. they're still referencing old ways of deploying
[16:33] SpamapS, ok, i think i see one bug here however: watch_machine_changes is a watch, and it calls process_machines. so this watch would stop working if process_machines fails because of some random exception from txaws
[16:33] hazmat: Can you please raise that up in #is?
[16:35] SpamapS, we would still see the resync from the periodic_machine_check, but the provisioning agent wouldn't respond to changes to ZK as they happen
[16:35] jimbaker: exactly!
[16:35] SpamapS, cool, glad to see your evidence corresponds to what i'm seeing here :)
[16:35] jimbaker: did we ever open an actual bug for this?
[16:36] I suppose you can just lpad it :)
[16:37] SpamapS, i'll just open it conventionally, since i don't have a branch in place to fix it
[16:37] niemeyer, done.. is there any one i should ping about it?
[16:38] hazmat: Hmm.. #is? Who did you ping if you're wondering about who to ping?
[16:38] niemeyer, i just put the message about the problem on #is.. just wondering if i should bring it to a particular person's attention on #is
[16:39] hazmat: Ah, gotcha
[16:39] hazmat: No, I'd just wait to see if someone there is able to help
[16:39] hazmat: Otherwise mail rt
[16:39] niemeyer, k, thanks
[16:54] <_mup_> juju/go-store r20 committed by gustavo@niemeyer.net
[16:54] <_mup_> Introduced revision key tracking so that we can detect whether a
[16:54] <_mup_> charm update is already the current tip across all requested URLs
[16:54] <_mup_> or not. If at least one of the URLs are out-of-date, the update
[16:54] <_mup_> will proceed and bump a revision on all of them.
[17:00] i'm off for the day. see y'all tomorrow.
[17:03] rog: Cheers!
[17:05] <_mup_> Bug #872378 was filed: Provisioning agent stops watching machine changes in ZK < https://launchpad.net/bugs/872378 >
[17:05] SpamapS, i just filed bug 872378
[17:05] <_mup_> Bug #872378: Provisioning agent stops watching machine changes in ZK < https://launchpad.net/bugs/872378 >
[17:05] jimbaker: thanks, will confirm and mark High
[17:05] SpamapS, thanks, just what i was going to ask :)
[17:06] oh you did that :)
[17:06] i did the high part, you can still confirm it however
[17:06] need to raise a txaws bug too
[17:06] i'll get the bug dance better next time
[17:06] well I am pretty religious about not confirming my own bugs :)
[17:07] SpamapS, it's an interesting question about txaws, but given that it's a closely related project, worth seeing their philosophy here - do they handle bad payloads or not?
[17:07] no
[17:07] the project expects its AWS partner to be well behaved
[17:08] so there's also a nova bug to raise
[17:08] as nova shouldn't be returning empty ever
[17:08] heh.. we should probably have a little triage party to clean up txaws's bug list.
[17:08] got it. but regardless we would still expect to see TimeoutError, so there's some class of errors txaws will likely not handle
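Bug 872378 above is about watch_machine_changes dying when process_machines raises an unexpected txaws error. One way to harden a watch callback is sketched below; names and placement are assumptions, and the real fix inside the provisioning agent may well look different.

```python
# Sketch: wrap a watch callback so an unexpected exception (e.g. a bad
# payload from txaws) is logged instead of killing the watch.
import functools
from twisted.internet import defer
from twisted.python import log

def resilient(callback):
    """Wrap a deferred-returning watch callback; trap and log any failure."""
    @functools.wraps(callback)
    @defer.inlineCallbacks
    def wrapper(*args, **kwargs):
        try:
            yield callback(*args, **kwargs)
        except Exception:
            log.err()  # the periodic resync can pick up anything missed
    return wrapper

# Hypothetical usage inside the agent:
#   watch_machine_changes(resilient(process_machines))
```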
[17:08] 34 new, 72 open, 3 high..
[17:51] <_mup_> juju/go-store r21 committed by gustavo@niemeyer.net
[17:51] <_mup_> Track sha256 and store next to the charm information so we can answer
[17:51] <_mup_> related API requests in the future.
[18:01] <_mup_> juju/go-store r22 committed by gustavo@niemeyer.net
[18:01] <_mup_> Copied log.go from personal project (mgo).
[18:44] lynxman: heya, any update on the macports thing?
[18:51] hazmat: hey is there an easy way to tell the local provider to use my existing apt cache instead of installing all this apt-cacher-ng business?
[18:52] jcastro, i think he mentioned updating the portfile, he's going to ping one of the maintainers, with luck soon
[18:53] jcastro, sadly no
[18:55] jcastro, is the initial download a problem?
[18:55] yeah, this close to release the mirrors are hammered, I'll suffer and find something else to do
[19:05] SpamapS: did you mention you had pending MW charm changes?
[19:06] m_3: everything I had is in lp:charm/mediawiki
[19:07] SpamapS: cool thanks
[19:09] <_mup_> juju/go-store r23 committed by gustavo@niemeyer.net
[19:09] <_mup_> Added info/debug logging across the charm storage operations.
[19:18] jamespage, ping
[19:19] jamespage, i'm wondering how problematic it is to always kill the unit's processes on removal instead of a controlled termination via stop
[19:20] hazmat: stop needs to be able to *cancel* the removal
[19:21] SpamapS, there's not much distinguishing a unit removal from a service removal at that level
[19:21] It would be awesome if charms could prevent data loss without a --force flag by simply refusing to stop the service while it is vulnerable.
[19:21] and units overriding the user express commands..
[19:22] hmm
[19:22] is this only happening on destroy-service, not on remove-unit ?
[19:22] I do kind of think destroy-* should be more heavy handed
[19:22] SpamapS, it would happen on either one, the mechanics are the same atm
[19:23] SpamapS, how does the service know if its redundant or not?
[19:23] service unit
[19:25] <_mup_> juju/config-get r393 committed by kapil.thangavelu@canonical.com
[19:25] <_mup_> juju get for service config/schema inspection
[19:27] hazmat: in the case of any clustered service, it will have some way to determine if removing this node is safe or not.
[19:29] hazmat: stop would also be a decent place for a single node service to signal some kind of snapshot or backup.
[19:30] so blocking until its done would be cool
[19:31] SpamapS, the converse question is how to prevent problems with problematic charms, that might for example have a broken stop... or even well meaning ones that go out of control
[19:32] decommissioning a node in cassandra is potentially a fairly long operation afaicr
[19:32] we'll need intermediary states to properly convey status to a ui
[19:32] ie. 'stopping'
[19:32] we only have nouns now.. not verbs
[19:36] hazmat: --force ?
[19:36] sounds reasonable
[19:37] hazmat: I see what you mean. Yes it would be cool if we followed upstart's model there and had a goal state, and the in-between states with hooks available for each state.
[19:37] SpamapS, exactly
[19:37] hmm.. well maybe not hooks available for each state, but at least the same re status
[19:37] effectively it would be a hook per verb
[19:38] stop/running -> stop/hook-stop-running -> (if hook says so, stop/deferred-stop) -> stop/stopping-unit -> oblivion
[19:38] Like if a hook exits 100, that means it is running the safe stop in the background
[19:39] then you can just keep trying to stop it, and getting back 100 until its done decommissioning
[19:39] and you can still have a short timeout to deal with misbehaving charms
[19:39] i'm going to capture this discussion into the bug
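If the "exit 100 while the safe stop runs in the background" idea above were adopted, a stop hook might look roughly like the sketch below. This is purely an illustration of the proposal in this conversation: juju does not treat exit code 100 specially at this point, and the marker file path is made up.

```python
#!/usr/bin/env python
# Sketch of the "deferred stop" idea: start a long decommission in the
# background and exit 100 while it is still running, so the agent could keep
# re-invoking the hook until the unit is actually safe to remove.
# Exit code 100 is only the convention proposed above, not implemented juju
# behaviour; the marker file is illustrative.
import os
import subprocess
import sys

MARKER = "/var/run/cassandra-decommission.pid"

if not os.path.exists(MARKER):
    # First invocation: kick off the (potentially very long) decommission.
    proc = subprocess.Popen(["nodetool", "decommission"])
    with open(MARKER, "w") as f:
        f.write(str(proc.pid))
    sys.exit(100)  # "still stopping, ask me again later"

pid = int(open(MARKER).read().strip())
try:
    os.kill(pid, 0)          # is the decommission process still alive?
except OSError:
    os.unlink(MARKER)
    sys.exit(0)              # finished; safe to tear the unit down
sys.exit(100)                # still decommissioning
```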