[11:43] <micheil> Is there a document somewhere describing how upstart issues SIGTERM and SIGKILL, I can recall reading something about it, but I can't remember where
[12:08] <micheil> I'm basically wanting to be able to send a signal to my process to kill it, but allowing me time to shut down the process properly.
[14:21] <ion> Do whatever shutdown mechanics you need in pre-stop.
[14:24] <micheil> hmm, can I send a signal to my program in pre-stop?
[14:24] <micheil> (my program is a webserver, and there's information that I need to make sure is persisted prior to killing it)
[14:24] <ion> sure
[14:25] <ion> Do whatever you need.
[14:25] <micheil> using kill -s SIGNAL, right?
[14:25] <ion> yes
[14:25] <micheil> and then have my program listen for that signal?
[14:26] <micheil> can I get the PID for the process that upstart created?
[14:26] <micheil> (eg, $PID, like the other variables upstart provides me)
[14:29] <micheil> (I haven't actually been able to find a list of variables available within upstart scripts)
[14:42] <ion> Are you sure just letting Upstart send TERM and using something like “kill timeout 120” to give it time before sending KILL isn’t enough?
[14:43] <ion> The environment variables are listed in init(5).
[14:43] <ion> pre-stop script
[14:43] <ion>   PID=
[14:43] <ion>   eval "$(
[14:43] <ion>     status | \
[14:43] <ion>     sed -nre 's|^.* stop/pre-stop, process ([0-9]+)$|PID=\1;|p'
[14:43] <ion>   )"
[14:43] <ion>   [ -n "$PID" ]
[14:43] <ion>   # Do stuff with "$PID"
[14:43] <ion> end script
[14:45] <micheil> hmm.. okay.
[14:46] <micheil> so, if I just give my process ample time to shut down, then first it'll receive SIGTERM, then there'll be a timeout of X seconds before SIGKILL is sent
[14:48] <ion> yes
[14:48] <micheil> okay, I think I get what you mean.
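
The flow ion describes, where Upstart sends SIGTERM on "stop" and only falls back to SIGKILL after a grace period, comes down to a job stanza like this (a minimal sketch; the job name, paths, and the 120-second value are illustrative, not from the log):

```
# /etc/init/mywebserver.conf -- hypothetical job definition
description "example webserver"

start on runlevel [2345]
stop on runlevel [!2345]

# On "stop", Upstart sends SIGTERM to the main process, waits up to
# 120 seconds for it to exit on its own, and only then sends SIGKILL.
kill timeout 120

exec /usr/local/bin/mywebserver
```
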
[16:21] <micheil> hmm, kill timeout seems not to always function correctly..
[16:26] <micheil> it seems to not always space them out
[16:52] <micheil> actually, it was working, just that my program didn't flush its state before exiting
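
micheil's fix, flushing before exiting, is the whole job of the TERM handler. A minimal sketch in POSIX sh (the messages and the self-TERM are demo scaffolding so the example terminates on its own; the real handler would persist the webserver's state instead of echoing):

```shell
#!/bin/sh
# Trap SIGTERM, persist state, then leave the main loop cleanly, so
# Upstart's "kill timeout" window is enough and SIGKILL never fires.
running=1
on_term() {
  echo "state flushed"   # stand-in for the real persistence work
  running=0
}
trap on_term TERM

# Demo only: a background helper TERMs us after a second, so the
# example finishes by itself instead of looping forever.
( sleep 1; kill -TERM $$ ) &

while [ "$running" = 1 ]; do
  sleep 1                # stand-in for the server's main loop
done
echo "clean exit"
```

Note that the shell runs the trap after the current foreground command (the `sleep`) returns, so a handler like this adds at most one loop iteration of delay before shutdown begins.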
[18:55] <adam_g> hi, are there any script sections that will execute upon starting an already started job? 
[18:56] <adam_g> ie, im using an upstart job to start a variable number of instances of another job. there is a default number, but it can also be passed to the upstart job as NUM_INSTANCES.  
[18:57] <adam_g> if the upstart job never enters the started state, i can just start it again with a new NUM_INSTANCES, and the pre-start script will start up or shut down instances accordingly
[18:58] <adam_g> with a post-stop script, the job actually enters and stays in started state, and i can no longer modify the NUM_INSTANCES without stopping/starting
[19:11] <Keybuk> the started event will pass the "name" of the instance as $INSTANCE
[19:11] <Keybuk> so you get as many:
[19:11] <Keybuk>   started JOB=$jobname INSTANCE=$instancename
[19:11] <Keybuk> as there are instances
[19:11] <Keybuk> is that what you mean?
[19:12] <adam_g> not exactly
[19:14] <adam_g> ive got a "launcher" job that i start like: start launcher NUM_INSTANCE=5
[19:14] <Keybuk> ok
[19:14] <Keybuk> and that runs start <somethingelse> INSTANCE=$i ?
[19:14] <Keybuk> or somesuch thing?
[19:14] <adam_g> in the pre-start, yeah
[19:14] <adam_g> ideally id like to be able to do 'stop launcher' and it will tear down all the jobs it has launched
[19:14] <adam_g> which works
[19:15] <adam_g> but id also like to later be able to do: start launcher NUM_INSTANCE=10 to launch an additional 5
[19:15] <adam_g> its more for convenience than anything, i could certainly just do it manually or via some script elsewhere
[19:15] <Keybuk> right
[19:15] <Keybuk> so you're going to need somewhere to store state
[19:16] <Keybuk> your launcher will need to use /var/run to record how many instances it has running
[19:16] <Keybuk> you might want a second job "on started <thing>" to verify that they are running
[19:16] <Keybuk> and then your stop will iterate that /var/run file to shut them down
[19:16] <Keybuk> likewise your start will check that file to see how many more it needs to start
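
Keybuk's state-file approach might look roughly like this as a job file (a sketch only: the job names, the `worker` job it spawns, and the /var/run path are all assumptions, and the pre-start here only scales up, as in the log's scenario):

```
# /etc/init/launcher.conf -- hypothetical
# Usage: start launcher NUM_INSTANCES=5

pre-start script
    current=$(cat /var/run/launcher.count 2>/dev/null || echo 0)
    target=${NUM_INSTANCES:-2}
    i=$current
    while [ "$i" -lt "$target" ]; do
        i=$((i + 1))
        start worker INSTANCE=$i
    done
    echo "$target" > /var/run/launcher.count
end script

post-stop script
    current=$(cat /var/run/launcher.count 2>/dev/null || echo 0)
    while [ "$current" -gt 0 ]; do
        stop worker INSTANCE=$current
        current=$((current - 1))
    done
    rm -f /var/run/launcher.count
end script
```
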
[19:16] <adam_g> right now im just querying initctl list to find out how many are running
[19:17] <Keybuk> yeah, you could do that too
[19:17] <adam_g> the starting and stopping isn't an issue
[19:17] <adam_g> and actually, i can start the job again with a different NUM_INSTANCES and sync the number of jobs running accordingly
[19:18] <adam_g> my issue is that, once the launcher job is in the started state, i can't just start it again with a new NUM_INSTANCES since its already running
[19:18] <Keybuk> right
[19:18] <Keybuk> because you've used pre-start
[19:18] <Keybuk> it's "running"
[19:18] <Keybuk> the alternative would be to use "task" and "script"
[19:19] <Keybuk> but then "stop launcher" won't work
[19:19] <Keybuk> another option for you:
[19:19] <Keybuk> to adjust the number of instances use "restart job NUM_INSTANCES=10"
[19:19] <Keybuk> and have the post-stop check whether the new num instances is not the old, and if so, you know you're being restarted with a new target
[19:20] <adam_g> that will require current instances to be stopped, though?
[19:22] <Keybuk> no, you could do some magic in your post-stop to prevent that
[19:22] <Keybuk> after all, it's your code stopping the current instances
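
The restart trick Keybuk outlines could be sketched as a post-stop that compares the old and new targets (names assumed; whether the new NUM_INSTANCES is visible to post-stop depends on how the restart passes the environment through, so treat this as the shape of the idea rather than a drop-in):

```
post-stop script
    old=$(cat /var/run/launcher.count 2>/dev/null || echo 0)
    if [ "${NUM_INSTANCES:-$old}" != "$old" ]; then
        # Being restarted with a new target: leave the running
        # workers alone and let pre-start reconcile the count.
        exit 0
    fi
    # A real "stop launcher": tear everything down.
    while [ "$old" -gt 0 ]; do
        stop worker INSTANCE=$old
        old=$((old - 1))
    done
    rm -f /var/run/launcher.count
end script
```
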
[19:23] <adam_g> that sounds like it might do the job
[19:24] <adam_g> while i have you, are there any plans to implement some form of scripted status operation?
[19:24] <Keybuk> how do you mean?
[19:26] <adam_g> well, as far as i can see.. status some-job just reports that its pid is running and returns accordingly? 
[19:26] <Keybuk> yes
[19:26] <Keybuk> you're asking the init daemon what its understanding of the process's status is
[19:26] <Keybuk> as far as init is concerned, a process is running or it isn't
[19:26] <adam_g> yeah
[19:28] <Keybuk> what were you expecting instead?
[19:29] <adam_g> something similar to a sysv init script, where the status operation can be scripted to do any number of checks so long as its return codes conform to spec. the same for start and stop.. 
[19:29] <Keybuk> but that would mean Upstart would have to continually run those
[19:30] <Keybuk> for that kind of thing, you probably want something like monit
[19:30] <Keybuk> that can do things like connect to a server, make requests, etc. to determine whether the running process is stoned, etc.
[19:30] <Keybuk> and if so, tell Upstart to take action to respawn it
[19:30] <Keybuk> (killing the stoned process in the process, obviously)
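
The external monit-style check Keybuk describes can be approximated with a small watchdog run from cron or a supervisor. Everything here is illustrative, not from the log: the job name, and the probe, which in practice would be something like an HTTP request against the service:

```shell
#!/bin/sh
# Run a probe command; if it fails, ask Upstart to restart the job.
watchdog() {
  job=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "$job ok"
  else
    echo "$job unresponsive, restarting"
    # restart "$job"   # the real action on an Upstart system
  fi
}

watchdog myservice true    # probe succeeds
watchdog myservice false   # probe fails, restart branch taken
```

The restart call is commented out so the demo is runnable anywhere; on a real box the probe would be whatever "connect to the server, make a request" check fits the service.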
[19:31] <adam_g> right
[19:31] <adam_g> i guess im just used to abusing init.d scripts 
[19:32] <Keybuk> yeah, Upstart takes a "everything that is running is ok" approach
[19:32] <Keybuk> ie. hands off sysadmin
[19:33] <Keybuk> you shouldn't need to login to 10,000 boxes every day and run "status" just to see whether the service is still ok
[19:33] <Keybuk> that's the init daemon's job
[19:35] <Keybuk> so one thing I will be supporting in Upstart 2 is side-along jobs
[19:35] <Keybuk> and temporal jobs
[19:36] <Keybuk> so you'd be able to do things like:
[19:36] <Keybuk>   every hour
[19:36] <Keybuk>   while running
[19:36] <Keybuk>   script
[19:36] <Keybuk>     test_its_working || restart
[19:36] <Keybuk>   end script
[19:36] <Keybuk> type things
[19:38] <adam_g> nice. :)
[19:41] <Keybuk> another thing you could do would be define a custom command:
[19:41] <Keybuk>   on print-status
[19:41] <Keybuk>   script
[19:41] <Keybuk>     echo ...
[19:41] <Keybuk>   end script
[19:41] <Keybuk> then "yourjob print-status" would run that
[19:41] <Keybuk> but that wouldn't be automatic as a result of "status" (but /usr/sbin/service could, for example)
[19:41] <Keybuk> since "status" itself is supposed to be
[19:41] <Keybuk>  (a) cheap
[19:41] <Keybuk>  (b) safe
[20:00] <JanC> custom commands sound like a nice feature