[07:17] hey all :) I'm trying to run this command in an upstart script. Does any of you see what's wrong with it? "exec start-stop-daemon --start -c youtrack --exec /usr/bin/java -jar /usr/local/youtrack/youtrack.jar 10002"
[09:18] xnox, stgraber: could you take a look at the o/s MPs when you get a chance?
[12:45] jodh: 'If a job is running, the changes to the definition will not occur until the job has been stopped; for instance jobs, all instances must be stopped.'
[12:45] jodh: what does 'instance job' mean there?
[12:45] jodh: and will 'initctl reload' override the above?
[12:47] jodh: never mind, the job I care about is an instance job
[12:48] jodh: or rather, ignore the first question; I'd love to know if 'initctl reload' is enough to override
[12:52] elmo: 'initctl reload' simply sends the reload signal (SIGHUP by default) to the job. If you change a .conf file, Upstart gets notified immediately, but if a job is actually running, *it* won't get a new copy of the configuration unless you stop and then restart it.
[12:52] jodh: so, the issue is I have a ceph osd instance job
[12:52] jodh: and I *really* don't want to restart all 11 of them
[12:53] jodh: is there no way to get upstart to use the new ceph-osd.conf for any future restarts?
[12:53] (short of stopping all 11)
[12:57] jodh: e.g. could I use 'initctl reload ceph-osd' without risk of it stop/starting all the instances?
[13:03] jodh: I suspect there's some reload vs reload-configuration confusion propagated by http://upstart.ubuntu.com/faq.html#reload
[13:03] elmo: yes, are you talking about reloading the upstart (.conf) config, or the ceph config (assuming there is one)?
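One likely problem with the 07:17 command: start-stop-daemon keeps parsing everything after --exec as its own options, so arguments intended for the daemon (-jar ...) have to come after a "--" separator. A minimal sketch of the fix, keeping the paths and port argument from the question as-is:

```shell
# Sketch: start-stop-daemon stops parsing its own options at "--";
# everything after it is passed through to the daemon being started.
exec start-stop-daemon --start -c youtrack \
    --exec /usr/bin/java -- -jar /usr/local/youtrack/youtrack.jar 10002
```

This only addresses the argument-passing issue; whether the job also needs an expect stanza depends on how java daemonizes, which the question doesn't say.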
[13:04] just the upstart config
[13:04] basically right now it's starting up the OSDs with incorrect command-line arguments calculated in the upstart conf itself
[13:04] we fixed that
[13:05] but if someone bounces one of these, it'll come back with incorrect settings (with potentially disastrous results)
[13:06] and I'd expect that diagnosing that problem would be nigh impossible without the context we just got about how upstart caches the configuration
[13:06] Spads: confused - I thought you said you fixed the issue and that the running instance(s) have the wrong arguments?
[13:06] Spads: the whole point of upstart instances is that they are "templates" - every instance is "identical", at least to upstart, apart from the value of the 'instance' variable(s).
[13:06] jodh: yes, that's what we're saying. We fixed ceph-osd.conf, but when we restart an individual instance job it uses the bad information from the old broken conf
[13:07] sure
[13:07] how do we make them all use the new version of the config without taking them all down together?
[13:07] Spads: you need to stop, then start - not "restart".
[13:07] yes, we stopped, slept for ten seconds, then started
[13:07] for one instance job
[13:07] stop ceph-osd id=${id}; sleep 10; start ceph-osd id=${id}
[13:08] right, but since there were presumably other instances still running, upstart will not give the newly started instance the *new* config, since that would mean the newly started instance would/could be completely different from the running instances (bad).
[13:08] and that start caused it to start with the same broken arguments (which are not the ones in the conf on disk any more)
[13:08] sure
[13:08] how do we tell upstart to switch to the new one while we do rolling restarts of the instance jobs?
[13:09] stop all instances (which allows upstart to propagate the new config), then start the new instances.
[13:09] ☹
[13:10] is there not even a hacky way to force the issue here?
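jodh's 13:09 answer, written out as a sketch (the id range is an illustrative assumption based on elmo's mention of 11 OSDs):

```shell
# Stop every instance first so Upstart can discard the cached job
# definition; only then does a fresh start pick up the fixed
# ceph-osd.conf. The id values are assumed for illustration.
for id in $(seq 0 10); do stop ceph-osd id=$id; done
for id in $(seq 0 10); do start ceph-osd id=$id; done
```

The cost is that all OSDs are down between the two loops, which is exactly what elmo is trying to avoid.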
[13:12] we don't like hacks :) seriously, not that I can think of, sorry.
[13:14] Hm, no way to (say) take an instance job down and start it up in a new instance group somehow?
[13:14] some kind of staging upstart job for these to live in for a time?
[13:14] * Spads is clutching at straws
[13:45] Spads: you can cheat!
[13:45] xnox: Tell me how.
[13:45] Spads: cp /etc/init/ceph-osd.conf /etc/init/ceph-osd-transition.conf
[13:45] * Spads nods
[13:45] Spads: stop 5 instances of ceph-osd, and start 5 instances of ceph-osd-transition
[13:45] that's what I was hoping would work. have you done this?
[13:46] Spads: stop remaining 5 instances of ceph-osd, start 5 instances of ceph-osd (new config)
[13:46] Spads: stop & delete all ceph-osd-transition
[13:46] Spads: start remaining 5 of ceph-osd.
[13:46] Spads: but you just need to make sure that the first 5 instances of ceph-osd-transition are operational and do cephy things that ceph-osd is supposed to do.
[13:48] * Spads nods
[13:52] actually, that technique is in the cookbook (http://upstart.ubuntu.com/cookbook/#using-expect-with-script-sections), but maybe we should elevate it to its own section :)
[14:00] interesting.
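xnox's 13:45-13:46 cheat, collected into one sketch. The 5/5 split and the id values are assumptions for illustration (elmo actually has 11 OSDs), and the copied job keeps the old, broken config, so the transition instances only bridge the gap:

```shell
# Clone the job so half the OSDs can run under a different job name
# while the original ceph-osd job is fully stopped.
cp /etc/init/ceph-osd.conf /etc/init/ceph-osd-transition.conf

# Move the first half onto the transition job (still the old config).
for id in 0 1 2 3 4; do
    stop ceph-osd id=$id
    start ceph-osd-transition id=$id
done

# With no ceph-osd instances left running, Upstart propagates the new
# config, so these starts pick up the fixed ceph-osd.conf.
for id in 5 6 7 8 9; do stop ceph-osd id=$id; done
for id in 5 6 7 8 9; do start ceph-osd id=$id; done

# Drain the transition job, bring the first half back under ceph-osd
# (now also on the new config), and delete the temporary job.
for id in 0 1 2 3 4; do
    stop ceph-osd-transition id=$id
    start ceph-osd id=$id
done
rm /etc/init/ceph-osd-transition.conf
```

As xnox notes, this only works if the transition instances really do "cephy things" identically to ceph-osd, so the cluster never loses more than the batch being cycled.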