[10:02] <johnm> Hello all. I wonder if anyone can answer what I presume to be a quick question. I have an upstart job with named instances, and I wanted to specify which named instances to automatically start during init. Is this possible?
[10:11] <jhunt> johnm: you mean at boot? If so, you'll need to arrange for some other job or event to specify which named instances of the job to start.
[10:11] <jhunt> johnm: have you read http://upstart.ubuntu.com/cookbook/#instance ?
[10:17] <johnm> jhunt: I have indeed, but it didn't really explain how best to handle multiple instances on boot.
[10:18] <johnm> I already have a working script, I just presumed there may be a neater way than having a new job in place to call the other with arguments, which is what I presume needs to happen?
[10:18] <johnm> (as mentioned in the doc)
[10:19] <jhunt> 'instance' is a way to allow a single job to run multiple times. However, you need something to *tell* that job to run multiple times with a given unique name :)
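The pattern jhunt describes can be sketched as an instanced job with no "start on" stanza, so it only ever runs when something else starts it with a unique name (the job name, binary path, and variable below are hypothetical, not from the log):

```
# /etc/init/myservice.conf -- hypothetical instanced job
# No "start on" stanza: this job runs only when something
# starts it explicitly and supplies a unique instance name.
description "one worker per named instance"
instance $INSTANCE
exec /usr/bin/myservice --name "$INSTANCE"
```

Each "start myservice INSTANCE=foo" then creates an independent instance, tracked by Upstart under its own single pid.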
[10:19] <johnm> absolutely. I hoped there was something a little fancier in upstart to already cater for it, as opposed to having another job call it - but it's no biggy :)
[10:20] <johnm> Thanks.
[10:20] <jhunt> how would that work though? remember that Upstart needs to track a single pid, hence a job needs to map to a single pid, hence you cannot have a job creating multiple pids. 
[10:21] <jhunt> but you can have multiple jobs with a single pid each and that's what instance provides.
[10:23] <johnm> I've only recently been digging into upstart in any serious fashion, but I suppose what I was expecting was an interface somewhat similar to how initctl list displays instanced jobs, but for executing them - perhaps a small stanza, somewhat similar to the start stanza, defining which enumerations of the script to run. But without understanding upstart internals more, that might be a very naive way of looking at it.
[10:26] <johnm> perhaps even something a little like 'start on runlevel [2345] instances ( "INSTANCE=instance1 VAR1=one VAR2=two" "INSTANCE=instance2 VAR1=one VAR2=two" )' etc
[10:29] <jhunt> job configuration files are supposed to be simple and that's already looking pretty hairy :) Also, you're hard-coding the instance within the job itself. What if you want to create a new instance dynamically to cater for an increase in demand for your service?
[10:30] <johnm> isn't that the idea of it being instanced anyway? but yes, I imagined the enumeration to happen within upstart outside of the job configuration itself
[10:31] <johnm> but I realise that upstart doesn't seem to have much external configuration, if any, outside of the jobs themselves anyway
[10:34] <johnm> I mean, having the second job to start instanced jobs is much the same as just sticking the initial instances hardcoded in the job itself.
[10:35] <johnm> it just makes it a single .conf as opposed to two.
[10:43] <jhunt> not quite - *any number* of other jobs could be configured to start new instances of your service should you so desire. And crucially, an admin could also manually start more instances using initctl. It's an interesting idea, but I don't envisage that we'll be changing the design any time soon as the current one works very well :)
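A minimal sketch of the companion boot-time job being discussed, reusing the instance names from johnm's earlier example (all file names and variables are assumptions for illustration):

```
# /etc/init/myservice-boot.conf -- hypothetical boot-time starter
# A short-lived task that names each instance to start at boot.
start on runlevel [2345]
task
script
    start myservice INSTANCE=instance1 VAR1=one VAR2=two
    start myservice INSTANCE=instance2 VAR1=one VAR2=two
end script
```

As jhunt notes, an admin can still spin up extra instances by hand, e.g. "initctl start myservice INSTANCE=instance3", without touching any .conf file.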
[10:57] <johnm> Yeah indeed, it was more me just not understanding that a new job to orchestrate the instanced job itself was required :)
[10:58] <johnm> I thought it may be possible to essentially instruct upstart that when executing jobX it should enumerate it, passing in ARGs for each instance specified - perhaps via something similar to update-rc.d, but I think that I missed the point there :)
[10:59] <johnm> I suppose I'm using a slightly awkward example of mongod
[11:00] <johnm> there is also the problem of course whereby start-stop-daemon checks the path to the binary on --exec, so for multiple-instance jobs where more than one copy is expected to run, it just exits successfully.
[11:00] <johnm> I saw the bug talking about making that native in the future, which would be nice (since the exec su shell hack is pretty messy), but for cases where the instances are fairly static then it's a bit odd.
[11:01] <johnm> another example might be multiple instances of MySQL or similar.
[11:01] <johnm> httpd has been used in the cookbook example somewhere I think.
[11:01] <johnm> All of these things would require static configuration unless the job itself were to instance based on globbing config files or some such
[11:02] <johnm> (which to be honest, is probably the more sensible way)
[11:02] <johnm> just controlling it might then prove a little tricky ;)
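The config-globbing idea johnm sketches could look something like this, with one instance started per config file found (the directory layout and job names are assumptions, not an established convention):

```
# /etc/init/myservice-fromconf.conf -- hypothetical:
# derive the set of instances from files on disk at boot.
start on runlevel [2345]
task
script
    for conf in /etc/myservice/conf.d/*.conf; do
        [ -e "$conf" ] || continue
        start myservice INSTANCE="$(basename "$conf" .conf)"
    done
end script
```

Controlling instances afterwards works per name, e.g. "stop myservice INSTANCE=instance1" - which is the "a little tricky" part, since nothing stops instances whose config file has since been removed.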
[19:19] <scoopex> my apache webserver uses configuration snippets which are located on an NFS share... on system boot it fails to start because the NFS share is mounted too late (snippet cannot be loaded)... the LSB tags of apache request $remote_fs... but ubuntu seems to ignore this... any hints?
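One possible workaround, if apache were managed by a native Upstart job rather than the stock SysV/LSB init script: gate its start on the mounted event that mountall emits for the share. This is a sketch only - the mount point path is an assumption, and it does not directly fix LSB $remote_fs ordering for the unmodified init script:

```
# hypothetical /etc/init/apache2.conf fragment:
# wait for the NFS share carrying the config snippets
start on (runlevel [2345]
          and mounted MOUNTPOINT=/srv/apache-conf)
```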