[00:31] <Redoubt> If I have an Upstart job start on event1 or (event2 and event3), I assumed it would run once upon event1, and again upon event2 and event3
[00:31] <Redoubt> That doesn't seem to be the case
[00:31] <Redoubt> Can anyone explain why?
[00:32] <Redoubt> It only seems to run once
[00:32] <Redoubt> But if I have a job start on event1, and another job start on (event2 and event3), both jobs run
[00:37] <stgraber> Redoubt: well, a job only starts once, unless you define it as a task or use upstart's instances
[00:37] <Redoubt> Oh my apologies-- it is a task
[00:38] <stgraber> hmm, odd, my understanding of tasks was that they'd re-trigger every time one of the conditions would match... /me tests
[00:39] <Redoubt> That's what I thought too. I can give you my specific task if you like
[00:40] <stgraber> http://paste.ubuntu.com/1393270/
[00:41] <stgraber> that simple example seems to show that upstart behaves as you'd expect...
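[The paste is no longer available. A minimal test job along the lines being discussed might look like the following sketch; `event1`/`event2`/`event3` are the hypothetical event names from the start of the conversation, and `$UPSTART_EVENTS` is the variable Upstart sets to the list of events that satisfied the start condition.]

```
# /etc/init/test-task.conf -- sketch of a test job for the scenario above
start on event1 or (event2 and event3)
task
exec logger "test-task started by: $UPSTART_EVENTS"
```

[Triggering it by hand with `initctl emit event1`, then `initctl emit event2` and `initctl emit event3`, is the kind of test referred to below.]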
[00:44] <Redoubt> Well this is fortunate. I'm working on that bluetooth upstart job (bug 1073669) which you're involved in
[00:46] <stgraber> can you pastebin what you've got currently?
[00:53] <Redoubt> Well, here are my test scripts: http://paste.ubuntu.com/1393294/
[00:54] <Redoubt> Because when I test them like you did (via initctl), they do seem to work as expected
[00:54] <Redoubt> But when I reboot, I only get one print from kyle-all.conf, and the event that triggers it is only local-filesystems
[00:55] <Redoubt> All the others print as well, but kyle-all.conf only prints once instead of the three times I expected
[00:58] <stgraber> hmm, so the problem seems to be that it's expecting local-filesystems to be re-emitted too, which it won't be
[00:58] <stgraber> testing some alternatives here
[00:59] <Redoubt> Oh interesting, so I have a fundamental misunderstanding, I guess. I sort of thought of events as... states, I guess
[00:59] <Redoubt> Something that was persistent
[00:59] <Redoubt> That doesn't actually make sense though, so... :)
[01:00] <stgraber> upstart basically tracks event state per job to know when the start condition matches, but as soon as it matches it loses that state, or so it looks like anyway :)
[01:01] <Redoubt> That makes perfect sense
[01:01] <Redoubt> But interferes with all my plans, darnit
[01:02] <Redoubt> Hmm... it would be nice if it analyzed start conditions a little more and was smarter with that
[01:03] <Redoubt> The only way around this that I can think of is three separate scripts. Yuck
[01:04] <Redoubt> I would probably do four and make one an instance
[01:06] <stgraber> well, you could keep rfkill-restore as it is and add a new rfkill-restore-interface which would roughly be
[01:06] <stgraber> start on net-device-added or bluetooth-device-added
[01:06] <stgraber> task
[01:06] <stgraber> exec initctl start --no-wait rfkill-restore
[01:08] <stgraber> the INTERFACE environment variable should then be sent all the way to the rfkill-restore job that can then change behaviour slightly when it's defined
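[Assembled into a single job file, the helper stgraber dictates above would be roughly the following sketch; the filename is an assumption, and `INTERFACE` is the variable exported by the upstart-udev-bridge device events.]

```
# /etc/init/rfkill-restore-interface.conf -- sketch of the helper job
# described above; re-triggers rfkill-restore whenever a device appears
start on net-device-added or bluetooth-device-added
task
# INTERFACE from the udev bridge event is passed through to rfkill-restore
exec initctl start --no-wait rfkill-restore
```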
[01:09] <Redoubt> Well, what about local-filesystems?
[01:10] <stgraber> that'd still be the start condition of the rfkill-restore.conf job, just not of the -interface one
[01:11] <Redoubt> Would exec initctl start --no-wait rfkill-restore fail if local-filesystems hadn't happened?
[01:12] <stgraber> hmm, no, it'd run the code anyway but would exit 0 immediately because of the first if statement in rfkill-restore.conf
[01:13] <Redoubt> If not, this two-script combo is the same as just changing rfkill-restore starts to local-filesystems or net-device-added or bluetooth-device-added, right?
[01:13] <stgraber> so that wouldn't be a problem as such a case would mean that the main rfkill-restore call didn't happen yet and that the interface will be rfkilled a bit later
[01:13] <stgraber> hmm, good point, and I guess doing that change + adding the needed tweaks to the shell script would do the trick
[01:14] <stgraber> no need for a separate job
[01:14] <Redoubt> Okay, I like that. I really appreciate your help man, I've made a couple forum posts about this and no one seems to have any clue
[01:15] <Redoubt> That saving-states-until-the-start-condition-is-met thing... I'm not sure the docs explicitly mention that. Not sure I ever would have learned it :)
[01:15] <stgraber> hehe, np and thanks for the work on that bug. I've been quite busy with other things lately, one of which being upstart development, so it's good to have you help with this one :)
[01:16] <Redoubt> Oh my pleasure!
[01:17] <stgraber> yeah, the state saving is a bit odd, it makes sense when you think of it, but it can surprise you :) I believe Scott (original upstart author) covered that in some blogpost a few years ago, or maybe it was a session at UDS, can't recall... anyway, may be something we should better cover in the cookbook.
[01:19] <SpamapS> Redoubt: if you want to poke something when network connections change, just use /etc/network/if-up.d
[01:20] <SpamapS> Redoubt: upstart is a bit low level for what you're intending
[01:21] <SpamapS> Redoubt: also if you do need to do state tracking, you can use the 'wait-for-state' job added in Ubuntu 11.04
[01:21] <Redoubt> stgraber: Yeah, if you think of it, it would be handy to mention that in there somewhere.
[01:22] <Redoubt> SpamapS: Huh, wait-for-state is a new one for me, thank you for the reference!
[01:23] <stgraber> SpamapS: wait-for-state only works for jobs though, not for events right? (it waits for a job to get into the WAIT_FOR state)
[01:24] <SpamapS> stgraber: indeed.. the idea though, is that you use the job to track state.. and then you just call wait-for-state with that job when you want to block on a state
[01:24] <stgraber> SpamapS: Redoubt is helping me with a limitation of my rfkill-restore job I introduced in 12.10 where some devices you can rfkill show up after the rfkill-restore job is done. The only way to reliably cover this case is to trigger the restore job when network or bluetooth devices show up.
[01:25] <stgraber> SpamapS: initially I thought I added the local-filesystems requirement of rfkill-restore because I actually needed local-filesystems, but I don't really as the job itself is checking for the paths it needs and exiting if they're not there.
[01:26] <stgraber> SpamapS: so the simple fix for our problem is to change from "start on local-filesystems" to "start on local-filesystems or net-device-added or bluetooth-device-added" which will guarantee the job to be triggered once per boot + once per device
[01:26] <stgraber> SpamapS: with some of the per-device call potentially being no-ops if the filesystem isn't there yet, but in such case, the rfkill-restore job will be called a bit later anyway and will take care of any leftovers
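[The "simple fix" stgraber describes is a one-line change to the job's start condition; the rest of the job body is unchanged and omitted here.]

```
# rfkill-restore.conf -- revised start condition from the fix above;
# fires once per boot (local-filesystems) plus once per device event
start on local-filesystems or net-device-added or bluetooth-device-added
task
```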
[01:26] <SpamapS> stgraber: Right, so it sounds like you just need a single task, with start on net-device-added or bluetooth-device-added , and an instance value that will be unique enough to run them at the right time
[01:26] <SpamapS> stgraber: that won't actually guarantee that it will be run once per device
[01:27] <stgraber> SpamapS: as it's a task, we don't even need to use instances
[01:27] <SpamapS> stgraber: that's not true at all
[01:28] <SpamapS> stgraber: the only thing task does is guarantee that it will block until it reaches stopped again
[01:28] <SpamapS> stgraber: it does not serialize events. If the state is already 'start/running' .. the next event in the or is ignored
[01:28] <Redoubt> SpamapS: So you think instances are a better way to go?
[01:29] <stgraber> SpamapS: it won't be start/running because the task exits in a few milliseconds
[01:29] <SpamapS> I think instances is the only way to go if you need to run it once per event
[01:29] <SpamapS> stgraber: >< cross your fingers? ;)
[01:31] <SpamapS> that's actually again where wait-for-state works, because you can wait for stopped on a task, but start it. That way all of the waiters will block until it has been run once
[01:31] <stgraber> SpamapS: well, the way the script is designed, we don't need the value of INTERFACE, whenever it triggers, it'll restore the rfkill value of any interface it hasn't restored yet. So if we have multiple devices showing up at the same time and we end up only triggering the job once, that's fine as it'll cover all of them anyway
[01:32] <SpamapS> start wait-for-state WAITER=$UPSTART_JOB WAIT_STATE=stopped GOAL_STATE=start WAIT_FOR=rfkill-restore
[01:32] <SpamapS> assuming rfkill-restore is a task
[01:33] <stgraber> it is
[01:33] <SpamapS> stgraber: sounds like a very tight race..
[01:34] <SpamapS> [dev1-added][rfkill-restore starts and handles dev1][dev2-added][upstart acks that rfkill-restore is done and returns to stop/waiting]
[01:34] <SpamapS> seems like there's a very tiny window for dev2 to not be handled
[01:36] <Redoubt> An instance would then require a helper script, correct?
[01:36] <Redoubt> helper job, rather
[01:36] <Redoubt> Two jobs: One instance, one job to run the instance
[01:36] <SpamapS> Well, the helper is wait-for-state
[01:37] <Redoubt> So in order to use wait-for-state the job must be an instance?
[01:37] <SpamapS> but yes, there will be two jobs, one which lists the events, and runs the wait-for-state command for each instance of itself, and then the actual rfkill-restore
[01:37] <SpamapS> Redoubt: no, it's just that in this case, it makes more sense that way as it closes the race, even if the race is tiny
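[The two-job arrangement SpamapS outlines would look roughly like the sketch below. The trigger job's filename is an assumption; the wait-for-state invocation and its parameters are the ones he quotes above, and `INTERFACE` (from the udev bridge) serves as the per-device instance key.]

```
# /etc/init/rfkill-restore-trigger.conf -- sketch of the trigger job;
# one instance per device event, each blocking until rfkill-restore
# has reached stopped at least once
start on net-device-added or bluetooth-device-added
instance $INTERFACE
task
exec start wait-for-state WAITER=$UPSTART_JOB WAIT_STATE=stopped \
    GOAL_STATE=start WAIT_FOR=rfkill-restore
```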
[01:38] <Redoubt> If you have a job that lists the events and runs wait-for-state, doesn't that provide a window for the race to occur as well?
[01:39] <stgraber> SpamapS: I'd actually have to check, but I vaguely remember the udev events being blocking on the udev side, meaning that until the initctl done by udev returns, no other uevent can be emitted
[01:40] <stgraber> hmm, upstart-events marks them as signal, so I guess not
[01:40] <SpamapS> http://paste.ubuntu.com/1393368/
[01:40] <stgraber> ah no, I'm being confused with some other weird udev rules we have somewhere else that doesn't use the bridge
[01:40] <stgraber> WHATEVER_DEVICE_ADDED_RELIABLY_EXPORTS would be INTERFACE
[01:41] <SpamapS> stgraber: so udev will queue up the events until the waited for event is handled?
[01:41] <SpamapS> because that means you can just handle with or's
[01:43] <stgraber> SpamapS: nope, as I said, I got confused with some other hacks I had to look at recently where something indirectly does an "initctl emit" from a udev hook, which in this case indeed blocks everything on udev's side. But that's not the case for those event emitted by upstart-udev-bridge.
[01:45] <stgraber> SpamapS: so your solution is fine, as much as I dislike introducing an extra job to cover that.
[01:48] <SpamapS> stgraber: it's a bug in upstart not to have this built in
[01:49] <SpamapS> stgraber: In fact we talked about just changing or's to work this way, but we'd have to call that upstart 2.0 or something.. because it would likely break some things
[01:49] <Redoubt> SpamapS: That would be handy!
[01:50] <SpamapS> Yeah, it would
[01:50] <stgraber> yeah, or just add another keyword "or2" or whatever, not sure what'd be the most confusing for the users :)
[01:50] <SpamapS> well, it's actually not or that is broken, per se
[01:50] <SpamapS> it's blocking in general
[01:50] <SpamapS> it just doesn't get done if the goal does not change
[01:52] <SpamapS> what should happen is, on a blocking event emission, we should look for any start on's or stop ons that match, and then flip only the ones that need changing, but block if there are other events already being waited on for any that don't change
[01:53] <SpamapS> Of course, another one would be to finally do the "state rewrite" that keybuk intended, which would introduce a 'while' keyword and allow you to do 'while bluetooth-device is running'
[15:19] <stgraber> yay, my prctl branch builds and appears to work!
[15:24] <stgraber> stgraber 25187  0.0  0.0  41988  1928 pts/4    S+   10:22   0:00  |   \_ /home/stgraber/data/code/upstart/init --session --confdir=~/.init/
[15:24] <stgraber> stgraber 25201  0.0  0.0   4328   360 ?        S    10:24   0:00  |       \_ /bin/sleep 2000
[15:24] <stgraber> neat!
[15:25] <stgraber> and as init is now the parent, I can even stop that job (in the past it'd just hang as it wouldn't receive the SIGCHLD)
[17:02] <SpamapS> stgraber: *nice*