[20:36] <speaker1234> I'm trying to add speech recognition capability to Ubuntu by making part of the NaturallySpeaking extension, NatLink, run on Linux while being driven by NaturallySpeaking on Windows
[20:37] <speaker1234> the first-generation attempt is only going for keystroke injection at the system level, which should drive most applications for plain-text input.
[20:38] <speaker1234> the second generation would feed back window-focus cues so we can change grammars based on context
[20:39] <speaker1234> my question is where I should start looking for this kind of information (system-wide keystroke injection and finding which window is on top). It would also be nice to be able to direct keystrokes to a specific window whether it is on top or not