=== yofel_ is now known as yofel
=== jalcine is now known as webjadmin_
=== webjadmin_ is now known as jalcine
[19:00] Is HUD accessible over D-Bus? I'm working on a speech app that'd forward text as if it were typed into HUD.
[19:05] gord: ^^
[19:05] tedg might know too, if you want to ask him when he's in tomorrow
[19:19] mhall119: thanks :)
[19:19] I might just pull in the code and grep for any references to D-Bus and see where that takes me.
[21:26] jalcine: seen hud-cli and hud-dump-application and hud-list-applications and hud-verify-app-info?
[21:27] plus yes, the menus are very much D-Bus exposed
[21:27] I saw hud-cli, but not the others.
[21:27] I don't know what they all do exactly.
[21:27] And if D-Bus is available, I'd stick to that.
[21:27] so what are you doing with it?
[21:27] One less dependency to code in.
[21:28] I was thinking of using D-Bus to get the menu structure, then mangling it into a JSGF grammar file to feed to pocketsphinx
[21:29] Well, we've gotten mediocre speech recognition going, and right now we're packaging the latest version of CMU's tools for acoustic model training.
[21:29] Because the versions in the repository aren't as proficient as the ones available upstream.
[21:29] com.canonical.hud is the dbus thing
[21:29] Yeah, SC's basically powered by PocketSphinx; it's its battery.
[21:30] the dbus hooks for HUD are about submitting queries to it, should be controllable that way
[21:31] That should be it, then.
[21:31] I'd begin hacking up another plug-in.
[21:31] yeah, but it doesn't expose the full list of options through that
[21:32] you would need to poke at com.canonical.AppMenu.Registrar and call GetMenus() there, then go through the menus picking up entries, just like the HUD does
[21:32] Well, all I really need to be able to do is provide input.
[21:33] well how are you restricting the grammar?
[21:33] We aren't. We're working on building speaker-independent models.
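The registrar poking described at [21:32] can be sketched from the shell. The bus name `com.canonical.AppMenu.Registrar` and the `GetMenus()` call come straight from the conversation; the object path `/com/canonical/AppMenu/Registrar` and the choice of `dbus-send` as the client are my assumptions, not anything stated in the log. A minimal helper that builds the call:

```python
def registrar_call_args(method="GetMenus"):
    """Build dbus-send arguments for a session-bus call to the registrar.

    Bus name and method come from the chat log; the object path is an
    assumption based on the usual D-Bus naming convention.
    """
    return [
        "dbus-send", "--session", "--print-reply",
        "--dest=com.canonical.AppMenu.Registrar",
        "/com/canonical/AppMenu/Registrar",
        "com.canonical.AppMenu.Registrar." + method,
    ]
```

Run through `subprocess` on a desktop with an app-menu registrar, this should reply with the per-window menu records (window id, owning service, menu object path — though that exact signature is also an assumption here), which is the structure you would then walk to collect entries, just as the HUD does.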
[21:33] HUD's a feature I'd like to connect it with.
[21:34] oh, well should be pretty easy then, but recognition accuracy won't be so hot
[21:35] if you can parse the menu structures and create a JSGF grammar file, then pocketsphinx will only listen for useful words and will be more accurate in theory
[21:36] It would, but we'd have to get the user to adapt that content into a specific model.
[21:38] I proposed a delicately difficult idea: http://thesii.org/wiki/SpeechControl/Ideologies/AutomaticAcousticModelImprovement
[21:38] Hi, what do you think about this feature for Unity? http://brainstorm.ubuntu.com/idea/29302/
[21:38] But I figure with HUD + SpeechControl, AAMI (above) would gradually improve in use.
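The grammar-restriction idea from [21:35] can be sketched without any D-Bus at all: given a flat list of menu labels, emit a small JSGF file so pocketsphinx only listens for those phrases. The grammar and rule names are arbitrary, and the label cleanup (stripping GTK-style `_` mnemonic markers and trailing ellipses) is an assumption about what menu data looks like, not anything HUD specifies:

```python
def menu_labels_to_jsgf(labels, grammar="hud"):
    """Build a JSGF grammar whose single public rule matches any menu label."""
    phrases = []
    for label in labels:
        # Strip '_' mnemonic markers and '...' suffixes (assumed label format),
        # then lowercase so the grammar matches spoken forms.
        words = label.replace("_", "").replace("...", "").strip().lower()
        if words:
            phrases.append(words)
    return ("#JSGF V1.0;\n"
            "grammar %s;\n"
            "public <command> = %s;\n" % (grammar, " | ".join(phrases)))

print(menu_labels_to_jsgf(["_Open...", "Save _As", "_Quit"]))
```

Feeding the resulting file to pocketsphinx as its grammar is what would, in theory, trade vocabulary size for accuracy, as suggested in the log.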