Hi FrozenFire,
What I know: each sentence is 'compiled' into the language model. I think the model should then recognize more voices, even if you speak in two totally different voices at once. Contributing (even more) speech, and telling your friends to do the same, is the best way to help open source speech recognition!
Daniël
--- (Edited on 8/28/2008 12:37 am [GMT-0500] by dano) ---
Hi Dano,
>each sentence is 'compiled' into the language model
I think you mean acoustic model.
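To illustrate the distinction: the language model is compiled from the prompt text and only models word sequences; it is the acoustic model that is trained on the contributed audio. A minimal sketch in Python (the two-sentence corpus and function names are made up purely for this example):

    from collections import Counter, defaultdict

    # Toy bigram language model: built purely from text, it knows
    # nothing about voices or audio. The acoustic model is the part
    # trained on recorded speech.
    corpus = [
        "open source speech recognition",
        "open source speech corpus",
    ]

    bigram_counts = defaultdict(Counter)
    for sentence in corpus:
        words = ["<s>"] + sentence.split() + ["</s>"]
        for prev, cur in zip(words, words[1:]):
            bigram_counts[prev][cur] += 1

    def bigram_prob(prev, cur):
        """P(cur | prev) by maximum-likelihood estimate (no smoothing)."""
        total = sum(bigram_counts[prev].values())
        return bigram_counts[prev][cur] / total if total else 0.0

    print(bigram_prob("speech", "recognition"))  # 0.5 in this toy corpus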
Ken
--- (Edited on 9/1/2008 8:19 pm [GMT-0400] by kmaclean) ---
Hi FrozenFire,
>In my contributions, I have used my dictation voice, however I've been
>thinking of switching to my speaking voice [...] Would this screw up
>processing for my contributions?
As a general rule, a speech recognition engine best recognizes the same type of speech it was trained on. Therefore, submitting your dictation voice should give you the best recognition results when you dictate (since that is the voice you will be using).
However, the training process performs vocal tract length normalization (VTLN) to take things like this into account (since many people's speaking styles also vary with context). I'm not sure how much that would help in a case like yours, where your dictation voice differs greatly from your speaking voice.
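Roughly, VTLN rescales the frequency axis by a per-speaker warp factor before the acoustic features are computed, so speakers with longer or shorter vocal tracts map onto a common spectral shape. A minimal sketch of one common piecewise-linear warp (the break-point ratio, sample range, and names here are my own illustrative choices, not VoxForge's actual training code):

    import numpy as np

    def vtln_warp(freqs_hz, alpha, f_max=8000.0, f_break_ratio=0.875):
        """Piecewise-linear VTLN frequency warp.

        alpha > 1 compresses the axis (shorter vocal tract),
        alpha < 1 stretches it; f_max is pinned so the warped
        frequencies stay inside the analysis bandwidth.
        """
        f_break = f_break_ratio * f_max
        # Choose the knee so the first segment (f -> alpha * f) never
        # overshoots f_break, whether alpha is above or below 1.
        f0 = min(f_break, f_break / alpha)
        return np.where(
            freqs_hz <= f0,
            alpha * freqs_hz,
            alpha * f0 + (f_max - alpha * f0) * (freqs_hz - f0) / (f_max - f0),
        )

    # Example: warp the centre frequencies of a 10-channel filterbank
    centres = np.linspace(0.0, 8000.0, 10)
    print(vtln_warp(centres, alpha=1.1))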
I would say that if you feel you've contributed enough of your dictation voice (which will benefit you... once we have enough speech), then by all means contribute your speaking voice to provide some variety (and thus benefit others).
Ken
--- (Edited on 9/1/2008 8:18 pm [GMT-0400] by kmaclean) ---