VoxForge
Hi,
I followed the tutorial (automated version) to create my own custom acoustic model. It works very well! Recognition of my voice is excellent.
Is there a way to combine my custom acoustic model with the voxforge model?
Also, is there a script or other automated way to add all the words that appear in my prompts file to my .dict file with proper pronunciations?
--- (Edited on 4/21/2009 9:49 pm [GMT-0500] by Visitor) ---
Hi,
> Is there a way to combine my custom acoustic model with the voxforge model?
You can adapt the VoxForge acoustic model to your own voice; see this tutorial. Warning: that tutorial is written for HTK 3.2.1. For the newer version (3.4.1), see section 3.6, Adapting the HMMs, in the HTKBook tutorial.
This discussion might also be relevant.
> Also, is there a script or other automated way to add all the words that appear in my prompts file to my .dict file with proper pronunciations?
See steps 1 and 2 in Automated Audio Segmentation Using Forced Alignment (Draft)
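Roughly, those two steps boil down to something like the Python sketch below. The file names (prompts, VoxForgeDict, my.dict) and the assumption that each lexicon line looks like "WORD [output symbol] phone phone ..." are just examples, so adjust them to your own setup; as far as I remember HTK also wants the dictionary entries in alphabetical order, hence the sorted() calls.

# build_dict.py - sketch: collect the words used in a prompts file and
# write a pronunciation dictionary for them from the VoxForge lexicon.
# File names are assumptions; adjust them to your setup.
import sys

PROMPTS = "prompts"        # each line: <prompt id> WORD WORD WORD ...
LEXICON = "VoxForgeDict"   # assumed format: WORD [output symbol] phone phone ...
OUTDICT = "my.dict"

# 1. Collect every distinct word that appears in the prompts.
words = set()
with open(PROMPTS) as f:
    for line in f:
        tokens = line.split()
        if not tokens:
            continue
        # first token is the prompt/file id, the rest are the spoken words
        words.update(w.upper() for w in tokens[1:])

# 2. Look the words up in the lexicon and keep the matching entries.
found = {}
with open(LEXICON) as f:
    for line in f:
        tokens = line.split()
        if tokens and tokens[0] in words:
            # keep the whole line (word, optional output symbol, phones)
            found.setdefault(tokens[0], []).append(line.rstrip("\n"))

with open(OUTDICT, "w") as out:
    for word in sorted(found):
        for entry in found[word]:
            out.write(entry + "\n")

# words with no pronunciation in the lexicon have to be transcribed by hand
missing = sorted(words - set(found))
if missing:
    sys.stderr.write("No pronunciation found for: %s\n" % " ".join(missing))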
--- (Edited on 22.04.2009 08:48 [GMT+0200] by tpavelka) ---
Thanks. In creating my custom acoustic model I used words from the voxforge_lexicon that were not in the speaker-independent VoxForge model. Will the adapt tutorial take this into account and work with my new words as well as the words already in the speaker-independent model?
--- (Edited on 4/22/2009 9:16 am [GMT-0500] by Visitor) ---
> Will the adapt tutorial take this into account and work with my new words
Yes. During both the training and adaptation phases you train phonetic-unit HMMs (i.e., HMMs of individual phonemes or triphones). You specify the words that will be used during the decoding phase by supplying a dictionary. To add a new word you only need to know its phonetic transcription.
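As an illustration of the format (the word ZEITGEIST, its phone sequence and the file name my.dict are made up just for this example; the phones have to come from the same phone set as the rest of your dictionary), adding a word is just a matter of putting one more "WORD phone phone ..." line into the dictionary and keeping the file sorted:

# Append a hand-transcribed word to the pronunciation dictionary and keep it sorted.
DICT = "my.dict"                              # assumed dictionary file name
new_entry = "ZEITGEIST  z ay t g ay s t"      # illustrative entry only

with open(DICT) as f:
    entries = [line.rstrip("\n") for line in f if line.strip()]

entries.append(new_entry)

with open(DICT, "w") as f:
    f.write("\n".join(sorted(entries)) + "\n")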
--- (Edited on 22.04.2009 17:02 [GMT+0200] by tpavelka) ---