VoxForge
I've been working with Julius and HTK for a month, and these toolkits have helped me a lot. I'm working on a Persian dataset of almost 42 hours of speech WAV files.
It is obvious that if the number of hours in my dataset rises, my accuracy can rise too, but that's not my concern right now.
I want to see every phoneme I say. At the moment I only see the phonemes Julius decides to show me after going through the grammar. How can I see every phoneme I just said when I'm debugging the Julius code? Which part of the code is in charge of turning my speech into an array of phonemes, without considering the grammar file?
Is it possible?
Thank you.
>How can I see every phoneme I just said when I'm debugging the Julius code?
It should be part of the regular output from running Julius live, on the line labelled 'phseq1'.
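For example, for an utterance matched to a grammar word 'PHONE', the per-utterance result block looks roughly like this (the phoneme symbols and score below are made up for illustration, not taken from your model):

sentence1: <s> PHONE </s>
wseq1: <s> PHONE </s>
phseq1: sil | f ow n | sil
score1: -3124.570000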
Thank you for your reply.
I know, but I meant: how can I get the phoneme array exactly as it comes out of acoustic model matching? The link you mentioned shows the output after going through the grammar file.
For instance, I said 'Phonix' but Julius shows me 'Phone'.
Why? Because there is no word 'Phonix' in my grammar file, so Julius picks 'Phone' as the closest match to the 'Phonix' signal.
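For instance, if my .voca only has an entry roughly like this (the category name and phoneme symbols are just an example), then whatever I say has to be mapped onto one of the words listed there:

% WORD
PHONE    f ow n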
I just want to see every phoneme I say, straight after the acoustic model.
Ah, OK
You want to do "phoneme recognition"; this CMU Sphinx page provides some background that you might be able to apply in the Julius context.
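One way people approximate this with a grammar-based decoder like Julius is a phone-loop grammar, where each phoneme of your acoustic model is defined as its own one-phoneme "word" and the grammar allows any sequence of them. A rough sketch (the file names, category names and phoneme symbols here are just placeholders; use the phoneme set of your Persian HMMs):

phoneloop.grammar:
S    : NS_B LOOP NS_E
LOOP : PHONE
LOOP : LOOP PHONE

phoneloop.voca:
% NS_B
<s>    sil
% NS_E
</s>   sil
% PHONE
a      a
b      b
s      s
...

Compile the pair with mkdfa.pl and run Julius with the resulting .dfa/.dict; the recognised "words" are then the phonemes themselves. Without any language constraint the phoneme accuracy will be rough, but it shows you which phoneme sequence the acoustic model prefers.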
Thank you, but could you give me more detail?
In addition, I want to see every word I say. I'm not going to use a grammar file; the grammar file limits me.
Is it possible?