Voice Commands


Viewing 2 posts - 1 through 2 (of 2 total)


    Dear Openears team,

    Thank you very much for the excellent iOS library.

    I would like to use OpenEars in my application to activate features with voice commands instead of tapping buttons, which is what users do most of the time.

    There are 6 voice commands that will be activated in my application: Cancel, Reply, Reply All, Send, Block, Delete.

    I have created a dictionary and language model using only these 6 words, and I have also added the Rejecto plugin.

    Here is the problem. If I speak “Bad”, it is recognized as “Block”, and if I speak anything else outside the dictionary, it is recognized as one of my voice commands/dictionary items. I would like recognition to trigger only when I actually speak one of the listed items. What is the best approach to achieve this? Can I use something like a confidence value (though that may not work if the value applies to the entire detected sentence rather than to a single word)?

    Here are the approaches that I’m planning to implement.

    1. Provide a large language-model file and skip any recognized words that are not commands.

    2. Use recognitionScore as a confidence value (however, I don’t know the details of recognitionScore, such as its minimum and maximum values).
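
    For the app-level side of approach 1, one common pattern is to treat whatever the recognizer returns as a candidate, and only act when it exactly matches the command list, discarding everything else. A minimal Swift sketch (the function name and normalization are my own illustration, not an OpenEars API):

    ```swift
    // Hypothetical app-level whitelist filter: act only on exact matches
    // against the six known commands; ignore everything else.
    let commands: Set<String> = ["CANCEL", "REPLY", "REPLY ALL", "SEND", "BLOCK", "DELETE"]

    func matchedCommand(in hypothesis: String, commands: Set<String>) -> String? {
        // Normalize whitespace and case before comparing against the command set.
        let normalized = hypothesis
            .trimmingCharacters(in: .whitespacesAndNewlines)
            .uppercased()
        return commands.contains(normalized) ? normalized : nil
    }

    // An out-of-vocabulary hypothesis is simply ignored rather than
    // being mapped to the closest-sounding command.
    let hit = matchedCommand(in: "reply all", commands: commands)  // "REPLY ALL"
    let miss = matchedCommand(in: "bad", commands: commands)       // nil
    ```

    Note that this only helps if the language model can actually emit something other than the six commands (e.g. a larger model, or Rejecto emitting rejection output); with a strict 6-word model, the decoder will always return one of the six, so the filter never sees “Bad”.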

    Please let me know which would be the best approach. I would appreciate any suggestions.

    Thank you.

    Halle Winkler

    Hi Ramshad,

    I don’t recommend ever using the scores for any kind of objective confidence cutoff, since they are so dependent on the speaker and the session. Have you tried all of the tuning elements available in Rejecto, such as weight, exclusions, and vowels-only? That would be the first place I’d experiment, making use of the new pathToTestFile tool so that you can re-test the same audio input and compare results.
