Rejecto questions
This topic has 8 replies, 2 voices, and was last updated 10 years ago by Halle Winkler.
April 11, 2014 at 10:49 am #1020852
lory (Participant)
Hello,
I’m new to OpenEars; I’ve played with it a bit, but the word spotting was a bit random in my opinion. I’ve seen on Stack Overflow that using Rejecto could help.
I followed the tutorial, but my app crashes immediately with this error:
*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[LanguageModelGenerator generateRejectingLanguageModelFromArray:withFilesNamed:withOptionalExclusions:usingVowelsOnly:withWeight:forAcousticModelAtPath:]: unrecognized selector sent to instance 0x178262500'
I just replaced:
NSError *err = [lmGenerator generateLanguageModelFromArray:words
                                            withFilesNamed:name
                                    forAcousticModelAtPath:[AcousticModel pathToModel:@"AcousticModelEnglish"]];
With:
NSError *err = [lmGenerator generateRejectingLanguageModelFromArray:words
                                                     withFilesNamed:name
                                             withOptionalExclusions:nil
                                                    usingVowelsOnly:FALSE
                                                         withWeight:nil
                                             forAcousticModelAtPath:[AcousticModel pathToModel:@"AcousticModelEnglish"]];
Note: I’m on OpenEars 1.7.
Imports in my Controller:
#import <OpenEars/PocketsphinxController.h>
#import <OpenEars/LanguageModelGenerator.h>
#import <RejectoDemo/LanguageModelGenerator+Rejecto.h>
#import <OpenEars/AcousticModel.h>
#import <OpenEars/OpenEarsEventsObserver.h>

April 11, 2014 at 12:40 pm #1020854
Halle Winkler (Politepix)

Welcome Lory,
I can’t recall receiving reports of random word spotting with OpenEars, but if you are seeing poor accuracy it is usually due to testing on the Simulator, having typos in your language model, or trying to trigger recognition using speech which isn’t represented in the language model (this can happen if you have unintentionally not used the language model you thought you used, or if your language model happens to contain a different set of words than you are expecting, perhaps due to a minor logic error). Accuracy can also be significantly reduced with certain non-US accents, if that is a possibility.
To fix the issue with your Rejecto install, take a look at the tutorial instructions again, paying special attention to the step about setting the “Other Linker Flags” build setting to contain the -ObjC flag.
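For reference, per that tutorial step, the relevant entry in the target’s Xcode build settings should end up looking like this:

Other Linker Flags: -ObjC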
April 11, 2014 at 1:15 pm #1020855
lory (Participant)

Thanks for the fast response, it was the flag issue.
It seems to work better with Rejecto now. At the beginning it was still a bit random, but when I speak faster the right sentence is spotted more often.
I used these sentences:
"words" : [ { "word" : "half and", "match" : false }, { "word" : "left hand", "match" : true }, { "word" : "help me", "match" : false }, { "word" : "test it", "match" : false } ]
If I say “left-hand” rapidly it’s OK, but with “left hand” the hypothesis returns only “and”.
But you are right, I have a French accent and that may be the problem! Still, I really think I pronounce the sentence with the same pronunciation each time, and getting different results is a bit strange.
April 11, 2014 at 1:39 pm #1020857
Halle Winkler (Politepix)

Hi Lory,
Yes, any non-English accent will unfortunately have a distinct effect on recognition. I wish that weren’t the case, but it’s unavoidable, since the actual phoneme sounds (the building blocks of recognition) are different for native speakers of different languages, even when speaking a word in a second (or third, etc.) language in which they are highly fluent. I have the same problems when I use German speech recognition, since German is my second language, even though many native German speakers say that my pronunciation sounds only mildly accented to their ear. Since the result is uncertainty for the engine, it isn’t surprising that the wrong results can be differently wrong.
However, I think the bigger issue is that what you’ve shown above appears to be an NSDictionary containing an NSArray of more NSDictionaries, is that correct? It doesn’t match the format of any kind of input that OpenEars takes in order to create language models or grammars, so it isn’t possible that you are using it to successfully create a language model or grammar with LanguageModelGenerator. Perhaps the issue is that the model is not being created, and a different model with different words is being used instead? Or a very buggy model is being created from that input. Take a look at the standard input for LanguageModelGenerator in the tutorial to get a sense of what the input for a successful model generation looks like. It’s a very good idea to turn on verboseLanguageModelGenerator, verbosePocketsphinxController and OpenEarsLogging while troubleshooting these issues so that you can see any error or warning messages encountered.
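For reference, turning those on looks roughly like this (a sketch based on the OpenEars 1.x API as I recall it; double-check the exact property names against the headers in your version):

#import <OpenEars/OpenEarsLogging.h>

[OpenEarsLogging startOpenEarsLogging]; // general OpenEars logging
lmGenerator.verboseLanguageModelGenerator = TRUE; // logs each step of model generation
self.pocketsphinxController.verbosePocketSphinx = TRUE; // logs decoder activity while listening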
BTW, OpenEars vocabulary has to be submitted in uppercase – submitting it in lowercase can have a big effect on accuracy.
April 11, 2014 at 2:00 pm #1020858
lory (Participant)

This is just my JSON; I pass a real array to the generator, and the hypothesis can output these 8 different words.
Something like this:
NSMutableArray *words = [[NSMutableArray alloc] init];
for (NSDictionary *dict in [self.currentSentenceDictionnary valueForKey:@"words"]) {
    [words addObject:[[dict valueForKey:@"word"] uppercaseString]];
}
I have an accent, but sometimes it recognizes things that are very different from what I say. For example, if I say “left hand” it often returns “it and”. My pronunciation of “left” is nowhere near “it”. By contrast, if I say “test it” it works every time. “Help me” never worked, but I have already gotten a “half hand help” output.
I’ll explain what I want to do; maybe I’m not using the best approach for it.
I made an app for learning English with music. In my controller, an extract of a music video clip is played and you have to repeat what you have heard.
For this example, the sentence you hear is “I wanna be your left hand man”.
In the UIView I display “I wanna be your … … man” and the user must say “left hand” to succeed. But I can’t put only those two words in the array, because then it’s too easy and “left hand” is always recognized. So I put in some “close” sentences to check whether the user really understood what he heard, but even I, who know the answer, can’t succeed every time, so it’s a bit problematic.

April 12, 2014 at 10:03 am #1020865
Halle Winkler (Politepix)

Which device are these results with?
April 17, 2014 at 9:10 am #1020891
lory (Participant)

The latest iPad Air.
April 17, 2014 at 9:57 am #1020893
Halle Winkler (Politepix)

OK, have you seen this latest blog post about dynamic grammar creation with OpenEars? It could get you closer to the result you’re seeking than using a probabilistic language model plus Rejecto:
https://www.politepix.com/2014/04/10/openears-1-7-introducing-dynamic-grammar-generation/
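As a rough sketch of what that grammar approach looks like (the dictionary keys and method name here are assumptions based on my recollection of the OpenEars 1.7 grammar documentation; the linked post has the exact format):

NSDictionary *grammar = @{
    @"ThisWillBeSaidOnce" : @[
        @{ @"OneOfTheseWillBeSaidOnce" : @[@"LEFT HAND", @"HELP ME", @"TEST IT", @"HALF AND"] }
    ]
};
// Generates a rule-based grammar instead of a probabilistic language model,
// so only utterances matching the rules can appear in hypotheses.
NSError *err = [lmGenerator generateGrammarFromDictionary:grammar
                                           withFilesNamed:name
                                   forAcousticModelAtPath:[AcousticModel pathToModel:@"AcousticModelEnglish"]];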
You definitely don’t want to put words into your model that aren’t words you want to detect. That will increase the potential for confusion, and that confusion will be multiplied by the app’s fundamental design premise that it is to be used by non-native speakers. I think this is a case where you might get better results either by using Rejecto with a high weighting and removing any words from your language model that you don’t want to detect (i.e., let Rejecto exclusively perform the role of rejecting things you don’t want, and only have words in your model that you do want to detect), or by moving to a grammar instead of a language model.
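For the first option, a minimal sketch reusing the Rejecto method from earlier in this thread (the weight value of @1.5 is purely illustrative, an assumption; see the Rejecto documentation for sensible values):

// Only the phrase you actually want to detect goes in the model;
// Rejecto's rejection model handles everything else.
NSArray *words = @[@"LEFT HAND"];
NSError *err = [lmGenerator generateRejectingLanguageModelFromArray:words
                                                     withFilesNamed:name
                                             withOptionalExclusions:nil
                                                    usingVowelsOnly:FALSE
                                                         withWeight:@1.5
                                             forAcousticModelAtPath:[AcousticModel pathToModel:@"AcousticModelEnglish"]];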
April 17, 2014 at 10:00 am #1020894
Halle Winkler (Politepix)

I would definitely go into this app concept expecting to encounter multiple issues that need to be accommodated and worked through, since non-native speaker recognition is automatically very difficult for any speech recognition engine. I’m happy to help you work through them as best I can, with the understanding that this isn’t the kind of goal that works out of the box.