- This topic has 1 reply, 2 voices, and was last updated 8 years, 1 month ago by Halle Winkler.
March 5, 2016 at 3:37 pm #1029715
Aiden (Participant)
Hello friends. OpenEars is working successfully, but I have a problem stopping OpenEars from listening. I call

[[OEPocketsphinxController sharedInstance] setActive:FALSE error:nil];

but it doesn't change anything and the app still hears my voice. I want it to stop perceiving sound when the stop button is tapped. Even when I delete

[[OEPocketsphinxController sharedInstance] setActive:TRUE error:nil];

from the code below, it still hears my voice. I can't find any way to make the setActive method work. How do we solve this problem?

- (void)viewDidLoad {
    [super viewDidLoad];
    OELanguageModelGenerator *lmGenerator = [[OELanguageModelGenerator alloc] init];
    NSArray *words = [NSArray arrayWithObjects:@"WORK", @"STATEMENT", @"OTHER WORD", @"A PHRASE", nil];
    NSString *name = @"AcousticModelEnglish";
    NSError *err = [lmGenerator generateLanguageModelFromArray:words withFilesNamed:name forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]]; // Change "AcousticModelEnglish" to "AcousticModelSpanish" to create a Spanish language model instead of an English one.

    NSString *lmPath = nil;
    NSString *dicPath = nil;

    if (err == nil) {
        lmPath = [lmGenerator pathToSuccessfullyGeneratedLanguageModelWithRequestedName:@"AcousticModelEnglish"];
        dicPath = [lmGenerator pathToSuccessfullyGeneratedDictionaryWithRequestedName:@"AcousticModelEnglish"];
    } else {
        NSLog(@"Error: %@", [err localizedDescription]);
    }

    // Do any additional setup after loading the view, typically from a nib.
    [[OEPocketsphinxController sharedInstance] setActive:TRUE error:nil];
    [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:lmPath dictionaryAtPath:dicPath acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:NO]; // Change "AcousticModelEnglish" to "AcousticModelSpanish" to perform Spanish recognition instead of English.

    self.openEarsEventsObserver = [[OEEventsObserver alloc] init];
    [self.openEarsEventsObserver setDelegate:self];
}
- (void)pocketsphinxDidReceiveHypothesis:(NSString *)hypothesis recognitionScore:(NSString *)recognitionScore utteranceID:(NSString *)utteranceID {
    NSLog(@"The received hypothesis is %@ with a score of %@ and an ID of %@", hypothesis, recognitionScore, utteranceID);
    if ([hypothesis isEqualToString:@"WORK"])
        [self method];
    [[OEPocketsphinxController sharedInstance] setActive:FALSE error:nil];
}
- (void)pocketsphinxDidStartListening {
    NSLog(@"Pocketsphinx is now listening.");
}

- (void)pocketsphinxDidDetectSpeech {
    NSLog(@"Pocketsphinx has detected speech.");
}

- (void)pocketsphinxDidDetectFinishedSpeech {
    NSLog(@"Pocketsphinx has detected a period of silence, concluding an utterance.");
}

- (void)pocketsphinxDidStopListening {
    NSLog(@"Pocketsphinx has stopped listening.");
}

- (void)pocketsphinxDidSuspendRecognition {
    NSLog(@"Pocketsphinx has suspended recognition.");
}

- (void)pocketsphinxDidResumeRecognition {
    NSLog(@"Pocketsphinx has resumed recognition.");
}

- (void)pocketsphinxDidChangeLanguageModelToFile:(NSString *)newLanguageModelPathAsString andDictionary:(NSString *)newDictionaryPathAsString {
    NSLog(@"Pocketsphinx is now using the following language model: \n%@ and the following dictionary: %@", newLanguageModelPathAsString, newDictionaryPathAsString);
}

- (void)pocketSphinxContinuousSetupDidFailWithReason:(NSString *)reasonForFailure {
    NSLog(@"Listening setup wasn't successful and returned the failure reason: %@", reasonForFailure);
}

- (void)pocketSphinxContinuousTeardownDidFailWithReason:(NSString *)reasonForFailure {
    NSLog(@"Listening teardown wasn't successful and returned the failure reason: %@", reasonForFailure);
}

- (void)testRecognitionCompleted {
    NSLog(@"A test file that was submitted for recognition is now complete.");
}

- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}

The .h file:
#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>
#import <OpenEars/OELanguageModelGenerator.h>
#import <OpenEars/OEAcousticModel.h>
#import <OpenEars/OEPocketsphinxController.h>
#import <OpenEars/OEEventsObserver.h>

@interface ViewController : UIViewController <OEEventsObserverDelegate>
@property (strong, nonatomic) OEEventsObserver *openEarsEventsObserver;
@end
March 5, 2016 at 3:45 pm #1029716
Halle Winkler (Politepix)

Welcome,
setActive doesn’t start or stop listening. From the header and docs:
/**Start the speech recognition engine up. You provide the full paths to a language model and a dictionary file which are created using OELanguageModelGenerator and the acoustic model you want to use (for instance [OEAcousticModel pathToModel:@"AcousticModelEnglish"]).*/
- (void) startListeningWithLanguageModelAtPath:(NSString *)languageModelPath dictionaryAtPath:(NSString *)dictionaryPath acousticModelAtPath:(NSString *)acousticModelPath languageModelIsJSGF:(BOOL)languageModelIsJSGF; // Starts the recognition loop.

/**Shut down the engine. You must do this before releasing a parent view controller that contains OEPocketsphinxController.*/
- (NSError *) stopListening; // Exits from the recognition loop.
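So to stop recognition when your stop button is tapped, call stopListening rather than setActive:FALSE. A minimal sketch (the action name stopButtonPressed: is just an illustrative example, not something from your code):

- (IBAction)stopButtonPressed:(id)sender {
    // stopListening exits the recognition loop; setActive:FALSE does not.
    NSError *error = [[OEPocketsphinxController sharedInstance] stopListening];
    if (error) {
        NSLog(@"Error while stopping listening: %@", error);
    }
}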
setActive prepares the shared OEPocketsphinxController to receive property settings, etc. From the header and docs:
/**This needs to be called with the value TRUE before setting properties of OEPocketsphinxController for the first time in a session, and again before using OEPocketsphinxController in case it has been called with the value FALSE.*/
- (BOOL)setActive:(BOOL)active error:(NSError **)outError;
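In other words, setActive:TRUE is a precondition for configuring and using the shared controller, not a listening switch. The intended order looks roughly like this (secondsOfSilenceToDetect is just one example of a property you might set, and lmPath/dicPath are the paths from your viewDidLoad):

// 1. Activate the shared controller so its properties can be set.
[[OEPocketsphinxController sharedInstance] setActive:TRUE error:nil];

// 2. Optionally configure properties on it.
[OEPocketsphinxController sharedInstance].secondsOfSilenceToDetect = 0.7;

// 3. Start the recognition loop with your generated model files.
[[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:lmPath dictionaryAtPath:dicPath acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:NO];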
I recommend using the tutorial.