It crashes in ps_start_utt at 0x80df2. I have logging enabled and get the following output: https://www.sourcedrop.net/WPg4e3069b5b0
The source of the view controller is mostly copied and pasted from the sample app: https://www.sourcedrop.net/bsB4e270527f7
Does it matter that my application is a music player and changes the AVAudioSession on launch?

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    // Configure the audio session for music playback.
    NSError *audioSessionError = nil;
    BOOL categorySet = [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback error:&audioSessionError];
    BOOL activated = [[AVAudioSession sharedInstance] setActive:YES error:&audioSessionError];
    if (!categorySet || !activated) {
        NSLog(@"Something went wrong with initialising the audio session: %@", audioSessionError);
    }

    // Legacy C Audio Session API: activate again and listen for route changes.
    AudioSessionSetActive(true);
    AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange, ARAudioSessionPropertyListener, nil);

    self.window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
    // Override point for customization after application launch.
    if ([[UIDevice currentDevice] userInterfaceIdiom] == UIUserInterfaceIdiomPhone) {
        self.viewController = [[ARViewController alloc] initWithNibName:@"ARViewController_iPhone" bundle:nil];
    } else {
        self.viewController = [[ARViewController alloc] initWithNibName:@"ARViewController_iPad" bundle:nil];
    }
    self.window.rootViewController = self.viewController;
    [self.window makeKeyAndVisible];

    [OpenEarsLogging startOpenEarsLogging];
    return YES;
}
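As far as I understand, AVAudioSessionCategoryPlayback is output-only and does not enable the microphone, so recognition may need a category that permits input. A minimal sketch of what I could try instead (an assumption on my part; it may be moot if OpenEars manages the category itself):

NSError *audioSessionError = nil;
// PlayAndRecord allows simultaneous output and microphone input,
// which listening for speech requires.
if (![[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord error:&audioSessionError]) {
    NSLog(@"Could not set the play-and-record category: %@", audioSessionError);
}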
And setting verbosePocketSphinx doesn't change anything either; the log just shows "Listening" and then it crashes.
The acoustic model and language model are generated dynamically, so they shouldn't be missing.
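For reference, the dynamic generation follows the pattern from the OpenEars sample app. A minimal sketch (the method and userInfo key names below are how I remember the 1.x sample app; treat them as assumptions for other versions):

#import <OpenEars/LanguageModelGenerator.h>

LanguageModelGenerator *languageModelGenerator = [[LanguageModelGenerator alloc] init];
NSArray *words = [NSArray arrayWithObjects:@"PLAY", @"PAUSE", @"NEXT", @"PREVIOUS", nil];
// Writes a .languagemodel and .dic file for the given name.
NSError *error = [languageModelGenerator generateLanguageModelFromArray:words withFilesNamed:@"MusicPlayerModel"];

NSString *languageModelPath = nil;
NSString *dictionaryPath = nil;
if ([error code] == noErr) {
    // The sample app reads the generated file paths out of the error's userInfo.
    NSDictionary *results = [error userInfo];
    languageModelPath = [results objectForKey:@"LMPath"];
    dictionaryPath = [results objectForKey:@"DictionaryPath"];
} else {
    NSLog(@"Language model generation failed: %@", error);
}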
> The acoustic model and language model are generated dynamically, so they shouldn't be missing.
The language model can be generated dynamically, but the acoustic model is part of the "framework" folder that has to be dragged into the app; it cannot be generated dynamically. My guess is that the acoustic model isn't present in your new app.
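A quick way to confirm that is to look for the acoustic model in the built app bundle at launch. A sketch, assuming the model was added under the name below (the name varies between OpenEars versions, so substitute whatever actually sits in the distribution's framework folder):

// Launch-time sanity check: is the acoustic model inside the app bundle?
NSString *acousticModelPath = [[NSBundle mainBundle] pathForResource:@"AcousticModelEnglish" ofType:@"bundle"];
if (acousticModelPath == nil) {
    NSLog(@"Acoustic model not found in the app bundle -- re-drag the framework folder into the Xcode project and check the target membership of its files.");
} else {
    NSLog(@"Acoustic model found at: %@", acousticModelPath);
}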
Thanks for the support.