Tagged: LanguageModel, nil, LanguageModelPath, null, Path
This topic has 11 replies, 2 voices, and was last updated 7 years ago by Halle Winkler.
April 19, 2017 at 11:14 am #1031761
AlexMe (Participant)

Hello Halle,
When using the OELanguageModelGenerator everything works fine, but I want to use my own language model (I want to test 1-gram and trigram models) and dictionary, and I'm getting an error that my models are not found:
startListeningWithLanguageModelAtPath:(NSString *)languageModelPath dictionaryAtPath:(NSString *)dictionaryPath acousticModelAtPath:(NSString *)acousticModelPath languageModelIsJSGF:(BOOL)languageModelIsJSGF with a languageModelPath which is nil. If your call to OELanguageModelGenerator did not return an error when you generated this language model, that means the correct path to your language model that you should pass to this method's languageModelPath argument is as follows: NSString *correctPathToMyLanguageModelFile = [myLanguageModelGenerator pathToSuccessfullyGeneratedLanguageModelWithRequestedName:@"TheNameIChoseForMyVocabulary"]; Feel free to copy and paste this code for your path to your language model, but remember to replace the part that says "TheNameIChoseForMyVocabulary" with the name you actually chose for your language model or you will get this error again (and replace myLanguageModelGenerator with the name of your OELanguageModelGenerator instance). Since this file is required, expect an exception or undocumented behavior shortly.
The models are just at the root path of my project.
This is how I build the model paths:

    NSString *lmPath = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"2277.lm"];
    NSLog(@"%@", lmPath);
    NSString *dicPath = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"6258.dic"];
    NSLog(@"%@", dicPath);
Log:
/var/containers/Bundle/Application/xxxxxxxx9/EntryPoint.app/2277.lm
/var/containers/Bundle/Application/xxxxxxxx9/EntryPoint.app/6258.dic
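For context, these paths feed into the listening call quoted in the error above. Here is a minimal sketch of that call, assuming the OpenEars 2.x API as documented (OEPocketsphinxController's shared instance, a preceding setActive: call, and OEAcousticModel's pathToModel: helper for the stock English acoustic model); the file names mirror the ones above:

    #import <OpenEars/OEPocketsphinxController.h>
    #import <OpenEars/OEAcousticModel.h>

    // e.g. in viewDidLoad, after the model files have been added to the app bundle:
    NSString *lmPath = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"2277.lm"];
    NSString *dicPath = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"6258.dic"];

    // Activate the shared controller, then start listening with the custom files.
    NSError *activationError = nil;
    [[OEPocketsphinxController sharedInstance] setActive:TRUE error:&activationError];
    [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:lmPath
                                                                    dictionaryAtPath:dicPath
                                                                 acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]
                                                                 languageModelIsJSGF:FALSE];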
I also turned on logging:

    #import <OpenEars/OELogging.h>
    [OELogging startOpenEarsLogging];

but the log content didn't change.
I tested the app on iOS 9.3.5 and 10.3 on different iPads and in the Simulator (though I know the Simulator can cause problems; that was just to check).
Do you know what's wrong?
Thanks in advance.

April 19, 2017 at 7:07 pm #1031766
Halle Winkler (Politepix)

Welcome,
Let’s start by troubleshooting why logging isn’t working for you. Does it work for you when you run the sample app and uncomment the line where OELogging is started?
April 20, 2017 at 11:38 am #1031768
AlexMe (Participant)

In the sample app it's working fine. I added my class as a delegate of OEEventsObserver, and when using the OELanguageModelGenerator it gives me a nice long log (mainly about the Sphinx stuff), just like the sample app, but with the custom models it's still only that one message. I don't know if that's on purpose or an error.
April 20, 2017 at 2:34 pm #1031769
Halle Winkler (Politepix)

Hi,
OEEventsObserver and OELogging are different things – we need to get OELogging working in order to see the kinds of errors we can use to troubleshoot your issue. Is OELogging working for you in the sample app, and if so, can you show me the OELogging output from the sample app? All you have to do to turn on OELogging in the sample app is to uncomment the line [OELogging startOpenEarsLogging] if it is commented, thanks.
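To make the distinction concrete, here is a minimal sketch of both pieces side by side, assuming a view controller that adopts OEEventsObserverDelegate as in the sample app (the openEarsEventsObserver property name is illustrative): OELogging is a one-time class-method switch, while OEEventsObserver is a retained instance whose delegate receives speech events.

    #import <OpenEars/OELogging.h>
    #import <OpenEars/OEEventsObserver.h>

    // In setup code (e.g. viewDidLoad), before any listening starts:
    [OELogging startOpenEarsLogging]; // framework-level diagnostics
    self.openEarsEventsObserver = [[OEEventsObserver alloc] init];
    self.openEarsEventsObserver.delegate = self; // self adopts OEEventsObserverDelegate

    // Elsewhere in the same class, one of the delegate callbacks:
    - (void)pocketsphinxDidReceiveHypothesis:(NSString *)hypothesis
                            recognitionScore:(NSString *)recognitionScore
                                 utteranceID:(NSString *)utteranceID {
        NSLog(@"The received hypothesis is %@ with a score of %@ and an ID of %@",
              hypothesis, recognitionScore, utteranceID);
    }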
April 20, 2017 at 2:51 pm #1031770
AlexMe (Participant)

Hi,
here's the log with OELogging turned on from the sample app:

2017-04-20 14:41:21.020 OpenEarsSampleApp[8007:196990] Starting OpenEars logging for OpenEars version 2.504 on 64-bit device (or build): iPhone running iOS version: 10.300000
2017-04-20 14:41:21.021 OpenEarsSampleApp[8007:196990] Creating shared instance of OEPocketsphinxController
2017-04-20 14:41:21.026 OpenEarsSampleApp[8007:196990] Starting dynamic language model generation
2017-04-20 14:41:21.036 OpenEarsSampleApp[8007:196990] Done creating language model with CMUCLMTK in 0.009271 seconds.
2017-04-20 14:41:21.036 OpenEarsSampleApp[8007:196990] Since there is no cached version, loading the language model lookup list for the acoustic model called AcousticModelEnglish
2017-04-20 14:41:21.096 OpenEarsSampleApp[8007:196990] I'm done running performDictionaryLookup and it took 0.031641 seconds
2017-04-20 14:41:21.098 OpenEarsSampleApp[8007:196990] I'm done running dynamic language model generation and it took 0.072660 seconds
2017-04-20 14:41:21.099 OpenEarsSampleApp[8007:196990] Starting dynamic language model generation
2017-04-20 14:41:21.110 OpenEarsSampleApp[8007:196990] Done creating language model with CMUCLMTK in 0.010184 seconds.
2017-04-20 14:41:21.110 OpenEarsSampleApp[8007:196990] Returning a cached version of LanguageModelGeneratorLookupList.text
2017-04-20 14:41:21.145 OpenEarsSampleApp[8007:196990] The word Quidnunc was not found in the dictionary of the acoustic model /Users/entwicklung/Library/Developer/CoreSimulator/Devices/D5E4F19E-7990-4A44-87AC-0710872F52F3/data/Containers/Bundle/Application/69ACA303-E3A0-4FE7-96BB-48DD14F7E840/OpenEarsSampleApp.app/AcousticModelEnglish.bundle. Now using the fallback method to look it up. If this is happening more frequently than you would expect, likely causes can be that you are entering words in another language from the one you are recognizing, or that there are symbols (including numbers) that need to be spelled out or cleaned up, or you are using your own acoustic model and there is an issue with either its phonetic dictionary or it lacks a g2p file. Please get in touch at the forums for assistance with the last two possible issues.
2017-04-20 14:41:21.145 OpenEarsSampleApp[8007:196990] Using convertGraphemes for the word or phrase quidnunc which doesn't appear in the dictionary
2017-04-20 14:41:21.146 OpenEarsSampleApp[8007:196990] Elapsed time to generate unknown word phonemes in English is 0.001343
2017-04-20 14:41:21.147 OpenEarsSampleApp[8007:196990] the graphemes "K W IH D N AH NG K" were created for the word Quidnunc using the fallback method.
2017-04-20 14:41:21.159 OpenEarsSampleApp[8007:196990] I'm done running performDictionaryLookup and it took 0.048424 seconds
2017-04-20 14:41:21.160 OpenEarsSampleApp[8007:196990] I'm done running dynamic language model generation and it took 0.061761 seconds
2017-04-20 14:41:21.160 OpenEarsSampleApp[8007:196990] Welcome to the OpenEars sample project. This project understands the words: ( backward, change, forward, go, left, model, right, turn ), and if you say "change model" (assuming you haven't altered that trigger phrase in this sample app) it will switch to its dynamically-generated model which understands the words: ( Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Quidnunc, "change model" )
2017-04-20 14:41:21.161 OpenEarsSampleApp[8007:196990] Attempting to start listening session from startListeningWithLanguageModelAtPath:
2017-04-20 14:41:21.162 OpenEarsSampleApp[8007:196990] User gave mic permission for this app.
2017-04-20 14:41:21.162 OpenEarsSampleApp[8007:196990] setSecondsOfSilence wasn't set, using default of 0.700000.
2017-04-20 14:41:21.163 OpenEarsSampleApp[8007:197089] Starting listening.
2017-04-20 14:41:21.163 OpenEarsSampleApp[8007:197089] About to set up audio session
2017-04-20 14:41:21.164 OpenEarsSampleApp[8007:197089] Creating audio session with default settings.
2017-04-20 14:41:21.164 OpenEarsSampleApp[8007:197089] Done setting audio session category.
2017-04-20 14:41:21.164 OpenEarsSampleApp[8007:197089] Done setting preferred sample rate to 16000.000000 – now the real sample rate is 16000.000000
2017-04-20 14:41:21.165 OpenEarsSampleApp[8007:197089] Done setting preferred number of channels to 1 – now the actual input number of channels is 2
2017-04-20 14:41:21.165 OpenEarsSampleApp[8007:197089] Done setting session's preferred I/O buffer duration to 0.128000 – now the actual buffer duration is 0.128000
2017-04-20 14:41:21.165 OpenEarsSampleApp[8007:197089] Done setting up audio session
2017-04-20 14:41:21.166 OpenEarsSampleApp[8007:197089] About to set up audio IO unit in a session with a sample rate of 16000.000000, a channel number of 2 and a buffer duration of 0.128000.
2017-04-20 14:41:21.326 OpenEarsSampleApp[8007:197089] Done setting up audio unit
2017-04-20 14:41:21.326 OpenEarsSampleApp[8007:197089] About to start audio IO unit
2017-04-20 14:41:22.663 OpenEarsSampleApp[8007:197089] Done starting audio unit
2017-04-20 14:41:22.705 OpenEarsSampleApp[8007:197089] There is no CMN plist so we are using the fresh CMN value 40.000000.
2017-04-20 14:41:22.706 OpenEarsSampleApp[8007:197089] Listening.
2017-04-20 14:41:22.707 OpenEarsSampleApp[8007:197089] Project has these words or phrases in its dictionary: backward change forward go left model right turn
2017-04-20 14:41:22.707 OpenEarsSampleApp[8007:197089] Recognition loop has started
2017-04-20 14:41:22.707 OpenEarsSampleApp[8007:196990] Successfully started listening session from startListeningWithLanguageModelAtPath:
2017-04-20 14:41:22.718 OpenEarsSampleApp[8007:196990] Local callback: Pocketsphinx is now listening.
2017-04-20 14:41:22.720 OpenEarsSampleApp[8007:196990] Local callback: Pocketsphinx started.
2017-04-20 14:41:22.890 OpenEarsSampleApp[8007:197089] Speech detected...
2017-04-20 14:41:22.890 OpenEarsSampleApp[8007:196990] Local callback: Pocketsphinx has detected speech.
2017-04-20 14:41:26.988 OpenEarsSampleApp[8007:197089] End of speech detected...
2017-04-20 14:41:26.989 OpenEarsSampleApp[8007:196990] Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
2017-04-20 14:41:27.023 OpenEarsSampleApp[8007:197089] Pocketsphinx heard "go" with a score of (-172492) and an utterance ID of 0.
2017-04-20 14:41:27.024 OpenEarsSampleApp[8007:196990] Flite sending interrupt speech request.
2017-04-20 14:41:27.024 OpenEarsSampleApp[8007:196990] Local callback: The received hypothesis is go with a score of -172492 and an ID of 0
2017-04-20 14:41:27.025 OpenEarsSampleApp[8007:196990] I'm running flite
2017-04-20 14:41:27.053 OpenEarsSampleApp[8007:196990] I'm done running flite and it took 0.027457 seconds
2017-04-20 14:41:27.053 OpenEarsSampleApp[8007:196990] Flite audio player was nil when referenced so attempting to allocate a new audio player.
2017-04-20 14:41:27.054 OpenEarsSampleApp[8007:196990] Loading speech data for Flite concluded successfully.
2017-04-20 14:41:27.080 OpenEarsSampleApp[8007:196990] Flite sending suspend recognition notification.
2017-04-20 14:41:27.082 OpenEarsSampleApp[8007:196990] Local callback: Flite has started speaking
2017-04-20 14:41:27.084 OpenEarsSampleApp[8007:196990] Local callback: Pocketsphinx has suspended recognition.
2017-04-20 14:41:28.270 OpenEarsSampleApp[8007:196990] AVAudioPlayer did finish playing with success flag of 1
2017-04-20 14:41:28.422 OpenEarsSampleApp[8007:196990] Flite sending resume recognition notification.
2017-04-20 14:41:28.923 OpenEarsSampleApp[8007:196990] Local callback: Flite has finished speaking
2017-04-20 14:41:28.925 OpenEarsSampleApp[8007:196990] setSecondsOfSilence wasn't set, using default of 0.700000.
2017-04-20 14:41:28.926 OpenEarsSampleApp[8007:196990] Local callback: Pocketsphinx has resumed recognition.
2017-04-20 14:41:29.165 OpenEarsSampleApp[8007:197089] Speech detected...
2017-04-20 14:41:29.165 OpenEarsSampleApp[8007:196990] Local callback: Pocketsphinx has detected speech.
2017-04-20 14:41:34.154 OpenEarsSampleApp[8007:197089] End of speech detected...
2017-04-20 14:41:34.155 OpenEarsSampleApp[8007:196990] Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
2017-04-20 14:41:34.203 OpenEarsSampleApp[8007:197089] Pocketsphinx heard "model" with a score of (-273805) and an utterance ID of 1.
2017-04-20 14:41:34.204 OpenEarsSampleApp[8007:196990] Flite sending interrupt speech request.
2017-04-20 14:41:34.204 OpenEarsSampleApp[8007:196990] Local callback: The received hypothesis is model with a score of -273805 and an ID of 1
2017-04-20 14:41:34.205 OpenEarsSampleApp[8007:196990] I'm running flite
2017-04-20 14:41:34.272 OpenEarsSampleApp[8007:196990] I'm done running flite and it took 0.066067 seconds
2017-04-20 14:41:34.272 OpenEarsSampleApp[8007:196990] Flite audio player was nil when referenced so attempting to allocate a new audio player.
2017-04-20 14:41:34.272 OpenEarsSampleApp[8007:196990] Loading speech data for Flite concluded successfully.
2017-04-20 14:41:34.289 OpenEarsSampleApp[8007:196990] Flite sending suspend recognition notification.
2017-04-20 14:41:34.291 OpenEarsSampleApp[8007:196990] Local callback: Flite has started speaking
2017-04-20 14:41:34.292 OpenEarsSampleApp[8007:196990] Local callback: Pocketsphinx has suspended recognition.
2017-04-20 14:41:35.712 OpenEarsSampleApp[8007:196990] AVAudioPlayer did finish playing with success flag of 1
2017-04-20 14:41:35.864 OpenEarsSampleApp[8007:196990] Flite sending resume recognition notification.
2017-04-20 14:41:47.211 OpenEarsSampleApp[8007:197089] End of speech detected...
2017-04-20 14:41:47.211 OpenEarsSampleApp[8007:196990] Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
2017-04-20 14:41:47.363 OpenEarsSampleApp[8007:197089] Pocketsphinx heard "go" with a score of (-610129) and an utterance ID of 2.
2017-04-20 14:41:47.363 OpenEarsSampleApp[8007:196990] Flite sending interrupt speech request.
2017-04-20 14:41:47.364 OpenEarsSampleApp[8007:196990] Local callback: The received hypothesis is go with a score of -610129 and an ID of 2
2017-04-20 14:41:47.365 OpenEarsSampleApp[8007:196990] I'm running flite
2017-04-20 14:41:47.387 OpenEarsSampleApp[8007:196990] I'm done running flite and it took 0.022564 seconds
2017-04-20 14:41:47.388 OpenEarsSampleApp[8007:196990] Flite audio player was nil when referenced so attempting to allocate a new audio player.
2017-04-20 14:41:47.388 OpenEarsSampleApp[8007:196990] Loading speech data for Flite concluded successfully.
2017-04-20 14:41:47.408 OpenEarsSampleApp[8007:196990] Flite sending suspend recognition notification.
2017-04-20 14:41:48.360 OpenEarsSampleApp[8007:197088] Speech detected...
2017-04-20 14:41:54.493 OpenEarsSampleApp[8007:196990] AVAudioPlayer did finish playing with success flag of 1
2017-04-20 14:41:54.644 OpenEarsSampleApp[8007:196990] Local callback: Pocketsphinx has detected speech.
2017-04-20 14:41:54.646 OpenEarsSampleApp[8007:196990] Local callback: Flite has started speaking
2017-04-20 14:41:54.647 OpenEarsSampleApp[8007:196990] Flite sending resume recognition notification.
2017-04-20 14:41:54.647 OpenEarsSampleApp[8007:196990] Local callback: Pocketsphinx has suspended recognition.
2017-04-20 14:41:55.148 OpenEarsSampleApp[8007:196990] Local callback: Flite has finished speaking
2017-04-20 14:41:55.149 OpenEarsSampleApp[8007:196990] setSecondsOfSilence wasn't set, using default of 0.700000.
2017-04-20 14:41:55.150 OpenEarsSampleApp[8007:196990] Local callback: Pocketsphinx has resumed recognition.
2017-04-20 14:42:01.280 OpenEarsSampleApp[8007:197088] Speech detected...
2017-04-20 14:42:01.281 OpenEarsSampleApp[8007:196990] Local callback: Pocketsphinx has detected speech.
2017-04-20 14:42:03.332 OpenEarsSampleApp[8007:197088] End of speech detected...
2017-04-20 14:42:03.333 OpenEarsSampleApp[8007:196990] Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
2017-04-20 14:42:03.353 OpenEarsSampleApp[8007:197088] Pocketsphinx heard "" with a score of (-81394) and an utterance ID of 3.
2017-04-20 14:42:03.353 OpenEarsSampleApp[8007:197088] Hypothesis was null so we aren't returning it. If you want null hypotheses to also be returned, set OEPocketsphinxController's property returnNullHypotheses to TRUE before starting OEPocketsphinxController.
2017-04-20 14:42:03.454 OpenEarsSampleApp[8007:197088] Speech detected...
2017-04-20 14:42:03.455 OpenEarsSampleApp[8007:196990] Local callback: Pocketsphinx has detected speech.
2017-04-20 14:41:36.365 OpenEarsSampleApp[8007:196990] Local callback: Flite has finished speaking
2017-04-20 14:41:36.366 OpenEarsSampleApp[8007:196990] setSecondsOfSilence wasn't set, using default of 0.700000.
2017-04-20 14:41:36.367 OpenEarsSampleApp[8007:196990] Local callback: Pocketsphinx has resumed recognition.
2017-04-20 14:41:36.589 OpenEarsSampleApp[8007:197089] Speech detected...
2017-04-20 14:41:36.590 OpenEarsSampleApp[8007:196990] Local callback: Pocketsphinx has detected speech.
April 20, 2017 at 4:15 pm #1031771
Halle Winkler (Politepix)

OK, can you check out what is different about the sample app from your app that results in no OELogging output for you? You ought to get the same results up to the point of error, starting with something like “Starting OpenEars logging for OpenEars version 2.504 on 64-bit device (or build): iPhone running iOS version: 10.300000”
April 20, 2017 at 4:43 pm #1031772
AlexMe (Participant)

I just noticed that my log shows
2017-04-20 16:23:05.938 EntryPoint[9573:220257] Starting OpenEars logging for OpenEars version 2.504 on 64-bit device (or build): iPhone running iOS version: 10.300000
2017-04-20 16:23:05.939 EntryPoint[9573:220257] Creating shared instance of OEPocketsphinxController
when turning on logging, but that's it; nothing else changes.
April 20, 2017 at 7:30 pm #1031776
Halle Winkler (Politepix)

OK, then please show your complete OELogging and verbosePocketsphinx logs for the entire app session – you can read more about this here: https://www.politepix.com/forums/topic/install-issues-and-their-solutions/
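For readers following along, the two kinds of logging mentioned here are typically enabled together before listening starts; a minimal sketch (verbosePocketsphinx is a property of the shared OEPocketsphinxController, per the linked install-issues post):

    #import <OpenEars/OELogging.h>
    #import <OpenEars/OEPocketsphinxController.h>

    [OELogging startOpenEarsLogging]; // OpenEars-level logging
    [[OEPocketsphinxController sharedInstance] setActive:TRUE error:nil];
    [OEPocketsphinxController sharedInstance].verbosePocketsphinx = TRUE; // low-level Pocketsphinx logging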
April 21, 2017 at 8:30 am #1031777
AlexMe (Participant)

Sure thing:
2017-04-21 08:27:37.316 EntryPoint[2332:892790] Failed to create directory /var/mobile/Containers/Data/Application/15688114-7E32-45C2-83CD-76C1C521B3CC/Documents/cn1storage/
2017-04-21 08:27:49.856 EntryPoint[2332:892790] Starting OpenEars logging for OpenEars version 2.504 on 64-bit device (or build): iPad running iOS version: 9.300000
2017-04-21 08:27:49.860 EntryPoint[2332:892790] Creating shared instance of OEPocketsphinxController
2017-04-21 08:27:49.881 EntryPoint[2332:892790] User gave mic permission for this app.
2017-04-21 08:27:49.883 EntryPoint[2332:892790] /var/containers/Bundle/Application/xxxxxx9/EntryPoint.app/2277.lm
2017-04-21 08:27:49.883 EntryPoint[2332:892790] /var/containers/Bundle/Application/xxxxxx9/EntryPoint.app/6258.dic
2017-04-21 08:27:49.884 EntryPoint[2332:892790] Attempting to start listening session from startListeningWithLanguageModelAtPath:
2017-04-21 08:27:49.885 EntryPoint[2332:892790] Error: you have invoked the method: startListeningWithLanguageModelAtPath:(NSString *)languageModelPath dictionaryAtPath:(NSString *)dictionaryPath acousticModelAtPath:(NSString *)acousticModelPath languageModelIsJSGF:(BOOL)languageModelIsJSGF with a languageModelPath which is nil. If your call to OELanguageModelGenerator did not return an error when you generated this language model, that means the correct path to your language model that you should pass to this method's languageModelPath argument is as follows: NSString *correctPathToMyLanguageModelFile = [myLanguageModelGenerator pathToSuccessfullyGeneratedLanguageModelWithRequestedName:@"TheNameIChoseForMyVocabulary"]; Feel free to copy and paste this code for your path to your language model, but remember to replace the part that says "TheNameIChoseForMyVocabulary" with the name you actually chose for your language model or you will get this error again (and replace myLanguageModelGenerator with the name of your OELanguageModelGenerator instance). Since this file is required, expect an exception or undocumented behavior shortly.
April 21, 2017 at 8:59 am #1031778
Halle Winkler (Politepix)

Thanks. Have you taken any steps to verify the existence of your files at runtime, e.g. https://developer.apple.com/reference/foundation/filemanager/1410277-fileexists ? Getting that error means that a fileExists: check has failed for OpenEars, so I think the best line of investigation for you is whether the steps taken to make your lm available at runtime were successful. Sometimes this can be as simple as the path just being off by one directory level or something being added to an app target but not a test target or vice versa. This could also maybe be related to permissions for the file, but that seems less likely, so I would thoroughly investigate the easy stuff first.
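A minimal sketch of that runtime check, reusing the lmPath and dicPath variables from the earlier post (NSFileManager's fileExistsAtPath: is the Objective-C counterpart of the Swift API linked above):

    // Confirm the files actually shipped in the built app before handing them to OpenEars.
    NSFileManager *fileManager = [NSFileManager defaultManager];
    NSLog(@"lm exists: %d", [fileManager fileExistsAtPath:lmPath]);
    NSLog(@"dic exists: %d", [fileManager fileExistsAtPath:dicPath]);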
April 21, 2017 at 10:20 am #1031779
AlexMe (Participant)

Hi,
for some reason Xcode wants to compile the .lm file. I had removed it from the Compile Sources earlier, because Xcode obviously couldn't compile it.
When I printed all the files in the directory I was expecting my files to be in, I immediately knew what had gone wrong:

2017-04-21 10:02:01.060 EntryPoint[2351:900694] 6258.dic
2017-04-21 10:02:01.074 EntryPoint[2351:900694] AcousticModelEnglish.bundle
2017-04-21 10:02:01.075 EntryPoint[2351:900694] AppIcon29x29.png
2017-04-21 10:02:01.075 EntryPoint[2351:900694] AppIcon29x29@3x.png
2017-04-21 10:02:01.081 EntryPoint[2351:900694] AppIcon57x57.png
2017-04-21 10:02:01.081 EntryPoint[2351:900694] AppIcon57x57@2x.png
2017-04-21 10:02:01.081 EntryPoint[2351:900694] AppIcon60x60@2x.png
2017-04-21 10:02:01.081 EntryPoint[2351:900694] AppIcon60x60@3x.png
2017-04-21 10:02:01.082 EntryPoint[2351:900694] AppIcon72x72~ipad.png
2017-04-21 10:02:01.082 EntryPoint[2351:900694] AppIcon76x76@2x~ipad.png
2017-04-21 10:02:01.082 EntryPoint[2351:900694] AppIcon76x76~ipad.png
2017-04-21 10:02:01.087 EntryPoint[2351:900694] CN1Resource.res
2017-04-21 10:02:01.087 EntryPoint[2351:900694] CodenameOne_GLViewController.nib
2017-04-21 10:02:01.087 EntryPoint[2351:900694] Default-568h@2x.png
2017-04-21 10:02:01.087 EntryPoint[2351:900694] Default-667h@2x.png
2017-04-21 10:02:01.087 EntryPoint[2351:900694] Default-736h-Landscape@3x.png
2017-04-21 10:02:01.088 EntryPoint[2351:900694] Default-736h@3x.png
2017-04-21 10:02:01.088 EntryPoint[2351:900694] Default-Landscape.png
2017-04-21 10:02:01.090 EntryPoint[2351:900694] Default-Landscape@2x.png
2017-04-21 10:02:01.090 EntryPoint[2351:900694] Default-Portrait.png
2017-04-21 10:02:01.090 EntryPoint[2351:900694] Default-Portrait@2x.png
2017-04-21 10:02:01.091 EntryPoint[2351:900694] Default.png
2017-04-21 10:02:01.091 EntryPoint[2351:900694] Default@2x.png
2017-04-21 10:02:01.091 EntryPoint[2351:900694] EntryPoint
2017-04-21 10:02:01.091 EntryPoint[2351:900694] Frameworks
2017-04-21 10:02:01.093 EntryPoint[2351:900694] Icon-152.png
2017-04-21 10:02:01.093 EntryPoint[2351:900694] Icon-72.png
2017-04-21 10:02:01.093 EntryPoint[2351:900694] Icon-76.png
2017-04-21 10:02:01.093 EntryPoint[2351:900694] Icon-Small-50.png
2017-04-21 10:02:01.093 EntryPoint[2351:900694] Icon-Small.png
2017-04-21 10:02:01.094 EntryPoint[2351:900694] Icon-Small@2x.png
2017-04-21 10:02:01.094 EntryPoint[2351:900694] Icon.png
2017-04-21 10:02:01.095 EntryPoint[2351:900694] Icon7.png
2017-04-21 10:02:01.096 EntryPoint[2351:900694] Icon7@2x.png
2017-04-21 10:02:01.096 EntryPoint[2351:900694] Icon7@3x.png
2017-04-21 10:02:01.096 EntryPoint[2351:900694] Icon@2x.png
2017-04-21 10:02:01.096 EntryPoint[2351:900694] Icon@3x.png
2017-04-21 10:02:01.096 EntryPoint[2351:900694] Info.plist
2017-04-21 10:02:01.096 EntryPoint[2351:900694] MANIFEST.MF
2017-04-21 10:02:01.097 EntryPoint[2351:900694] META-INF
2017-04-21 10:02:01.097 EntryPoint[2351:900694] MainWindow.nib
2017-04-21 10:02:01.099 EntryPoint[2351:900694] PkgInfo
2017-04-21 10:02:01.100 EntryPoint[2351:900694] _CodeSignature
2017-04-21 10:02:01.100 EntryPoint[2351:900694] cn1-version-numbers
2017-04-21 10:02:01.100 EntryPoint[2351:900694] embedded.mobileprovision
2017-04-21 10:02:01.100 EntryPoint[2351:900694] iOS7Theme.res
2017-04-21 10:02:01.100 EntryPoint[2351:900694] iPhoneTheme.res
2017-04-21 10:02:01.100 EntryPoint[2351:900694] iTunesArtwork
2017-04-21 10:02:01.101 EntryPoint[2351:900694] material-design-font.ttf
2017-04-21 10:02:01.101 EntryPoint[2351:900694] pom.properties
2017-04-21 10:02:01.101 EntryPoint[2351:900694] pom.xml
2017-04-21 10:02:01.101 EntryPoint[2351:900694] theme.res
2017-04-21 10:02:01.102 EntryPoint[2351:900694] zoozi18n.bundle
Only the .lm was missing. I renamed it to .languagemodel, and now the language model is found and Xcode doesn't treat it as a compilable file.
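For reference, a directory dump like the one above can be produced with a sketch along these lines, assuming NSFileManager's contentsOfDirectoryAtPath:error: over the bundle's resource path:

    // Print every file that actually ended up inside the app bundle.
    NSError *listError = nil;
    NSString *resourcePath = [[NSBundle mainBundle] resourcePath];
    NSArray<NSString *> *contents = [[NSFileManager defaultManager] contentsOfDirectoryAtPath:resourcePath
                                                                                         error:&listError];
    for (NSString *item in contents) {
        NSLog(@"%@", item);
    }

As an aside, keeping the original .lm extension should also work if the file is moved out of the target's Compile Sources build phase and into Copy Bundle Resources.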
Thanks for your help.
April 21, 2017 at 2:41 pm #1031780
Halle Winkler (Politepix)

Good troubleshooting! Glad you found the issue.
The topic ‘My LanguageModelPath is nil’ is closed to new replies.