This is the code:

LanguageModelGenerator *lmGenerator = [[LanguageModelGenerator alloc] init];

NSArray *words = [NSArray arrayWithObjects:@"TURN EAST", @"TURN WEST", @"LUCKY", @"UNLUCKY", nil];
NSString *name = @"Fight";

//NSError *err = [lmGenerator generateLanguageModelFromArray:words withFilesNamed:name forAcousticModelAtPath:[AcousticModel pathToModel:@"AcousticModelEnglish"]];
NSError *err = [lmGenerator generateRejectingLanguageModelFromArray:words
                                                     withFilesNamed:name
                                             withOptionalExclusions:nil
                                                    usingVowelsOnly:FALSE
                                                         withWeight:nil
                                             forAcousticModelAtPath:[AcousticModel pathToModel:@"AcousticModelEnglish"]];

NSDictionary *languageGeneratorResults = nil;

if ([err code] == noErr) {
    languageGeneratorResults = [err userInfo];
    lmPath = [languageGeneratorResults objectForKey:@"LMPath"];
    dicPath = [languageGeneratorResults objectForKey:@"DictionaryPath"];
} else {
    NSLog(@"Error: %@", [err localizedDescription]);
}

[self.openEarsEventsObserver setDelegate:self];

[self.pocketsphinxController startListeningWithLanguageModelAtPath:lmPath dictionaryAtPath:dicPath acousticModelAtPath:[AcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:NO];
Am I missing something?
Thanks in advance for your help.
Sure, please check out the post “Please read before you post – how to troubleshoot and provide logging info” here, which explains how to turn on and share the logging that provides troubleshooting information for this kind of issue.
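In case it saves a step, this is roughly what turning the logging on looks like in code. This is a minimal sketch only; verify the exact class and property names against that post:

// Somewhere early, e.g. at the top of viewDidLoad, before any other OpenEars calls:
[OpenEarsLogging startOpenEarsLogging]; // general OpenEars logging

// For issues inside the recognition loop itself, also enable the Pocketsphinx debug output:
self.pocketsphinxController.verbosePocketSphinx = TRUE;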
2014-06-07 13:18:13.124 VoiceSnap[405:60b] [Quincy] WARNING: Detecting crashes is NOT enabled due to running the app with a debugger attached.
2014-06-07 13:18:13.349 VoiceSnap[405:60b] Starting OpenEars logging for OpenEars version 1.65 on 64-bit device: iPhone running iOS version: 7.100000
2014-06-07 13:18:13.356 VoiceSnap[405:60b] Normalized array contains the following entries:
(
TEST,
“TURN RIGHT”
)
2014-06-07 13:18:13.498 VoiceSnap[405:60b] I’m done running performDictionaryLookup and it took 0.055117 seconds
2014-06-07 13:18:13.506 VoiceSnap[405:60b] Starting dynamic language model generation
2014-06-07 13:18:13.514 VoiceSnap[405:60b] Able to open /var/mobile/Applications/63FFED11-8C69-4C61-B5EA-2DCEB385E65F/Library/Caches/PictureCommands.corpus for reading
2014-06-07 13:18:13.516 VoiceSnap[405:60b] Able to open /var/mobile/Applications/63FFED11-8C69-4C61-B5EA-2DCEB385E65F/Library/Caches/PictureCommands_pipe.txt for writing
2014-06-07 13:18:13.516 VoiceSnap[405:60b] Starting text2wfreq_impl
2014-06-07 13:18:13.525 VoiceSnap[405:60b] Done with text2wfreq_impl
2014-06-07 13:18:13.526 VoiceSnap[405:60b] Able to open /var/mobile/Applications/63FFED11-8C69-4C61-B5EA-2DCEB385E65F/Library/Caches/PictureCommands_pipe.txt for reading.
2014-06-07 13:18:13.527 VoiceSnap[405:60b] Able to open /var/mobile/Applications/63FFED11-8C69-4C61-B5EA-2DCEB385E65F/Library/Caches/PictureCommands.vocab for reading.
2014-06-07 13:18:13.528 VoiceSnap[405:60b] Starting wfreq2vocab
2014-06-07 13:18:13.531 VoiceSnap[405:60b] Done with wfreq2vocab
2014-06-07 13:18:13.532 VoiceSnap[405:60b] Starting text2idngram
2014-06-07 13:18:13.546 VoiceSnap[405:60b] Done with text2idngram
2014-06-07 13:18:13.551 VoiceSnap[405:60b] Starting idngram2lm
2014-06-07 13:18:13.571 VoiceSnap[405:60b] Done with idngram2lm
2014-06-07 13:18:13.571 VoiceSnap[405:60b] Starting sphinx_lm_convert
2014-06-07 13:18:13.579 VoiceSnap[405:60b] Finishing sphinx_lm_convert
2014-06-07 13:18:13.582 VoiceSnap[405:60b] Done creating language model with CMUCLMTK in 0.075650 seconds.
2014-06-07 13:18:13.583 VoiceSnap[405:60b] Starting sphinx_lm_convert
2014-06-07 13:18:13.587 VoiceSnap[405:60b] Finishing sphinx_lm_convert
2014-06-07 13:18:13.589 VoiceSnap[405:60b] I’m done running dynamic language model generation and it took 423865093.589445 seconds
2014-06-07 13:18:13.605 VoiceSnap[405:60b] User gave mic permission for this app.
2014-06-07 13:18:13.606 VoiceSnap[405:60b] User gave mic permission for this app.
2014-06-07 13:18:13.607 VoiceSnap[405:60b] Leaving sample rate at the default of 16000.
2014-06-07 13:18:13.608 VoiceSnap[405:60b] The audio session has never been initialized so we will do that now.
2014-06-07 13:18:13.608 VoiceSnap[405:60b] Checking and resetting all audio session settings.
2014-06-07 13:18:13.608 VoiceSnap[405:60b] audioCategory is incorrect, we will change it.
2014-06-07 13:18:13.609 VoiceSnap[405:60b] audioCategory is now on the correct setting of kAudioSessionCategory_PlayAndRecord.
2014-06-07 13:18:13.609 VoiceSnap[405:60b] bluetoothInput is incorrect, we will change it.
2014-06-07 13:18:13.610 VoiceSnap[405:60b] bluetooth input is now on the correct setting of 1.
2014-06-07 13:18:13.611 VoiceSnap[405:60b] Output Device: ReceiverAndMicrophone.
2014-06-07 13:18:13.611 VoiceSnap[405:60b] categoryDefaultToSpeaker is incorrect, we will change it.
2014-06-07 13:18:13.612 VoiceSnap[405:60b] CategoryDefaultToSpeaker is now on the correct setting of 1.
2014-06-07 13:18:13.612 VoiceSnap[405:60b] preferredBufferSize is incorrect, we will change it.
2014-06-07 13:18:13.613 VoiceSnap[405:60b] PreferredBufferSize is now on the correct setting of 0.128000.
2014-06-07 13:18:13.613 VoiceSnap[405:60b] preferredSampleRateCheck is incorrect, we will change it.
2014-06-07 13:18:13.614 VoiceSnap[405:60b] preferred hardware sample rate is now on the correct setting of 16000.000000.
2014-06-07 13:18:13.643 VoiceSnap[405:60b] AudioSessionManager startAudioSession has reached the end of the initialization.
2014-06-07 13:18:13.643 VoiceSnap[405:60b] Exiting startAudioSession.
2014-06-07 13:18:13.645 VoiceSnap[405:5d0b] setSecondsOfSilence value of 0.000000 was too large or too small or was NULL, using default of 0.700000.
2014-06-07 13:18:13.646 VoiceSnap[405:5d0b] Project has these words or phrases in its dictionary:
___REJ_AA
___REJ_AE
___REJ_AH
___REJ_AO
___REJ_AW
___REJ_AY
___REJ_B
___REJ_CH
___REJ_D
___REJ_DH
___REJ_EH
___REJ_ER
___REJ_EY
___REJ_F
___REJ_G
___REJ_HH
___REJ_IH
___REJ_IY
___REJ_JH
___REJ_K
___REJ_L
___REJ_M
___REJ_N
___REJ_NG
___REJ_OW
___REJ_OY
___REJ_P
___REJ_R
___REJ_S
___REJ_SH
___REJ_T
___REJ_TH
___REJ_UH
___REJ_UW
___REJ_V
___REJ_W
___REJ_Y
___REJ_Z
___REJ_ZH
RIGHT
TEST
TURN
2014-06-07 13:18:13.646 VoiceSnap[405:5d0b] Recognition loop has started
NSError *err = [languageGen generateLanguageModelFromArray:self.languageModel withFilesNamed:name forAcousticModelAtPath:[AcousticModel pathToModel:@"AcousticModelEnglish"]];
//NSError *err = [languageGen generateRejectingLanguageModelFromArray:self.languageModel withFilesNamed:name withOptionalExclusions:nil usingVowelsOnly:FALSE withWeight:nil forAcousticModelAtPath:[AcousticModel pathToModel:@"AcousticModelEnglish"]];
I also tried setting the weight to [NSNumber numberWithFloat:0.1];
Neither seemed to help.
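For reference, the weight variant only changes the withWeight: argument of the same Rejecto call shown above:

NSError *err = [languageGen generateRejectingLanguageModelFromArray:self.languageModel
                                                     withFilesNamed:name
                                             withOptionalExclusions:nil
                                                    usingVowelsOnly:FALSE
                                                         withWeight:[NSNumber numberWithFloat:0.1]
                                             forAcousticModelAtPath:[AcousticModel pathToModel:@"AcousticModelEnglish"]];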
Here’s the log with verbosePocketsphinx set:
2014-06-07 13:56:15.588 VoiceSnap[485:60b] [Quincy] WARNING: Detecting crashes is NOT enabled due to running the app with a debugger attached.
2014-06-07 13:56:15.756 VoiceSnap[485:60b] Starting OpenEars logging for OpenEars version 1.65 on 64-bit device: iPhone running iOS version: 7.100000
2014-06-07 13:56:15.760 VoiceSnap[485:60b] Normalized array contains the following entries:
(
“VOICE SNAP TAKE A PICTURE”,
“VOICE SNAP TAKE A PHOTO”,
“VOICE SNAP A PHOTO”,
“TAKE A PHOTO”,
“TAKE A PICTURE”
)
2014-06-07 13:56:15.895 VoiceSnap[485:60b] I’m done running performDictionaryLookup and it took 0.051399 seconds
2014-06-07 13:56:15.902 VoiceSnap[485:60b] Starting dynamic language model generation
2014-06-07 13:56:15.909 VoiceSnap[485:60b] Able to open /var/mobile/Applications/63FFED11-8C69-4C61-B5EA-2DCEB385E65F/Library/Caches/PictureCommands.corpus for reading
2014-06-07 13:56:15.911 VoiceSnap[485:60b] Able to open /var/mobile/Applications/63FFED11-8C69-4C61-B5EA-2DCEB385E65F/Library/Caches/PictureCommands_pipe.txt for writing
2014-06-07 13:56:15.911 VoiceSnap[485:60b] Starting text2wfreq_impl
2014-06-07 13:56:15.918 VoiceSnap[485:60b] Done with text2wfreq_impl
2014-06-07 13:56:15.919 VoiceSnap[485:60b] Able to open /var/mobile/Applications/63FFED11-8C69-4C61-B5EA-2DCEB385E65F/Library/Caches/PictureCommands_pipe.txt for reading.
2014-06-07 13:56:15.920 VoiceSnap[485:60b] Able to open /var/mobile/Applications/63FFED11-8C69-4C61-B5EA-2DCEB385E65F/Library/Caches/PictureCommands.vocab for reading.
2014-06-07 13:56:15.921 VoiceSnap[485:60b] Starting wfreq2vocab
2014-06-07 13:56:15.923 VoiceSnap[485:60b] Done with wfreq2vocab
2014-06-07 13:56:15.924 VoiceSnap[485:60b] Starting text2idngram
2014-06-07 13:56:15.937 VoiceSnap[485:60b] Done with text2idngram
2014-06-07 13:56:15.942 VoiceSnap[485:60b] Starting idngram2lm
2014-06-07 13:56:16.009 VoiceSnap[485:60b] Done with idngram2lm
2014-06-07 13:56:16.009 VoiceSnap[485:60b] Starting sphinx_lm_convert
2014-06-07 13:56:16.018 VoiceSnap[485:60b] Finishing sphinx_lm_convert
2014-06-07 13:56:16.021 VoiceSnap[485:60b] Done creating language model with CMUCLMTK in 0.118209 seconds.
2014-06-07 13:56:16.022 VoiceSnap[485:60b] Starting sphinx_lm_convert
2014-06-07 13:56:16.028 VoiceSnap[485:60b] Finishing sphinx_lm_convert
2014-06-07 13:56:16.030 VoiceSnap[485:60b] I’m done running dynamic language model generation and it took 423867376.030317 seconds
2014-06-07 13:56:16.050 VoiceSnap[485:60b] User gave mic permission for this app.
2014-06-07 13:56:16.051 VoiceSnap[485:60b] User gave mic permission for this app.
2014-06-07 13:56:16.052 VoiceSnap[485:60b] Leaving sample rate at the default of 16000.
2014-06-07 13:56:16.053 VoiceSnap[485:60b] The audio session has never been initialized so we will do that now.
2014-06-07 13:56:16.054 VoiceSnap[485:60b] Checking and resetting all audio session settings.
2014-06-07 13:56:16.055 VoiceSnap[485:60b] audioCategory is incorrect, we will change it.
2014-06-07 13:56:16.056 VoiceSnap[485:60b] audioCategory is now on the correct setting of kAudioSessionCategory_PlayAndRecord.
2014-06-07 13:56:16.056 VoiceSnap[485:60b] bluetoothInput is incorrect, we will change it.
2014-06-07 13:56:16.057 VoiceSnap[485:60b] bluetooth input is now on the correct setting of 1.
2014-06-07 13:56:16.059 VoiceSnap[485:60b] Output Device: ReceiverAndMicrophone.
2014-06-07 13:56:16.060 VoiceSnap[485:60b] categoryDefaultToSpeaker is incorrect, we will change it.
2014-06-07 13:56:16.061 VoiceSnap[485:60b] CategoryDefaultToSpeaker is now on the correct setting of 1.
2014-06-07 13:56:16.062 VoiceSnap[485:60b] preferredBufferSize is incorrect, we will change it.
2014-06-07 13:56:16.062 VoiceSnap[485:60b] PreferredBufferSize is now on the correct setting of 0.128000.
2014-06-07 13:56:16.063 VoiceSnap[485:60b] preferredSampleRateCheck is incorrect, we will change it.
2014-06-07 13:56:16.064 VoiceSnap[485:60b] preferred hardware sample rate is now on the correct setting of 16000.000000.
2014-06-07 13:56:16.103 VoiceSnap[485:60b] AudioSessionManager startAudioSession has reached the end of the initialization.
2014-06-07 13:56:16.104 VoiceSnap[485:60b] Exiting startAudioSession.
2014-06-07 13:56:16.105 VoiceSnap[485:5d0b] setSecondsOfSilence value of 0.000000 was too large or too small or was NULL, using default of 0.700000.
2014-06-07 13:56:16.107 VoiceSnap[485:5d0b] Project has these words or phrases in its dictionary:
___REJ_AA
___REJ_AE
___REJ_AO
___REJ_AW
___REJ_AY
___REJ_B
___REJ_CH
___REJ_D
___REJ_DH
___REJ_EH
___REJ_ER
___REJ_F
___REJ_G
___REJ_HH
___REJ_IH
___REJ_IY
___REJ_JH
___REJ_K
___REJ_L
___REJ_M
___REJ_N
___REJ_NG
___REJ_OW
___REJ_OY
___REJ_P
___REJ_R
___REJ_S
___REJ_SH
___REJ_T
___REJ_TH
___REJ_UH
___REJ_UW
___REJ_V
___REJ_W
___REJ_Y
___REJ_Z
___REJ_ZH
A
A(2)
PHOTO
PICTURE
SNAP
TAKE
VOICE
2014-06-07 13:56:16.107 VoiceSnap[485:5d0b] Recognition loop has started
INFO: file_omitted(0): Parsing command line:
\
-lm /var/mobile/Applications/63FFED11-8C69-4C61-B5EA-2DCEB385E65F/Library/Caches/PictureCommands.DMP \
-beam 1e-96 \
-dict /var/mobile/Applications/63FFED11-8C69-4C61-B5EA-2DCEB385E65F/Library/Caches/PictureCommands.dic \
-fwdflatbeam 1e-128 \
-hmm /var/mobile/Applications/63FFED11-8C69-4C61-B5EA-2DCEB385E65F/VoiceSnap.app/AcousticModelEnglish.bundle \
-lw 6.500000 \
-samprate 16000
Current configuration:
[NAME] [DEFLT] [VALUE]
-agc none none
-agcthresh 2.0 2.000000e+00
-alpha 0.97 9.700000e-01
-argfile
-ascale 20.0 2.000000e+01
-aw 1 1
-backtrace no no
-beam 1e-48 1.000000e-96
-bestpath yes yes
-bestpathlw 9.5 9.500000e+00
-bghist no no
-ceplen 13 13
-cmn current current
-cmninit 8.0 8.0
-compallsen no no
-debug 0
-dict /var/mobile/Applications/63FFED11-8C69-4C61-B5EA-2DCEB385E65F/Library/Caches/PictureCommands.dic
-dictcase no no
-dither no no
-doublebw no no
-ds 1 1
-fdict
-feat 1s_c_d_dd 1s_c_d_dd
-featparams
-fillprob 1e-8 1.000000e-08
-frate 100 100
-fsg
-fsgusealtpron yes yes
-fsgusefiller yes yes
-fwdflat yes yes
-fwdflatbeam 1e-64 1.000000e-128
-fwdflatefwid 4 4
-fwdflatlw 8.5 8.500000e+00
-fwdflatsfwin 25 25
-fwdflatwbeam 7e-29 7.000000e-29
-fwdtree yes yes
-hmm /var/mobile/Applications/63FFED11-8C69-4C61-B5EA-2DCEB385E65F/VoiceSnap.app/AcousticModelEnglish.bundle
-input_endian little little
-jsgf
-kdmaxbbi -1 -1
-kdmaxdepth 0 0
-kdtree
-latsize 5000 5000
-lda
-ldadim 0 0
-lextreedump 0 0
-lifter 0 0
-lm /var/mobile/Applications/63FFED11-8C69-4C61-B5EA-2DCEB385E65F/Library/Caches/PictureCommands.DMP
-lmctl
-lmname default default
-logbase 1.0001 1.000100e+00
-logfn
-logspec no no
-lowerf 133.33334 1.333333e+02
-lpbeam 1e-40 1.000000e-40
-lponlybeam 7e-29 7.000000e-29
-lw 6.5 6.500000e+00
-maxhmmpf -1 -1
-maxnewoov 20 20
-maxwpf -1 -1
-mdef
-mean
-mfclogdir
-min_endfr 0 0
-mixw
-mixwfloor 0.0000001 1.000000e-07
-mllr
-mmap yes yes
-ncep 13 13
-nfft 512 512
-nfilt 40 40
-nwpen 1.0 1.000000e+00
-pbeam 1e-48 1.000000e-48
-pip 1.0 1.000000e+00
-pl_beam 1e-10 1.000000e-10
-pl_pbeam 1e-5 1.000000e-05
-pl_window 0 0
-rawlogdir
-remove_dc no no
-round_filters yes yes
-samprate 16000 1.600000e+04
-seed -1 -1
-sendump
-senlogdir
-senmgau
-silprob 0.005 5.000000e-03
-smoothspec no no
-svspec
-tmat
-tmatfloor 0.0001 1.000000e-04
-topn 4 4
-topn_beam 0 0
-toprule
-transform legacy legacy
-unit_area yes yes
-upperf 6855.4976 6.855498e+03
-usewdphones no no
-uw 1.0 1.000000e+00
-var
-varfloor 0.0001 1.000000e-04
-varnorm no no
-verbose no no
-warp_params
-warp_type inverse_linear inverse_linear
-wbeam 7e-29 7.000000e-29
-wip 0.65 6.500000e-01
-wlen 0.025625 2.562500e-02
INFO: file_omitted(0): Parsing command line:
\
-nfilt 20 \
-lowerf 1 \
-upperf 4000 \
-wlen 0.025 \
-transform dct \
-round_filters no \
-remove_dc yes \
-svspec 0-12/13-25/26-38 \
-feat 1s_c_d_dd \
-agc none \
-cmn current \
-cmninit 47 \
-varnorm no
Current configuration:
[NAME] [DEFLT] [VALUE]
-agc none none
-agcthresh 2.0 2.000000e+00
-alpha 0.97 9.700000e-01
-ceplen 13 13
-cmn current current
-cmninit 8.0 47
-dither no no
-doublebw no no
-feat 1s_c_d_dd 1s_c_d_dd
-frate 100 100
-input_endian little little
-lda
-ldadim 0 0
-lifter 0 0
-logspec no no
-lowerf 133.33334 1.000000e+00
-ncep 13 13
-nfft 512 512
-nfilt 40 20
-remove_dc no yes
-round_filters yes no
-samprate 16000 1.600000e+04
-seed -1 -1
-smoothspec no no
-svspec 0-12/13-25/26-38
-transform legacy dct
-unit_area yes yes
-upperf 6855.4976 4.000000e+03
-varnorm no no
-verbose no no
-warp_params
-warp_type inverse_linear inverse_linear
-wlen 0.025625 2.500000e-02
INFO: file_omitted(0): Parsed model-specific feature parameters from /var/mobile/Applications/63FFED11-8C69-4C61-B5EA-2DCEB385E65F/VoiceSnap.app/AcousticModelEnglish.bundle/feat.params
INFO: file_omitted(0): Initializing feature stream to type: '1s_c_d_dd', ceplen=13, CMN='current', VARNORM='no', AGC='none'
INFO: file_omitted(0): mean[0]= 12.00, mean[1..12]= 0.0
INFO: file_omitted(0): Using subvector specification 0-12/13-25/26-38
INFO: file_omitted(0): Reading model definition: /var/mobile/Applications/63FFED11-8C69-4C61-B5EA-2DCEB385E65F/VoiceSnap.app/AcousticModelEnglish.bundle/mdef
INFO: file_omitted(0): Found byte-order mark BMDF, assuming this is a binary mdef file
INFO: file_omitted(0): Reading binary model definition: /var/mobile/Applications/63FFED11-8C69-4C61-B5EA-2DCEB385E65F/VoiceSnap.app/AcousticModelEnglish.bundle/mdef
INFO: file_omitted(0): 50 CI-phone, 143047 CD-phone, 3 emitstate/phone, 150 CI-sen, 5150 Sen, 27135 Sen-Seq
INFO: file_omitted(0): Reading HMM transition probability matrices: /var/mobile/Applications/63FFED11-8C69-4C61-B5EA-2DCEB385E65F/VoiceSnap.app/AcousticModelEnglish.bundle/transition_matrices
INFO: file_omitted(0): Attempting to use SCHMM computation module
INFO: file_omitted(0): Reading mixture gaussian parameter: /var/mobile/Applications/63FFED11-8C69-4C61-B5EA-2DCEB385E65F/VoiceSnap.app/AcousticModelEnglish.bundle/means
INFO: file_omitted(0): 1 codebook, 3 feature, size:
INFO: file_omitted(0): 256x13
INFO: file_omitted(0): 256x13
INFO: file_omitted(0): 256x13
INFO: file_omitted(0): Reading mixture gaussian parameter: /var/mobile/Applications/63FFED11-8C69-4C61-B5EA-2DCEB385E65F/VoiceSnap.app/AcousticModelEnglish.bundle/variances
INFO: file_omitted(0): 1 codebook, 3 feature, size:
INFO: file_omitted(0): 256x13
INFO: file_omitted(0): 256x13
INFO: file_omitted(0): 256x13
INFO: file_omitted(0): 0 variance values floored
INFO: file_omitted(0): Loading senones from dump file /var/mobile/Applications/63FFED11-8C69-4C61-B5EA-2DCEB385E65F/VoiceSnap.app/AcousticModelEnglish.bundle/sendump
INFO: file_omitted(0): BEGIN FILE FORMAT DESCRIPTION
INFO: file_omitted(0): Using memory-mapped I/O for senones
INFO: file_omitted(0): Maximum top-N: 4 Top-N beams: 0 0 0
INFO: file_omitted(0): Allocating 4151 * 32 bytes (129 KiB) for word entries
INFO: file_omitted(0): Reading main dictionary: /var/mobile/Applications/63FFED11-8C69-4C61-B5EA-2DCEB385E65F/Library/Caches/PictureCommands.dic
INFO: file_omitted(0): Allocated 0 KiB for strings, 0 KiB for phones
INFO: file_omitted(0): 44 words read
INFO: file_omitted(0): Reading filler dictionary: /var/mobile/Applications/63FFED11-8C69-4C61-B5EA-2DCEB385E65F/VoiceSnap.app/AcousticModelEnglish.bundle/noisedict
INFO: file_omitted(0): Allocated 0 KiB for strings, 0 KiB for phones
INFO: file_omitted(0): 11 words read
INFO: file_omitted(0): Building PID tables for dictionary
INFO: file_omitted(0): Allocating 50^3 * 2 bytes (244 KiB) for word-initial triphones
INFO: file_omitted(0): Allocated 60400 bytes (58 KiB) for word-final triphones
INFO: file_omitted(0): Allocated 60400 bytes (58 KiB) for single-phone word triphones
ERROR: "file_omitted", line 0: File /var/mobile/Applications/63FFED11-8C69-4C61-B5EA-2DCEB385E65F/Library/Caches/PictureCommands.DMP not found
ERROR: "file_omitted", line 0: Dump file /var/mobile/Applications/63FFED11-8C69-4C61-B5EA-2DCEB385E65F/Library/Caches/PictureCommands.DMP not found
ERROR: "file_omitted", line 0: Failed to read language model file: /var/mobile/Applications/63FFED11-8C69-4C61-B5EA-2DCEB385E65F/Library/Caches/PictureCommands.DMP
ERROR: "file_omitted", line 0: File /var/mobile/Applications/63FFED11-8C69-4C61-B5EA-2DCEB385E65F/Library/Caches/PictureCommands.DMP not found
ERROR: "file_omitted", line 0: Dump file /var/mobile/Applications/63FFED11-8C69-4C61-B5EA-2DCEB385E65F/Library/Caches/PictureCommands.DMP not found
ERROR: "file_omitted", line 0: Failed to read language model file: /var/mobile/Applications/63FFED11-8C69-4C61-B5EA-2DCEB385E65F/Library/Caches/PictureCommands.DMP
This log line is the underlying cause of the DMP not being found:
Starting OpenEars logging for OpenEars version 1.65
It relates to this item from the FAQ:
Q: I’m having an issue with Rejecto, or with an app that Rejecto is added to.
A: Please make sure you are using OpenEars version 1.66 or newer.
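A quick way to confirm that this is what is happening (a hypothetical check, not part of the original code) is to verify that the generated files actually exist before handing them to startListening:

// Hypothetical sanity check: both paths come from the language model generation step above.
NSFileManager *fileManager = [NSFileManager defaultManager];
if (![fileManager fileExistsAtPath:lmPath] || ![fileManager fileExistsAtPath:dicPath]) {
    NSLog(@"Missing generated file. LM: %@ Dictionary: %@", lmPath, dicPath);
} else {
    [self.pocketsphinxController startListeningWithLanguageModelAtPath:lmPath dictionaryAtPath:dicPath acousticModelAtPath:[AcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:NO];
}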
Besides Rejecto, what’s the best way to achieve recognition of just a few commands? If there is background noise or other people are talking, my app has a really hard time detecting whether the command was spoken. That makes sense to me, because the framework is trying to parse everything that has been spoken, but is there a way to search the audio for very specific phrases such as my commands, so that even if other people are talking or there is music in the background it will still pick my phrase out of the audio? I’m not exactly sure how the speech recognition algorithm works, so this may be a dumb question.
Thanks,
Derek
There isn’t a solution with the exact features you are asking for, but you might want to try using a grammar instead of a language model (Rejecto isn’t yet compatible with this approach): /2014/04/10/openears-1-7-introducing-dynamic-grammar-generation/
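Very roughly, the grammar approach looks like the sketch below. The method name and the grammar dictionary keys are taken from that announcement post, so please verify them there rather than treating this as drop-in code:

// Sketch only: ThisWillBeSaidOnce / OneOfTheseWillBeSaidOnce are grammar keys introduced in 1.7.
NSDictionary *grammar = @{
    ThisWillBeSaidOnce : @[
        @{ OneOfTheseWillBeSaidOnce : @[@"TAKE A PICTURE", @"TAKE A PHOTO", @"VOICE SNAP TAKE A PICTURE"] }
    ]
};

NSError *err = [lmGenerator generateGrammarFromDictionary:grammar
                                           withFilesNamed:name
                                   forAcousticModelAtPath:[AcousticModel pathToModel:@"AcousticModelEnglish"]];

// Retrieve the generated grammar and dictionary paths from the result the same way as for a
// language model, then start listening with languageModelIsJSGF set to TRUE:
[self.pocketsphinxController startListeningWithLanguageModelAtPath:grammarPath // placeholder for the generated .gram path
                                                   dictionaryAtPath:dicPath
                                                acousticModelAtPath:[AcousticModel pathToModel:@"AcousticModelEnglish"]
                                                languageModelIsJSGF:TRUE];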