Forum Replies Created
asadullah797 (Participant)
Thank you for your quick response.
I will figure it out myself after fine-tuning, but I have one serious question.
I have created a grammar/ruleset, but I am not sure whether it is legal/valid/possible.
Please have a look at it if you have time:
NSDictionary *grammar = @{
    ThisWillBeSaidOnce : @[
        @{ OneOfTheseWillBeSaidOnce : @[@"ONE",
                                        @"TWO",
                                        @"THREE",
                                        @"FOUR",
                                        @"FIVE",
                                        @"SIX",
                                        @"SEVEN",
                                        @"EIGHT",
                                        @"NINE",
                                        @"TEN",
                                        @"ELEVEN",
                                        @"TWELVE",
                                        @"THIRTEEN",
                                        @"FOURTEEN",
                                        @"FIFTEEN",
                                        @"SIXTEEN",
                                        @"SEVENTEEN",
                                        @"EIGHTEEN",
                                        @"NINETEEN",
                                        @"TWENTY"]
        }
    ]
};

In my application's use case there are some isolated words which can be recognized perfectly.
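For context, here is a minimal sketch of how a grammar dictionary of this shape is typically handed to the generator, using the RuleORama generateFastGrammarFromDictionary call that appears later in this thread. The languageModelGenerator instance and the acoustic model name are assumed from the standard OpenEars tutorial code, and the word list is shortened for brevity.

// Sketch only: assumes the RuleORama generateFastGrammarFromDictionary API quoted later in this thread.
OELanguageModelGenerator *languageModelGenerator = [[OELanguageModelGenerator alloc] init];
NSDictionary *grammar = @{
    ThisWillBeSaidOnce : @[
        @{ OneOfTheseWillBeSaidOnce : @[@"ONE", @"TWO", @"THREE"] } // shortened list for the example
    ]
};
NSError *error = [languageModelGenerator generateFastGrammarFromDictionary:grammar withFilesNamed:@"FirstOpenEarsDynamicLanguageModel" forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];
if(error) {
    NSLog(@"Grammar generation reported error %@", [error description]);
}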

asadullah797 (Participant)
Hi Halle,
After changing the code as you advised, it gives me the following error:
2015-02-25 23:01:15.443 VoiceTesting[3064:174915] Error: you have invoked the method:
startListeningWithLanguageModelAtPath:(NSString *)languageModelPath dictionaryAtPath:(NSString *)dictionaryPath acousticModelAtPath:(NSString *)acousticModelPath languageModelIsJSGF:(BOOL)languageModelIsJSGF
with a languageModelPath which is nil. If your call to OELanguageModelGenerator did not return an error when you generated this language model, that means the correct path to your language model that you should pass to this method’s languageModelPath argument is as follows:
NSString *correctPathToMyLanguageModelFile = [NSString stringWithFormat:@"%@/TheNameIChoseForMyLanguageModelAndDictionaryFile.%@",[NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES) objectAtIndex:0],@"DMP"];
Feel free to copy and paste this code for your path to your language model, but remember to replace the part that says "TheNameIChoseForMyLanguageModelAndDictionaryFile" with the name you actually chose for your language model and dictionary file or you will get this error again.
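As an illustration only, here is what that placeholder replacement could look like if the file name used elsewhere in this thread ("FirstOpenEarsDynamicLanguageModel") is the one that was chosen. Note that the .DMP extension is the one suggested by the error message, while the RuleORama grammar in the log further down is written out as a .gram file, so the pathToSuccessfullyGeneratedGrammarWithRequestedName: method shown later is likely the safer way to obtain the path.

// Hypothetical example: substitutes the file name used later in this thread into the
// path construction suggested by the error message above.
NSString *correctPathToMyLanguageModelFile = [NSString stringWithFormat:@"%@/FirstOpenEarsDynamicLanguageModel.%@",[NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES) objectAtIndex:0],@"DMP"];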

asadullah797 (Participant)
Hello Halle,
I hope you are doing well.
RuleORama works fine with OpenEars, but when I use it with RapidEars it stops working.
I think the general rule for RuleORama + RapidEars is to make the changes in the following segment of code; if these lines have no issue, then I fear the grammar format may be wrong.
Thank you for your consideration of this query. Here is the code snippet:

NSError *error = [languageModelGenerator generateFastGrammarFromDictionary:grammar withFilesNamed:@"FirstOpenEarsDynamicLanguageModel" forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];
if(error) {
    NSLog(@"Dynamic language generator reported error %@", [error description]);
} else {
    self.pathToFirstDynamicallyGeneratedLanguageModel = [languageModelGenerator pathToSuccessfullyGeneratedGrammarWithRequestedName:@"FirstOpenEarsDynamicLanguageModel"];
    self.pathToFirstDynamicallyGeneratedDictionary = [languageModelGenerator pathToSuccessfullyGeneratedDictionaryWithRequestedName:@"FirstOpenEarsDynamicLanguageModel"];
}
if(![OEPocketsphinxController sharedInstance].isListening) {
    [[OEPocketsphinxController sharedInstance] startRealtimeListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];
}
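One defensive variation, given the nil-path error reported earlier in this thread, is to check the generated paths before starting the listening session. This is only a sketch based on the calls already shown above, not code from the original post.

// Sketch: guard against nil paths before starting the RapidEars listening session.
if(self.pathToFirstDynamicallyGeneratedLanguageModel == nil || self.pathToFirstDynamicallyGeneratedDictionary == nil) {
    NSLog(@"Grammar or dictionary path is nil; check the generator error before starting listening.");
} else if(![OEPocketsphinxController sharedInstance].isListening) {
    [[OEPocketsphinxController sharedInstance] startRealtimeListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];
}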

asadullah797 (Participant)
Thank you for your response.
I think you are asking about these lines of code:

self.pathToFirstDynamicallyGeneratedLanguageModel = [languageModelGenerator pathToSuccessfullyGeneratedGrammarWithRequestedName:@"FirstOpenEarsDynamicLanguageModel"];
self.pathToFirstDynamicallyGeneratedDictionary = [languageModelGenerator pathToSuccessfullyGeneratedDictionaryWithRequestedName:@"FirstOpenEarsDynamicLanguageModel"];

Best Regards
Asad

asadullah797 (Participant)
Hello Halle,
Thank you for your answer.
But I am using RuleORama, and it defines this function:
generateFastGrammarFromDictionary
I think it is a RuleORama function, not the stock OpenEars grammar tool.

Best Regards
Asad

asadullah797 (Participant)
I think this might be the issue:
The file you’ve sent to the decoder appears to be a JSGF grammar based on its naming, but you have not set languageModelIsJSGF: to TRUE. If you are experiencing recognition issues, there is a good chance that this is the reason for it.
But I am not sure.
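For reference, languageModelIsJSGF: is a parameter of the stock startListeningWithLanguageModelAtPath: method whose signature is quoted earlier in this thread; a hedged sketch of setting it follows. Whether this flag is relevant at all to a RuleORama-generated .gram file used with RapidEars' startRealtimeListeningWithLanguageModelAtPath: (which takes no such argument) is exactly the open question here.

// Sketch only: uses the stock (non-RapidEars) method signature quoted earlier in this thread.
[[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:TRUE];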

asadullah797 (Participant)
Hello Halle,
This is the output with verbosePocketSphinxController turned off:

2015-02-24 23:42:04.916 VoiceTesting[5999:376051] Starting OpenEars logging for OpenEars version 2.03 on 32-bit device (or build): iPhone Simulator running iOS version: 8.100000
2015-02-24 23:42:04.917 VoiceTesting[5999:376051] Creating shared instance of OEPocketsphinxController
2015-02-24 23:42:04.977 VoiceTesting[5999:376051] I’m done running performDictionaryLookup and it took 0.032844 seconds
2015-02-24 23:42:05.121 VoiceTesting[5999:376051] Starting dynamic language model generation
INFO: cmd_ln.c(702): Parsing command line:
sphinx_lm_convert \
-i /Users/asadullah/Library/Developer/CoreSimulator/Devices/8794BE45-3802-41B1-8AD7-D123B9DCA213/data/Containers/Data/Application/CED0A3D4-5C17-4F85-AF6F-FFFB27AD7637/Library/Caches/FirstOpenEarsDynamicLanguageModel.arpa \
-o /Users/asadullah/Library/Developer/CoreSimulator/Devices/8794BE45-3802-41B1-8AD7-D123B9DCA213/data/Containers/Data/Application/CED0A3D4-5C17-4F85-AF6F-FFFB27AD7637/Library/Caches/FirstOpenEarsDynamicLanguageModel.DMP
Current configuration:
[NAME] [DEFLT] [VALUE]
-case
-debug 0
-help no no
-i /Users/asadullah/Library/Developer/CoreSimulator/Devices/8794BE45-3802-41B1-8AD7-D123B9DCA213/data/Containers/Data/Application/CED0A3D4-5C17-4F85-AF6F-FFFB27AD7637/Library/Caches/FirstOpenEarsDynamicLanguageModel.arpa
-ienc
-ifmt
-logbase 1.0001 1.000100e+00
-mmap no no
-o /Users/asadullah/Library/Developer/CoreSimulator/Devices/8794BE45-3802-41B1-8AD7-D123B9DCA213/data/Containers/Data/Application/CED0A3D4-5C17-4F85-AF6F-FFFB27AD7637/Library/Caches/FirstOpenEarsDynamicLanguageModel.DMP
-oenc utf8 utf8
-ofmt
INFO: ngram_model_arpa.c(504): ngrams 1=218, 2=0, 3=0
INFO: ngram_model_arpa.c(137): Reading unigrams
INFO: ngram_model_arpa.c(543): 218 = #unigrams created
INFO: ngram_model_dmp.c(518): Building DMP model…
INFO: ngram_model_dmp.c(548): 218 = #unigrams created
2015-02-24 23:42:05.179 VoiceTesting[5999:376051] Done creating language model with CMUCLMTK in 0.056503 seconds.
2015-02-24 23:42:05.183 VoiceTesting[5999:376051] Generating fast grammar took 0.253022 seconds
INFO: cmd_ln.c(702): Parsing command line:
sphinx_lm_convert \
-i /Users/asadullah/Library/Developer/CoreSimulator/Devices/8794BE45-3802-41B1-8AD7-D123B9DCA213/data/Containers/Data/Application/CED0A3D4-5C17-4F85-AF6F-FFFB27AD7637/Library/Caches/FirstOpenEarsDynamicLanguageModel.arpa \
-o /Users/asadullah/Library/Developer/CoreSimulator/Devices/8794BE45-3802-41B1-8AD7-D123B9DCA213/data/Containers/Data/Application/CED0A3D4-5C17-4F85-AF6F-FFFB27AD7637/Library/Caches/FirstOpenEarsDynamicLanguageModel.DMP
Current configuration:
[NAME] [DEFLT] [VALUE]
-case
-debug 0
-help no no
-i /Users/asadullah/Library/Developer/CoreSimulator/Devices/8794BE45-3802-41B1-8AD7-D123B9DCA213/data/Containers/Data/Application/CED0A3D4-5C17-4F85-AF6F-FFFB27AD7637/Library/Caches/FirstOpenEarsDynamicLanguageModel.arpa
-ienc
-ifmt
-logbase 1.0001 1.000100e+00
-mmap no no
-o /Users/asadullah/Library/Developer/CoreSimulator/Devices/8794BE45-3802-41B1-8AD7-D123B9DCA213/data/Containers/Data/Application/CED0A3D4-5C17-4F85-AF6F-FFFB27AD7637/Library/Caches/FirstOpenEarsDynamicLanguageModel.DMP
-oenc utf8 utf8
-ofmt
INFO: ngram_model_arpa.c(504): ngrams 1=218, 2=0, 3=0
INFO: ngram_model_arpa.c(137): Reading unigrams
INFO: ngram_model_arpa.c(543): 218 = #unigrams created
INFO: ngram_model_dmp.c(518): Building DMP model…
INFO: ngram_model_dmp.c(548): 218 = #unigrams created
2015-02-24 23:42:05.232 VoiceTesting[5999:376051] Attempting to start listening session from startRealtimeListeningWithLanguageModelAtPath:
2015-02-24 23:42:05.233 VoiceTesting[5999:376051] User gave mic permission for this app.
2015-02-24 23:42:05.233 VoiceTesting[5999:376051] Valid setSecondsOfSilence value of 0.300000 will be used.
2015-02-24 23:42:05.234 VoiceTesting[5999:376089] Starting listening.
2015-02-24 23:42:05.234 VoiceTesting[5999:376089] about to set up audio session
2015-02-24 23:42:05.234 VoiceTesting[5999:376051] Successfully started listening session from startRealtimeListeningWithLanguageModelAtPath:
2015-02-24 23:42:05.291 VoiceTesting[5999:376089] done starting audio unit
2015-02-24 23:42:05.292 VoiceTesting[5999:376089] The file you’ve sent to the decoder appears to be a JSGF grammar based on its naming, but you have not set languageModelIsJSGF: to TRUE. If you are experiencing recognition issues, there is a good chance that this is the reason for it.
INFO: cmd_ln.c(702): Parsing command line:
\
-lm /Users/asadullah/Library/Developer/CoreSimulator/Devices/8794BE45-3802-41B1-8AD7-D123B9DCA213/data/Containers/Data/Application/CED0A3D4-5C17-4F85-AF6F-FFFB27AD7637/Library/Caches/FirstOpenEarsDynamicLanguageModel.gram \
-vad_prespeech 10 \
-vad_postspeech 30 \
-vad_threshold 3.000000 \
-remove_noise yes \
-remove_silence yes \
-bestpath yes \
-lw 6.500000 \
-dict /Users/asadullah/Library/Developer/CoreSimulator/Devices/8794BE45-3802-41B1-8AD7-D123B9DCA213/data/Containers/Data/Application/CED0A3D4-5C17-4F85-AF6F-FFFB27AD7637/Library/Caches/FirstOpenEarsDynamicLanguageModel.dic \
-hmm /Users/asadullah/Library/Developer/CoreSimulator/Devices/8794BE45-3802-41B1-8AD7-D123B9DCA213/data/Containers/Bundle/Application/3E95945A-77CA-4BBE-893F-5B016F32552F/VoiceTesting.app/AcousticModelEnglish.bundle
Current configuration:
[NAME] [DEFLT] [VALUE]
-agc none none
-agcthresh 2.0 2.000000e+00
-allphone
-allphone_ci no no
-alpha 0.97 9.700000e-01
-argfile
-ascale 20.0 2.000000e+01
-aw 1 1
-backtrace no no
-beam 1e-48 1.000000e-48
-bestpath yes yes
-bestpathlw 9.5 9.500000e+00
-bghist no no
-ceplen 13 13
-cmn current current
-cmninit 8.0 8.0
-compallsen no no
-debug 0
-dict /Users/asadullah/Library/Developer/CoreSimulator/Devices/8794BE45-3802-41B1-8AD7-D123B9DCA213/data/Containers/Data/Application/CED0A3D4-5C17-4F85-AF6F-FFFB27AD7637/Library/Caches/FirstOpenEarsDynamicLanguageModel.dic
-dictcase no no
-dither no no
-doublebw no no
-ds 1 1
-fdict
-feat 1s_c_d_dd 1s_c_d_dd
-featparams
-fillprob 1e-8 1.000000e-08
-frate 100 100
-fsg
-fsgusealtpron yes yes
-fsgusefiller yes yes
-fwdflat yes yes
-fwdflatbeam 1e-64 1.000000e-64
-fwdflatefwid 4 4
-fwdflatlw 8.5 8.500000e+00
-fwdflatsfwin 25 25
-fwdflatwbeam 7e-29 7.000000e-29
-fwdtree yes yes
-hmm /Users/asadullah/Library/Developer/CoreSimulator/Devices/8794BE45-3802-41B1-8AD7-D123B9DCA213/data/Containers/Bundle/Application/3E95945A-77CA-4BBE-893F-5B016F32552F/VoiceTesting.app/AcousticModelEnglish.bundle
-input_endian little little
-jsgf
-kdmaxbbi -1 -1
-kdmaxdepth 0 0
-kdtree
-keyphrase
-kws
-kws_plp 1e-1 1.000000e-01
-kws_threshold 1 1.000000e+00
-latsize 5000 5000
-lda
-ldadim 0 0
-lextreedump 0 0
-lifter 0 0
-lm /Users/asadullah/Library/Developer/CoreSimulator/Devices/8794BE45-3802-41B1-8AD7-D123B9DCA213/data/Containers/Data/Application/CED0A3D4-5C17-4F85-AF6F-FFFB27AD7637/Library/Caches/FirstOpenEarsDynamicLanguageModel.gram
-lmctl
-lmname
-logbase 1.0001 1.000100e+00
-logfn
-logspec no no
-lowerf 133.33334 1.333333e+02
-lpbeam 1e-40 1.000000e-40
-lponlybeam 7e-29 7.000000e-29
-lw 6.5 6.500000e+00
-maxhmmpf 10000 10000
-maxnewoov 20 20
-maxwpf -1 -1
-mdef
-mean
-mfclogdir
-min_endfr 0 0
-mixw
-mixwfloor 0.0000001 1.000000e-07
-mllr
-mmap yes yes
-ncep 13 13
-nfft 512 512
-nfilt 40 40
-nwpen 1.0 1.000000e+00
-pbeam 1e-48 1.000000e-48
-pip 1.0 1.000000e+00
-pl_beam 1e-10 1.000000e-10
-pl_pbeam 1e-5 1.000000e-05
-pl_window 0 0
-rawlogdir
-remove_dc no no
-remove_noise yes yes
-remove_silence yes yes
-round_filters yes yes
-samprate 16000 1.600000e+04
-seed -1 -1
-sendump
-senlogdir
-senmgau
-silprob 0.005 5.000000e-03
-smoothspec no no
-svspec
-tmat
-tmatfloor 0.0001 1.000000e-04
-topn 4 4
-topn_beam 0 0
-toprule
-transform legacy legacy
-unit_area yes yes
-upperf 6855.4976 6.855498e+03
-usewdphones no no
-uw 1.0 1.000000e+00
-vad_postspeech 50 30
-vad_prespeech 10 10
-vad_threshold 2.0 3.000000e+00
-var
-varfloor 0.0001 1.000000e-04
-varnorm no no
-verbose no no
-warp_params
-warp_type inverse_linear inverse_linear
-wbeam 7e-29 7.000000e-29
-wip 0.65 6.500000e-01
-wlen 0.025625 2.562500e-02
INFO: cmd_ln.c(702): Parsing command line:
\
-nfilt 25 \
-lowerf 130 \
-upperf 6800 \
-feat 1s_c_d_dd \
-svspec 0-12/13-25/26-38 \
-agc none \
-cmn current \
-varnorm no \
-transform dct \
-lifter 22 \
-cmninit 40
Current configuration:
[NAME] [DEFLT] [VALUE]
-agc none none
-agcthresh 2.0 2.000000e+00
-alpha 0.97 9.700000e-01
-ceplen 13 13
-cmn current current
-cmninit 8.0 40
-dither no no
-doublebw no no
-feat 1s_c_d_dd 1s_c_d_dd
-frate 100 100
-input_endian little little
-lda
-ldadim 0 0
-lifter 0 22
-logspec no no
-lowerf 133.33334 1.300000e+02
-ncep 13 13
-nfft 512 512
-nfilt 40 25
-remove_dc no no
-remove_noise yes yes
-remove_silence yes yes
-round_filters yes yes
-samprate 16000 1.600000e+04
-seed -1 -1
-smoothspec no no
-svspec 0-12/13-25/26-38
-transform legacy dct
-unit_area yes yes
-upperf 6855.4976 6.800000e+03
-vad_postspeech 50 30
-vad_prespeech 10 10
-vad_threshold 2.0 3.000000e+00
-varnorm no no
-verbose no no
-warp_params
-warp_type inverse_linear inverse_linear
-wlen 0.025625 2.562500e-02
INFO: acmod.c(252): Parsed model-specific feature parameters from /Users/asadullah/Library/Developer/CoreSimulator/Devices/8794BE45-3802-41B1-8AD7-D123B9DCA213/data/Containers/Bundle/Application/3E95945A-77CA-4BBE-893F-5B016F32552F/VoiceTesting.app/AcousticModelEnglish.bundle/feat.params
INFO: feat.c(715): Initializing feature stream to type: ‘1s_c_d_dd’, ceplen=13, CMN=’current’, VARNORM=’no’, AGC=’none’
INFO: cmn.c(143): mean[0]= 12.00, mean[1..12]= 0.0
INFO: acmod.c(171): Using subvector specification 0-12/13-25/26-38
INFO: mdef.c(518): Reading model definition: /Users/asadullah/Library/Developer/CoreSimulator/Devices/8794BE45-3802-41B1-8AD7-D123B9DCA213/data/Containers/Bundle/Application/3E95945A-77CA-4BBE-893F-5B016F32552F/VoiceTesting.app/AcousticModelEnglish.bundle/mdef
INFO: mdef.c(531): Found byte-order mark BMDF, assuming this is a binary mdef file
INFO: bin_mdef.c(336): Reading binary model definition: /Users/asadullah/Library/Developer/CoreSimulator/Devices/8794BE45-3802-41B1-8AD7-D123B9DCA213/data/Containers/Bundle/Application/3E95945A-77CA-4BBE-893F-5B016F32552F/VoiceTesting.app/AcousticModelEnglish.bundle/mdef
INFO: bin_mdef.c(516): 46 CI-phone, 168344 CD-phone, 3 emitstate/phone, 138 CI-sen, 6138 Sen, 32881 Sen-Seq
INFO: tmat.c(206): Reading HMM transition probability matrices: /Users/asadullah/Library/Developer/CoreSimulator/Devices/8794BE45-3802-41B1-8AD7-D123B9DCA213/data/Containers/Bundle/Application/3E95945A-77CA-4BBE-893F-5B016F32552F/VoiceTesting.app/AcousticModelEnglish.bundle/transition_matrices
INFO: acmod.c(124): Attempting to use SCHMM computation module
INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /Users/asadullah/Library/Developer/CoreSimulator/Devices/8794BE45-3802-41B1-8AD7-D123B9DCA213/data/Containers/Bundle/Application/3E95945A-77CA-4BBE-893F-5B016F32552F/VoiceTesting.app/AcousticModelEnglish.bundle/means
INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /Users/asadullah/Library/Developer/CoreSimulator/Devices/8794BE45-3802-41B1-8AD7-D123B9DCA213/data/Containers/Bundle/Application/3E95945A-77CA-4BBE-893F-5B016F32552F/VoiceTesting.app/AcousticModelEnglish.bundle/variances
INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(354): 0 variance values floored
INFO: s2_semi_mgau.c(904): Loading senones from dump file /Users/asadullah/Library/Developer/CoreSimulator/Devices/8794BE45-3802-41B1-8AD7-D123B9DCA213/data/Containers/Bundle/Application/3E95945A-77CA-4BBE-893F-5B016F32552F/VoiceTesting.app/AcousticModelEnglish.bundle/sendump
INFO: s2_semi_mgau.c(928): BEGIN FILE FORMAT DESCRIPTION
INFO: s2_semi_mgau.c(991): Rows: 512, Columns: 6138
INFO: s2_semi_mgau.c(1023): Using memory-mapped I/O for senones
INFO: s2_semi_mgau.c(1294): Maximum top-N: 4 Top-N beams: 0 0 0
INFO: dict.c(320): Allocating 5005 * 20 bytes (97 KiB) for word entries
INFO: dict.c(333): Reading main dictionary: /Users/asadullah/Library/Developer/CoreSimulator/Devices/8794BE45-3802-41B1-8AD7-D123B9DCA213/data/Containers/Data/Application/CED0A3D4-5C17-4F85-AF6F-FFFB27AD7637/Library/Caches/FirstOpenEarsDynamicLanguageModel.dic
INFO: dict.c(213): Allocated 56 KiB for strings, 59 KiB for phones
INFO: dict.c(336): 900 words read
INFO: dict.c(342): Reading filler dictionary: /Users/asadullah/Library/Developer/CoreSimulator/Devices/8794BE45-3802-41B1-8AD7-D123B9DCA213/data/Containers/Bundle/Application/3E95945A-77CA-4BBE-893F-5B016F32552F/VoiceTesting.app/AcousticModelEnglish.bundle/noisedict
INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
INFO: dict.c(345): 9 words read
INFO: dict2pid.c(396): Building PID tables for dictionary
INFO: dict2pid.c(406): Allocating 46^3 * 2 bytes (190 KiB) for word-initial triphones
INFO: dict2pid.c(132): Allocated 25576 bytes (24 KiB) for word-final triphones
INFO: dict2pid.c(196): Allocated 25576 bytes (24 KiB) for single-phone word triphones
INFO: ngram_model_arpa.c(79): No \data\ mark in LM file
ERROR: “ngram_model_dmp.c”, line 145: Wrong magic header size number 234a5347: /Users/asadullah/Library/Developer/CoreSimulator/Devices/8794BE45-3802-41B1-8AD7-D123B9DCA213/data/Containers/Data/Application/CED0A3D4-5C17-4F85-AF6F-FFFB27AD7637/Library/Caches/FirstOpenEarsDynamicLanguageModel.gram is not a dump file
2015-02-24 23:42:05.452 VoiceTesting[5999:376089] Error: it wasn’t possible to initialize the pocketsphinx decoder.
2015-02-24 23:42:05.503 VoiceTesting[5999:376051] Local callback: Setting up the continuous recognition loop has failed for the reason Error: it wasn’t possible to initialize the pocketsphinx decoder. Please turn on OELogging in order to troubleshoot this. If you need support with this issue, please turn on both OELogging and verbosePocketsphinx in order to get assistance., please turn on [OELogging startOpenEarsLogging] to learn more.