This headset generally works fine with my iPhone for playback, live voice, and recording.
Below are the logs (with both OpenEars logging and RapidEars verbose logging turned on).
2014-12-08 18:16:04.528 MyAppName[250:17184] Starting OpenEars logging for OpenEars version 2.0 on 32-bit device (or build): iPhone running iOS version: 8.100000
2014-12-08 18:16:04.536 MyAppName[250:17184] Starting dynamic language model generation
2014-12-08 18:16:04.609 MyAppName[250:17184] Done creating language model with CMUCLMTK in 0.072115 seconds.
2014-12-08 18:16:04.725 MyAppName[250:17184] I’m done running performDictionaryLookup and it took 0.086781 seconds
2014-12-08 18:16:04.733 MyAppName[250:17184] I’m done running dynamic language model generation and it took 0.202271 seconds
2014-12-08 18:16:04.734 MyAppName[250:17184] suspendListening
2014-12-08 18:16:04.735 MyAppName[250:17184] Creating shared instance of OEPocketsphinxController
2014-12-08 18:16:12.282 MyAppName[250:17184] startListening
2014-12-08 18:16:12.283 MyAppName[250:17184] Attempting to start listening session from startRealtimeListeningWithLanguageModelAtPath:
2014-12-08 18:16:12.303 MyAppName[250:17184] User gave mic permission for this app.
2014-12-08 18:16:12.304 MyAppName[250:17184] Valid setSecondsOfSilence value of 1.500000 will be used.
2014-12-08 18:16:12.305 MyAppName[250:17184] Successfully started listening session from startRealtimeListeningWithLanguageModelAtPath:
2014-12-08 18:16:12.306 MyAppName[250:17289] Starting listening.
2014-12-08 18:16:12.307 MyAppName[250:17289] about to set up audio session
2014-12-08 18:16:13.941 MyAppName[250:17300] Audio route has changed for the following reason:
2014-12-08 18:16:13.959 MyAppName[250:17300] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2014-12-08 18:16:13.967 MyAppName[250:17300] This is not a case in which OpenEars notifies of a route change. At the close of this function, the new audio route is ---BluetoothHFPBluetoothHFP---. The previous route before changing to this route was <AVAudioSessionRouteDescription: 0x1460e990,
inputs = (
"<AVAudioSessionPortDescription: 0x147014e0, type = BluetoothHFP; name = Powerbeats Wireless; UID = 04:88:E2:37:55:15-tsco; selectedDataSource = (null)>"
);
outputs = (
"<AVAudioSessionPortDescription: 0x1584a310, type = BluetoothHFP; name = Powerbeats Wireless; UID = 04:88:E2:37:55:15-tsco; selectedDataSource = (null)>"
)>.
2014-12-08 18:16:13.978 MyAppName[250:17300] Audio route has changed for the following reason:
2014-12-08 18:16:13.988 MyAppName[250:17300] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2014-12-08 18:16:14.002 MyAppName[250:17300] This is not a case in which OpenEars notifies of a route change. At the close of this function, the new audio route is ---BluetoothHFPBluetoothHFP---. The previous route before changing to this route was <AVAudioSessionRouteDescription: 0x14633e40,
inputs = (
"<AVAudioSessionPortDescription: 0x15856d20, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Bottom>"
);
outputs = (
"<AVAudioSessionPortDescription: 0x15864b70, type = Receiver; name = Receiver; UID = Built-In Receiver; selectedDataSource = (null)>"
)>.
2014-12-08 18:16:14.005 MyAppName[250:17289] done starting audio unit
INFO: cmd_ln.c(702): Parsing command line:
\
-lm /var/mobile/Containers/Data/Application/1F371ED4-3393-4462-8C8F-805328AB0229/Library/Caches/KokomotCommands.DMP \
-vad_threshold 1.500000 \
-remove_noise yes \
-remove_silence yes \
-bestpath yes \
-lw 6.500000 \
-dict /var/mobile/Containers/Data/Application/1F371ED4-3393-4462-8C8F-805328AB0229/Library/Caches/KokomotCommands.dic \
-hmm /private/var/mobile/Containers/Bundle/Application/36B1D4C2-AB0D-44BC-8059-34B83BB90229/MyAppName.app/AcousticModelEnglish.bundle
Current configuration:
[NAME] [DEFLT] [VALUE]
-agc none none
-agcthresh 2.0 2.000000e+00
-allphone
-allphone_ci no no
-alpha 0.97 9.700000e-01
-argfile
-ascale 20.0 2.000000e+01
-aw 1 1
-backtrace no no
-beam 1e-48 1.000000e-48
-bestpath yes yes
-bestpathlw 9.5 9.500000e+00
-bghist no no
-ceplen 13 13
-cmn current current
-cmninit 8.0 8.0
-compallsen no no
-debug 0
-dict /var/mobile/Containers/Data/Application/1F371ED4-3393-4462-8C8F-805328AB0229/Library/Caches/KokomotCommands.dic
-dictcase no no
-dither no no
-doublebw no no
-ds 1 1
-fdict
-feat 1s_c_d_dd 1s_c_d_dd
-featparams
-fillprob 1e-8 1.000000e-08
-frate 100 100
-fsg
-fsgusealtpron yes yes
-fsgusefiller yes yes
-fwdflat yes yes
-fwdflatbeam 1e-64 1.000000e-64
-fwdflatefwid 4 4
-fwdflatlw 8.5 8.500000e+00
-fwdflatsfwin 25 25
-fwdflatwbeam 7e-29 7.000000e-29
-fwdtree yes yes
-hmm /private/var/mobile/Containers/Bundle/Application/36B1D4C2-AB0D-44BC-8059-34B83BB90229/MyAppName.app/AcousticModelEnglish.bundle
-input_endian little little
-jsgf
-kdmaxbbi -1 -1
-kdmaxdepth 0 0
-kdtree
-keyphrase
-kws
-kws_plp 1e-1 1.000000e-01
-kws_threshold 1 1.000000e+00
-latsize 5000 5000
-lda
-ldadim 0 0
-lextreedump 0 0
-lifter 0 0
-lm /var/mobile/Containers/Data/Application/1F371ED4-3393-4462-8C8F-805328AB0229/Library/Caches/KokomotCommands.DMP
-lmctl
-lmname
-logbase 1.0001 1.000100e+00
-logfn
-logspec no no
-lowerf 133.33334 1.333333e+02
-lpbeam 1e-40 1.000000e-40
-lponlybeam 7e-29 7.000000e-29
-lw 6.5 6.500000e+00
-maxhmmpf 10000 10000
-maxnewoov 20 20
-maxwpf -1 -1
-mdef
-mean
-mfclogdir
-min_endfr 0 0
-mixw
-mixwfloor 0.0000001 1.000000e-07
-mllr
-mmap yes yes
-ncep 13 13
-nfft 512 512
-nfilt 40 40
-nwpen 1.0 1.000000e+00
-pbeam 1e-48 1.000000e-48
-pip 1.0 1.000000e+00
-pl_beam 1e-10 1.000000e-10
-pl_pbeam 1e-5 1.000000e-05
-pl_window 0 0
-rawlogdir
-remove_dc no no
-remove_noise yes yes
-remove_silence yes yes
-round_filters yes yes
-samprate 16000 1.600000e+04
-seed -1 -1
-sendump
-senlogdir
-senmgau
-silprob 0.005 5.000000e-03
-smoothspec no no
-svspec
-tmat
-tmatfloor 0.0001 1.000000e-04
-topn 4 4
-topn_beam 0 0
-toprule
-transform legacy legacy
-unit_area yes yes
-upperf 6855.4976 6.855498e+03
-usewdphones no no
-uw 1.0 1.000000e+00
-vad_postspeech 50 50
-vad_prespeech 10 10
-vad_threshold 2.0 1.500000e+00
-var
-varfloor 0.0001 1.000000e-04
-varnorm no no
-verbose no no
-warp_params
-warp_type inverse_linear inverse_linear
-wbeam 7e-29 7.000000e-29
-wip 0.65 6.500000e-01
-wlen 0.025625 2.562500e-02
INFO: cmd_ln.c(702): Parsing command line:
\
-nfilt 25 \
-lowerf 130 \
-upperf 6800 \
-feat 1s_c_d_dd \
-svspec 0-12/13-25/26-38 \
-agc none \
-cmn current \
-varnorm no \
-transform dct \
-lifter 22 \
-cmninit 40
Current configuration:
[NAME] [DEFLT] [VALUE]
-agc none none
-agcthresh 2.0 2.000000e+00
-alpha 0.97 9.700000e-01
-ceplen 13 13
-cmn current current
-cmninit 8.0 40
-dither no no
-doublebw no no
-feat 1s_c_d_dd 1s_c_d_dd
-frate 100 100
-input_endian little little
-lda
-ldadim 0 0
-lifter 0 22
-logspec no no
-lowerf 133.33334 1.300000e+02
-ncep 13 13
-nfft 512 512
-nfilt 40 25
-remove_dc no no
-remove_noise yes yes
-remove_silence yes yes
-round_filters yes yes
-samprate 16000 1.600000e+04
-seed -1 -1
-smoothspec no no
-svspec 0-12/13-25/26-38
-transform legacy dct
-unit_area yes yes
-upperf 6855.4976 6.800000e+03
-vad_postspeech 50 50
-vad_prespeech 10 10
-vad_threshold 2.0 1.500000e+00
-varnorm no no
-verbose no no
-warp_params
-warp_type inverse_linear inverse_linear
-wlen 0.025625 2.562500e-02
INFO: acmod.c(252): Parsed model-specific feature parameters from /private/var/mobile/Containers/Bundle/Application/36B1D4C2-AB0D-44BC-8059-34B83BB90229/MyAppName.app/AcousticModelEnglish.bundle/feat.params
INFO: feat.c(715): Initializing feature stream to type: ‘1s_c_d_dd’, ceplen=13, CMN=’current’, VARNORM=’no’, AGC=’none’
INFO: cmn.c(143): mean[0]= 12.00, mean[1..12]= 0.0
INFO: acmod.c(171): Using subvector specification 0-12/13-25/26-38
INFO: mdef.c(518): Reading model definition: /private/var/mobile/Containers/Bundle/Application/36B1D4C2-AB0D-44BC-8059-34B83BB90229/MyAppName.app/AcousticModelEnglish.bundle/mdef
INFO: mdef.c(531): Found byte-order mark BMDF, assuming this is a binary mdef file
INFO: bin_mdef.c(336): Reading binary model definition: /private/var/mobile/Containers/Bundle/Application/36B1D4C2-AB0D-44BC-8059-34B83BB90229/MyAppName.app/AcousticModelEnglish.bundle/mdef
INFO: bin_mdef.c(516): 46 CI-phone, 168344 CD-phone, 3 emitstate/phone, 138 CI-sen, 6138 Sen, 32881 Sen-Seq
INFO: tmat.c(206): Reading HMM transition probability matrices: /private/var/mobile/Containers/Bundle/Application/36B1D4C2-AB0D-44BC-8059-34B83BB90229/MyAppName.app/AcousticModelEnglish.bundle/transition_matrices
INFO: acmod.c(124): Attempting to use SCHMM computation module
INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /private/var/mobile/Containers/Bundle/Application/36B1D4C2-AB0D-44BC-8059-34B83BB90229/MyAppName.app/AcousticModelEnglish.bundle/means
INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /private/var/mobile/Containers/Bundle/Application/36B1D4C2-AB0D-44BC-8059-34B83BB90229/MyAppName.app/AcousticModelEnglish.bundle/variances
INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(354): 0 variance values floored
INFO: s2_semi_mgau.c(904): Loading senones from dump file /private/var/mobile/Containers/Bundle/Application/36B1D4C2-AB0D-44BC-8059-34B83BB90229/MyAppName.app/AcousticModelEnglish.bundle/sendump
INFO: s2_semi_mgau.c(928): BEGIN FILE FORMAT DESCRIPTION
INFO: s2_semi_mgau.c(991): Rows: 512, Columns: 6138
INFO: s2_semi_mgau.c(1023): Using memory-mapped I/O for senones
INFO: s2_semi_mgau.c(1294): Maximum top-N: 4 Top-N beams: 0 0 0
INFO: dict.c(320): Allocating 4125 * 20 bytes (80 KiB) for word entries
INFO: dict.c(333): Reading main dictionary: /var/mobile/Containers/Data/Application/1F371ED4-3393-4462-8C8F-805328AB0229/Library/Caches/KokomotCommands.dic
INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
INFO: dict.c(336): 20 words read
INFO: dict.c(342): Reading filler dictionary: /private/var/mobile/Containers/Bundle/Application/36B1D4C2-AB0D-44BC-8059-34B83BB90229/MyAppName.app/AcousticModelEnglish.bundle/noisedict
INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
INFO: dict.c(345): 9 words read
INFO: dict2pid.c(396): Building PID tables for dictionary
INFO: dict2pid.c(406): Allocating 46^3 * 2 bytes (190 KiB) for word-initial triphones
INFO: dict2pid.c(132): Allocated 25576 bytes (24 KiB) for word-final triphones
INFO: dict2pid.c(196): Allocated 25576 bytes (24 KiB) for single-phone word triphones
INFO: ngram_model_arpa.c(79): No \data\ mark in LM file
INFO: ngram_model_dmp.c(166): Will use memory-mapped I/O for LM file
INFO: ngram_model_dmp.c(220): ngrams 1=20, 2=36, 3=18
INFO: ngram_model_dmp.c(266): 20 = LM.unigrams(+trailer) read
INFO: ngram_model_dmp.c(312): 36 = LM.bigrams(+trailer) read
INFO: ngram_model_dmp.c(338): 18 = LM.trigrams read
INFO: ngram_model_dmp.c(363): 5 = LM.prob2 entries read
INFO: ngram_model_dmp.c(383): 4 = LM.bo_wt2 entries read
INFO: ngram_model_dmp.c(403): 3 = LM.prob3 entries read
INFO: ngram_model_dmp.c(431): 1 = LM.tseg_base entries read
INFO: ngram_model_dmp.c(487): 20 = ascii word strings read
INFO: ngram_search_fwdtree.c(99): 19 unique initial diphones
INFO: ngram_search_fwdtree.c(148): 0 root, 0 non-root channels, 10 single-phone words
INFO: ngram_search_fwdtree.c(186): Creating search tree
INFO: ngram_search_fwdtree.c(192): before: 0 root, 0 non-root channels, 10 single-phone words
INFO: ngram_search_fwdtree.c(326): after: max nonroot chan increased to 148
INFO: ngram_search_fwdtree.c(339): after: 19 root, 20 non-root channels, 9 single-phone words
INFO: ngram_search_fwdflat.c(157): fwdflat: min_ef_width = 4, max_sf_win = 25
2014-12-08 18:16:14.188 MyAppName[250:17289] There was no previous CMN value in the plist so we are using the fresh CMN value 42.000000.
2014-12-08 18:16:14.188 MyAppName[250:17289] Listening.
2014-12-08 18:16:14.190 MyAppName[250:17289] Project has these words or phrases in its dictionary:
BACK
DONE
… [OMITTING REST OF VOCAB] …
2014-12-08 18:16:14.191 MyAppName[250:17289] Recognition loop has started
2014-12-08 18:16:14.476 MyAppName[250:17184] resumeListening
2014-12-08 18:16:14.782 MyAppName[250:17184] pocketsphinxDidStartListening
2014-12-08 18:16:15.334 MyAppName[250:17184] resumeListening
2014-12-08 18:16:15.334 MyAppName[250:17184] Valid setSecondsOfSilence value of 1.500000 will be used.
2014-12-08 18:16:15.339 MyAppName[250:17184] suspendListening
2014-12-08 18:16:15.428 MyAppName[250:17184] pocketsphinxDidResumeRecognition
2014-12-08 18:16:15.429 MyAppName[250:17184] pocketsphinxDidSuspendRecognition
2014-12-08 18:16:17.068 MyAppName[250:17184] resumeListening
2014-12-08 18:16:17.068 MyAppName[250:17184] Valid setSecondsOfSilence value of 1.500000 will be used.
2014-12-08 18:16:17.069 MyAppName[250:17184] pocketsphinxDidResumeRecognition
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 0 words
2014-12-08 18:16:22.538 MyAppName[250:17184] suspendListening
2014-12-08 18:16:22.539 MyAppName[250:17184] pocketsphinxDidSuspendRecognition
What is causing the repeated suspend/resume in the timeframe in which you’re expecting speech?
Can you use your bluetooth device as input either with a tutorial app or the sample app? You can change the bundle ID of the sample app in its Info.plist property "Bundle identifier", and the volume output of the sample app should make it clear whether the bluetooth mic has input.
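If you set up your own test harness instead, a quick way to check is to poll OEPocketsphinxController's pocketsphinxInputLevel off the main thread, which is the property the sample app's dB label reads (a minimal sketch, assuming the sample app's usage of that property):

// Hedged sketch: poll the mic level for ten seconds, sample-app style.
// pocketsphinxInputLevel is assumed to behave as in the OpenEars sample code;
// a reading that never moves suggests the bluetooth mic is delivering no input.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    for (int i = 0; i < 40; i++) {
        Float32 level = [OEPocketsphinxController sharedInstance].pocketsphinxInputLevel;
        NSLog(@"Input level: %f", level);
        [NSThread sleepForTimeInterval:0.25];
    }
});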
OpenEars 2.0 works with my bluetooth devices and with this developer’s:
/forums/topic/small-bug-when-running-on-ios-8/#post-1023307
So if the headset follows standards and is able to use its mic input outside of calls (not every bluetooth headset can do that), it ought to work. Does it work with other 3rd-party apps which can use bluetooth mic input?
What is causing the repeated suspend/resume in the timeframe in which you’re expecting speech?
That’s expected. My app has a call-and-response UI, so it’s constantly suspending (when it plays audio) and resuming (when it needs to listen).
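Roughly, the pattern looks like this (a sketch in terms of OEPocketsphinxController's standard suspendRecognition/resumeRecognition calls, which is what my suspendListening/resumeListening log lines correspond to):

// Before playing an audio prompt:
[[OEPocketsphinxController sharedInstance] suspendRecognition];
// ...play the prompt...
// When it's time to listen for the user's response:
[[OEPocketsphinxController sharedInstance] resumeRecognition];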
For debugging, I set up a separate view controller in my app that lets me interactively enable OpenEars and play sounds from button presses. In that context as well, I’m getting neither input nor output from bluetooth. (It works fine with the phone or earbuds.)
Can you use your bluetooth device as input either with a tutorial app or the sample app?
No. I’ve got the sample app running (both with and without RapidEars) and it works fine with phone or earbuds, but not with bluetooth.
So if the headset follows standards and is able to use its mic input outside of calls (not every bluetooth headset can do that), it ought to work. Does it work with other 3rd-party apps which can use bluetooth mic input?
This is a well-known, fairly high-end headset. Sound quality is excellent, and it works fine (both input and output) with a variety of Apple and 3rd-party apps that I’ve tested it with.
I would love to hear that this is just something stupid I’m doing :)
I would love to hear that this is just something stupid I’m doing :)
I’m sure it isn’t, but the only area in which the framework can really affect bluetooth usage is during initialization, since bluetooth is a standard implemented at the hardware layer by Apple, and initialization is apparently going fine judging from the OpenEarsLogging output. So I don’t have a lot of suggestions: there’s no possibility of testing bluetooth against all possible devices, and it’s working with all the devices it’s been tested with.
Are you absolutely positive that there’s nothing special about the headset in this interaction (low on battery, far away, muted, being overridden by a different, nearby bluetooth device that you aren’t interacting with, something else similar)? When the sample app isn’t working, what does the decibel label read? Is it moving or fixed? Can you try a different bluetooth device with the sample app?
Are you absolutely positive that there’s nothing special about the headset in this interaction (low on battery, far away, muted, being overridden by a different, nearby bluetooth device that you aren’t interacting with, something else similar)?
I don’t think so, but I will keep looking.
When the sample app isn’t working, what does the decibel label read? Is it moving or fixed? Can you try a different bluetooth device with the sample app?
In a separate post, I’ll give you the logs for the latest sample run. Short answer: the decibel label doesn’t move at all.
That’s a good suggestion; I’ll have to get my hands on some other bluetooth devices.
2014-12-09 13:05:34.389 OpenEarsSampleApp[451:85137] Starting OpenEars logging for OpenEars version 2.0 on 32-bit device (or build): iPhone running iOS version: 8.100000
2014-12-09 13:05:34.392 OpenEarsSampleApp[451:85137] Creating shared instance of OEPocketsphinxController
2014-12-09 13:05:34.431 OpenEarsSampleApp[451:85137] Starting dynamic language model generation
INFO: cmd_ln.c(702): Parsing command line:
sphinx_lm_convert \
-i /var/mobile/Containers/Data/Application/CFB72ABF-044A-4318-A993-730FB47BF497/Library/Caches/FirstOpenEarsDynamicLanguageModel.arpa \
-o /var/mobile/Containers/Data/Application/CFB72ABF-044A-4318-A993-730FB47BF497/Library/Caches/FirstOpenEarsDynamicLanguageModel.DMP
Current configuration:
[NAME] [DEFLT] [VALUE]
-case
-debug 0
-help no no
-i /var/mobile/Containers/Data/Application/CFB72ABF-044A-4318-A993-730FB47BF497/Library/Caches/FirstOpenEarsDynamicLanguageModel.arpa
-ienc
-ifmt
-logbase 1.0001 1.000100e+00
-mmap no no
-o /var/mobile/Containers/Data/Application/CFB72ABF-044A-4318-A993-730FB47BF497/Library/Caches/FirstOpenEarsDynamicLanguageModel.DMP
-oenc utf8 utf8
-ofmt
INFO: ngram_model_arpa.c(504): ngrams 1=10, 2=16, 3=8
INFO: ngram_model_arpa.c(137): Reading unigrams
INFO: ngram_model_arpa.c(543): 10 = #unigrams created
INFO: ngram_model_arpa.c(197): Reading bigrams
INFO: ngram_model_arpa.c(561): 16 = #bigrams created
INFO: ngram_model_arpa.c(562): 3 = #prob2 entries
INFO: ngram_model_arpa.c(570): 3 = #bo_wt2 entries
INFO: ngram_model_arpa.c(294): Reading trigrams
INFO: ngram_model_arpa.c(583): 8 = #trigrams created
INFO: ngram_model_arpa.c(584): 2 = #prob3 entries
INFO: ngram_model_dmp.c(518): Building DMP model…
INFO: ngram_model_dmp.c(548): 10 = #unigrams created
INFO: ngram_model_dmp.c(649): 16 = #bigrams created
INFO: ngram_model_dmp.c(650): 3 = #prob2 entries
INFO: ngram_model_dmp.c(657): 3 = #bo_wt2 entries
INFO: ngram_model_dmp.c(661): 8 = #trigrams created
INFO: ngram_model_dmp.c(662): 2 = #prob3 entries
2014-12-09 13:05:34.498 OpenEarsSampleApp[451:85137] Done creating language model with CMUCLMTK in 0.066862 seconds.
2014-12-09 13:05:34.602 OpenEarsSampleApp[451:85137] I’m done running performDictionaryLookup and it took 0.075391 seconds
2014-12-09 13:05:34.609 OpenEarsSampleApp[451:85137] I’m done running dynamic language model generation and it took 0.210020 seconds
2014-12-09 13:05:34.615 OpenEarsSampleApp[451:85137] Starting dynamic language model generation
INFO: cmd_ln.c(702): Parsing command line:
sphinx_lm_convert \
-i /var/mobile/Containers/Data/Application/CFB72ABF-044A-4318-A993-730FB47BF497/Library/Caches/SecondOpenEarsDynamicLanguageModel.arpa \
-o /var/mobile/Containers/Data/Application/CFB72ABF-044A-4318-A993-730FB47BF497/Library/Caches/SecondOpenEarsDynamicLanguageModel.DMP
Current configuration:
[NAME] [DEFLT] [VALUE]
-case
-debug 0
-help no no
-i /var/mobile/Containers/Data/Application/CFB72ABF-044A-4318-A993-730FB47BF497/Library/Caches/SecondOpenEarsDynamicLanguageModel.arpa
-ienc
-ifmt
-logbase 1.0001 1.000100e+00
-mmap no no
-o /var/mobile/Containers/Data/Application/CFB72ABF-044A-4318-A993-730FB47BF497/Library/Caches/SecondOpenEarsDynamicLanguageModel.DMP
-oenc utf8 utf8
-ofmt
INFO: ngram_model_arpa.c(504): ngrams 1=12, 2=19, 3=10
INFO: ngram_model_arpa.c(137): Reading unigrams
INFO: ngram_model_arpa.c(543): 12 = #unigrams created
INFO: ngram_model_arpa.c(197): Reading bigrams
INFO: ngram_model_arpa.c(561): 19 = #bigrams created
INFO: ngram_model_arpa.c(562): 3 = #prob2 entries
INFO: ngram_model_arpa.c(570): 3 = #bo_wt2 entries
INFO: ngram_model_arpa.c(294): Reading trigrams
INFO: ngram_model_arpa.c(583): 10 = #trigrams created
INFO: ngram_model_arpa.c(584): 2 = #prob3 entries
INFO: ngram_model_dmp.c(518): Building DMP model…
INFO: ngram_model_dmp.c(548): 12 = #unigrams created
INFO: ngram_model_dmp.c(649): 19 = #bigrams created
INFO: ngram_model_dmp.c(650): 3 = #prob2 entries
INFO: ngram_model_dmp.c(657): 3 = #bo_wt2 entries
INFO: ngram_model_dmp.c(661): 10 = #trigrams created
INFO: ngram_model_dmp.c(662): 2 = #prob3 entries
2014-12-09 13:05:34.682 OpenEarsSampleApp[451:85137] Done creating language model with CMUCLMTK in 0.066150 seconds.
2014-12-09 13:05:34.764 OpenEarsSampleApp[451:85137] The word QUIDNUNC was not found in the dictionary /private/var/mobile/Containers/Bundle/Application/63660436-E1F7-4E9E-965F-E724ADF24D5A/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/LanguageModelGeneratorLookupList.text/LanguageModelGeneratorLookupList.text.
2014-12-09 13:05:34.765 OpenEarsSampleApp[451:85137] Now using the fallback method to look up the word QUIDNUNC
2014-12-09 13:05:34.765 OpenEarsSampleApp[451:85137] If this is happening more frequently than you would expect, the most likely cause for it is since you are using the English phonetic lookup dictionary is that your words are not in English or aren’t dictionary words, or that you are submitting the words in lowercase when they need to be entirely written in uppercase.
2014-12-09 13:05:34.766 OpenEarsSampleApp[451:85137] Using convertGraphemes for the word or phrase QUIDNUNC which doesn’t appear in the dictionary
2014-12-09 13:05:34.814 OpenEarsSampleApp[451:85137] I’m done running performDictionaryLookup and it took 0.121312 seconds
2014-12-09 13:05:34.822 OpenEarsSampleApp[451:85137] I’m done running dynamic language model generation and it took 0.212430 seconds
2014-12-09 13:05:34.823 OpenEarsSampleApp[451:85137]
Welcome to the OpenEars sample project. This project understands the words:
BACKWARD,
CHANGE,
FORWARD,
GO,
LEFT,
MODEL,
RIGHT,
TURN,
and if you say “CHANGE MODEL” it will switch to its dynamically-generated model which understands the words:
CHANGE,
MODEL,
MONDAY,
TUESDAY,
WEDNESDAY,
THURSDAY,
FRIDAY,
SATURDAY,
SUNDAY,
QUIDNUNC
2014-12-09 13:05:34.824 OpenEarsSampleApp[451:85137] Attempting to start listening session from startListeningWithLanguageModelAtPath:
2014-12-09 13:05:34.832 OpenEarsSampleApp[451:85137] User gave mic permission for this app.
2014-12-09 13:05:34.833 OpenEarsSampleApp[451:85137] setSecondsOfSilence wasn’t set, using default of 0.700000.
2014-12-09 13:05:34.834 OpenEarsSampleApp[451:85137] Successfully started listening session from startListeningWithLanguageModelAtPath:
2014-12-09 13:05:34.835 OpenEarsSampleApp[451:85152] Starting listening.
2014-12-09 13:05:34.835 OpenEarsSampleApp[451:85152] about to set up audio session
2014-12-09 13:05:34.884 OpenEarsSampleApp[451:85165] Audio route has changed for the following reason:
2014-12-09 13:05:34.889 OpenEarsSampleApp[451:85165] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2014-12-09 13:05:36.248 OpenEarsSampleApp[451:85152] done starting audio unit
INFO: cmd_ln.c(702): Parsing command line:
\
-lm /var/mobile/Containers/Data/Application/CFB72ABF-044A-4318-A993-730FB47BF497/Library/Caches/FirstOpenEarsDynamicLanguageModel.DMP \
-vad_threshold 1.500000 \
-remove_noise yes \
-remove_silence yes \
-bestpath yes \
-lw 6.500000 \
-dict /var/mobile/Containers/Data/Application/CFB72ABF-044A-4318-A993-730FB47BF497/Library/Caches/FirstOpenEarsDynamicLanguageModel.dic \
-hmm /private/var/mobile/Containers/Bundle/Application/63660436-E1F7-4E9E-965F-E724ADF24D5A/OpenEarsSampleApp.app/AcousticModelEnglish.bundle
Current configuration:
[NAME] [DEFLT] [VALUE]
-agc none none
-agcthresh 2.0 2.000000e+00
-allphone
-allphone_ci no no
-alpha 0.97 9.700000e-01
-argfile
-ascale 20.0 2.000000e+01
-aw 1 1
-backtrace no no
-beam 1e-48 1.000000e-48
-bestpath yes yes
-bestpathlw 9.5 9.500000e+00
-bghist no no
-ceplen 13 13
-cmn current current
-cmninit 8.0 8.0
-compallsen no no
-debug 0
-dict /var/mobile/Containers/Data/Application/CFB72ABF-044A-4318-A993-730FB47BF497/Library/Caches/FirstOpenEarsDynamicLanguageModel.dic
-dictcase no no
-dither no no
-doublebw no no
-ds 1 1
-fdict
-feat 1s_c_d_dd 1s_c_d_dd
-featparams
-fillprob 1e-8 1.000000e-08
-frate 100 100
-fsg
-fsgusealtpron yes yes
-fsgusefiller yes yes
-fwdflat yes yes
-fwdflatbeam 1e-64 1.000000e-64
-fwdflatefwid 4 4
-fwdflatlw 8.5 8.500000e+00
-fwdflatsfwin 25 25
-fwdflatwbeam 7e-29 7.000000e-29
-fwdtree yes yes
-hmm /private/var/mobile/Containers/Bundle/Application/63660436-E1F7-4E9E-965F-E724ADF24D5A/OpenEarsSampleApp.app/AcousticModelEnglish.bundle
-input_endian little little
-jsgf
-kdmaxbbi -1 -1
-kdmaxdepth 0 0
-kdtree
-keyphrase
-kws
-kws_plp 1e-1 1.000000e-01
-kws_threshold 1 1.000000e+00
-latsize 5000 5000
-lda
-ldadim 0 0
-lextreedump 0 0
-lifter 0 0
-lm /var/mobile/Containers/Data/Application/CFB72ABF-044A-4318-A993-730FB47BF497/Library/Caches/FirstOpenEarsDynamicLanguageModel.DMP
2014-12-09 13:05:36.268 OpenEarsSampleApp[451:85165] This is not a case in which OpenEars notifies of a route change. At the close of this function, the new audio route is ---BluetoothHFPBluetoothHFP---. The previous route before changing to this route was <AVAudioSessionRouteDescription: 0x146a82f0,
inputs = (null);
outputs = (
"<AVAudioSessionPortDescription: 0x146a81f0, type = BluetoothA2DPOutput; name = Powerbeats Wireless; UID = 04:88:E2:37:55:15-tacl; selectedDataSource = (null)>"
)>.
-lmctl
-lmname
-logbase 1.0001 1.000100e+00
-logfn
-logspec no no
-lowerf 133.33334 1.333333e+02
-lpbeam 1e-40 1.000000e-40
-lponlybeam 7e-29 7.000000e-29
-lw 6.5 6.500000e+00
-maxhmmpf 10000 10000
-maxnewoov 20 20
-maxwpf -1 -1
-mdef
-mean
-mfclogdir
-min_endfr 0 0
-mixw
-mixwfloor 0.0000001 1.000000e-07
-mllr
-mmap yes yes
-ncep 13 13
-nfft 512 512
-nfilt 40 40
-nwpen 1.0 1.000000e+00
-pbeam 1e-48 1.000000e-48
-pip 1.0 1.000000e+00
-pl_beam 1e-10 1.000000e-10
-pl_pbeam 1e-5 1.000000e-05
-pl_window 0 0
-rawlogdir
-remove_dc no no
-remove_noise yes yes
-remove_silence yes yes
-round_filters yes yes
-samprate 16000 1.600000e+04
-seed -1 -1
-sendump
-senlogdir
-senmgau
-silprob 0.005 5.000000e-03
-smoothspec no no
-svspec
-tmat
-tmatfloor 0.0001 1.000000e-04
-topn 4 4
-topn_beam 0 0
-toprule
-transform legacy legacy
-unit_area yes yes
-upperf 6855.4976 6.855498e+03
-usewdphones no no
-uw 1.0 1.000000e+00
-vad_postspeech 50 50
-vad_prespeech 10 10
-vad_threshold 2.0 1.500000e+00
-var
-varfloor 0.0001 1.000000e-04
-varnorm no no
-verbose no no
-warp_params
-warp_type inverse_linear inverse_linear
-wbeam 7e-29 7.000000e-29
-wip 0.65 6.500000e-01
-wlen 0.025625 2.562500e-02
INFO: cmd_ln.c(702): Parsing command line:
\
-nfilt 25 \
-lowerf 130 \
-upperf 6800 \
-feat 1s_c_d_dd \
-svspec 0-12/13-25/26-38 \
-agc none \
-cmn current \
-varnorm no \
-transform dct \
-lifter 22 \
-cmninit 40
Current configuration:
[NAME] [DEFLT] [VALUE]
-agc none none
-agcthresh 2.0 2.000000e+00
-alpha 0.97 9.700000e-01
-ceplen 13 13
-cmn current current
-cmninit 8.0 40
-dither no no
-doublebw no no
-feat 1s_c_d_dd 1s_c_d_dd
-frate 100 100
-input_endian little little
-lda
-ldadim 0 0
-lifter 0 22
-logspec no no
-lowerf 133.33334 1.300000e+02
-ncep 13 13
-nfft 512 512
-nfilt 40 25
-remove_dc no no
-remove_noise yes yes
-remove_silence yes yes
-round_filters yes yes
-samprate 16000 1.600000e+04
-seed -1 -1
-smoothspec no no
-svspec 0-12/13-25/26-38
-transform legacy dct
-unit_area yes yes
-upperf 6855.4976 6.800000e+03
-vad_postspeech 50 50
-vad_prespeech 10 10
-vad_threshold 2.0 1.500000e+00
-varnorm no no
-verbose no no
-warp_params
-warp_type inverse_linear inverse_linear
-wlen 0.025625 2.562500e-02
INFO: acmod.c(252): Parsed model-specific feature parameters from /private/var/mobile/Containers/Bundle/Application/63660436-E1F7-4E9E-965F-E724ADF24D5A/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/feat.params
INFO: feat.c(715): Initializing feature stream to type: ‘1s_c_d_dd’, ceplen=13, CMN=’current’, VARNORM=’no’, AGC=’none’
INFO: cmn.c(143): mean[0]= 12.00, mean[1..12]= 0.0
INFO: acmod.c(171): Using subvector specification 0-12/13-25/26-38
INFO: mdef.c(518): Reading model definition: /private/var/mobile/Containers/Bundle/Application/63660436-E1F7-4E9E-965F-E724ADF24D5A/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/mdef
INFO: mdef.c(531): Found byte-order mark BMDF, assuming this is a binary mdef file
INFO: bin_mdef.c(336): Reading binary model definition: /private/var/mobile/Containers/Bundle/Application/63660436-E1F7-4E9E-965F-E724ADF24D5A/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/mdef
INFO: bin_mdef.c(516): 46 CI-phone, 168344 CD-phone, 3 emitstate/phone, 138 CI-sen, 6138 Sen, 32881 Sen-Seq
INFO: tmat.c(206): Reading HMM transition probability matrices: /private/var/mobile/Containers/Bundle/Application/63660436-E1F7-4E9E-965F-E724ADF24D5A/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/transition_matrices
INFO: acmod.c(124): Attempting to use SCHMM computation module
INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /private/var/mobile/Containers/Bundle/Application/63660436-E1F7-4E9E-965F-E724ADF24D5A/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/means
INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /private/var/mobile/Containers/Bundle/Application/63660436-E1F7-4E9E-965F-E724ADF24D5A/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/variances
INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(354): 0 variance values floored
INFO: s2_semi_mgau.c(904): Loading senones from dump file /private/var/mobile/Containers/Bundle/Application/63660436-E1F7-4E9E-965F-E724ADF24D5A/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/sendump
INFO: s2_semi_mgau.c(928): BEGIN FILE FORMAT DESCRIPTION
INFO: s2_semi_mgau.c(991): Rows: 512, Columns: 6138
INFO: s2_semi_mgau.c(1023): Using memory-mapped I/O for senones
INFO: s2_semi_mgau.c(1294): Maximum top-N: 4 Top-N beams: 0 0 0
INFO: dict.c(320): Allocating 4113 * 20 bytes (80 KiB) for word entries
INFO: dict.c(333): Reading main dictionary: /var/mobile/Containers/Data/Application/CFB72ABF-044A-4318-A993-730FB47BF497/Library/Caches/FirstOpenEarsDynamicLanguageModel.dic
INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
INFO: dict.c(336): 8 words read
INFO: dict.c(342): Reading filler dictionary: /private/var/mobile/Containers/Bundle/Application/63660436-E1F7-4E9E-965F-E724ADF24D5A/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/noisedict
INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
INFO: dict.c(345): 9 words read
INFO: dict2pid.c(396): Building PID tables for dictionary
INFO: dict2pid.c(406): Allocating 46^3 * 2 bytes (190 KiB) for word-initial triphones
INFO: dict2pid.c(132): Allocated 25576 bytes (24 KiB) for word-final triphones
INFO: dict2pid.c(196): Allocated 25576 bytes (24 KiB) for single-phone word triphones
INFO: ngram_model_arpa.c(79): No \data\ mark in LM file
INFO: ngram_model_dmp.c(166): Will use memory-mapped I/O for LM file
INFO: ngram_model_dmp.c(220): ngrams 1=10, 2=16, 3=8
INFO: ngram_model_dmp.c(266): 10 = LM.unigrams(+trailer) read
INFO: ngram_model_dmp.c(312): 16 = LM.bigrams(+trailer) read
INFO: ngram_model_dmp.c(338): 8 = LM.trigrams read
INFO: ngram_model_dmp.c(363): 3 = LM.prob2 entries read
INFO: ngram_model_dmp.c(383): 3 = LM.bo_wt2 entries read
INFO: ngram_model_dmp.c(403): 2 = LM.prob3 entries read
INFO: ngram_model_dmp.c(431): 1 = LM.tseg_base entries read
INFO: ngram_model_dmp.c(487): 10 = ascii word strings read
INFO: ngram_search_fwdtree.c(99): 8 unique initial diphones
INFO: ngram_search_fwdtree.c(148): 0 root, 0 non-root channels, 10 single-phone words
INFO: ngram_search_fwdtree.c(186): Creating search tree
INFO: ngram_search_fwdtree.c(192): before: 0 root, 0 non-root channels, 10 single-phone words
INFO: ngram_search_fwdtree.c(326): after: max nonroot chan increased to 145
INFO: ngram_search_fwdtree.c(339): after: 8 root, 17 non-root channels, 9 single-phone words
INFO: ngram_search_fwdflat.c(157): fwdflat: min_ef_width = 4, max_sf_win = 25
2014-12-09 13:05:36.431 OpenEarsSampleApp[451:85152] Restoring SmartCMN value of 18.854980
2014-12-09 13:05:36.433 OpenEarsSampleApp[451:85152] Listening.
2014-12-09 13:05:36.435 OpenEarsSampleApp[451:85152] Project has these words or phrases in its dictionary:
BACKWARD
CHANGE
FORWARD
GO
LEFT
MODEL
RIGHT
TURN
2014-12-09 13:05:36.435 OpenEarsSampleApp[451:85152] Recognition loop has started
2014-12-09 13:05:36.465 OpenEarsSampleApp[451:85137] Local callback: Pocketsphinx is now listening.
2014-12-09 13:05:36.469 OpenEarsSampleApp[451:85137] Local callback: Pocketsphinx started.
I read up on the device, and many users were complaining that it can’t be used for watching video because the audio is very high-latency, so it sounds like it has idiosyncratic I/O compared to usual BT headset behavior. That means I can’t troubleshoot it from afar, since I have no insight into the device, its implementation, or Apple’s implementation of how bluetooth is initialized for an audio unit. I’d do the following:
1. Test with other BT devices that are known to work with 3rd-party audio input, as a sanity check. I have a Samsung HM1300 that is not high-end (in fact it cost €10) and it does I/O perfectly with OpenEars 2.0, so that’s a good test device; or you can ask the developers with working bluetooth what they’re using.
2. Check whether you are running the current version of your headset firmware; there are firmware update instructions in the support section of the manufacturer’s site.
3. See if you get any different results setting different values for OEPocketsphinxController’s audioMode property.
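For example (a minimal sketch; @"VoiceChat" is only an illustrative value, so check the OpenEars documentation for the audio mode strings your version accepts):

// Hedged sketch: try a non-default audio mode before calling startListening.
// @"VoiceChat" is an example value only; consult the OpenEars docs for valid strings.
[OEPocketsphinxController sharedInstance].audioMode = @"VoiceChat";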
4. If you feel up to recompiling the framework, you can try to change things in this line:
[sessionInstance setCategory:AVAudioSessionCategoryPlayAndRecord withOptions:AVAudioSessionCategoryOptionMixWithOthers | AVAudioSessionCategoryOptionAllowBluetooth | AVAudioSessionCategoryOptionDefaultToSpeaker error:&error];
For instance, I would see what happens when you remove AVAudioSessionCategoryOptionMixWithOthers and AVAudioSessionCategoryOptionDefaultToSpeaker as options so it looks like this:
[sessionInstance setCategory:AVAudioSessionCategoryPlayAndRecord withOptions:AVAudioSessionCategoryOptionAllowBluetooth error:&error];
And you could try changing the settings in handleRouteChange: so that either all cases perform the route change operation or none of them do, to see whether it is related, by changing this line:
if(performChange) {
To either this:
performChange = FALSE;
if(performChange) {
or this:
performChange = TRUE;
if(performChange) {
Remember that the framework now needs to be built by choosing “Archive”. Let me know your results.
5. I’m not at all pushing this as a solution because the headset is very expensive, but if you are very committed to my being able to test it, I can add it to Politepix’s Amazon Wish List and you could buy one for Politepix (used is fine). This wouldn’t be an agreement on my part to make it work or always keep it working; this kind of situation, and the expense, diversity, and closed-ness of bluetooth devices, is exactly why bluetooth support is experimental in OpenEars. But my being able to run it would certainly be the most likely path to making it work, and I would agree to give it some debugging time and see what’s possible. Before doing this, I would very strongly recommend that you verify that the input on your device works with another 3rd-party app as a low-latency audio input device on the same device and iOS version, i.e. recording voice memos or similar. Keep in mind that if a 3rd-party app can’t really use your headset, it is likely to default to the built-in mic and perform some kind of recording anyway, so it’s important to verify whether the recording is coming from your headset or the built-in mic.
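On that last point, one way to see which input port a test build is actually using is to log the current route with the stock AVAudioSession API (a sketch, not OpenEars-specific):

#import <AVFoundation/AVFoundation.h>
// Sketch: log the live input port, to tell a real headset recording apart
// from a silent fallback to the built-in mic.
AVAudioSessionRouteDescription *route = [[AVAudioSession sharedInstance] currentRoute];
for (AVAudioSessionPortDescription *input in route.inputs) {
    NSLog(@"Active input: %@ (type %@)", input.portName, input.portType);
}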