Got a report from a user and confirmed it myself using the sample app provided with 1.66.
If you run the sample app on iOS 8 with a Bluetooth-connected headset, the continuous recognition loop fails with this error in the console:
“cont_ad_calib failed, stopping.”
If I disconnect the bluetooth headset from the iOS device, everything seems to work correctly like it used to.
Any ideas I can try to get this bluetooth path working on iOS 8?
Wes
BTW: Congrats to you and OpenEars for the Space Station gig. That is very cool.
Thanks! Unfortunately, Bluetooth support is marked as experimental because I don’t keep a Bluetooth testbed due to device diversity, but I can try to give you some help with this regardless – just setting expectations that we will probably have to troubleshoot it in tandem. The upcoming OpenEars 2.0 uses all of the current APIs for this but isn’t ready to ship yet; we might be able to backport some code from it.
Step one is to upgrade to the current version of OpenEars and any plugins, then show me the failure logs with OpenEarsLogging and verbosePocketsphinx turned on.
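For reference, turning both of those on in the sample app’s view controller looks roughly like this (just a sketch – the exact property name and casing may differ slightly between versions):

[OpenEarsLogging startOpenEarsLogging]; // framework-wide OpenEars logging
self.pocketsphinxController.verbosePocketSphinx = TRUE; // full Pocketsphinx console output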
Don’t know if you’d prefer an email on this – just let me know – but here’s the entire log output from OpenEars 1.7 with OpenEarsLogging and verbosePocketsphinx turned on and a Bluetooth headset connected to an iPhone running iOS 8:
2014-09-22 10:11:40.085 OpenEarsSampleApp[197:5624] Starting OpenEars logging for OpenEars version 1.7 on 32-bit device: iPhone running iOS version: 8.000000
2014-09-22 10:11:40.090 OpenEarsSampleApp[197:5624] acousticModelPath is /private/var/mobile/Containers/Bundle/Application/16FB9F4F-0683-499B-A759-FB6928A35CFC/OpenEarsSampleApp.app/AcousticModelEnglish.bundle
2014-09-22 10:11:40.123 OpenEarsSampleApp[197:5624] Starting dynamic language model generation
2014-09-22 10:11:40.131 OpenEarsSampleApp[197:5624] Able to open /var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/FirstOpenEarsDynamicLanguageModel.corpus for reading
2014-09-22 10:11:40.133 OpenEarsSampleApp[197:5624] Able to open /var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/FirstOpenEarsDynamicLanguageModel_pipe.txt for writing
2014-09-22 10:11:40.133 OpenEarsSampleApp[197:5624] Starting text2wfreq_impl
2014-09-22 10:11:40.142 OpenEarsSampleApp[197:5624] Done with text2wfreq_impl
2014-09-22 10:11:40.142 OpenEarsSampleApp[197:5624] Able to open /var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/FirstOpenEarsDynamicLanguageModel_pipe.txt for reading.
2014-09-22 10:11:40.144 OpenEarsSampleApp[197:5624] Able to open /var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/FirstOpenEarsDynamicLanguageModel.vocab for reading.
2014-09-22 10:11:40.144 OpenEarsSampleApp[197:5624] Starting wfreq2vocab
2014-09-22 10:11:40.147 OpenEarsSampleApp[197:5624] Done with wfreq2vocab
2014-09-22 10:11:40.148 OpenEarsSampleApp[197:5624] Starting text2idngram
2014-09-22 10:11:40.163 OpenEarsSampleApp[197:5624] Done with text2idngram
2014-09-22 10:11:40.169 OpenEarsSampleApp[197:5624] Starting idngram2lm
2014-09-22 10:11:40.183 OpenEarsSampleApp[197:5624] Done with idngram2lm
2014-09-22 10:11:40.183 OpenEarsSampleApp[197:5624] Starting sphinx_lm_convert
2014-09-22 10:11:40.190 OpenEarsSampleApp[197:5624] Finishing sphinx_lm_convert
2014-09-22 10:11:40.193 OpenEarsSampleApp[197:5624] Done creating language model with CMUCLMTK in 0.069508 seconds.
2014-09-22 10:11:40.239 OpenEarsSampleApp[197:5624] I’m done running performDictionaryLookup and it took 0.034399 seconds
2014-09-22 10:11:40.246 OpenEarsSampleApp[197:5624] I’m done running dynamic language model generation and it took 0.156091 seconds
2014-09-22 10:11:40.247 OpenEarsSampleApp[197:5624] Dynamic language generator completed successfully, you can find your new files FirstOpenEarsDynamicLanguageModel.DMP
and
FirstOpenEarsDynamicLanguageModel.dic
at the paths
/var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/FirstOpenEarsDynamicLanguageModel.DMP
and
/var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/FirstOpenEarsDynamicLanguageModel.dic
2014-09-22 10:11:40.247 OpenEarsSampleApp[197:5624] acousticModelPath is /private/var/mobile/Containers/Bundle/Application/16FB9F4F-0683-499B-A759-FB6928A35CFC/OpenEarsSampleApp.app/AcousticModelEnglish.bundle
2014-09-22 10:11:40.253 OpenEarsSampleApp[197:5624] Starting dynamic language model generation
2014-09-22 10:11:40.260 OpenEarsSampleApp[197:5624] Able to open /var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/SecondOpenEarsDynamicLanguageModel.corpus for reading
2014-09-22 10:11:40.262 OpenEarsSampleApp[197:5624] Able to open /var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/SecondOpenEarsDynamicLanguageModel_pipe.txt for writing
2014-09-22 10:11:40.262 OpenEarsSampleApp[197:5624] Starting text2wfreq_impl
2014-09-22 10:11:40.271 OpenEarsSampleApp[197:5624] Done with text2wfreq_impl
2014-09-22 10:11:40.271 OpenEarsSampleApp[197:5624] Able to open /var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/SecondOpenEarsDynamicLanguageModel_pipe.txt for reading.
2014-09-22 10:11:40.273 OpenEarsSampleApp[197:5624] Able to open /var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/SecondOpenEarsDynamicLanguageModel.vocab for reading.
2014-09-22 10:11:40.273 OpenEarsSampleApp[197:5624] Starting wfreq2vocab
2014-09-22 10:11:40.276 OpenEarsSampleApp[197:5624] Done with wfreq2vocab
2014-09-22 10:11:40.277 OpenEarsSampleApp[197:5624] Starting text2idngram
2014-09-22 10:11:40.293 OpenEarsSampleApp[197:5624] Done with text2idngram
2014-09-22 10:11:40.311 OpenEarsSampleApp[197:5624] Starting idngram2lm
2014-09-22 10:11:40.323 OpenEarsSampleApp[197:5624] Done with idngram2lm
2014-09-22 10:11:40.323 OpenEarsSampleApp[197:5624] Starting sphinx_lm_convert
2014-09-22 10:11:40.328 OpenEarsSampleApp[197:5624] Finishing sphinx_lm_convert
2014-09-22 10:11:40.330 OpenEarsSampleApp[197:5624] Done creating language model with CMUCLMTK in 0.076958 seconds.
2014-09-22 10:11:40.373 OpenEarsSampleApp[197:5624] The word QUIDNUNC was not found in the dictionary /private/var/mobile/Containers/Bundle/Application/16FB9F4F-0683-499B-A759-FB6928A35CFC/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/LanguageModelGeneratorLookupList.text/LanguageModelGeneratorLookupList.text.
2014-09-22 10:11:40.373 OpenEarsSampleApp[197:5624] Now using the fallback method to look up the word QUIDNUNC
2014-09-22 10:11:40.373 OpenEarsSampleApp[197:5624] If this is happening more frequently than you would expect, the most likely cause for it is since you are using the English phonetic lookup dictionary is that your words are not in English or aren’t dictionary words, or that you are submitting the words in lowercase when they need to be entirely written in uppercase.
2014-09-22 10:11:40.377 OpenEarsSampleApp[197:5624] Using convertGraphemes for the word or phrase QUIDNUNC which doesn’t appear in the dictionary
2014-09-22 10:11:40.409 OpenEarsSampleApp[197:5624] I’m done running performDictionaryLookup and it took 0.072901 seconds
2014-09-22 10:11:40.420 OpenEarsSampleApp[197:5624] I’m done running dynamic language model generation and it took 0.172638 seconds
2014-09-22 10:11:40.421 OpenEarsSampleApp[197:5624] Dynamic language generator completed successfully, you can find your new files SecondOpenEarsDynamicLanguageModel.DMP
and
SecondOpenEarsDynamicLanguageModel.dic
at the paths
/var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/SecondOpenEarsDynamicLanguageModel.DMP
and
/var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/SecondOpenEarsDynamicLanguageModel.dic
2014-09-22 10:11:40.421 OpenEarsSampleApp[197:5624]
Welcome to the OpenEars sample project. This project understands the words:
BACKWARD,
CHANGE,
FORWARD,
GO,
LEFT,
MODEL,
RIGHT,
TURN,
and if you say “CHANGE MODEL” it will switch to its dynamically-generated model which understands the words:
CHANGE,
MODEL,
MONDAY,
TUESDAY,
WEDNESDAY,
THURSDAY,
FRIDAY,
SATURDAY,
SUNDAY,
QUIDNUNC
2014-09-22 10:11:40.430 OpenEarsSampleApp[197:5624] User gave mic permission for this app.
2014-09-22 10:11:40.430 OpenEarsSampleApp[197:5624] Leaving sample rate at the default of 16000.
2014-09-22 10:11:40.431 OpenEarsSampleApp[197:5624] The audio session has never been initialized so we will do that now.
2014-09-22 10:11:40.431 OpenEarsSampleApp[197:5624] Checking and resetting all audio session settings.
2014-09-22 10:11:40.432 OpenEarsSampleApp[197:5624] audioCategory is incorrect, we will change it.
2014-09-22 10:11:40.432 OpenEarsSampleApp[197:5624] audioCategory is now on the correct setting of kAudioSessionCategory_PlayAndRecord.
2014-09-22 10:11:40.432 OpenEarsSampleApp[197:5624] bluetoothInput is incorrect, we will change it.
2014-09-22 10:11:40.433 OpenEarsSampleApp[197:5624] bluetooth input is now on the correct setting of 1.
2014-09-22 10:11:40.434 OpenEarsSampleApp[197:5624] Output Device: HeadsetBT.
2014-09-22 10:11:40.435 OpenEarsSampleApp[197:5624] preferredBufferSize is incorrect, we will change it.
2014-09-22 10:11:40.435 OpenEarsSampleApp[197:5624] PreferredBufferSize is now on the correct setting of 0.128000.
2014-09-22 10:11:40.435 OpenEarsSampleApp[197:5624] preferredSampleRateCheck is incorrect, we will change it.
2014-09-22 10:11:40.436 OpenEarsSampleApp[197:5624] preferred hardware sample rate is now on the correct setting of 16000.000000.
2014-09-22 10:11:40.454 OpenEarsSampleApp[197:5624] AudioSessionManager startAudioSession has reached the end of the initialization.
2014-09-22 10:11:40.454 OpenEarsSampleApp[197:5624] Exiting startAudioSession.
2014-09-22 10:11:40.458 OpenEarsSampleApp[197:5683] setSecondsOfSilence value of 0.000000 was too large or too small or was NULL, using default of 0.700000.
2014-09-22 10:11:40.459 OpenEarsSampleApp[197:5683] Project has these words or phrases in its dictionary:
BACKWARD
CHANGE
FORWARD
GO
LEFT
MODEL
RIGHT
TURN
2014-09-22 10:11:40.459 OpenEarsSampleApp[197:5683] Recognition loop has started
INFO: file_omitted(0): Parsing command line:
\
-lm /var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/FirstOpenEarsDynamicLanguageModel.DMP \
-beam 1e-66 \
-bestpath yes \
-dict /var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/FirstOpenEarsDynamicLanguageModel.dic \
-hmm /private/var/mobile/Containers/Bundle/Application/16FB9F4F-0683-499B-A759-FB6928A35CFC/OpenEarsSampleApp.app/AcousticModelEnglish.bundle \
-lw 6.500000 \
-samprate 16000
Current configuration:
[NAME] [DEFLT] [VALUE]
-agc none none
-agcthresh 2.0 2.000000e+00
-alpha 0.97 9.700000e-01
-argfile
-ascale 20.0 2.000000e+01
-aw 1 1
-backtrace no no
-beam 1e-48 1.000000e-66
-bestpath yes yes
-bestpathlw 9.5 9.500000e+00
-bghist no no
-ceplen 13 13
-cmn current current
-cmninit 8.0 8.0
-compallsen no no
-debug 0
-dict /var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/FirstOpenEarsDynamicLanguageModel.dic
-dictcase no no
-dither no no
-doublebw no no
-ds 1 1
-fdict
-feat 1s_c_d_dd 1s_c_d_dd
-featparams
-fillprob 1e-8 1.000000e-08
-frate 100 100
-fsg
-fsgusealtpron yes yes
-fsgusefiller yes yes
-fwdflat yes yes
-fwdflatbeam 1e-64 1.000000e-64
-fwdflatefwid 4 4
-fwdflatlw 8.5 8.500000e+00
-fwdflatsfwin 25 25
-fwdflatwbeam 7e-29 7.000000e-29
-fwdtree yes yes
-hmm /private/var/mobile/Containers/Bundle/Application/16FB9F4F-0683-499B-A759-FB6928A35CFC/OpenEarsSampleApp.app/AcousticModelEnglish.bundle
-input_endian little little
-jsgf
-kdmaxbbi -1 -1
-kdmaxdepth 0 0
-kdtree
-latsize 5000 5000
-lda
-ldadim 0 0
-lextreedump 0 0
-lifter 0 0
-lm /var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/FirstOpenEarsDynamicLanguageModel.DMP
-lmctl
-lmname default default
-logbase 1.0001 1.000100e+00
-logfn
-logspec no no
-lowerf 133.33334 1.333333e+02
-lpbeam 1e-40 1.000000e-40
-lponlybeam 7e-29 7.000000e-29
-lw 6.5 6.500000e+00
-maxhmmpf -1 -1
-maxnewoov 20 20
-maxwpf -1 -1
-mdef
-mean
-mfclogdir
-min_endfr 0 0
-mixw
-mixwfloor 0.0000001 1.000000e-07
-mllr
-mmap yes yes
-ncep 13 13
-nfft 512 512
-nfilt 40 40
-nwpen 1.0 1.000000e+00
-pbeam 1e-48 1.000000e-48
-pip 1.0 1.000000e+00
-pl_beam 1e-10 1.000000e-10
-pl_pbeam 1e-5 1.000000e-05
-pl_window 0 0
-rawlogdir
-remove_dc no no
-round_filters yes yes
-samprate 16000 1.600000e+04
-seed -1 -1
-sendump
-senlogdir
-senmgau
-silprob 0.005 5.000000e-03
-smoothspec no no
-svspec
-tmat
-tmatfloor 0.0001 1.000000e-04
-topn 4 4
-topn_beam 0 0
-toprule
-transform legacy legacy
-unit_area yes yes
-upperf 6855.4976 6.855498e+03
-usewdphones no no
-uw 1.0 1.000000e+00
-var
-varfloor 0.0001 1.000000e-04
-varnorm no no
-verbose no no
-warp_params
-warp_type inverse_linear inverse_linear
-wbeam 7e-29 7.000000e-29
-wip 0.65 6.500000e-01
-wlen 0.025625 2.562500e-02
INFO: file_omitted(0): Parsing command line:
\
-nfilt 20 \
-lowerf 1 \
-upperf 4000 \
-wlen 0.025 \
-transform dct \
-round_filters no \
-remove_dc yes \
-svspec 0-12/13-25/26-38 \
-feat 1s_c_d_dd \
-agc none \
-cmn current \
-cmninit 47 \
-varnorm no
Current configuration:
[NAME] [DEFLT] [VALUE]
-agc none none
-agcthresh 2.0 2.000000e+00
-alpha 0.97 9.700000e-01
-ceplen 13 13
-cmn current current
-cmninit 8.0 47
-dither no no
-doublebw no no
-feat 1s_c_d_dd 1s_c_d_dd
-frate 100 100
-input_endian little little
-lda
-ldadim 0 0
-lifter 0 0
-logspec no no
-lowerf 133.33334 1.000000e+00
-ncep 13 13
-nfft 512 512
-nfilt 40 20
-remove_dc no yes
-round_filters yes no
-samprate 16000 1.600000e+04
-seed -1 -1
-smoothspec no no
-svspec 0-12/13-25/26-38
-transform legacy dct
-unit_area yes yes
-upperf 6855.4976 4.000000e+03
-varnorm no no
-verbose no no
-warp_params
-warp_type inverse_linear inverse_linear
-wlen 0.025625 2.500000e-02
INFO: file_omitted(0): Parsed model-specific feature parameters from /private/var/mobile/Containers/Bundle/Application/16FB9F4F-0683-499B-A759-FB6928A35CFC/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/feat.params
INFO: file_omitted(0): Initializing feature stream to type: ‘1s_c_d_dd’, ceplen=13, CMN=’current’, VARNORM=’no’, AGC=’none’
INFO: file_omitted(0): mean[0]= 12.00, mean[1..12]= 0.0
INFO: file_omitted(0): Using subvector specification 0-12/13-25/26-38
INFO: file_omitted(0): Reading model definition: /private/var/mobile/Containers/Bundle/Application/16FB9F4F-0683-499B-A759-FB6928A35CFC/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/mdef
INFO: file_omitted(0): Found byte-order mark BMDF, assuming this is a binary mdef file
INFO: file_omitted(0): Reading binary model definition: /private/var/mobile/Containers/Bundle/Application/16FB9F4F-0683-499B-A759-FB6928A35CFC/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/mdef
2014-09-22 10:11:40.488 OpenEarsSampleApp[197:5678] Audio route has changed for the following reason:
2014-09-22 10:11:40.495 OpenEarsSampleApp[197:5678] There has been a change of category
2014-09-22 10:11:40.495 OpenEarsSampleApp[197:5678] The previous audio route was HeadphonesBT
2014-09-22 10:11:40.496 OpenEarsSampleApp[197:5678] This is not a case in which OpenEars performs a route change voluntarily. At the close of this function, the audio route is HeadsetBT
INFO: file_omitted(0): 50 CI-phone, 143047 CD-phone, 3 emitstate/phone, 150 CI-sen, 5150 Sen, 27135 Sen-Seq
INFO: file_omitted(0): Reading HMM transition probability matrices: /private/var/mobile/Containers/Bundle/Application/16FB9F4F-0683-499B-A759-FB6928A35CFC/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/transition_matrices
INFO: file_omitted(0): Attempting to use SCHMM computation module
INFO: file_omitted(0): Reading mixture gaussian parameter: /private/var/mobile/Containers/Bundle/Application/16FB9F4F-0683-499B-A759-FB6928A35CFC/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/means
INFO: file_omitted(0): 1 codebook, 3 feature, size:
INFO: file_omitted(0): 256x13
INFO: file_omitted(0): 256x13
INFO: file_omitted(0): 256x13
INFO: file_omitted(0): Reading mixture gaussian parameter: /private/var/mobile/Containers/Bundle/Application/16FB9F4F-0683-499B-A759-FB6928A35CFC/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/variances
INFO: file_omitted(0): 1 codebook, 3 feature, size:
INFO: file_omitted(0): 256x13
INFO: file_omitted(0): 256x13
INFO: file_omitted(0): 256x13
INFO: file_omitted(0): 0 variance values floored
INFO: file_omitted(0): Loading senones from dump file /private/var/mobile/Containers/Bundle/Application/16FB9F4F-0683-499B-A759-FB6928A35CFC/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/sendump
INFO: file_omitted(0): BEGIN FILE FORMAT DESCRIPTION
INFO: file_omitted(0): Using memory-mapped I/O for senones
INFO: file_omitted(0): Maximum top-N: 4 Top-N beams: 0 0 0
INFO: file_omitted(0): Allocating 4115 * 20 bytes (80 KiB) for word entries
INFO: file_omitted(0): Reading main dictionary: /var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/FirstOpenEarsDynamicLanguageModel.dic
INFO: file_omitted(0): Allocated 0 KiB for strings, 0 KiB for phones
INFO: file_omitted(0): 8 words read
INFO: file_omitted(0): Reading filler dictionary: /private/var/mobile/Containers/Bundle/Application/16FB9F4F-0683-499B-A759-FB6928A35CFC/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/noisedict
INFO: file_omitted(0): Allocated 0 KiB for strings, 0 KiB for phones
INFO: file_omitted(0): 11 words read
INFO: file_omitted(0): Building PID tables for dictionary
INFO: file_omitted(0): Allocating 50^3 * 2 bytes (244 KiB) for word-initial triphones
2014-09-22 10:11:40.537 OpenEarsSampleApp[197:5624] Pocketsphinx is starting up.
INFO: file_omitted(0): Allocated 30200 bytes (29 KiB) for word-final triphones
INFO: file_omitted(0): Allocated 30200 bytes (29 KiB) for single-phone word triphones
INFO: file_omitted(0): No \data\ mark in LM file
INFO: file_omitted(0): Will use memory-mapped I/O for LM file
INFO: file_omitted(0): ngrams 1=10, 2=16, 3=8
INFO: file_omitted(0): 10 = LM.unigrams(+trailer) read
INFO: file_omitted(0): 16 = LM.bigrams(+trailer) read
INFO: file_omitted(0): 8 = LM.trigrams read
INFO: file_omitted(0): 3 = LM.prob2 entries read
INFO: file_omitted(0): 3 = LM.bo_wt2 entries read
INFO: file_omitted(0): 2 = LM.prob3 entries read
INFO: file_omitted(0): 1 = LM.tseg_base entries read
INFO: file_omitted(0): 10 = ascii word strings read
INFO: file_omitted(0): 8 unique initial diphones
INFO: file_omitted(0): 0 root, 0 non-root channels, 12 single-phone words
INFO: file_omitted(0): Creating search tree
INFO: file_omitted(0): before: 0 root, 0 non-root channels, 12 single-phone words
INFO: file_omitted(0): after: max nonroot chan increased to 145
INFO: file_omitted(0): after: 8 root, 17 non-root channels, 11 single-phone words
INFO: file_omitted(0): fwdflat: min_ef_width = 4, max_sf_win = 25
2014-09-22 10:11:40.579 OpenEarsSampleApp[197:5683] Starting openAudioDevice on the device.
2014-09-22 10:11:40.579 OpenEarsSampleApp[197:5683] Audio unit wrapper successfully created.
2014-09-22 10:11:40.591 OpenEarsSampleApp[197:5683] Set audio route to HeadsetBT
2014-09-22 10:11:40.593 OpenEarsSampleApp[197:5683] There is no CMN plist so we are using the fresh CMN value 47.000000.
2014-09-22 10:11:40.594 OpenEarsSampleApp[197:5683] Checking and resetting all audio session settings.
2014-09-22 10:11:40.595 OpenEarsSampleApp[197:5683] audioCategory is correct, we will leave it as it is.
2014-09-22 10:11:40.596 OpenEarsSampleApp[197:5683] bluetoothInput is correct, we will leave it as it is.
2014-09-22 10:11:40.596 OpenEarsSampleApp[197:5683] Output Device: HeadsetBT.
2014-09-22 10:11:40.597 OpenEarsSampleApp[197:5683] preferredBufferSize is incorrect, we will change it.
2014-09-22 10:11:40.599 OpenEarsSampleApp[197:5683] PreferredBufferSize is now on the correct setting of 0.128000.
2014-09-22 10:11:40.600 OpenEarsSampleApp[197:5683] preferredSampleRateCheck is correct, we will leave it as it is.
2014-09-22 10:11:40.600 OpenEarsSampleApp[197:5683] Setting the variables for the device and starting it.
2014-09-22 10:11:40.601 OpenEarsSampleApp[197:5683] Looping through ringbuffer sections and pre-allocating them.
2014-09-22 10:11:42.219 OpenEarsSampleApp[197:5683] Started audio output unit.
2014-09-22 10:11:42.220 OpenEarsSampleApp[197:5683] Calibration has started
2014-09-22 10:11:42.220 OpenEarsSampleApp[197:5624] Pocketsphinx calibration has started.
2014-09-22 10:11:44.423 OpenEarsSampleApp[197:5683] cont_ad_calib failed, stopping.
2014-09-22 10:11:44.425 OpenEarsSampleApp[197:5624] Setting up the continuous recognition loop has failed for some reason, please turn on [OpenEarsLogging startOpenEarsLogging] in OpenEarsConfig.h to learn more.
“I unfortunately don’t have a device that replicates it”
– Does this mean your Bluetooth test device works fine? Or does it mean you don’t have a Bluetooth headset to test with?
From the user reports and my testing, I believe any bluetooth headset will expose the issue.
Wes
The first failure occurs in the find_thresh(cont_ad_t *r) function.
The issue is that the detected max input levels are way above the defined max_noise level of 70; the Bluetooth-connected headset is coming in at 98 or so. So the first thing I did was raise CONT_AD_MAX_NOISE from 70 to 100.
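For reference, the change amounts to this one edit in the sphinxbase continuous audio detection source (the exact file that defines the constant may vary by tree):

#define CONT_AD_MAX_NOISE 100 /* raised from the stock 70 so calibration tolerates the ~98 level the Bluetooth input reports */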
That got me through the sound calibration, but there’s another problem, and this one I have no idea how to solve.
The first vocal input seems to work, but after the first recognition something happens to the input stream from the mic: the getDecibels function in ContinuousAudioUnit.m starts reporting that the sampleDB value is “-inf”. Can’t say I’ve seen that before.
The logic in getDecibels specifically filters out inf values, so someone thought of this or has seen it before.
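For illustration only (this is a sketch, not the actual ContinuousAudioUnit.m code), the kind of conversion-plus-guard I mean looks roughly like this – note that a buffer of all-zero samples makes log10() return -inf, which is what a dead input stream would produce:

#import <Foundation/Foundation.h>
#include <math.h>

// Sketch: RMS power in dB for a buffer of 16-bit samples, with non-finite values clamped.
static Float32 SketchDecibelsForBuffer(const SInt16 *samples, UInt32 sampleCount) {
    Float64 sumOfSquares = 0.0;
    for (UInt32 i = 0; i < sampleCount; i++) {
        Float64 normalized = samples[i] / 32768.0;
        sumOfSquares += normalized * normalized;
    }
    Float32 decibels = (Float32)(10.0 * log10(sumOfSquares / sampleCount)); // -inf for an all-zero buffer
    if (!isfinite(decibels)) { // filter out inf/-inf/NaN rather than passing them downstream
        return -120.0f;        // clamp to the meter's floor
    }
    return decibels;
}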
If I turn off the headset everything goes back to normal and works.
My assumption is that the inf value indicates the mic data is trashed and shouldn’t be used. So the big question is: any ideas on why that’s happening?
I’ve tried this on an iPhone and an iPad Mini running iOS 8.0. Same results.
Thanks Halle,
Wes
It’s possible, but maybe a little unlikely, that Bluetooth just got a lot louder in iOS 8. What I think is more likely is that it isn’t working and a non-audio signal is being passed.
“The logic in getDecibels specifically filters out inf values, so someone thought of this or has seen it before.”
I have always checked for impossible values as part of the general error checking for bad values, but I don’t recall that it was due to a particular symptom that I saw, sorry. Since that code is over four years old (but still chugging along nicely), it is unlikely that a reason for inf values then overlaps with one now, SDK-wise.
The new audio session management code in OpenEars 2.0 currently sets up the audio category like this:
NSError *error = nil;
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord
                                 withOptions:AVAudioSessionCategoryOptionMixWithOthers | AVAudioSessionCategoryOptionAllowBluetooth | AVAudioSessionCategoryOptionDefaultToSpeaker
                                       error:&error];
if (error) {
    NSLog(@"Error setting up audio session category with options: %@", error);
}
Try using that modern category code in place of this code in ContinuousAudioUnit.m:
UInt32 sessionCategory = kAudioSessionCategory_PlayAndRecord;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(sessionCategory), &sessionCategory);
Do you see any improvement? You may need to #import <AVFoundation/AVFoundation.h>.
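If it helps narrow things down, here is a quick diagnostic you could add right after the category is set (just a sketch, not OpenEars code): activate the session and log the current route, so we can see whether a Bluetooth HFP input is actually being selected.

NSError *activationError = nil;
if (![[AVAudioSession sharedInstance] setActive:YES error:&activationError]) {
    NSLog(@"Error activating audio session: %@", activationError);
}
// With a working headset we'd expect one of these ports to be of type AVAudioSessionPortBluetoothHFP.
for (AVAudioSessionPortDescription *input in [[AVAudioSession sharedInstance] currentRoute].inputs) {
    NSLog(@"Input port: %@ (type %@)", input.portName, input.portType);
}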
Thanks Wes,
Halle
I replaced the code as instructed, rebuilt, and tested, but the result was the same.
What’s a little odd is that it works initially, which leads me to believe the initial setup is correct or at least on the right track. It’s after pausing during the recognition state that it usually doesn’t come back (I get the inf values in getDecibels).
As soon as Apple gets iOS 8.0.1 figured out, I’ll test on it to see whether the issue persists.
“It’s after pausing during the recognition state that it usually doesn’t come back”
What kind of pausing?
In the sample app, when you speak, the Pocketsphinx Input Levels stop updating while Flite is speaking the recognized speech. After Flite is done speaking, I’ll see the Pocketsphinx Input Levels bounce around according to the dB levels of the mic input.
This all looks normal – we don’t want to try to recognize Flite’s own speech.
With the Bluetooth mic attached, after Flite is done speaking on the first recognition, the Pocketsphinx Input Levels go to -120 dB and stay there. Meanwhile, under the hood, my custom debug statements are showing “inf” for the decibel levels.
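In case it’s useful, here’s the kind of route-change logging I can add on my side to see what the route becomes once Flite finishes (just a sketch, not OpenEars code):

[[NSNotificationCenter defaultCenter] addObserverForName:AVAudioSessionRouteChangeNotification
                                                  object:nil
                                                   queue:[NSOperationQueue mainQueue]
                                              usingBlock:^(NSNotification *note) {
    // Log why the route changed and which inputs are available afterward.
    NSNumber *reason = note.userInfo[AVAudioSessionRouteChangeReasonKey];
    NSLog(@"Route changed (reason %@); inputs are now %@",
          reason, [[AVAudioSession sharedInstance] currentRoute].inputs);
}];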