Tagged: Bluetooth disconnect
This topic has 37 replies, 5 voices, and was last updated 8 years ago by Halle Winkler.
September 20, 2014 at 1:11 am #1022591 wfilleman (Participant)
Hi Halle,
Got a report from a user and confirmed it myself using the 1.66 provided sample app.
If you run the sample app on iOS 8 with a bluetooth connected headset, the continuous recognition loop will fail with this error in the console:
“cont_ad_calib failed, stopping.”
If I disconnect the bluetooth headset from the iOS device, everything seems to work correctly like it used to.
Any ideas I can try to get this bluetooth path working on iOS 8?
Wes
BTW: Congrats to you and OpenEars for the Space Station gig. That is very cool.
September 20, 2014 at 10:11 am #1022594 Halle Winkler (Politepix)
Hi Wes,
Thanks! Unfortunately the bluetooth support is marked as experimental because I don’t keep a bluetooth testbed due to device diversity, but I can try to give you some help with this regardless – just setting expectations that we will probably have to troubleshoot it in tandem. The upcoming OpenEars 2.0 uses all of the current APIs for this but isn’t ready to ship yet; we might be able to backport some code from it.
Step one is to upgrade to the current version of OpenEars and any plugins and show me the failure logs with OpenEarsLogging and verbosePocketsphinx on.
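For reference, turning those on looks roughly like this in a 1.x app; treat it as a sketch, since the exact property name for verbose Pocketsphinx output can vary between 1.x versions (check your installed headers):

#import <OpenEars/OpenEarsLogging.h>
#import <OpenEars/PocketsphinxController.h>

// Early in app setup, before starting the recognition loop:
[OpenEarsLogging startOpenEarsLogging]; // framework-wide OpenEars debug logging
self.pocketsphinxController.verbosePocketSphinx = TRUE; // assumption: the 1.x flag for full Sphinx decoder output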
September 22, 2014 at 7:16 pm #1022603 wfilleman (Participant)
Thanks Halle,
Don’t know if you’d prefer an email on this, so just let me know, but here’s the entire log output from OpenEars 1.7 with OpenEarsLogging and verbosePocketsphinx turned on, with a bluetooth headset connected to an iPhone running iOS 8:
2014-09-22 10:11:40.085 OpenEarsSampleApp[197:5624] Starting OpenEars logging for OpenEars version 1.7 on 32-bit device: iPhone running iOS version: 8.000000
2014-09-22 10:11:40.090 OpenEarsSampleApp[197:5624] acousticModelPath is /private/var/mobile/Containers/Bundle/Application/16FB9F4F-0683-499B-A759-FB6928A35CFC/OpenEarsSampleApp.app/AcousticModelEnglish.bundle
2014-09-22 10:11:40.123 OpenEarsSampleApp[197:5624] Starting dynamic language model generation
2014-09-22 10:11:40.131 OpenEarsSampleApp[197:5624] Able to open /var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/FirstOpenEarsDynamicLanguageModel.corpus for reading
2014-09-22 10:11:40.133 OpenEarsSampleApp[197:5624] Able to open /var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/FirstOpenEarsDynamicLanguageModel_pipe.txt for writing
2014-09-22 10:11:40.133 OpenEarsSampleApp[197:5624] Starting text2wfreq_impl
2014-09-22 10:11:40.142 OpenEarsSampleApp[197:5624] Done with text2wfreq_impl
2014-09-22 10:11:40.142 OpenEarsSampleApp[197:5624] Able to open /var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/FirstOpenEarsDynamicLanguageModel_pipe.txt for reading.
2014-09-22 10:11:40.144 OpenEarsSampleApp[197:5624] Able to open /var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/FirstOpenEarsDynamicLanguageModel.vocab for reading.
2014-09-22 10:11:40.144 OpenEarsSampleApp[197:5624] Starting wfreq2vocab
2014-09-22 10:11:40.147 OpenEarsSampleApp[197:5624] Done with wfreq2vocab
2014-09-22 10:11:40.148 OpenEarsSampleApp[197:5624] Starting text2idngram
2014-09-22 10:11:40.163 OpenEarsSampleApp[197:5624] Done with text2idngram
2014-09-22 10:11:40.169 OpenEarsSampleApp[197:5624] Starting idngram2lm
2014-09-22 10:11:40.183 OpenEarsSampleApp[197:5624] Done with idngram2lm
2014-09-22 10:11:40.183 OpenEarsSampleApp[197:5624] Starting sphinx_lm_convert
2014-09-22 10:11:40.190 OpenEarsSampleApp[197:5624] Finishing sphinx_lm_convert
2014-09-22 10:11:40.193 OpenEarsSampleApp[197:5624] Done creating language model with CMUCLMTK in 0.069508 seconds.
2014-09-22 10:11:40.239 OpenEarsSampleApp[197:5624] I’m done running performDictionaryLookup and it took 0.034399 seconds
2014-09-22 10:11:40.246 OpenEarsSampleApp[197:5624] I’m done running dynamic language model generation and it took 0.156091 seconds
2014-09-22 10:11:40.247 OpenEarsSampleApp[197:5624] Dynamic language generator completed successfully, you can find your new files FirstOpenEarsDynamicLanguageModel.DMP
and
FirstOpenEarsDynamicLanguageModel.dic
at the paths
/var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/FirstOpenEarsDynamicLanguageModel.DMP
and
/var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/FirstOpenEarsDynamicLanguageModel.dic
2014-09-22 10:11:40.247 OpenEarsSampleApp[197:5624] acousticModelPath is /private/var/mobile/Containers/Bundle/Application/16FB9F4F-0683-499B-A759-FB6928A35CFC/OpenEarsSampleApp.app/AcousticModelEnglish.bundle
2014-09-22 10:11:40.253 OpenEarsSampleApp[197:5624] Starting dynamic language model generation
2014-09-22 10:11:40.260 OpenEarsSampleApp[197:5624] Able to open /var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/SecondOpenEarsDynamicLanguageModel.corpus for reading
2014-09-22 10:11:40.262 OpenEarsSampleApp[197:5624] Able to open /var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/SecondOpenEarsDynamicLanguageModel_pipe.txt for writing
2014-09-22 10:11:40.262 OpenEarsSampleApp[197:5624] Starting text2wfreq_impl
2014-09-22 10:11:40.271 OpenEarsSampleApp[197:5624] Done with text2wfreq_impl
2014-09-22 10:11:40.271 OpenEarsSampleApp[197:5624] Able to open /var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/SecondOpenEarsDynamicLanguageModel_pipe.txt for reading.
2014-09-22 10:11:40.273 OpenEarsSampleApp[197:5624] Able to open /var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/SecondOpenEarsDynamicLanguageModel.vocab for reading.
2014-09-22 10:11:40.273 OpenEarsSampleApp[197:5624] Starting wfreq2vocab
2014-09-22 10:11:40.276 OpenEarsSampleApp[197:5624] Done with wfreq2vocab
2014-09-22 10:11:40.277 OpenEarsSampleApp[197:5624] Starting text2idngram
2014-09-22 10:11:40.293 OpenEarsSampleApp[197:5624] Done with text2idngram
2014-09-22 10:11:40.311 OpenEarsSampleApp[197:5624] Starting idngram2lm
2014-09-22 10:11:40.323 OpenEarsSampleApp[197:5624] Done with idngram2lm
2014-09-22 10:11:40.323 OpenEarsSampleApp[197:5624] Starting sphinx_lm_convert
2014-09-22 10:11:40.328 OpenEarsSampleApp[197:5624] Finishing sphinx_lm_convert
2014-09-22 10:11:40.330 OpenEarsSampleApp[197:5624] Done creating language model with CMUCLMTK in 0.076958 seconds.
2014-09-22 10:11:40.373 OpenEarsSampleApp[197:5624] The word QUIDNUNC was not found in the dictionary /private/var/mobile/Containers/Bundle/Application/16FB9F4F-0683-499B-A759-FB6928A35CFC/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/LanguageModelGeneratorLookupList.text/LanguageModelGeneratorLookupList.text.
2014-09-22 10:11:40.373 OpenEarsSampleApp[197:5624] Now using the fallback method to look up the word QUIDNUNC
2014-09-22 10:11:40.373 OpenEarsSampleApp[197:5624] If this is happening more frequently than you would expect, the most likely cause for it is since you are using the English phonetic lookup dictionary is that your words are not in English or aren’t dictionary words, or that you are submitting the words in lowercase when they need to be entirely written in uppercase.
2014-09-22 10:11:40.377 OpenEarsSampleApp[197:5624] Using convertGraphemes for the word or phrase QUIDNUNC which doesn’t appear in the dictionary
2014-09-22 10:11:40.409 OpenEarsSampleApp[197:5624] I’m done running performDictionaryLookup and it took 0.072901 seconds
2014-09-22 10:11:40.420 OpenEarsSampleApp[197:5624] I’m done running dynamic language model generation and it took 0.172638 seconds
2014-09-22 10:11:40.421 OpenEarsSampleApp[197:5624] Dynamic language generator completed successfully, you can find your new files SecondOpenEarsDynamicLanguageModel.DMP
and
SecondOpenEarsDynamicLanguageModel.dic
at the paths
/var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/SecondOpenEarsDynamicLanguageModel.DMP
and
/var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/SecondOpenEarsDynamicLanguageModel.dic
2014-09-22 10:11:40.421 OpenEarsSampleApp[197:5624] Welcome to the OpenEars sample project. This project understands the words:
BACKWARD,
CHANGE,
FORWARD,
GO,
LEFT,
MODEL,
RIGHT,
TURN,
and if you say “CHANGE MODEL” it will switch to its dynamically-generated model which understands the words:
CHANGE,
MODEL,
MONDAY,
TUESDAY,
WEDNESDAY,
THURSDAY,
FRIDAY,
SATURDAY,
SUNDAY,
QUIDNUNC
2014-09-22 10:11:40.430 OpenEarsSampleApp[197:5624] User gave mic permission for this app.
2014-09-22 10:11:40.430 OpenEarsSampleApp[197:5624] Leaving sample rate at the default of 16000.
2014-09-22 10:11:40.431 OpenEarsSampleApp[197:5624] The audio session has never been initialized so we will do that now.
2014-09-22 10:11:40.431 OpenEarsSampleApp[197:5624] Checking and resetting all audio session settings.
2014-09-22 10:11:40.432 OpenEarsSampleApp[197:5624] audioCategory is incorrect, we will change it.
2014-09-22 10:11:40.432 OpenEarsSampleApp[197:5624] audioCategory is now on the correct setting of kAudioSessionCategory_PlayAndRecord.
2014-09-22 10:11:40.432 OpenEarsSampleApp[197:5624] bluetoothInput is incorrect, we will change it.
2014-09-22 10:11:40.433 OpenEarsSampleApp[197:5624] bluetooth input is now on the correct setting of 1.
2014-09-22 10:11:40.434 OpenEarsSampleApp[197:5624] Output Device: HeadsetBT.
2014-09-22 10:11:40.435 OpenEarsSampleApp[197:5624] preferredBufferSize is incorrect, we will change it.
2014-09-22 10:11:40.435 OpenEarsSampleApp[197:5624] PreferredBufferSize is now on the correct setting of 0.128000.
2014-09-22 10:11:40.435 OpenEarsSampleApp[197:5624] preferredSampleRateCheck is incorrect, we will change it.
2014-09-22 10:11:40.436 OpenEarsSampleApp[197:5624] preferred hardware sample rate is now on the correct setting of 16000.000000.
2014-09-22 10:11:40.454 OpenEarsSampleApp[197:5624] AudioSessionManager startAudioSession has reached the end of the initialization.
2014-09-22 10:11:40.454 OpenEarsSampleApp[197:5624] Exiting startAudioSession.
2014-09-22 10:11:40.458 OpenEarsSampleApp[197:5683] setSecondsOfSilence value of 0.000000 was too large or too small or was NULL, using default of 0.700000.
2014-09-22 10:11:40.459 OpenEarsSampleApp[197:5683] Project has these words or phrases in its dictionary:
BACKWARD
CHANGE
FORWARD
GO
LEFT
MODEL
RIGHT
TURN
2014-09-22 10:11:40.459 OpenEarsSampleApp[197:5683] Recognition loop has started
INFO: file_omitted(0): Parsing command line:
\
-lm /var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/FirstOpenEarsDynamicLanguageModel.DMP \
-beam 1e-66 \
-bestpath yes \
-dict /var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/FirstOpenEarsDynamicLanguageModel.dic \
-hmm /private/var/mobile/Containers/Bundle/Application/16FB9F4F-0683-499B-A759-FB6928A35CFC/OpenEarsSampleApp.app/AcousticModelEnglish.bundle \
-lw 6.500000 \
-samprate 16000
Current configuration:
[NAME] [DEFLT] [VALUE]
-agc none none
-agcthresh 2.0 2.000000e+00
-alpha 0.97 9.700000e-01
-argfile
-ascale 20.0 2.000000e+01
-aw 1 1
-backtrace no no
-beam 1e-48 1.000000e-66
-bestpath yes yes
-bestpathlw 9.5 9.500000e+00
-bghist no no
-ceplen 13 13
-cmn current current
-cmninit 8.0 8.0
-compallsen no no
-debug 0
-dict /var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/FirstOpenEarsDynamicLanguageModel.dic
-dictcase no no
-dither no no
-doublebw no no
-ds 1 1
-fdict
-feat 1s_c_d_dd 1s_c_d_dd
-featparams
-fillprob 1e-8 1.000000e-08
-frate 100 100
-fsg
-fsgusealtpron yes yes
-fsgusefiller yes yes
-fwdflat yes yes
-fwdflatbeam 1e-64 1.000000e-64
-fwdflatefwid 4 4
-fwdflatlw 8.5 8.500000e+00
-fwdflatsfwin 25 25
-fwdflatwbeam 7e-29 7.000000e-29
-fwdtree yes yes
-hmm /private/var/mobile/Containers/Bundle/Application/16FB9F4F-0683-499B-A759-FB6928A35CFC/OpenEarsSampleApp.app/AcousticModelEnglish.bundle
-input_endian little little
-jsgf
-kdmaxbbi -1 -1
-kdmaxdepth 0 0
-kdtree
-latsize 5000 5000
-lda
-ldadim 0 0
-lextreedump 0 0
-lifter 0 0
-lm /var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/FirstOpenEarsDynamicLanguageModel.DMP
-lmctl
-lmname default default
-logbase 1.0001 1.000100e+00
-logfn
-logspec no no
-lowerf 133.33334 1.333333e+02
-lpbeam 1e-40 1.000000e-40
-lponlybeam 7e-29 7.000000e-29
-lw 6.5 6.500000e+00
-maxhmmpf -1 -1
-maxnewoov 20 20
-maxwpf -1 -1
-mdef
-mean
-mfclogdir
-min_endfr 0 0
-mixw
-mixwfloor 0.0000001 1.000000e-07
-mllr
-mmap yes yes
-ncep 13 13
-nfft 512 512
-nfilt 40 40
-nwpen 1.0 1.000000e+00
-pbeam 1e-48 1.000000e-48
-pip 1.0 1.000000e+00
-pl_beam 1e-10 1.000000e-10
-pl_pbeam 1e-5 1.000000e-05
-pl_window 0 0
-rawlogdir
-remove_dc no no
-round_filters yes yes
-samprate 16000 1.600000e+04
-seed -1 -1
-sendump
-senlogdir
-senmgau
-silprob 0.005 5.000000e-03
-smoothspec no no
-svspec
-tmat
-tmatfloor 0.0001 1.000000e-04
-topn 4 4
-topn_beam 0 0
-toprule
-transform legacy legacy
-unit_area yes yes
-upperf 6855.4976 6.855498e+03
-usewdphones no no
-uw 1.0 1.000000e+00
-var
-varfloor 0.0001 1.000000e-04
-varnorm no no
-verbose no no
-warp_params
-warp_type inverse_linear inverse_linear
-wbeam 7e-29 7.000000e-29
-wip 0.65 6.500000e-01
-wlen 0.025625 2.562500e-02
INFO: file_omitted(0): Parsing command line:
\
-nfilt 20 \
-lowerf 1 \
-upperf 4000 \
-wlen 0.025 \
-transform dct \
-round_filters no \
-remove_dc yes \
-svspec 0-12/13-25/26-38 \
-feat 1s_c_d_dd \
-agc none \
-cmn current \
-cmninit 47 \
-varnorm no
Current configuration:
[NAME] [DEFLT] [VALUE]
-agc none none
-agcthresh 2.0 2.000000e+00
-alpha 0.97 9.700000e-01
-ceplen 13 13
-cmn current current
-cmninit 8.0 47
-dither no no
-doublebw no no
-feat 1s_c_d_dd 1s_c_d_dd
-frate 100 100
-input_endian little little
-lda
-ldadim 0 0
-lifter 0 0
-logspec no no
-lowerf 133.33334 1.000000e+00
-ncep 13 13
-nfft 512 512
-nfilt 40 20
-remove_dc no yes
-round_filters yes no
-samprate 16000 1.600000e+04
-seed -1 -1
-smoothspec no no
-svspec 0-12/13-25/26-38
-transform legacy dct
-unit_area yes yes
-upperf 6855.4976 4.000000e+03
-varnorm no no
-verbose no no
-warp_params
-warp_type inverse_linear inverse_linear
-wlen 0.025625 2.500000e-02
INFO: file_omitted(0): Parsed model-specific feature parameters from /private/var/mobile/Containers/Bundle/Application/16FB9F4F-0683-499B-A759-FB6928A35CFC/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/feat.params
INFO: file_omitted(0): Initializing feature stream to type: ‘1s_c_d_dd’, ceplen=13, CMN=’current’, VARNORM=’no’, AGC=’none’
INFO: file_omitted(0): mean[0]= 12.00, mean[1..12]= 0.0
INFO: file_omitted(0): Using subvector specification 0-12/13-25/26-38
INFO: file_omitted(0): Reading model definition: /private/var/mobile/Containers/Bundle/Application/16FB9F4F-0683-499B-A759-FB6928A35CFC/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/mdef
INFO: file_omitted(0): Found byte-order mark BMDF, assuming this is a binary mdef file
INFO: file_omitted(0): Reading binary model definition: /private/var/mobile/Containers/Bundle/Application/16FB9F4F-0683-499B-A759-FB6928A35CFC/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/mdef
2014-09-22 10:11:40.488 OpenEarsSampleApp[197:5678] Audio route has changed for the following reason:
2014-09-22 10:11:40.495 OpenEarsSampleApp[197:5678] There has been a change of category
2014-09-22 10:11:40.495 OpenEarsSampleApp[197:5678] The previous audio route was HeadphonesBT
2014-09-22 10:11:40.496 OpenEarsSampleApp[197:5678] This is not a case in which OpenEars performs a route change voluntarily. At the close of this function, the audio route is HeadsetBT
INFO: file_omitted(0): 50 CI-phone, 143047 CD-phone, 3 emitstate/phone, 150 CI-sen, 5150 Sen, 27135 Sen-Seq
INFO: file_omitted(0): Reading HMM transition probability matrices: /private/var/mobile/Containers/Bundle/Application/16FB9F4F-0683-499B-A759-FB6928A35CFC/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/transition_matrices
INFO: file_omitted(0): Attempting to use SCHMM computation module
INFO: file_omitted(0): Reading mixture gaussian parameter: /private/var/mobile/Containers/Bundle/Application/16FB9F4F-0683-499B-A759-FB6928A35CFC/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/means
INFO: file_omitted(0): 1 codebook, 3 feature, size:
INFO: file_omitted(0): 256×13
INFO: file_omitted(0): 256×13
INFO: file_omitted(0): 256×13
INFO: file_omitted(0): Reading mixture gaussian parameter: /private/var/mobile/Containers/Bundle/Application/16FB9F4F-0683-499B-A759-FB6928A35CFC/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/variances
INFO: file_omitted(0): 1 codebook, 3 feature, size:
INFO: file_omitted(0): 256×13
INFO: file_omitted(0): 256×13
INFO: file_omitted(0): 256×13
INFO: file_omitted(0): 0 variance values floored
INFO: file_omitted(0): Loading senones from dump file /private/var/mobile/Containers/Bundle/Application/16FB9F4F-0683-499B-A759-FB6928A35CFC/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/sendump
INFO: file_omitted(0): BEGIN FILE FORMAT DESCRIPTION
INFO: file_omitted(0): Using memory-mapped I/O for senones
INFO: file_omitted(0): Maximum top-N: 4 Top-N beams: 0 0 0
INFO: file_omitted(0): Allocating 4115 * 20 bytes (80 KiB) for word entries
INFO: file_omitted(0): Reading main dictionary: /var/mobile/Containers/Data/Application/CC4883AD-BF78-460E-A31A-91D93BECC7BD/Library/Caches/FirstOpenEarsDynamicLanguageModel.dic
INFO: file_omitted(0): Allocated 0 KiB for strings, 0 KiB for phones
INFO: file_omitted(0): 8 words read
INFO: file_omitted(0): Reading filler dictionary: /private/var/mobile/Containers/Bundle/Application/16FB9F4F-0683-499B-A759-FB6928A35CFC/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/noisedict
INFO: file_omitted(0): Allocated 0 KiB for strings, 0 KiB for phones
INFO: file_omitted(0): 11 words read
INFO: file_omitted(0): Building PID tables for dictionary
INFO: file_omitted(0): Allocating 50^3 * 2 bytes (244 KiB) for word-initial triphones
2014-09-22 10:11:40.537 OpenEarsSampleApp[197:5624] Pocketsphinx is starting up.
INFO: file_omitted(0): Allocated 30200 bytes (29 KiB) for word-final triphones
INFO: file_omitted(0): Allocated 30200 bytes (29 KiB) for single-phone word triphones
INFO: file_omitted(0): No \data\ mark in LM file
INFO: file_omitted(0): Will use memory-mapped I/O for LM file
INFO: file_omitted(0): ngrams 1=10, 2=16, 3=8
INFO: file_omitted(0): 10 = LM.unigrams(+trailer) read
INFO: file_omitted(0): 16 = LM.bigrams(+trailer) read
INFO: file_omitted(0): 8 = LM.trigrams read
INFO: file_omitted(0): 3 = LM.prob2 entries read
INFO: file_omitted(0): 3 = LM.bo_wt2 entries read
INFO: file_omitted(0): 2 = LM.prob3 entries read
INFO: file_omitted(0): 1 = LM.tseg_base entries read
INFO: file_omitted(0): 10 = ascii word strings read
INFO: file_omitted(0): 8 unique initial diphones
INFO: file_omitted(0): 0 root, 0 non-root channels, 12 single-phone words
INFO: file_omitted(0): Creating search tree
INFO: file_omitted(0): before: 0 root, 0 non-root channels, 12 single-phone words
INFO: file_omitted(0): after: max nonroot chan increased to 145
INFO: file_omitted(0): after: 8 root, 17 non-root channels, 11 single-phone words
INFO: file_omitted(0): fwdflat: min_ef_width = 4, max_sf_win = 25
2014-09-22 10:11:40.579 OpenEarsSampleApp[197:5683] Starting openAudioDevice on the device.
2014-09-22 10:11:40.579 OpenEarsSampleApp[197:5683] Audio unit wrapper successfully created.
2014-09-22 10:11:40.591 OpenEarsSampleApp[197:5683] Set audio route to HeadsetBT
2014-09-22 10:11:40.593 OpenEarsSampleApp[197:5683] There is no CMN plist so we are using the fresh CMN value 47.000000.
2014-09-22 10:11:40.594 OpenEarsSampleApp[197:5683] Checking and resetting all audio session settings.
2014-09-22 10:11:40.595 OpenEarsSampleApp[197:5683] audioCategory is correct, we will leave it as it is.
2014-09-22 10:11:40.596 OpenEarsSampleApp[197:5683] bluetoothInput is correct, we will leave it as it is.
2014-09-22 10:11:40.596 OpenEarsSampleApp[197:5683] Output Device: HeadsetBT.
2014-09-22 10:11:40.597 OpenEarsSampleApp[197:5683] preferredBufferSize is incorrect, we will change it.
2014-09-22 10:11:40.599 OpenEarsSampleApp[197:5683] PreferredBufferSize is now on the correct setting of 0.128000.
2014-09-22 10:11:40.600 OpenEarsSampleApp[197:5683] preferredSampleRateCheck is correct, we will leave it as it is.
2014-09-22 10:11:40.600 OpenEarsSampleApp[197:5683] Setting the variables for the device and starting it.
2014-09-22 10:11:40.601 OpenEarsSampleApp[197:5683] Looping through ringbuffer sections and pre-allocating them.
2014-09-22 10:11:42.219 OpenEarsSampleApp[197:5683] Started audio output unit.
2014-09-22 10:11:42.220 OpenEarsSampleApp[197:5683] Calibration has started
2014-09-22 10:11:42.220 OpenEarsSampleApp[197:5624] Pocketsphinx calibration has started.
2014-09-22 10:11:44.423 OpenEarsSampleApp[197:5683] cont_ad_calib failed, stopping.
2014-09-22 10:11:44.425 OpenEarsSampleApp[197:5624] Setting up the continuous recognition loop has failed for some reason, please turn on [OpenEarsLogging startOpenEarsLogging] in OpenEarsConfig.h to learn more.
September 22, 2014 at 7:22 pm #1022604 Halle Winkler (Politepix)
OK, sorry to be the bearer of bad news, but this log doesn’t describe any bluetooth issues and I unfortunately don’t have a device that replicates it. I implicitly believe it’s happening, but you’d need to do some troubleshooting in order to give me more to work with.
September 22, 2014 at 10:19 pm #1022605 wfilleman (Participant)
I agree, it’s not a useful log dump. Luckily the failure seems to be limited to the cont_ad_calib call in the framework. When I get some time I’ll see if I can dig into this function and figure it out.
“I unfortunately don’t have a device that replicates it”
– Does this mean your bluetooth test device works fine? Or does this mean you don’t have a bluetooth headset to test with?
From the user reports and my testing, I believe any bluetooth headset will expose the issue.
Wes
September 23, 2014 at 8:37 pm #1022610 wfilleman (Participant)
Ok, got some info for you.
The first failure occurs in the find_thresh(cont_ad_t * r) function.
The issue here is that the detected max input levels are way above the defined max_noise level of 70. The Bluetooth-connected headset is coming in at 98 or so. So the first thing I did was raise CONT_AD_MAX_NOISE from 70 to 100.
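As a sketch, the change is just bumping the define (this assumes your copy of the Sphinx continuous-audio source declares it as a preprocessor constant; adjust to wherever your sphinxbase sources define it):

/* Assumption: raised from the stock value of 70 so calibration tolerates
   the ~98 max-noise level reported by the bluetooth input. */
#define CONT_AD_MAX_NOISE 100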
That got me through the sound calibration, but there’s another problem, and this one I have no idea how to solve.
The first vocal input seems to work, but after the first recognition, something happens to the input stream from the mic. The function getDecibels in ContinuousAudioUnit.m starts reporting that the sampleDB value is “-inf”. Can’t say I’ve seen that before.
The logic here in getDecibels is specifically filtering out inf values, so someone thought of this or has seen it before.
If I turn off the headset everything goes back to normal and works.
My assumption here is the inf value indicates that the mic data is trashed and shouldn’t be used. So, the big question is, any ideas on why that’s happening?
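For reference, here’s a minimal, hypothetical sketch (not the actual getDecibels code) of how a level meter ends up at -inf on a dead buffer, and the kind of clamping that maps it to the -120 showing in the UI:

#include <math.h>
#include <stdint.h>

// Hypothetical level meter: log10f(0) is -infinity, so an all-zero
// (dead) input buffer produces a dB reading of -inf.
static float decibelsForBuffer(const int16_t *samples, int count) {
    float sumOfSquares = 0.0f;
    for (int i = 0; i < count; i++) {
        float normalized = samples[i] / 32768.0f; // 16-bit PCM to the range -1..1
        sumOfSquares += normalized * normalized;
    }
    float rms = sqrtf(sumOfSquares / count);
    float decibels = 20.0f * log10f(rms); // -inf when every sample is zero
    if (isinf(decibels) || isnan(decibels)) {
        decibels = -120.0f; // clamp impossible values, matching the -120 floor seen here
    }
    return decibels;
}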
I’ve tried this on an iPhone and an iPad Mini running iOS 8.0. Same results.
Thanks Halle,
Wes
September 24, 2014 at 10:20 am #1022616 Halle Winkler (Politepix)
OK, well first of all, I’m sorry you’re seeing this issue and I appreciate your willingness to dig into it. I’m a bit torn here because the ancient-code AudioSessionManager and cont_ad_calib are both 100% gone in the under-testing OpenEars 2.0 due to its new VAD/lack of calibration requirements/complete code modernization, so any serious work on this is a) going in the trash in the near term and b) adding up tasks before it is possible to ship the version that probably doesn’t have this issue. However, it’s going to be a bit before it is ready, so I can’t comfortably recommend waiting for the release or possible beta release if this is a serious shipping issue for you. These are the kinds of tricky situations which come up when doing a major upgrade, but I think it is going to be worth it with the improvements that are coming. I definitely want to see if I can help you with this and, if it’s all right, let’s see if we can keep making progress on it together without my putting it into my current task list for investigation. If that is possible, I appreciate it.
It is possible but maybe a little unlikely that bluetooth just got a lot louder in iOS 8. What I think is more likely is that it isn’t working, and a non-audio signal is being passed.
“The logic here in getDecibels is specifically filtering out inf values, so someone thought of this or has seen it before.”
I have always checked for impossible values as part of the general error checking for bad values but I don’t recall that it was due to a particular symptom that I saw, sorry. Since that code is over four years old (but still chugging along nicely) it is unlikely that a reason for inf values then overlaps with one now, SDK-wise.
The new audio session management code in OpenEars 2.0 currently sets up the audio category like this:
NSError *error = nil;
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord
                                 withOptions:AVAudioSessionCategoryOptionMixWithOthers | AVAudioSessionCategoryOptionAllowBluetooth | AVAudioSessionCategoryOptionDefaultToSpeaker
                                       error:&error];
if (error) {
    NSLog(@"Error setting up audio session category with options: %@", error);
}
If you use this modern category code rather than this in ContinuousAudioUnit.m:
UInt32 sessionCategory = kAudioSessionCategory_PlayAndRecord;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(sessionCategory), &sessionCategory);
Do you see any improvement? You may need to #import <AVFoundation/AVFoundation.h>.
Thanks Wes,
Halle
September 24, 2014 at 10:11 pm #1022620 wfilleman (Participant)
Thanks Halle, but unfortunately this didn’t have any effect.
I replaced the code as instructed, rebuilt and tested, but the result was the same.
What’s a little odd is that it works initially, so that leads me to believe the initial setup is correct or on the right track. It’s after pausing during the recognition state that it usually doesn’t come back (I get the inf values in getDecibels).
As soon as Apple gets iOS 8.0.1 figured out I’ll test on 8.0.1 to see if the issue persists.
September 24, 2014 at 10:24 pm #1022621 Halle Winkler (Politepix)
Sorry to hear it, can you describe this to me in more depth:
“It’s after pausing during the recognition state that it usually doesn’t come back”
What kind of pausing?
September 24, 2014 at 10:53 pm #1022622 wfilleman (Participant)
Sure thing:
In the sample app when you speak, the Pocketsphinx Input Levels will stop while Flite is speaking the recognized speech. After Flite is done speaking, I’ll see the Pocketsphinx Input Levels bounce around according to the DB levels of the mic input.
This all looks normal. Don’t want to try to recognize Flite speech.
With the Bluetooth mic attached, after Flite is done speaking on the first recognition, the Pocketsphinx Input Levels go to -120db and stay there. Meanwhile, under the hood, my custom debug statements are showing “inf” for the decibel levels.
September 24, 2014 at 10:55 pm #1022623 Halle Winkler (Politepix)
Does it recognize speech successfully from the period before Flite speaks?
September 24, 2014 at 11:11 pm #1022625 wfilleman (Participant)
Yes, it does. The first one through works. After that, it dies as described in my previous post. That’s what’s odd. It does absolutely initially work. But after the first one it fails.
September 24, 2014 at 11:19 pm #1022626 Halle Winkler (Politepix)
What happens if you don’t use Flite speech at all in the sample app (i.e. don’t cause anything to suspend for any reason)? Can it make multiple recognitions?
September 24, 2014 at 11:27 pm #1022627 wfilleman (Participant)
Hold on… Checking now.
September 24, 2014 at 11:46 pm #1022628 wfilleman (Participant)
Ok, more results:
Interestingly enough, it looks like there’s some back-room communication between Flite and Pocketsphinx when Flite is speaking, as the suspend call isn’t coming from the ViewController.
Anyway, I took out all the calls for Flite to speak text, and while Pocketsphinx now never gets suspended, the result was that the mic stream never configures (always inf for decibel values) when using a bluetooth headset. The built-in mic works fine.
When I added back in the initial Flite speak text @”Welcome to OpenEars.”, the bluetooth-connected mic configures but then fails as described above: the first recognized speech seems to work, and after that it fails with the decibel values going to inf.
So, it seems like it’s necessary to have some audio output to get the mic to configure. That’s quite strange. Not sure what to make of that result.
September 25, 2014 at 10:37 am #1022631 Halle Winkler (Politepix)
“Interestingly enough, it looks like there’s some back-room communication between Flite and Pocketsphinx when Flite is speaking, as the suspend call isn’t coming from the ViewController.”
That’s normal – a feature of OpenEars is that it does smart management of suspend/resume during speech that it creates, depending on the audio route. It is expected that it will do its own suspend/resume call before and after Flite speech if there is the possibility that the audio route includes a speaker that emits sound into the open air where it can be picked up by the recognition engine; otherwise the TTS would usually end up being analyzed by the recognition engine. Manual suspend/resume is for you to avoid recognizing outgoing sound that your app creates; OpenEars handles it for outgoing sound that it is responsible for.
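For sound your own app creates, the manual counterpart is roughly this, inside your view controller (method names as used by 1.x’s PocketsphinxController; the playback call is a hypothetical stand-in for whatever your app does):

[self.pocketsphinxController suspendRecognition]; // stop listening so the playback isn't transcribed
[self playMyOwnSoundEffect]; // hypothetical app-side playback
// ...once your playback has finished:
[self.pocketsphinxController resumeRecognition];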
What is an example of your speech that it can recognize when Flite speech is on and the first recognition works?
September 25, 2014 at 4:47 pm #1022632 wfilleman (Participant)
It’ll recognize any of the words in the sample app. Example: “MODEL” or “CHANGE”
September 25, 2014 at 4:54 pm #1022633 Halle Winkler (Politepix)
I guess I’m having some trouble understanding how it can recognize speech when there is a suspended TTS utterance before you start speaking, but not if you just start talking to it after you get the OpenEarsEventsObserver callback that listening has started. You comment out all the instances of “say:” etc., you start the app and wait until it says “listening”, and you speak words that it knows and nothing happens, but if it first says “Welcome to OpenEars” there is a different result?
September 25, 2014 at 5:24 pm #1022634 wfilleman (Participant)
Yes, it is quite odd.
Summary of tests:
Test Case 1:
Comment out all cases of “say”.
Start sample app with NO bluetooth connected mic.
Wait for “listening”
Say “change”, app recognizes “change”
Say “model”, app recognizes “model”.

Test Case 2:
Comment out all cases of “say”.
Start sample app WITH bluetooth connected mic.
Wait for “listening”
Say “change”, nothing happens. Decibel value is -120.
Say “model”, nothing happens. Decibel value is -120.

Test Case 3:
Comment out all cases of “say” except for the first “Welcome to OpenEars”
Start sample app WITH bluetooth connected mic.
Wait for “listening”
Say “change”, app recognizes “change”.
Say “model”, nothing happens. Decibel value is -120.
Decibel value stays at -120 (internally it’s inf).

Test Case 4:
Leave in all cases of “say”.
Start sample app WITH bluetooth connected mic.
Wait for “listening”
Say “change”, app recognizes “change”.
Say “model”, nothing happens. Decibel value is -120.
Decibel value stays at -120 (internally it’s inf).
September 26, 2014 at 11:25 am #1022641 Halle Winkler (Politepix)
OK, this will be investigated as time allows.
September 26, 2014 at 3:14 pm #1022645 wfilleman (Participant)
Thanks Halle, two more data points for you:
1. I tried these tests again on iOS 8.0.2. Same results. iOS 8.0.2 didn’t fix what I’m seeing.
2. I noticed something this morning that I now believe is why getDecibels is going to inf. In the AudioUnitRenderCallback, inNumberFrames is usually 2048, except with bluetooth in the failure scenarios:
– Using the internal mic, I see 2048 frames in each callback. Everything works as intended.
– With the bluetooth headset connected, on initial startup I see 4096 frames in the AudioUnitRenderCallback UNTIL Flite says “Welcome to OpenEars”. Then I see 2048 frames in each callback. I can then say “CHANGE” into the bluetooth headset and have it recognized. After the recognized speech and AudioUnitRenderCallback is fired continuously again, I see the number of frames jump back to 4096 and getDecibels goes to inf. This is the failure scenario.
Hopefully this helps, but there is a correlation between 4096 inNumberFrames in the AudioUnitRenderCallback and failing to recognize speech. When the AudioUnitRenderCallback is producing 2048 frames, everything works fine.
Also, just to confirm: when Flite speaks, I’ll see the frame count go back to 2048 from 4096. So there is something Flite is doing that positively impacts the number of frames going into the AudioUnitRenderCallback when a bluetooth headset is connected.
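For anyone reproducing this, here is a sketch of watching for the flip from inside an Audio Unit render callback (the callback here is illustrative, not OpenEars’ actual one):

#include <AudioUnit/AudioUnit.h>
#include <stdio.h>

static OSStatus renderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData) {
    static UInt32 lastNumberFrames = 0;
    if (inNumberFrames != lastNumberFrames) { // fires on the 2048 <-> 4096 flips described above
        printf("inNumberFrames changed: %u -> %u\n",
               (unsigned)lastNumberFrames, (unsigned)inNumberFrames);
        lastNumberFrames = inNumberFrames;
    }
    // ...the normal AudioUnitRender/ringbuffer work would go here...
    return noErr;
}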
Wes
October 11, 2014 at 10:15 pm #1022716 wfilleman (Participant)
Hi Halle,
I just tried this test on iOS 8.1 beta 2. Same results as my previous post. I had heard there were some BT bugs in iOS 8 that were fixed in 8.1, but the fixes didn’t change what I’m seeing in the OpenEars sample app. I was hoping that would have been the answer, but unfortunately it’s not going to be that easy.
Wes
October 17, 2014 at 9:13 pm #1022793 morchella (Participant)
As another data point, I’m seeing the same behavior with a Beats Powerbeats bluetooth headset. I see the cont_ad_calib failed message, both in the sample app and in my own. I’m on 1.71 (also using RapidEars in my own app).
October 24, 2014 at 7:54 pm #1022843 jackflips (Participant)
Hey Halle,
We’re also seeing the same cont_ad_calib problem with bluetooth headsets. Has there been any progress in fixing this issue?
October 25, 2014 at 10:16 pm #1022854 Halle Winkler (Politepix)
Because this is not an issue with the next major version that is currently in development, since it doesn’t use calibration, I unfortunately have to mark this “won’t fix” so that it is possible to get the next major version out sooner rather than delaying it to patch this issue in the current version that still uses calibration.
October 26, 2014 at 12:15 am #1022856 jackflips (Participant)
Do you have any rough ETA for the next version? Our app relies on a bluetooth headset for input into OpenEars, so this feature is essential.
October 26, 2014 at 8:53 am #1022860 Halle Winkler (Politepix)
No, sorry. I hear you and understand that it’s important to your apps, and the new version will be out as soon as it’s finished. It’s a bummer that an audio API Apple was supporting stopped working the same way in iOS 8 and caused the experimental bluetooth support in OpenEars to change its behavior, although it’s also a positive thing that at the point that it happened I was already heavily into development of a version which doesn’t have the same dependencies, so it won’t be an issue for very much longer.
You may also wish to take a look at the fixes I suggested earlier in this discussion to see if they have a different effect in your app, or if you can see a fix yourself based on them, since the class which has changed its behavior has visible and changeable source.
December 5, 2014 at 11:27 pm #1023225 Halle Winkler (Politepix)
Bluetooth works for me in the just-released OpenEars 2.0 (https://www.politepix.com/2014/12/05/openears-2-0-and-version-2-0-of-all-plugins-out-now/) – let me know if you are seeing something different.
December 8, 2014 at 9:27 pm #1023307 wfilleman (Participant)
Great work Halle! Bluetooth works for me perfectly in OE 2.0.
I’ve got an unrelated question that I’ll start a new topic on.
Wes
December 8, 2014 at 9:31 pm #1023309 Halle Winkler (Politepix)
Super, glad to hear it’s working for you and thanks for letting me know.
April 14, 2016 at 10:06 pm #1030059 ulysses (Participant)
Dear all,
Running the Sample App I still have this issue with the following configuration:
- OpenEars version 2.501
- Xcode 7.3
- iOS 9.3.1
- iPhone 6s
A beep tone is played in the Bluetooth headset as soon as the Sample App starts, and the device is disconnected.
The Bluetooth headset is functional (YouTube videos are played without any problem).
The Sample App works fine without the Bluetooth headset (e.g. with wired earphones).
Any idea?

BR
ulysses
April 14, 2016 at 10:13 pm #1030060 Halle Winkler (Politepix)
Please check out the post “Please read before you post – how to troubleshoot and provide logging info” here so you can see how to turn on and share the logging that provides troubleshooting information for this kind of issue.
April 15, 2016 at 10:03 am #1030071 ulysses (Participant)
Hello Halle,
here is the logging Xcode generated with the same configuration as above.
At 2016-04-15 09:57:14.608 there is indeed a hint regarding the Bluetooth connection:
2016-04-15 09:57:12.297 OpenEarsSampleApp[1025:510715] Starting OpenEars logging for OpenEars version 2.501 on 64-bit device (or build): iPhone running iOS version: 9.300000
2016-04-15 09:57:12.298 OpenEarsSampleApp[1025:510715] Creating shared instance of OEPocketsphinxController
2016-04-15 09:57:12.327 OpenEarsSampleApp[1025:510715] Starting dynamic language model generation
INFO: ngram_model_arpa_legacy.c(504): ngrams 1=10, 2=16, 3=8
INFO: ngram_model_arpa_legacy.c(136): Reading unigrams
INFO: ngram_model_arpa_legacy.c(543): 10 = #unigrams created
INFO: ngram_model_arpa_legacy.c(196): Reading bigrams
INFO: ngram_model_arpa_legacy.c(561): 16 = #bigrams created
INFO: ngram_model_arpa_legacy.c(562): 3 = #prob2 entries
INFO: ngram_model_arpa_legacy.c(570): 3 = #bo_wt2 entries
INFO: ngram_model_arpa_legacy.c(293): Reading trigrams
INFO: ngram_model_arpa_legacy.c(583): 8 = #trigrams created
INFO: ngram_model_arpa_legacy.c(584): 2 = #prob3 entries
INFO: ngram_model_dmp_legacy.c(521): Building DMP model…
INFO: ngram_model_dmp_legacy.c(551): 10 = #unigrams created
INFO: ngram_model_dmp_legacy.c(652): 16 = #bigrams created
INFO: ngram_model_dmp_legacy.c(653): 3 = #prob2 entries
INFO: ngram_model_dmp_legacy.c(660): 3 = #bo_wt2 entries
INFO: ngram_model_dmp_legacy.c(664): 8 = #trigrams created
INFO: ngram_model_dmp_legacy.c(665): 2 = #prob3 entries
2016-04-15 09:57:12.353 OpenEarsSampleApp[1025:510715] Done creating language model with CMUCLMTK in 0.026485 seconds.
2016-04-15 09:57:12.354 OpenEarsSampleApp[1025:510715] Since there is no cached version, loading the language model lookup list for the acoustic model called AcousticModelEnglish
2016-04-15 09:57:12.386 OpenEarsSampleApp[1025:510715] I’m done running performDictionaryLookup and it took 0.025546 seconds
2016-04-15 09:57:12.414 OpenEarsSampleApp[1025:510715] I’m done running dynamic language model generation and it took 0.111125 seconds
2016-04-15 09:57:12.418 OpenEarsSampleApp[1025:510715] Starting dynamic language model generation
INFO: ngram_model_arpa_legacy.c(504): ngrams 1=12, 2=19, 3=10
INFO: ngram_model_arpa_legacy.c(136): Reading unigrams
INFO: ngram_model_arpa_legacy.c(543): 12 = #unigrams created
INFO: ngram_model_arpa_legacy.c(196): Reading bigrams
INFO: ngram_model_arpa_legacy.c(561): 19 = #bigrams created
INFO: ngram_model_arpa_legacy.c(562): 3 = #prob2 entries
INFO: ngram_model_arpa_legacy.c(570): 3 = #bo_wt2 entries
INFO: ngram_model_arpa_legacy.c(293): Reading trigrams
INFO: ngram_model_arpa_legacy.c(583): 10 = #trigrams created
INFO: ngram_model_arpa_legacy.c(584): 2 = #prob3 entries
INFO: ngram_model_dmp_legacy.c(521): Building DMP model…
INFO: ngram_model_dmp_legacy.c(551): 12 = #unigrams created
INFO: ngram_model_dmp_legacy.c(652): 19 = #bigrams created
INFO: ngram_model_dmp_legacy.c(653): 3 = #prob2 entries
INFO: ngram_model_dmp_legacy.c(660): 3 = #bo_wt2 entries
INFO: ngram_model_dmp_legacy.c(664): 10 = #trigrams created
INFO: ngram_model_dmp_legacy.c(665): 2 = #prob3 entries
2016-04-15 09:57:12.444 OpenEarsSampleApp[1025:510715] Done creating language model with CMUCLMTK in 0.025300 seconds.
2016-04-15 09:57:12.444 OpenEarsSampleApp[1025:510715] Returning a cached version of LanguageModelGeneratorLookupList.text
2016-04-15 09:57:12.470 OpenEarsSampleApp[1025:510715] The word QUIDNUNC was not found in the dictionary of the acoustic model /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle. Now using the fallback method to look it up. If this is happening more frequently than you would expect, likely causes can be that you are entering words in another language from the one you are recognizing, or that there are symbols (including numbers) that need to be spelled out or cleaned up, or you are using your own acoustic model and there is an issue with either its phonetic dictionary or it lacks a g2p file. Please get in touch at the forums for assistance with the last two possible issues.
2016-04-15 09:57:12.471 OpenEarsSampleApp[1025:510715] Using convertGraphemes for the word or phrase quidnunc which doesn’t appear in the dictionary
2016-04-15 09:57:12.479 OpenEarsSampleApp[1025:510715] the graphemes “K W IH D N AH NG K” were created for the word QUIDNUNC using the fallback method.
2016-04-15 09:57:12.488 OpenEarsSampleApp[1025:510715] I’m done running performDictionaryLookup and it took 0.043651 seconds
2016-04-15 09:57:12.492 OpenEarsSampleApp[1025:510715] I’m done running dynamic language model generation and it took 0.077773 seconds
2016-04-15 09:57:12.492 OpenEarsSampleApp[1025:510715] Welcome to the OpenEars sample project. This project understands the words:
BACKWARD,
CHANGE,
FORWARD,
GO,
LEFT,
MODEL,
RIGHT,
TURN,
and if you say “CHANGE MODEL” it will switch to its dynamically-generated model which understands the words:
CHANGE,
MODEL,
MONDAY,
TUESDAY,
WEDNESDAY,
THURSDAY,
FRIDAY,
SATURDAY,
SUNDAY,
QUIDNUNC
2016-04-15 09:57:12.492 OpenEarsSampleApp[1025:510715] Attempting to start listening session from startListeningWithLanguageModelAtPath:
2016-04-15 09:57:12.494 OpenEarsSampleApp[1025:510715] User gave mic permission for this app.
2016-04-15 09:57:12.495 OpenEarsSampleApp[1025:510715] setSecondsOfSilence wasn’t set, using default of 0.700000.
2016-04-15 09:57:12.495 OpenEarsSampleApp[1025:510715] Successfully started listening session from startListeningWithLanguageModelAtPath:
2016-04-15 09:57:12.495 OpenEarsSampleApp[1025:510748] Starting listening.
2016-04-15 09:57:12.495 OpenEarsSampleApp[1025:510748] about to set up audio session
2016-04-15 09:57:12.496 OpenEarsSampleApp[1025:510748] Creating audio session with default settings.
2016-04-15 09:57:12.552 OpenEarsSampleApp[1025:510755] Audio route has changed for the following reason:
2016-04-15 09:57:12.557 OpenEarsSampleApp[1025:510755] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2016-04-15 09:57:13.579 OpenEarsSampleApp[1025:510773] Audio Unit render error: kAudioUnitErr_CannotDoInCurrentContext
2016-04-15 09:57:13.707 OpenEarsSampleApp[1025:510773] Audio Unit render error: kAudioUnitErr_CannotDoInCurrentContext
2016-04-15 09:57:13.835 OpenEarsSampleApp[1025:510773] Audio Unit render error: kAudioUnitErr_CannotDoInCurrentContext
2016-04-15 09:57:13.963 OpenEarsSampleApp[1025:510773] Audio Unit render error: kAudioUnitErr_CannotDoInCurrentContext
2016-04-15 09:57:14.091 OpenEarsSampleApp[1025:510773] Audio Unit render error: kAudioUnitErr_CannotDoInCurrentContext
2016-04-15 09:57:14.219 OpenEarsSampleApp[1025:510773] Audio Unit render error: kAudioUnitErr_CannotDoInCurrentContext
2016-04-15 09:57:14.347 OpenEarsSampleApp[1025:510773] Audio Unit render error: kAudioUnitErr_CannotDoInCurrentContext
2016-04-15 09:57:14.475 OpenEarsSampleApp[1025:510773] Audio Unit render error: kAudioUnitErr_CannotDoInCurrentContext
2016-04-15 09:57:14.603 OpenEarsSampleApp[1025:510773] Audio Unit render error: kAudioUnitErr_CannotDoInCurrentContext
2016-04-15 09:57:14.606 OpenEarsSampleApp[1025:510748] done starting audio unit
2016-04-15 09:57:14.608 OpenEarsSampleApp[1025:510755] This is not a case in which OpenEars notifies of a route change. At the close of this function, the new audio route is —BluetoothHFPBluetoothHFP—. The previous route before changing to this route was <AVAudioSessionRouteDescription: 0x12fe6d760,
inputs = (
“<AVAudioSessionPortDescription: 0x12fe55cb0, type = MicrophoneBuiltIn; name = iPhone Mikrofon; UID = Built-In Microphone; selectedDataSource = Vorne>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x12fe4f630, type = BluetoothA2DPOutput; name = JABRA EASYGO; UID = 50:C9:71:5B:F3:10-tacl; selectedDataSource = (null)>”
)>.
INFO: pocketsphinx.c(145): Parsed model-specific feature parameters from /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/feat.params
Current configuration:
[NAME] [DEFLT] [VALUE]
-agc none none
-agcthresh 2.0 2.000000e+00
-allphone
-allphone_ci no no
-alpha 0.97 9.700000e-01
-ascale 20.0 2.000000e+01
-aw 1 1
-backtrace no no
-beam 1e-48 1.000000e-48
-bestpath yes yes
-bestpathlw 9.5 9.500000e+00
-ceplen 13 13
-cmn current current
-cmninit 8.0 40
-compallsen no no
-debug 0
-dict /var/mobile/Containers/Data/Application/B7A799C5-B8DB-4279-8B4D-FB0E79FF0EC5/Library/Caches/FirstOpenEarsDynamicLanguageModel.dic
-dictcase no no
-dither no no
-doublebw no no
-ds 1 1
-fdict /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/noisedict
-feat 1s_c_d_dd 1s_c_d_dd
-featparams /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/feat.params
-fillprob 1e-8 1.000000e-08
-frate 100 100
-fsg
-fsgusealtpron yes yes
-fsgusefiller yes yes
-fwdflat yes yes
-fwdflatbeam 1e-64 1.000000e-64
-fwdflatefwid 4 4
-fwdflatlw 8.5 8.500000e+00
-fwdflatsfwin 25 25
-fwdflatwbeam 7e-29 7.000000e-29
-fwdtree yes yes
-hmm /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle
-input_endian little little
-jsgf
-keyphrase
-kws
-kws_delay 10 10
-kws_plp 1e-1 1.000000e-01
-kws_threshold 1 1.000000e+00
-latsize 5000 5000
-lda
-ldadim 0 0
-lifter 0 22
-lm /var/mobile/Containers/Data/Application/B7A799C5-B8DB-4279-8B4D-FB0E79FF0EC5/Library/Caches/FirstOpenEarsDynamicLanguageModel.DMP
-lmctl
-lmname
-logbase 1.0001 1.000100e+00
-logfn
-logspec no no
-lowerf 133.33334 1.300000e+02
-lpbeam 1e-40 1.000000e-40
-lponlybeam 7e-29 7.000000e-29
-lw 6.5 6.500000e+00
-maxhmmpf 30000 30000
-maxwpf -1 -1
-mdef /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/mdef
-mean /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/means
-mfclogdir
-min_endfr 0 0
-mixw
-mixwfloor 0.0000001 1.000000e-07
-mllr
-mmap yes yes
-ncep 13 13
-nfft 512 512
-nfilt 40 25
-nwpen 1.0 1.000000e+00
-pbeam 1e-48 1.000000e-48
-pip 1.0 1.000000e+00
-pl_beam 1e-10 1.000000e-10
-pl_pbeam 1e-10 1.000000e-10
-pl_pip 1.0 1.000000e+00
-pl_weight 3.0 3.000000e+00
-pl_window 5 5
-rawlogdir
-remove_dc no no
-remove_noise yes yes
-remove_silence yes yes
-round_filters yes yes
-samprate 16000 1.600000e+04
-seed -1 -1
-sendump /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/sendump
-senlogdir
-senmgau
-silprob 0.005 5.000000e-03
-smoothspec no no
-svspec 0-12/13-25/26-38
-tmat /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/transition_matrices
-tmatfloor 0.0001 1.000000e-04
-topn 4 4
-topn_beam 0 0
-toprule
-transform legacy dct
-unit_area yes yes
-upperf 6855.4976 6.800000e+03
-uw 1.0 1.000000e+00
-vad_postspeech 50 69
-vad_prespeech 20 10
-vad_startspeech 10 10
-vad_threshold 2.0 2.000000e+00
-var /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/variances
-varfloor 0.0001 1.000000e-04
-varnorm no no
-verbose no no
-warp_params
-warp_type inverse_linear inverse_linear
-wbeam 7e-29 7.000000e-29
-wip 0.65 6.500000e-01
-wlen 0.025625 2.562500e-02
INFO: feat.c(715): Initializing feature stream to type: ‘1s_c_d_dd’, ceplen=13, CMN=’current’, VARNORM=’no’, AGC=’none’
INFO: cmn.c(143): mean[0]= 12.00, mean[1..12]= 0.0
INFO: acmod.c(164): Using subvector specification 0-12/13-25/26-38
INFO: mdef.c(518): Reading model definition: /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/mdef
INFO: mdef.c(531): Found byte-order mark BMDF, assuming this is a binary mdef file
INFO: bin_mdef.c(336): Reading binary model definition: /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/mdef
INFO: bin_mdef.c(516): 46 CI-phone, 168344 CD-phone, 3 emitstate/phone, 138 CI-sen, 6138 Sen, 32881 Sen-Seq
INFO: tmat.c(206): Reading HMM transition probability matrices: /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/transition_matrices
INFO: acmod.c(117): Attempting to use PTM computation module
INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/means
INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/variances
INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(354): 0 variance values floored
INFO: ptm_mgau.c(805): Number of codebooks doesn’t match number of ciphones, doesn’t look like PTM: 1 != 46
INFO: acmod.c(119): Attempting to use semi-continuous computation module
INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/means
INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/variances
INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(354): 0 variance values floored
INFO: s2_semi_mgau.c(904): Loading senones from dump file /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/sendump
INFO: s2_semi_mgau.c(928): BEGIN FILE FORMAT DESCRIPTION
INFO: s2_semi_mgau.c(991): Rows: 512, Columns: 6138
INFO: s2_semi_mgau.c(1023): Using memory-mapped I/O for senones
INFO: s2_semi_mgau.c(1294): Maximum top-N: 4 Top-N beams: 0 0 0
INFO: phone_loop_search.c(114): State beam -225 Phone exit beam -225 Insertion penalty 0
INFO: dict.c(320): Allocating 4113 * 32 bytes (128 KiB) for word entries
INFO: dict.c(333): Reading main dictionary: /var/mobile/Containers/Data/Application/B7A799C5-B8DB-4279-8B4D-FB0E79FF0EC5/Library/Caches/FirstOpenEarsDynamicLanguageModel.dic
INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
INFO: dict.c(336): 8 words read
INFO: dict.c(358): Reading filler dictionary: /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/noisedict
INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
INFO: dict.c(361): 9 words read
INFO: dict2pid.c(396): Building PID tables for dictionary
INFO: dict2pid.c(406): Allocating 46^3 * 2 bytes (190 KiB) for word-initial triphones
INFO: dict2pid.c(132): Allocated 51152 bytes (49 KiB) for word-final triphones
INFO: dict2pid.c(196): Allocated 51152 bytes (49 KiB) for single-phone word triphones
INFO: ngram_model_trie.c(424): Trying to read LM in bin format
INFO: ngram_model_trie.c(457): Header doesn’t match
INFO: ngram_model_trie.c(180): Trying to read LM in arpa format
INFO: ngram_model_trie.c(71): No \data\ mark in LM file
INFO: ngram_model_trie.c(537): Trying to read LM in DMP format
INFO: ngram_model_trie.c(632): ngrams 1=10, 2=16, 3=8
INFO: lm_trie.c(317): Training quantizer
INFO: lm_trie.c(323): Building LM trie
INFO: ngram_search_fwdtree.c(99): 8 unique initial diphones
INFO: ngram_search_fwdtree.c(148): 0 root, 0 non-root channels, 10 single-phone words
INFO: ngram_search_fwdtree.c(186): Creating search tree
INFO: ngram_search_fwdtree.c(192): before: 0 root, 0 non-root channels, 10 single-phone words
INFO: ngram_search_fwdtree.c(326): after: max nonroot chan increased to 145
INFO: ngram_search_fwdtree.c(339): after: 8 root, 17 non-root channels, 9 single-phone words
INFO: ngram_search_fwdflat.c(157): fwdflat: min_ef_width = 4, max_sf_win = 25
2016-04-15 09:57:14.661 OpenEarsSampleApp[1025:510748] Restoring SmartCMN value of 38.152100
2016-04-15 09:57:14.661 OpenEarsSampleApp[1025:510748] Listening.
2016-04-15 09:57:14.662 OpenEarsSampleApp[1025:510748] Project has these words or phrases in its dictionary:
BACKWARD
CHANGE
FORWARD
GO
LEFT
MODEL
RIGHT
TURN
2016-04-15 09:57:14.662 OpenEarsSampleApp[1025:510748] Recognition loop has started
2016-04-15 09:57:14.679 OpenEarsSampleApp[1025:510715] Local callback: Pocketsphinx is now listening.
2016-04-15 09:57:14.680 OpenEarsSampleApp[1025:510715] Local callback: Pocketsphinx started.
2016-04-15 09:57:19.128 OpenEarsSampleApp[1025:510748] Speech detected…
2016-04-15 09:57:19.128 OpenEarsSampleApp[1025:510715] Local callback: Pocketsphinx has detected speech.
2016-04-15 09:57:19.986 OpenEarsSampleApp[1025:510748] End of speech detected…
2016-04-15 09:57:19.987 OpenEarsSampleApp[1025:510715] Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
INFO: cmn_prior.c(131): cmn_prior_update: from < 38.15 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 >
INFO: cmn_prior.c(149): cmn_prior_update: to < 12.77 -4.23 21.79 -7.52 0.79 -19.79 -8.02 -14.35 -10.77 -6.51 1.36 2.91 -0.27 >
INFO: ngram_search_fwdtree.c(1553): 663 words recognized (8/fr)
INFO: ngram_search_fwdtree.c(1555): 6294 senones evaluated (75/fr)
INFO: ngram_search_fwdtree.c(1559): 2184 channels searched (26/fr), 640 1st, 1122 last
INFO: ngram_search_fwdtree.c(1562): 737 words for which last channels evaluated (8/fr)
INFO: ngram_search_fwdtree.c(1564): 47 candidate words for entering last phone (0/fr)
INFO: ngram_search_fwdtree.c(1567): fwdtree 0.17 CPU 0.197 xRT
INFO: ngram_search_fwdtree.c(1570): fwdtree 4.88 wall 5.806 xRT
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 2 words
INFO: ngram_search_fwdflat.c(948): 659 words recognized (8/fr)
INFO: ngram_search_fwdflat.c(950): 1689 senones evaluated (20/fr)
INFO: ngram_search_fwdflat.c(952): 723 channels searched (8/fr)
INFO: ngram_search_fwdflat.c(954): 723 words searched (8/fr)
INFO: ngram_search_fwdflat.c(957): 60 word transitions (0/fr)
INFO: ngram_search_fwdflat.c(960): fwdflat 0.01 CPU 0.009 xRT
INFO: ngram_search_fwdflat.c(963): fwdflat 0.01 wall 0.015 xRT
INFO: ngram_search.c(1280): lattice start node <s>.0 end node </s>.8
INFO: ngram_search.c(1306): Eliminated 5 nodes before end node
INFO: ngram_search.c(1411): Lattice has 380 nodes, 15 links
INFO: ps_lattice.c(1380): Bestpath score: -44039
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(</s>:8:82) = -3080252
INFO: ps_lattice.c(1441): Joint P(O,S) = -3090321 P(S|O) = -10069
INFO: ngram_search.c(899): bestpath 0.00 CPU 0.001 xRT
INFO: ngram_search.c(902): bestpath 0.00 wall 0.001 xRT
2016-04-15 09:57:20.011 OpenEarsSampleApp[1025:510748] Pocketsphinx heard “” with a score of (-10069) and an utterance ID of 0.
2016-04-15 09:57:20.012 OpenEarsSampleApp[1025:510748] Hypothesis was null so we aren’t returning it. If you want null hypotheses to also be returned, set OEPocketsphinxController’s property returnNullHypotheses to TRUE before starting OEPocketsphinxController.
2016-04-15 09:57:27.424 OpenEarsSampleApp[1025:510749] Speech detected…
2016-04-15 09:57:27.425 OpenEarsSampleApp[1025:510715] Local callback: Pocketsphinx has detected speech.
2016-04-15 09:57:28.176 OpenEarsSampleApp[1025:510748] End of speech detected…
2016-04-15 09:57:28.176 OpenEarsSampleApp[1025:510715] Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
INFO: cmn_prior.c(131): cmn_prior_update: from < 12.77 -4.23 21.79 -7.52 0.79 -19.79 -8.02 -14.35 -10.77 -6.51 1.36 2.91 -0.27 >
INFO: cmn_prior.c(149): cmn_prior_update: to < 17.27 8.05 3.50 1.03 -4.17 -18.24 -14.59 -21.55 -6.39 -6.44 5.02 -6.10 -6.00 >
INFO: ngram_search_fwdtree.c(1553): 683 words recognized (8/fr)
INFO: ngram_search_fwdtree.c(1555): 7577 senones evaluated (89/fr)
INFO: ngram_search_fwdtree.c(1559): 2999 channels searched (35/fr), 648 1st, 1881 last
INFO: ngram_search_fwdtree.c(1562): 773 words for which last channels evaluated (9/fr)
INFO: ngram_search_fwdtree.c(1564): 49 candidate words for entering last phone (0/fr)
INFO: ngram_search_fwdtree.c(1567): fwdtree 0.27 CPU 0.322 xRT
INFO: ngram_search_fwdtree.c(1570): fwdtree 8.17 wall 9.610 xRT
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 4 words
INFO: ngram_search_fwdflat.c(948): 679 words recognized (8/fr)
INFO: ngram_search_fwdflat.c(950): 4838 senones evaluated (57/fr)
INFO: ngram_search_fwdflat.c(952): 2628 channels searched (30/fr)
INFO: ngram_search_fwdflat.c(954): 807 words searched (9/fr)
INFO: ngram_search_fwdflat.c(957): 135 word transitions (1/fr)
INFO: ngram_search_fwdflat.c(960): fwdflat 0.01 CPU 0.007 xRT
INFO: ngram_search_fwdflat.c(963): fwdflat 0.01 wall 0.012 xRT
INFO: ngram_search.c(1280): lattice start node <s>.0 end node </s>.26
INFO: ngram_search.c(1306): Eliminated 5 nodes before end node
INFO: ngram_search.c(1411): Lattice has 291 nodes, 114 links
INFO: ps_lattice.c(1380): Bestpath score: -47026
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(</s>:26:83) = -3116896
INFO: ps_lattice.c(1441): Joint P(O,S) = -3132151 P(S|O) = -15255
INFO: ngram_search.c(899): bestpath 0.00 CPU 0.005 xRT
INFO: ngram_search.c(902): bestpath 0.00 wall 0.004 xRT
2016-04-15 09:57:28.201 OpenEarsSampleApp[1025:510748] Pocketsphinx heard “” with a score of (-15255) and an utterance ID of 1.
2016-04-15 09:57:28.201 OpenEarsSampleApp[1025:510748] Hypothesis was null so we aren’t returning it. If you want null hypotheses to also be returned, set OEPocketsphinxController’s property returnNullHypotheses to TRUE before starting OEPocketsphinxController.
2016-04-15 09:57:32.792 OpenEarsSampleApp[1025:510748] Speech detected…
2016-04-15 09:57:32.793 OpenEarsSampleApp[1025:510715] Local callback: Pocketsphinx has detected speech.
2016-04-15 09:57:33.425 OpenEarsSampleApp[1025:510748] End of speech detected…
2016-04-15 09:57:33.425 OpenEarsSampleApp[1025:510715] Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
INFO: cmn_prior.c(131): cmn_prior_update: from < 17.27 8.05 3.50 1.03 -4.17 -18.24 -14.59 -21.55 -6.39 -6.44 5.02 -6.10 -6.00 >
INFO: cmn_prior.c(149): cmn_prior_update: to < 15.13 1.01 14.42 3.27 -6.59 -6.49 -10.41 -11.99 -0.76 -8.26 2.78 -5.85 -5.77 >
INFO: ngram_search_fwdtree.c(1553): 666 words recognized (8/fr)
INFO: ngram_search_fwdtree.c(1555): 6499 senones evaluated (77/fr)
INFO: ngram_search_fwdtree.c(1559): 2265 channels searched (26/fr), 640 1st, 1187 last
INFO: ngram_search_fwdtree.c(1562): 739 words for which last channels evaluated (8/fr)
INFO: ngram_search_fwdtree.c(1564): 55 candidate words for entering last phone (0/fr)
INFO: ngram_search_fwdtree.c(1567): fwdtree 0.18 CPU 0.215 xRT
INFO: ngram_search_fwdtree.c(1570): fwdtree 5.23 wall 6.224 xRT
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 2 words
INFO: ngram_search_fwdflat.c(948): 660 words recognized (8/fr)
INFO: ngram_search_fwdflat.c(950): 1689 senones evaluated (20/fr)
INFO: ngram_search_fwdflat.c(952): 723 channels searched (8/fr)
INFO: ngram_search_fwdflat.c(954): 723 words searched (8/fr)
INFO: ngram_search_fwdflat.c(957): 61 word transitions (0/fr)
INFO: ngram_search_fwdflat.c(960): fwdflat 0.01 CPU 0.009 xRT
INFO: ngram_search_fwdflat.c(963): fwdflat 0.01 wall 0.012 xRT
INFO: ngram_search.c(1280): lattice start node <s>.0 end node </s>.9
INFO: ngram_search.c(1306): Eliminated 5 nodes before end node
INFO: ngram_search.c(1411): Lattice has 388 nodes, 64 links
INFO: ps_lattice.c(1380): Bestpath score: -44225
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(</s>:9:82) = -3085812
INFO: ps_lattice.c(1441): Joint P(O,S) = -3101073 P(S|O) = -15261
INFO: ngram_search.c(899): bestpath 0.00 CPU 0.002 xRT
INFO: ngram_search.c(902): bestpath 0.00 wall 0.003 xRT
2016-04-15 09:57:33.452 OpenEarsSampleApp[1025:510748] Pocketsphinx heard “” with a score of (-15261) and an utterance ID of 2.
2016-04-15 09:57:33.452 OpenEarsSampleApp[1025:510748] Hypothesis was null so we aren’t returning it. If you want null hypotheses to also be returned, set OEPocketsphinxController’s property returnNullHypotheses to TRUE before starting OEPocketsphinxController.
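For reference, that last log message already names the relevant knob. A minimal sketch of opting in to null hypotheses (Swift, assuming OEPocketsphinxController is visible to Swift via a bridging header, and assuming the OpenEars 2.x singleton API):

// A minimal sketch, assuming the OpenEars 2.x OEPocketsphinxController
// singleton; set this before listening starts, as the log message says.
OEPocketsphinxController.sharedInstance().returnNullHypotheses = true
// ...then start listening as usual; the empty ("") hypotheses logged
// above would then be passed through to the hypothesis callback as well.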
BR
ulysses

April 15, 2016 at 10:18 am #1030072Halle WinklerPolitepix
Have you seen the headset record successfully with any other 3rd-party apps that could be expected to use a low-level recording API (for instance a VOIP app)? IME not every headset is compatible with Apple’s Bluetooth audio APIs. That is (unfortunately) the reason that Bluetooth support is still marked as experimental in OpenEars.
April 15, 2016 at 12:31 pm #1030076ulyssesParticipant
My Jabra Bluetooth headset works fine with YouTube, but it is a few years old and might not support all iOS 9 Bluetooth features.
Tomorrow I will test this with http://www.amazon.de/Bluetooth-Kopfh%C3%B6rer-Headset-Ohrh%C3%B6rer-Mikrofon-Schwarz/dp/B014QZ5SCO and will get back to you afterwards.
April 15, 2016 at 12:39 pm #1030079Halle WinklerPolitepix
Yup, YouTube may not be a reliable test since it probably uses a fully-wrapped video API to do playback only (as far as I know), and we’re more concerned with the ability to do low-latency recording.
Much better than testing another headset (which could easily have the same issue with 3rd-party apps) would be to check out your current headset with some 3rd-party apps that do low-latency recording (VOIP or other form of real-time audio chat is a safe bet) and see if it works.
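If you want a check from code rather than from another app, one rough sanity test is to ask AVAudioSession whether it reports a Bluetooth input at all. A sketch in Swift 2 syntax (this uses stock AVFoundation constants, nothing OpenEars-specific, so treat it as an illustration rather than a fix):

import AVFoundation

// Sketch: request a record-capable session with Bluetooth allowed,
// then list any Bluetooth HFP inputs iOS reports for the route.
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(AVAudioSessionCategoryPlayAndRecord,
        withOptions: .AllowBluetooth)
    try session.setActive(true)
    let bluetoothInputs = (session.availableInputs ?? []).filter {
        $0.portType == AVAudioSessionPortBluetoothHFP
    }
    let names = bluetoothInputs.map { $0.portName }
    print("Bluetooth HFP inputs: \(names)")
} catch {
    print("Audio session error: \(error)")
}

If nothing shows up there with the headset connected, that would suggest the problem sits below OpenEars in the audio stack.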
April 16, 2016 at 11:42 am #1030085ulyssesParticipant
Hi Halle,
My old Jabra Bluetooth headset works with WhatsApp. Also a simple Swift 2 program including
import AVFoundation

// Speak a test phrase through the current audio route (playback only).
let string = "Hello World!"
let utterance = AVSpeechUtterance(string: string)
utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
let synthesizer = AVSpeechSynthesizer()
synthesizer.speakUtterance(utterance)
is functional.
But my new Bluetooth headset (see http://www.amazon.de/Bluetooth-Kopfh%C3%B6rer-Headset-Ohrh%C3%B6rer-Mikrofon-Schwarz/dp/B014QZ5SCO) works fine with the OpenEars Sample App!
Thank you for your fast and competent response!
Best Regards
Dirk
April 16, 2016 at 12:16 pm #1030086Halle WinklerPolitepix
Hi,
You’re welcome!
my old Jabra Bluetooth headset works with WhatsApp. Also a simple Swift 2 program including
let string = "Hello World!" let utterance = AVSpeechUtterance(string: string) utterance.voice = AVSpeechSynthesisVoice(language: "en-US") let synthesizer = AVSpeechSynthesizer() synthesizer.speakUtterance(utterance)
is functional.
OK, but I think these are playback examples (maybe the WhatsApp example is recording?) while the logging shows an issue with recording only, so that’s really all we want to look into.
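If it helps to separate the two, a bare-bones recording test would exercise the input path on its own. A sketch in Swift 2 syntax using AVAudioRecorder (the file name and settings here are just illustrative):

import AVFoundation

// Sketch: record a few seconds through whatever input is active
// (the Bluetooth mic, if iOS routed to it), then play the file back.
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(AVAudioSessionCategoryPlayAndRecord,
        withOptions: .AllowBluetooth)
    try session.setActive(true)
    let path = NSTemporaryDirectory() + "bt-record-test.caf"
    let settings: [String: AnyObject] = [
        AVFormatIDKey: Int(kAudioFormatAppleIMA4),
        AVSampleRateKey: 16000.0,
        AVNumberOfChannelsKey: 1
    ]
    // In a real app, keep a strong reference to the recorder for the
    // duration of the recording.
    let recorder = try AVAudioRecorder(URL: NSURL(fileURLWithPath: path),
        settings: settings)
    recorder.recordForDuration(5.0)
} catch {
    print("Recording test failed: \(error)")
}

A silent file (or a thrown error) with the headset connected would point at the same recording-path issue the logging shows.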
Generally, if you have a headset that you’d like to be able to support in your app and you can see it doing low-latency recording with another 3rd-party app, the option is to send me an example of the headset so I can investigate a bit more what is going on. Unfortunately, given the range and expense of BT devices and the spotty compatibility with 3rd-party apps, it isn’t something where I can attempt to maintain a testbed or commit to support (or even very heavy troubleshooting) of any one device. But if you want to send it over, I’d be willing to look into it and see what’s going on; let me know.