Forum Replies Created
ulysses (Participant)
Hi Halle,
My old Jabra Bluetooth headset works with WhatsApp. A simple Swift 2 program including

import AVFoundation

let string = "Hello World!"
let utterance = AVSpeechUtterance(string: string)
utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
let synthesizer = AVSpeechSynthesizer()
synthesizer.speakUtterance(utterance)

is also functional.
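Since the thread is about Bluetooth routing, one thing worth checking is whether the audio session allows Bluetooth at all. This is a minimal sketch in Swift 2 syntax, not part of the original snippet; the category and option names are standard AVAudioSession API:

```swift
import AVFoundation

// Sketch only: before speaking, configure the shared audio session so that
// output can be routed to a Bluetooth HFP headset. Without .AllowBluetooth,
// a PlayAndRecord session will typically stay on the built-in mic/speaker.
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(AVAudioSessionCategoryPlayAndRecord,
                            withOptions: [.AllowBluetooth])
    try session.setActive(true)
} catch {
    print("Audio session error: \(error)")
}
```

Note that OpenEars manages its own audio session, so a configuration like this would only apply to a standalone test program such as the one above.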
But my new Bluetooth headset (see http://www.amazon.de/Bluetooth-Kopfh%C3%B6rer-Headset-Ohrh%C3%B6rer-Mikrofon-Schwarz/dp/B014QZ5SCO) works fine with the OpenEars Sample App!
Thank you for your fast and competent response!
Best regards,
Dirk

ulysses (Participant)
My Jabra Bluetooth headset works fine with YouTube, but it is a few years old and might not support all iOS 9 Bluetooth features.
Tomorrow I will test this with http://www.amazon.de/Bluetooth-Kopfh%C3%B6rer-Headset-Ohrh%C3%B6rer-Mikrofon-Schwarz/dp/B014QZ5SCO and will come back to you afterwards.
ulysses (Participant)
Hello Halle,
Here is the logging Xcode generated with the same configuration as above.
At 2016-04-15 09:57:14.608 there is indeed a hint regarding the Bluetooth connection:
2016-04-15 09:57:12.297 OpenEarsSampleApp[1025:510715] Starting OpenEars logging for OpenEars version 2.501 on 64-bit device (or build): iPhone running iOS version: 9.300000
2016-04-15 09:57:12.298 OpenEarsSampleApp[1025:510715] Creating shared instance of OEPocketsphinxController
2016-04-15 09:57:12.327 OpenEarsSampleApp[1025:510715] Starting dynamic language model generation
INFO: ngram_model_arpa_legacy.c(504): ngrams 1=10, 2=16, 3=8
INFO: ngram_model_arpa_legacy.c(136): Reading unigrams
INFO: ngram_model_arpa_legacy.c(543): 10 = #unigrams created
INFO: ngram_model_arpa_legacy.c(196): Reading bigrams
INFO: ngram_model_arpa_legacy.c(561): 16 = #bigrams created
INFO: ngram_model_arpa_legacy.c(562): 3 = #prob2 entries
INFO: ngram_model_arpa_legacy.c(570): 3 = #bo_wt2 entries
INFO: ngram_model_arpa_legacy.c(293): Reading trigrams
INFO: ngram_model_arpa_legacy.c(583): 8 = #trigrams created
INFO: ngram_model_arpa_legacy.c(584): 2 = #prob3 entries
INFO: ngram_model_dmp_legacy.c(521): Building DMP model…
INFO: ngram_model_dmp_legacy.c(551): 10 = #unigrams created
INFO: ngram_model_dmp_legacy.c(652): 16 = #bigrams created
INFO: ngram_model_dmp_legacy.c(653): 3 = #prob2 entries
INFO: ngram_model_dmp_legacy.c(660): 3 = #bo_wt2 entries
INFO: ngram_model_dmp_legacy.c(664): 8 = #trigrams created
INFO: ngram_model_dmp_legacy.c(665): 2 = #prob3 entries
2016-04-15 09:57:12.353 OpenEarsSampleApp[1025:510715] Done creating language model with CMUCLMTK in 0.026485 seconds.
2016-04-15 09:57:12.354 OpenEarsSampleApp[1025:510715] Since there is no cached version, loading the language model lookup list for the acoustic model called AcousticModelEnglish
2016-04-15 09:57:12.386 OpenEarsSampleApp[1025:510715] I’m done running performDictionaryLookup and it took 0.025546 seconds
2016-04-15 09:57:12.414 OpenEarsSampleApp[1025:510715] I’m done running dynamic language model generation and it took 0.111125 seconds
2016-04-15 09:57:12.418 OpenEarsSampleApp[1025:510715] Starting dynamic language model generation
INFO: ngram_model_arpa_legacy.c(504): ngrams 1=12, 2=19, 3=10
INFO: ngram_model_arpa_legacy.c(136): Reading unigrams
INFO: ngram_model_arpa_legacy.c(543): 12 = #unigrams created
INFO: ngram_model_arpa_legacy.c(196): Reading bigrams
INFO: ngram_model_arpa_legacy.c(561): 19 = #bigrams created
INFO: ngram_model_arpa_legacy.c(562): 3 = #prob2 entries
INFO: ngram_model_arpa_legacy.c(570): 3 = #bo_wt2 entries
INFO: ngram_model_arpa_legacy.c(293): Reading trigrams
INFO: ngram_model_arpa_legacy.c(583): 10 = #trigrams created
INFO: ngram_model_arpa_legacy.c(584): 2 = #prob3 entries
INFO: ngram_model_dmp_legacy.c(521): Building DMP model…
INFO: ngram_model_dmp_legacy.c(551): 12 = #unigrams created
INFO: ngram_model_dmp_legacy.c(652): 19 = #bigrams created
INFO: ngram_model_dmp_legacy.c(653): 3 = #prob2 entries
INFO: ngram_model_dmp_legacy.c(660): 3 = #bo_wt2 entries
INFO: ngram_model_dmp_legacy.c(664): 10 = #trigrams created
INFO: ngram_model_dmp_legacy.c(665): 2 = #prob3 entries
2016-04-15 09:57:12.444 OpenEarsSampleApp[1025:510715] Done creating language model with CMUCLMTK in 0.025300 seconds.
2016-04-15 09:57:12.444 OpenEarsSampleApp[1025:510715] Returning a cached version of LanguageModelGeneratorLookupList.text
2016-04-15 09:57:12.470 OpenEarsSampleApp[1025:510715] The word QUIDNUNC was not found in the dictionary of the acoustic model /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle. Now using the fallback method to look it up. If this is happening more frequently than you would expect, likely causes can be that you are entering words in another language from the one you are recognizing, or that there are symbols (including numbers) that need to be spelled out or cleaned up, or you are using your own acoustic model and there is an issue with either its phonetic dictionary or it lacks a g2p file. Please get in touch at the forums for assistance with the last two possible issues.
2016-04-15 09:57:12.471 OpenEarsSampleApp[1025:510715] Using convertGraphemes for the word or phrase quidnunc which doesn’t appear in the dictionary
2016-04-15 09:57:12.479 OpenEarsSampleApp[1025:510715] the graphemes “K W IH D N AH NG K” were created for the word QUIDNUNC using the fallback method.
2016-04-15 09:57:12.488 OpenEarsSampleApp[1025:510715] I’m done running performDictionaryLookup and it took 0.043651 seconds
2016-04-15 09:57:12.492 OpenEarsSampleApp[1025:510715] I’m done running dynamic language model generation and it took 0.077773 seconds
2016-04-15 09:57:12.492 OpenEarsSampleApp[1025:510715] Welcome to the OpenEars sample project. This project understands the words:
BACKWARD,
CHANGE,
FORWARD,
GO,
LEFT,
MODEL,
RIGHT,
TURN,
and if you say “CHANGE MODEL” it will switch to its dynamically-generated model which understands the words:
CHANGE,
MODEL,
MONDAY,
TUESDAY,
WEDNESDAY,
THURSDAY,
FRIDAY,
SATURDAY,
SUNDAY,
QUIDNUNC
2016-04-15 09:57:12.492 OpenEarsSampleApp[1025:510715] Attempting to start listening session from startListeningWithLanguageModelAtPath:
2016-04-15 09:57:12.494 OpenEarsSampleApp[1025:510715] User gave mic permission for this app.
2016-04-15 09:57:12.495 OpenEarsSampleApp[1025:510715] setSecondsOfSilence wasn’t set, using default of 0.700000.
2016-04-15 09:57:12.495 OpenEarsSampleApp[1025:510715] Successfully started listening session from startListeningWithLanguageModelAtPath:
2016-04-15 09:57:12.495 OpenEarsSampleApp[1025:510748] Starting listening.
2016-04-15 09:57:12.495 OpenEarsSampleApp[1025:510748] about to set up audio session
2016-04-15 09:57:12.496 OpenEarsSampleApp[1025:510748] Creating audio session with default settings.
2016-04-15 09:57:12.552 OpenEarsSampleApp[1025:510755] Audio route has changed for the following reason:
2016-04-15 09:57:12.557 OpenEarsSampleApp[1025:510755] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2016-04-15 09:57:13.579 OpenEarsSampleApp[1025:510773] Audio Unit render error: kAudioUnitErr_CannotDoInCurrentContext
2016-04-15 09:57:13.707 OpenEarsSampleApp[1025:510773] Audio Unit render error: kAudioUnitErr_CannotDoInCurrentContext
2016-04-15 09:57:13.835 OpenEarsSampleApp[1025:510773] Audio Unit render error: kAudioUnitErr_CannotDoInCurrentContext
2016-04-15 09:57:13.963 OpenEarsSampleApp[1025:510773] Audio Unit render error: kAudioUnitErr_CannotDoInCurrentContext
2016-04-15 09:57:14.091 OpenEarsSampleApp[1025:510773] Audio Unit render error: kAudioUnitErr_CannotDoInCurrentContext
2016-04-15 09:57:14.219 OpenEarsSampleApp[1025:510773] Audio Unit render error: kAudioUnitErr_CannotDoInCurrentContext
2016-04-15 09:57:14.347 OpenEarsSampleApp[1025:510773] Audio Unit render error: kAudioUnitErr_CannotDoInCurrentContext
2016-04-15 09:57:14.475 OpenEarsSampleApp[1025:510773] Audio Unit render error: kAudioUnitErr_CannotDoInCurrentContext
2016-04-15 09:57:14.603 OpenEarsSampleApp[1025:510773] Audio Unit render error: kAudioUnitErr_CannotDoInCurrentContext
2016-04-15 09:57:14.606 OpenEarsSampleApp[1025:510748] done starting audio unit
2016-04-15 09:57:14.608 OpenEarsSampleApp[1025:510755] This is not a case in which OpenEars notifies of a route change. At the close of this function, the new audio route is —BluetoothHFPBluetoothHFP—. The previous route before changing to this route was <AVAudioSessionRouteDescription: 0x12fe6d760,
inputs = (
“<AVAudioSessionPortDescription: 0x12fe55cb0, type = MicrophoneBuiltIn; name = iPhone Mikrofon; UID = Built-In Microphone; selectedDataSource = Vorne>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x12fe4f630, type = BluetoothA2DPOutput; name = JABRA EASYGO; UID = 50:C9:71:5B:F3:10-tacl; selectedDataSource = (null)>”
)>.
INFO: pocketsphinx.c(145): Parsed model-specific feature parameters from /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/feat.params
Current configuration:
[NAME] [DEFLT] [VALUE]
-agc none none
-agcthresh 2.0 2.000000e+00
-allphone
-allphone_ci no no
-alpha 0.97 9.700000e-01
-ascale 20.0 2.000000e+01
-aw 1 1
-backtrace no no
-beam 1e-48 1.000000e-48
-bestpath yes yes
-bestpathlw 9.5 9.500000e+00
-ceplen 13 13
-cmn current current
-cmninit 8.0 40
-compallsen no no
-debug 0
-dict /var/mobile/Containers/Data/Application/B7A799C5-B8DB-4279-8B4D-FB0E79FF0EC5/Library/Caches/FirstOpenEarsDynamicLanguageModel.dic
-dictcase no no
-dither no no
-doublebw no no
-ds 1 1
-fdict /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/noisedict
-feat 1s_c_d_dd 1s_c_d_dd
-featparams /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/feat.params
-fillprob 1e-8 1.000000e-08
-frate 100 100
-fsg
-fsgusealtpron yes yes
-fsgusefiller yes yes
-fwdflat yes yes
-fwdflatbeam 1e-64 1.000000e-64
-fwdflatefwid 4 4
-fwdflatlw 8.5 8.500000e+00
-fwdflatsfwin 25 25
-fwdflatwbeam 7e-29 7.000000e-29
-fwdtree yes yes
-hmm /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle
-input_endian little little
-jsgf
-keyphrase
-kws
-kws_delay 10 10
-kws_plp 1e-1 1.000000e-01
-kws_threshold 1 1.000000e+00
-latsize 5000 5000
-lda
-ldadim 0 0
-lifter 0 22
-lm /var/mobile/Containers/Data/Application/B7A799C5-B8DB-4279-8B4D-FB0E79FF0EC5/Library/Caches/FirstOpenEarsDynamicLanguageModel.DMP
-lmctl
-lmname
-logbase 1.0001 1.000100e+00
-logfn
-logspec no no
-lowerf 133.33334 1.300000e+02
-lpbeam 1e-40 1.000000e-40
-lponlybeam 7e-29 7.000000e-29
-lw 6.5 6.500000e+00
-maxhmmpf 30000 30000
-maxwpf -1 -1
-mdef /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/mdef
-mean /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/means
-mfclogdir
-min_endfr 0 0
-mixw
-mixwfloor 0.0000001 1.000000e-07
-mllr
-mmap yes yes
-ncep 13 13
-nfft 512 512
-nfilt 40 25
-nwpen 1.0 1.000000e+00
-pbeam 1e-48 1.000000e-48
-pip 1.0 1.000000e+00
-pl_beam 1e-10 1.000000e-10
-pl_pbeam 1e-10 1.000000e-10
-pl_pip 1.0 1.000000e+00
-pl_weight 3.0 3.000000e+00
-pl_window 5 5
-rawlogdir
-remove_dc no no
-remove_noise yes yes
-remove_silence yes yes
-round_filters yes yes
-samprate 16000 1.600000e+04
-seed -1 -1
-sendump /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/sendump
-senlogdir
-senmgau
-silprob 0.005 5.000000e-03
-smoothspec no no
-svspec 0-12/13-25/26-38
-tmat /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/transition_matrices
-tmatfloor 0.0001 1.000000e-04
-topn 4 4
-topn_beam 0 0
-toprule
-transform legacy dct
-unit_area yes yes
-upperf 6855.4976 6.800000e+03
-uw 1.0 1.000000e+00
-vad_postspeech 50 69
-vad_prespeech 20 10
-vad_startspeech 10 10
-vad_threshold 2.0 2.000000e+00
-var /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/variances
-varfloor 0.0001 1.000000e-04
-varnorm no no
-verbose no no
-warp_params
-warp_type inverse_linear inverse_linear
-wbeam 7e-29 7.000000e-29
-wip 0.65 6.500000e-01
-wlen 0.025625 2.562500e-02
INFO: feat.c(715): Initializing feature stream to type: ‘1s_c_d_dd’, ceplen=13, CMN=’current’, VARNORM=’no’, AGC=’none’
INFO: cmn.c(143): mean[0]= 12.00, mean[1..12]= 0.0
INFO: acmod.c(164): Using subvector specification 0-12/13-25/26-38
INFO: mdef.c(518): Reading model definition: /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/mdef
INFO: mdef.c(531): Found byte-order mark BMDF, assuming this is a binary mdef file
INFO: bin_mdef.c(336): Reading binary model definition: /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/mdef
INFO: bin_mdef.c(516): 46 CI-phone, 168344 CD-phone, 3 emitstate/phone, 138 CI-sen, 6138 Sen, 32881 Sen-Seq
INFO: tmat.c(206): Reading HMM transition probability matrices: /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/transition_matrices
INFO: acmod.c(117): Attempting to use PTM computation module
INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/means
INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/variances
INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(354): 0 variance values floored
INFO: ptm_mgau.c(805): Number of codebooks doesn’t match number of ciphones, doesn’t look like PTM: 1 != 46
INFO: acmod.c(119): Attempting to use semi-continuous computation module
INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/means
INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/variances
INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(354): 0 variance values floored
INFO: s2_semi_mgau.c(904): Loading senones from dump file /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/sendump
INFO: s2_semi_mgau.c(928): BEGIN FILE FORMAT DESCRIPTION
INFO: s2_semi_mgau.c(991): Rows: 512, Columns: 6138
INFO: s2_semi_mgau.c(1023): Using memory-mapped I/O for senones
INFO: s2_semi_mgau.c(1294): Maximum top-N: 4 Top-N beams: 0 0 0
INFO: phone_loop_search.c(114): State beam -225 Phone exit beam -225 Insertion penalty 0
INFO: dict.c(320): Allocating 4113 * 32 bytes (128 KiB) for word entries
INFO: dict.c(333): Reading main dictionary: /var/mobile/Containers/Data/Application/B7A799C5-B8DB-4279-8B4D-FB0E79FF0EC5/Library/Caches/FirstOpenEarsDynamicLanguageModel.dic
INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
INFO: dict.c(336): 8 words read
INFO: dict.c(358): Reading filler dictionary: /var/containers/Bundle/Application/143E8FA7-F0A9-4BBE-B210-2C73A0C4E38E/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/noisedict
INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
INFO: dict.c(361): 9 words read
INFO: dict2pid.c(396): Building PID tables for dictionary
INFO: dict2pid.c(406): Allocating 46^3 * 2 bytes (190 KiB) for word-initial triphones
INFO: dict2pid.c(132): Allocated 51152 bytes (49 KiB) for word-final triphones
INFO: dict2pid.c(196): Allocated 51152 bytes (49 KiB) for single-phone word triphones
INFO: ngram_model_trie.c(424): Trying to read LM in bin format
INFO: ngram_model_trie.c(457): Header doesn’t match
INFO: ngram_model_trie.c(180): Trying to read LM in arpa format
INFO: ngram_model_trie.c(71): No \data\ mark in LM file
INFO: ngram_model_trie.c(537): Trying to read LM in DMP format
INFO: ngram_model_trie.c(632): ngrams 1=10, 2=16, 3=8
INFO: lm_trie.c(317): Training quantizer
INFO: lm_trie.c(323): Building LM trie
INFO: ngram_search_fwdtree.c(99): 8 unique initial diphones
INFO: ngram_search_fwdtree.c(148): 0 root, 0 non-root channels, 10 single-phone words
INFO: ngram_search_fwdtree.c(186): Creating search tree
INFO: ngram_search_fwdtree.c(192): before: 0 root, 0 non-root channels, 10 single-phone words
INFO: ngram_search_fwdtree.c(326): after: max nonroot chan increased to 145
INFO: ngram_search_fwdtree.c(339): after: 8 root, 17 non-root channels, 9 single-phone words
INFO: ngram_search_fwdflat.c(157): fwdflat: min_ef_width = 4, max_sf_win = 25
2016-04-15 09:57:14.661 OpenEarsSampleApp[1025:510748] Restoring SmartCMN value of 38.152100
2016-04-15 09:57:14.661 OpenEarsSampleApp[1025:510748] Listening.
2016-04-15 09:57:14.662 OpenEarsSampleApp[1025:510748] Project has these words or phrases in its dictionary:
BACKWARD
CHANGE
FORWARD
GO
LEFT
MODEL
RIGHT
TURN
2016-04-15 09:57:14.662 OpenEarsSampleApp[1025:510748] Recognition loop has started
2016-04-15 09:57:14.679 OpenEarsSampleApp[1025:510715] Local callback: Pocketsphinx is now listening.
2016-04-15 09:57:14.680 OpenEarsSampleApp[1025:510715] Local callback: Pocketsphinx started.
2016-04-15 09:57:19.128 OpenEarsSampleApp[1025:510748] Speech detected…
2016-04-15 09:57:19.128 OpenEarsSampleApp[1025:510715] Local callback: Pocketsphinx has detected speech.
2016-04-15 09:57:19.986 OpenEarsSampleApp[1025:510748] End of speech detected…
2016-04-15 09:57:19.987 OpenEarsSampleApp[1025:510715] Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
INFO: cmn_prior.c(131): cmn_prior_update: from < 38.15 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 >
INFO: cmn_prior.c(149): cmn_prior_update: to < 12.77 -4.23 21.79 -7.52 0.79 -19.79 -8.02 -14.35 -10.77 -6.51 1.36 2.91 -0.27 >
INFO: ngram_search_fwdtree.c(1553): 663 words recognized (8/fr)
INFO: ngram_search_fwdtree.c(1555): 6294 senones evaluated (75/fr)
INFO: ngram_search_fwdtree.c(1559): 2184 channels searched (26/fr), 640 1st, 1122 last
INFO: ngram_search_fwdtree.c(1562): 737 words for which last channels evaluated (8/fr)
INFO: ngram_search_fwdtree.c(1564): 47 candidate words for entering last phone (0/fr)
INFO: ngram_search_fwdtree.c(1567): fwdtree 0.17 CPU 0.197 xRT
INFO: ngram_search_fwdtree.c(1570): fwdtree 4.88 wall 5.806 xRT
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 2 words
INFO: ngram_search_fwdflat.c(948): 659 words recognized (8/fr)
INFO: ngram_search_fwdflat.c(950): 1689 senones evaluated (20/fr)
INFO: ngram_search_fwdflat.c(952): 723 channels searched (8/fr)
INFO: ngram_search_fwdflat.c(954): 723 words searched (8/fr)
INFO: ngram_search_fwdflat.c(957): 60 word transitions (0/fr)
INFO: ngram_search_fwdflat.c(960): fwdflat 0.01 CPU 0.009 xRT
INFO: ngram_search_fwdflat.c(963): fwdflat 0.01 wall 0.015 xRT
INFO: ngram_search.c(1280): lattice start node <s>.0 end node </s>.8
INFO: ngram_search.c(1306): Eliminated 5 nodes before end node
INFO: ngram_search.c(1411): Lattice has 380 nodes, 15 links
INFO: ps_lattice.c(1380): Bestpath score: -44039
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(</s>:8:82) = -3080252
INFO: ps_lattice.c(1441): Joint P(O,S) = -3090321 P(S|O) = -10069
INFO: ngram_search.c(899): bestpath 0.00 CPU 0.001 xRT
INFO: ngram_search.c(902): bestpath 0.00 wall 0.001 xRT
2016-04-15 09:57:20.011 OpenEarsSampleApp[1025:510748] Pocketsphinx heard “” with a score of (-10069) and an utterance ID of 0.
2016-04-15 09:57:20.012 OpenEarsSampleApp[1025:510748] Hypothesis was null so we aren’t returning it. If you want null hypotheses to also be returned, set OEPocketsphinxController’s property returnNullHypotheses to TRUE before starting OEPocketsphinxController.
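(The null-hypothesis behavior described in the log line above can be enabled with a one-line sketch; the property name is taken from the log message itself, and the singleton accessor is assumed from the OpenEars sample code:)

```swift
// Sketch only: ask OpenEars to also return empty ("") hypotheses,
// as suggested by the log message. Must be set before starting listening.
OEPocketsphinxController.sharedInstance().returnNullHypotheses = true
```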
2016-04-15 09:57:27.424 OpenEarsSampleApp[1025:510749] Speech detected…
2016-04-15 09:57:27.425 OpenEarsSampleApp[1025:510715] Local callback: Pocketsphinx has detected speech.
2016-04-15 09:57:28.176 OpenEarsSampleApp[1025:510748] End of speech detected…
2016-04-15 09:57:28.176 OpenEarsSampleApp[1025:510715] Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
INFO: cmn_prior.c(131): cmn_prior_update: from < 12.77 -4.23 21.79 -7.52 0.79 -19.79 -8.02 -14.35 -10.77 -6.51 1.36 2.91 -0.27 >
INFO: cmn_prior.c(149): cmn_prior_update: to < 17.27 8.05 3.50 1.03 -4.17 -18.24 -14.59 -21.55 -6.39 -6.44 5.02 -6.10 -6.00 >
INFO: ngram_search_fwdtree.c(1553): 683 words recognized (8/fr)
INFO: ngram_search_fwdtree.c(1555): 7577 senones evaluated (89/fr)
INFO: ngram_search_fwdtree.c(1559): 2999 channels searched (35/fr), 648 1st, 1881 last
INFO: ngram_search_fwdtree.c(1562): 773 words for which last channels evaluated (9/fr)
INFO: ngram_search_fwdtree.c(1564): 49 candidate words for entering last phone (0/fr)
INFO: ngram_search_fwdtree.c(1567): fwdtree 0.27 CPU 0.322 xRT
INFO: ngram_search_fwdtree.c(1570): fwdtree 8.17 wall 9.610 xRT
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 4 words
INFO: ngram_search_fwdflat.c(948): 679 words recognized (8/fr)
INFO: ngram_search_fwdflat.c(950): 4838 senones evaluated (57/fr)
INFO: ngram_search_fwdflat.c(952): 2628 channels searched (30/fr)
INFO: ngram_search_fwdflat.c(954): 807 words searched (9/fr)
INFO: ngram_search_fwdflat.c(957): 135 word transitions (1/fr)
INFO: ngram_search_fwdflat.c(960): fwdflat 0.01 CPU 0.007 xRT
INFO: ngram_search_fwdflat.c(963): fwdflat 0.01 wall 0.012 xRT
INFO: ngram_search.c(1280): lattice start node <s>.0 end node </s>.26
INFO: ngram_search.c(1306): Eliminated 5 nodes before end node
INFO: ngram_search.c(1411): Lattice has 291 nodes, 114 links
INFO: ps_lattice.c(1380): Bestpath score: -47026
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(</s>:26:83) = -3116896
INFO: ps_lattice.c(1441): Joint P(O,S) = -3132151 P(S|O) = -15255
INFO: ngram_search.c(899): bestpath 0.00 CPU 0.005 xRT
INFO: ngram_search.c(902): bestpath 0.00 wall 0.004 xRT
2016-04-15 09:57:28.201 OpenEarsSampleApp[1025:510748] Pocketsphinx heard “” with a score of (-15255) and an utterance ID of 1.
2016-04-15 09:57:28.201 OpenEarsSampleApp[1025:510748] Hypothesis was null so we aren’t returning it. If you want null hypotheses to also be returned, set OEPocketsphinxController’s property returnNullHypotheses to TRUE before starting OEPocketsphinxController.
2016-04-15 09:57:32.792 OpenEarsSampleApp[1025:510748] Speech detected…
2016-04-15 09:57:32.793 OpenEarsSampleApp[1025:510715] Local callback: Pocketsphinx has detected speech.
2016-04-15 09:57:33.425 OpenEarsSampleApp[1025:510748] End of speech detected…
2016-04-15 09:57:33.425 OpenEarsSampleApp[1025:510715] Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
INFO: cmn_prior.c(131): cmn_prior_update: from < 17.27 8.05 3.50 1.03 -4.17 -18.24 -14.59 -21.55 -6.39 -6.44 5.02 -6.10 -6.00 >
INFO: cmn_prior.c(149): cmn_prior_update: to < 15.13 1.01 14.42 3.27 -6.59 -6.49 -10.41 -11.99 -0.76 -8.26 2.78 -5.85 -5.77 >
INFO: ngram_search_fwdtree.c(1553): 666 words recognized (8/fr)
INFO: ngram_search_fwdtree.c(1555): 6499 senones evaluated (77/fr)
INFO: ngram_search_fwdtree.c(1559): 2265 channels searched (26/fr), 640 1st, 1187 last
INFO: ngram_search_fwdtree.c(1562): 739 words for which last channels evaluated (8/fr)
INFO: ngram_search_fwdtree.c(1564): 55 candidate words for entering last phone (0/fr)
INFO: ngram_search_fwdtree.c(1567): fwdtree 0.18 CPU 0.215 xRT
INFO: ngram_search_fwdtree.c(1570): fwdtree 5.23 wall 6.224 xRT
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 2 words
INFO: ngram_search_fwdflat.c(948): 660 words recognized (8/fr)
INFO: ngram_search_fwdflat.c(950): 1689 senones evaluated (20/fr)
INFO: ngram_search_fwdflat.c(952): 723 channels searched (8/fr)
INFO: ngram_search_fwdflat.c(954): 723 words searched (8/fr)
INFO: ngram_search_fwdflat.c(957): 61 word transitions (0/fr)
INFO: ngram_search_fwdflat.c(960): fwdflat 0.01 CPU 0.009 xRT
INFO: ngram_search_fwdflat.c(963): fwdflat 0.01 wall 0.012 xRT
INFO: ngram_search.c(1280): lattice start node <s>.0 end node </s>.9
INFO: ngram_search.c(1306): Eliminated 5 nodes before end node
INFO: ngram_search.c(1411): Lattice has 388 nodes, 64 links
INFO: ps_lattice.c(1380): Bestpath score: -44225
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(</s>:9:82) = -3085812
INFO: ps_lattice.c(1441): Joint P(O,S) = -3101073 P(S|O) = -15261
INFO: ngram_search.c(899): bestpath 0.00 CPU 0.002 xRT
INFO: ngram_search.c(902): bestpath 0.00 wall 0.003 xRT
2016-04-15 09:57:33.452 OpenEarsSampleApp[1025:510748] Pocketsphinx heard “” with a score of (-15261) and an utterance ID of 2.
2016-04-15 09:57:33.452 OpenEarsSampleApp[1025:510748] Hypothesis was null so we aren’t returning it. If you want null hypotheses to also be returned, set OEPocketsphinxController’s property returnNullHypotheses to TRUE before starting OEPocketsphinxController.

BR,
ulysses

ulysses (Participant)
Dear all,
Running the Sample App, I still have this issue with the following configuration:
- OpenEars version 2.501
- Xcode 7.3
- iOS 9.3.1
- iPhone 6s
A beep tone is played in the Bluetooth headset as soon as the Sample App starts, and the device is disconnected.
The Bluetooth headset itself is functional (YouTube videos play without any problem).
The Sample App works fine without the Bluetooth headset (e.g. with wired earphones).
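To narrow down when exactly the headset drops, one option is to log every audio route change. This is a diagnostic sketch in Swift 2 syntax, not part of the sample app; it uses the standard AVAudioSession route-change notification:

```swift
import AVFoundation

// Diagnostic sketch: print the reason code and resulting route for every
// audio route change, to see whether the headset disconnects exactly when
// the listening session configures its audio session.
NSNotificationCenter.defaultCenter().addObserverForName(
    AVAudioSessionRouteChangeNotification,
    object: nil,
    queue: NSOperationQueue.mainQueue()) { notification in
        let reason = notification.userInfo?[AVAudioSessionRouteChangeReasonKey]
        print("Route change, reason: \(reason)")
        print("New route: \(AVAudioSession.sharedInstance().currentRoute)")
}
```

Comparing the reason codes before and after the beep would show whether the disconnect coincides with the category change to PlayAndRecord seen in the log above.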
Any idea?

BR,
ulysses