This topic has 3 replies, 2 voices, and was last updated 9 years, 2 months ago by Halle Winkler.
February 17, 2015 at 12:13 pm #1024902 | dima.nik (Participant)
Hi!
I am working on a project that uses OpenEars/RapidEars 2.0 + Rejecto.
The app can also play video and audio. Every time the presentation screen (the controller that uses OpenEars) opens, we set up the PocketsphinxController and call the startRealtimeListeningWithLanguageModelAtPath method.
When I go to this screen with a Bluetooth headset connected for the first time (that is, the first time the app is opened), the PocketsphinxController starts listening and I get live hypotheses.
It also works fine if I leave this screen and open it again. But when I play some video or audio within the app and then return to this controller, the PocketsphinxController is set up correctly, yet it does not return any live hypotheses.
It is also not suspended. I really hope the issue is caused by something bad in my own code.
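For context, here is a minimal sketch of the kind of setup described above, assuming OpenEars 2.x with the RapidEars plugin (which provides the startRealtimeListeningWithLanguageModelAtPath call and live-hypothesis callbacks). The controller class name, import paths and helper method are illustrative assumptions rather than the poster's actual code; the start-call and delegate-method signatures are written from memory of the RapidEars 2.x headers and should be checked against the current documentation; and the language model and dictionary paths are assumed to come from an OELanguageModelGenerator/Rejecto run elsewhere in the app:

#import <OpenEars/OEPocketsphinxController.h>
#import <OpenEars/OEEventsObserver.h>
#import <OpenEars/OEAcousticModel.h>
// RapidEars ships as categories on the OpenEars classes; these import paths are assumptions.
#import <RapidEars/OEPocketsphinxController+RapidEars.h>
#import <RapidEars/OEEventsObserver+RapidEars.h>

@interface PresentationViewController () <OEEventsObserverDelegate>
// The observer must be retained, or no delegate callbacks will arrive.
@property (nonatomic, strong) OEEventsObserver *openEarsEventsObserver;
@end

@implementation PresentationViewController

// Hypothetical helper: lmPath and dicPath are the previously generated language model and dictionary.
- (void)startPresentationListeningWithLanguageModelAtPath:(NSString *)lmPath dictionaryAtPath:(NSString *)dicPath {
    self.openEarsEventsObserver = [[OEEventsObserver alloc] init];
    self.openEarsEventsObserver.delegate = self;

    // Activate the shared controller, then start RapidEars real-time listening.
    [[OEPocketsphinxController sharedInstance] setActive:TRUE error:nil];
    [[OEPocketsphinxController sharedInstance]
        startRealtimeListeningWithLanguageModelAtPath:lmPath
                                     dictionaryAtPath:dicPath
                                  acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];
}

// RapidEars delegate callback that delivers the live (in-progress) hypotheses mentioned above.
- (void)rapidEarsDidReceiveLiveSpeechHypothesis:(NSString *)hypothesis recognitionScore:(NSString *)recognitionScore {
    NSLog(@"Live hypothesis: %@", hypothesis);
}

@end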
Here is the log:
2015-02-17 11:39:30.790 Prompt Smart[3085:990682] Audio route has changed for the following reason:
2015-02-17 11:39:30.794 Prompt Smart[3085:990682] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2015-02-17 11:39:30.797 Prompt Smart[3085:990538] Starting OpenEars logging for OpenEars version 2.0 on 64-bit device (or build): iPhone running iOS version: 8.100000
2015-02-17 11:39:30.799 Prompt Smart[3085:990682] This is not a case in which OpenEars notifies of a route change. At the close of this function, the new audio route is —BluetoothHFPBluetoothHFP—. The previous route before changing to this route was <AVAudioSessionRouteDescription: 0x17421b4a0,
inputs = (
“<AVAudioSessionPortDescription: 0x17421b500, type = MicrophoneWired; name = \U041c\U0438\U043a\U0440\U043e\U0444\U043e\U043d \U0433\U0430\U0440\U043d\U0438\U0442\U0443\U0440\U044b; UID = Wired Microphone; selectedDataSource = (null)>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x17421b560, type = Headphones; name = \U041d\U0430\U0443\U0448\U043d\U0438\U043a\U0438; UID = Wired Headphones; selectedDataSource = (null)>”
)>.
2015-02-17 11:39:30.924 Prompt Smart[3085:990587] The word DRAKE’S was not found in the dictionary /private/var/mobile/Containers/Bundle/Application/D5D39C40-7A9D-4C00-BB1B-1FC20B07EF5A/Prompt Smart.app/AcousticModelEnglish.bundle/LanguageModelGeneratorLookupList.text.
2015-02-17 11:39:30.925 Prompt Smart[3085:990587] Now using the fallback method to look up the word DRAKE’S
2015-02-17 11:39:30.926 Prompt Smart[3085:990587] If this is happening more frequently than you would expect, the most likely cause for it is since you are using the English phonetic lookup dictionary is that your words are not in English or aren’t dictionary words, or that you are submitting the words in lowercase when they need to be entirely written in uppercase.
2015-02-17 11:39:30.926 Prompt Smart[3085:990587] Using convertGraphemes for the word or phrase DRAKE’S which doesn’t appear in the dictionary
2015-02-17 11:39:31.046 Prompt Smart[3085:990587] I’m done running performDictionaryLookup and it took 0.186240 seconds
2015-02-17 11:39:31.058 Prompt Smart[3085:990587] Starting dynamic language model generation
INFO: cmd_ln.c(702): Parsing command line:
sphinx_lm_convert \
-i /var/mobile/Containers/Data/Application/481009D2-30B3-45CD-8B0A-3777F37BC994/Library/Caches/1515205191-552694214-Ronald Reagan – Challenger.arpa \
-o /var/mobile/Containers/Data/Application/481009D2-30B3-45CD-8B0A-3777F37BC994/Library/Caches/1515205191-552694214-Ronald Reagan – Challenger.DMP
Current configuration:
[NAME] [DEFLT] [VALUE]
-case
-debug 0
-help no no
-i /var/mobile/Containers/Data/Application/481009D2-30B3-45CD-8B0A-3777F37BC994/Library/Caches/1515205191-552694214-Ronald Reagan – Challenger.arpa
-ienc
-ifmt
-logbase 1.0001 1.000100e+00
-mmap no no
-o /var/mobile/Containers/Data/Application/481009D2-30B3-45CD-8B0A-3777F37BC994/Library/Caches/1515205191-552694214-Ronald Reagan – Challenger.DMP
-oenc utf8 utf8
-ofmt
INFO: ngram_model_arpa.c(504): ngrams 1=327, 2=640, 3=619
INFO: ngram_model_arpa.c(137): Reading unigrams
INFO: ngram_model_arpa.c(543): 327 = #unigrams created
INFO: ngram_model_arpa.c(197): Reading bigrams
INFO: ngram_model_arpa.c(561): 640 = #bigrams created
INFO: ngram_model_arpa.c(562): 44 = #prob2 entries
INFO: ngram_model_arpa.c(570): 55 = #bo_wt2 entries
INFO: ngram_model_arpa.c(294): Reading trigrams
INFO: ngram_model_arpa.c(583): 619 = #trigrams created
INFO: ngram_model_arpa.c(584): 14 = #prob3 entries
INFO: ngram_model_dmp.c(518): Building DMP model…
INFO: ngram_model_dmp.c(548): 327 = #unigrams created
INFO: ngram_model_dmp.c(649): 640 = #bigrams created
INFO: ngram_model_dmp.c(650): 44 = #prob2 entries
INFO: ngram_model_dmp.c(657): 55 = #bo_wt2 entries
INFO: ngram_model_dmp.c(661): 619 = #trigrams created
INFO: ngram_model_dmp.c(662): 14 = #prob3 entries
2015-02-17 11:39:31.145 Prompt Smart[3085:990587] Done creating language model with CMUCLMTK in 0.086565 seconds.
INFO: cmd_ln.c(702): Parsing command line:
sphinx_lm_convert \
-i /var/mobile/Containers/Data/Application/481009D2-30B3-45CD-8B0A-3777F37BC994/Library/Caches/1515205191-552694214-Ronald Reagan – Challenger.arpa \
-o /var/mobile/Containers/Data/Application/481009D2-30B3-45CD-8B0A-3777F37BC994/Library/Caches/1515205191-552694214-Ronald Reagan – Challenger.DMP
Current configuration:
[NAME] [DEFLT] [VALUE]
-case
-debug 0
-help no no
-i /var/mobile/Containers/Data/Application/481009D2-30B3-45CD-8B0A-3777F37BC994/Library/Caches/1515205191-552694214-Ronald Reagan – Challenger.arpa
-ienc
-ifmt
-logbase 1.0001 1.000100e+00
-mmap no no
-o /var/mobile/Containers/Data/Application/481009D2-30B3-45CD-8B0A-3777F37BC994/Library/Caches/1515205191-552694214-Ronald Reagan – Challenger.DMP
-oenc utf8 utf8
-ofmt
INFO: ngram_model_arpa.c(504): ngrams 1=327, 2=640, 3=619
INFO: ngram_model_arpa.c(137): Reading unigrams
INFO: ngram_model_arpa.c(543): 327 = #unigrams created
INFO: ngram_model_arpa.c(197): Reading bigrams
INFO: ngram_model_arpa.c(561): 640 = #bigrams created
INFO: ngram_model_arpa.c(562): 44 = #prob2 entries
INFO: ngram_model_arpa.c(570): 55 = #bo_wt2 entries
INFO: ngram_model_arpa.c(294): Reading trigrams
INFO: ngram_model_arpa.c(583): 619 = #trigrams created
INFO: ngram_model_arpa.c(584): 14 = #prob3 entries
INFO: ngram_model_dmp.c(518): Building DMP model…
INFO: ngram_model_dmp.c(548): 327 = #unigrams created
INFO: ngram_model_dmp.c(649): 640 = #bigrams created
INFO: ngram_model_dmp.c(650): 44 = #prob2 entries
INFO: ngram_model_dmp.c(657): 55 = #bo_wt2 entries
INFO: ngram_model_dmp.c(661): 619 = #trigrams created
INFO: ngram_model_dmp.c(662): 14 = #prob3 entries
2015-02-17 11:39:31.155 Prompt Smart[3085:990587] I’m done running dynamic language model generation and it took 0.351656 seconds
2015-02-17 11:39:31.157 Prompt Smart[3085:990538] User gave mic permission for this app.
2015-02-17 11:39:31.157 Prompt Smart[3085:990774] [info] Engage Apptentive event: local#app#Start_a_Script_Presentation
2015-02-17 11:39:31.198 Prompt Smart[3085:990774] [info] –Found 1 downloaded and available interaction targeted at the event “local#app#Start_a_Script_Presentation”.
2015-02-17 11:39:31.213 Prompt Smart[3085:990773] [info] –Criteria not met for available interaction.
2015-02-17 11:39:31.214 Prompt Smart[3085:990773] [info] –If you are expecting an interaction to be shown at this time, make sure you have fully met the interaction’s requirements as set on your Apptentive dashboard.
2015-02-17 11:39:31.214 Prompt Smart[3085:990538] User gave mic permission for this app.
2015-02-17 11:39:31.215 Prompt Smart[3085:990538] setSecondsOfSilence wasn’t set, using default of 0.700000.
2015-02-17 11:39:31.215 Prompt Smart[3085:990774] Starting listening.
2015-02-17 11:39:31.215 Prompt Smart[3085:990774] about to set up audio session
2015-02-17 11:39:33.329 Prompt Smart[3085:990682] Audio route has changed for the following reason:
2015-02-17 11:39:33.334 Prompt Smart[3085:990682] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2015-02-17 11:39:33.338 Prompt Smart[3085:990682] This is not a case in which OpenEars notifies of a route change. At the close of this function, the new audio route is —BluetoothHFPBluetoothHFP—. The previous route before changing to this route was <AVAudioSessionRouteDescription: 0x1742181e0,
inputs = (
“<AVAudioSessionPortDescription: 0x174404060, type = BluetoothHFP; name = PLT_ML20; UID = 0C:E0:E4:14:87:71-tsco; selectedDataSource = (null)>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x17421c080, type = BluetoothHFP; name = PLT_ML20; UID = 0C:E0:E4:14:87:71-tsco; selectedDataSource = (null)>”
)>.
2015-02-17 11:39:33.347 Prompt Smart[3085:990682] Audio route has changed for the following reason:
2015-02-17 11:39:33.348 Prompt Smart[3085:990682] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2015-02-17 11:39:33.352 Prompt Smart[3085:990774] done starting audio unit
INFO: cmd_ln.c(702): Parsing command line:
\
-lm /var/mobile/Containers/Data/Application/481009D2-30B3-45CD-8B0A-3777F37BC994/Library/Caches/1515205191-552694214-Ronald Reagan – Challenger.DMP \
-vad_threshold 2.000000 \
-remove_noise yes \
-remove_silence yes \
-bestpath no \
-lw 6.500000 \
-dict /var/mobile/Containers/Data/Application/481009D2-30B3-45CD-8B0A-3777F37BC994/Library/Caches/1515205191-552694214-Ronald Reagan – Challenger.dic \
-hmm /private/var/mobile/Containers/Bundle/Application/D5D39C40-7A9D-4C00-BB1B-1FC20B07EF5A/Prompt Smart.app/AcousticModelEnglish.bundle
Current configuration:
[NAME] [DEFLT] [VALUE]
-agc none none
-agcthresh 2.0 2.000000e+00
-allphone
-allphone_ci no no
-alpha 0.97 9.700000e-01
-argfile
-ascale 20.0 2.000000e+01
-aw 1 1
-backtrace no no
-beam 1e-48 1.000000e-48
-bestpath yes no
-bestpathlw 9.5 9.500000e+00
-bghist no no
-ceplen 13 13
-cmn current current
-cmninit 8.0 8.0
-compallsen no no
-debug 0
-dict /var/mobile/Containers/Data/Application/481009D2-30B3-45CD-8B0A-3777F37BC994/Library/Caches/1515205191-552694214-Ronald Reagan – Challenger.dic
-dictcase no no
-dither no no
-doublebw no no
-ds 1 1
-fdict
-feat 1s_c_d_dd 1s_c_d_dd
-featparams
-fillprob 1e-8 1.000000e-08
-frate 100 100
-fsg
-fsgusealtpron yes yes
-fsgusefiller yes yes
-fwdflat yes yes
-fwdflatbeam 1e-64 1.000000e-64
-fwdflatefwid 4 4
-fwdflatlw 8.5 8.500000e+00
-fwdflatsfwin 25 25
-fwdflatwbeam 7e-29 7.000000e-29
-fwdtree yes yes
-hmm /private/var/mobile/Containers/Bundle/Application/D5D39C40-7A9D-4C00-BB1B-1FC20B07EF5A/Prompt Smart.app/AcousticModelEnglish.bundle
-input_endian little little
-jsgf
-kdmaxbbi -1 -1
-kdmaxdepth 0 0
-kdtree
-keyphrase
-kws
-kws_plp 1e-1 1.000000e-01
-kws_threshold 1 1.000000e+00
-latsize 5000 5000
-lda
-ldadim 0 0
-lextreedump 0 0
-lifter 0 0
-lm /var/mobile/Containers/Data/Application/481009D2-30B3-45CD-8B0A-3777F37BC994/Library/Caches/1515205191-552694214-Ronald Reagan – Challenger.DMP
-lmctl
-lmname
-logbase 1.0001 1.000100e+00
-logfn
-logspec no no
-lowerf 133.33334 1.333333e+02
-lpbeam 1e-40 1.000000e-40
-lponlybeam 7e-29 7.000000e-29
-lw 6.5 6.500000e+00
-maxhmmpf 10000 10000
-maxnewoov 20 20
-maxwpf -1 -1
-mdef
-mean
-mfclogdir
-min_endfr 0 0
-mixw
-mixwfloor 0.0000001 1.000000e-07
-mllr
-mmap yes yes
-ncep 13 13
-nfft 512 512
-nfilt 40 40
-nwpen 1.0 1.000000e+00
-pbeam 1e-48 1.000000e-48
-pip 1.0 1.000000e+00
-pl_beam 1e-10 1.000000e-10
-pl_pbeam 1e-5 1.000000e-05
-pl_window 0 0
-rawlogdir
-remove_dc no no
-remove_noise yes yes
-remove_silence yes yes
-round_filters yes yes
-samprate 16000 1.600000e+04
-seed -1 -1
-sendump
-senlogdir
-senmgau
-silprob 0.005 5.000000e-03
-smoothspec no no
-svspec
-tmat
-tmatfloor 0.0001 1.000000e-04
-topn 4 4
-topn_beam 0 0
-toprule
-transform legacy legacy
-unit_area yes yes
-upperf 6855.4976 6.855498e+03
-usewdphones no no
-uw 1.0 1.000000e+00
-vad_postspeech 50 50
-vad_prespeech 10 10
-vad_threshold 2.0 2.000000e+00
-var
-varfloor 0.0001 1.000000e-04
-varnorm no no
-verbose no no
-warp_params
-warp_type inverse_linear inverse_linear
-wbeam 7e-29 7.000000e-29
-wip 0.65 6.500000e-01
-wlen 0.025625 2.562500e-02
INFO: cmd_ln.c(702): Parsing command line:
\
-nfilt 25 \
-lowerf 130 \
-upperf 6800 \
-feat 1s_c_d_dd \
-svspec 0-12/13-25/26-38 \
-agc none \
-cmn current \
-varnorm no \
-transform dct \
-lifter 22 \
-cmninit 40
Current configuration:
[NAME] [DEFLT] [VALUE]
-agc none none
-agcthresh 2.0 2.000000e+00
-alpha 0.97 9.700000e-01
-ceplen 13 13
-cmn current current
-cmninit 8.0 40
-dither no no
-doublebw no no
-feat 1s_c_d_dd 1s_c_d_dd
-frate 100 100
-input_endian little little
-lda
-ldadim 0 0
-lifter 0 22
-logspec no no
-lowerf 133.33334 1.300000e+02
-ncep 13 13
-nfft 512 512
-nfilt 40 25
-remove_dc no no
-remove_noise yes yes
-remove_silence yes yes
-round_filters yes yes
-samprate 16000 1.600000e+04
-seed -1 -1
-smoothspec no no
-svspec 0-12/13-25/26-38
-transform legacy dct
-unit_area yes yes
-upperf 6855.4976 6.800000e+03
-vad_postspeech 50 50
-vad_prespeech 10 10
-vad_threshold 2.0 2.000000e+00
-varnorm no no
-verbose no no
-warp_params
-warp_type inverse_linear inverse_linear
-wlen 0.025625 2.562500e-02
INFO: acmod.c(252): Parsed model-specific feature parameters from /private/var/mobile/Containers/Bundle/Application/D5D39C40-7A9D-4C00-BB1B-1FC20B07EF5A/Prompt Smart.app/AcousticModelEnglish.bundle/feat.params
INFO: feat.c(715): Initializing feature stream to type: ‘1s_c_d_dd’, ceplen=13, CMN=’current’, VARNORM=’no’, AGC=’none’
INFO: cmn.c(143): mean[0]= 12.00, mean[1..12]= 0.0
INFO: acmod.c(171): Using subvector specification 0-12/13-25/26-38
INFO: mdef.c(518): Reading model definition: /private/var/mobile/Containers/Bundle/Application/D5D39C40-7A9D-4C00-BB1B-1FC20B07EF5A/Prompt Smart.app/AcousticModelEnglish.bundle/mdef
INFO: mdef.c(531): Found byte-order mark BMDF, assuming this is a binary mdef file
INFO: bin_mdef.c(336): Reading binary model definition: /private/var/mobile/Containers/Bundle/Application/D5D39C40-7A9D-4C00-BB1B-1FC20B07EF5A/Prompt Smart.app/AcousticModelEnglish.bundle/mdef
INFO: bin_mdef.c(516): 46 CI-phone, 168344 CD-phone, 3 emitstate/phone, 138 CI-sen, 6138 Sen, 32881 Sen-Seq
INFO: tmat.c(206): Reading HMM transition probability matrices: /private/var/mobile/Containers/Bundle/Application/D5D39C40-7A9D-4C00-BB1B-1FC20B07EF5A/Prompt Smart.app/AcousticModelEnglish.bundle/transition_matrices
INFO: acmod.c(124): Attempting to use SCHMM computation module
INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /private/var/mobile/Containers/Bundle/Application/D5D39C40-7A9D-4C00-BB1B-1FC20B07EF5A/Prompt Smart.app/AcousticModelEnglish.bundle/means
INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /private/var/mobile/Containers/Bundle/Application/D5D39C40-7A9D-4C00-BB1B-1FC20B07EF5A/Prompt Smart.app/AcousticModelEnglish.bundle/variances
INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(354): 0 variance values floored
INFO: s2_semi_mgau.c(904): Loading senones from dump file /private/var/mobile/Containers/Bundle/Application/D5D39C40-7A9D-4C00-BB1B-1FC20B07EF5A/Prompt Smart.app/AcousticModelEnglish.bundle/sendump
INFO: s2_semi_mgau.c(928): BEGIN FILE FORMAT DESCRIPTION
INFO: s2_semi_mgau.c(991): Rows: 512, Columns: 6138
INFO: s2_semi_mgau.c(1023): Using memory-mapped I/O for senones
INFO: s2_semi_mgau.c(1294): Maximum top-N: 4 Top-N beams: 0 0 0
INFO: dict.c(320): Allocating 4501 * 32 bytes (140 KiB) for word entries
INFO: dict.c(333): Reading main dictionary: /var/mobile/Containers/Data/Application/481009D2-30B3-45CD-8B0A-3777F37BC994/Library/Caches/1515205191-552694214-Ronald Reagan – Challenger.dic
INFO: dict.c(213): Allocated 2 KiB for strings, 3 KiB for phones
INFO: dict.c(336): 396 words read
INFO: dict.c(342): Reading filler dictionary: /private/var/mobile/Containers/Bundle/Application/D5D39C40-7A9D-4C00-BB1B-1FC20B07EF5A/Prompt Smart.app/AcousticModelEnglish.bundle/noisedict
INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
INFO: dict.c(345): 9 words read
INFO: dict2pid.c(396): Building PID tables for dictionary
INFO: dict2pid.c(406): Allocating 46^3 * 2 bytes (190 KiB) for word-initial triphones
2015-02-17 11:39:33.409 Prompt Smart[3085:990682] This is not a case in which OpenEars notifies of a route change. At the close of this function, the new audio route is —BluetoothHFPBluetoothHFP—. The previous route before changing to this route was <AVAudioSessionRouteDescription: 0x1742181b0,
inputs = (
“<AVAudioSessionPortDescription: 0x17421b400, type = MicrophoneWired; name = \U041c\U0438\U043a\U0440\U043e\U0444\U043e\U043d \U0433\U0430\U0440\U043d\U0438\U0442\U0443\U0440\U044b; UID = Wired Microphone; selectedDataSource = (null)>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x17421c0b0, type = Headphones; name = \U041d\U0430\U0443\U0448\U043d\U0438\U043a\U0438; UID = Wired Headphones; selectedDataSource = (null)>”
)>.
INFO: dict2pid.c(132): Allocated 51152 bytes (49 KiB) for word-final triphones
INFO: dict2pid.c(196): Allocated 51152 bytes (49 KiB) for single-phone word triphones
INFO: ngram_model_arpa.c(79): No \data\ mark in LM file
INFO: ngram_model_dmp.c(166): Will use memory-mapped I/O for LM file
INFO: ngram_model_dmp.c(220): ngrams 1=327, 2=640, 3=619
INFO: ngram_model_dmp.c(266): 327 = LM.unigrams(+trailer) read
INFO: ngram_model_dmp.c(312): 640 = LM.bigrams(+trailer) read
INFO: ngram_model_dmp.c(338): 619 = LM.trigrams read
INFO: ngram_model_dmp.c(363): 44 = LM.prob2 entries read
INFO: ngram_model_dmp.c(383): 55 = LM.bo_wt2 entries read
INFO: ngram_model_dmp.c(403): 14 = LM.prob3 entries read
INFO: ngram_model_dmp.c(431): 2 = LM.tseg_base entries read
INFO: ngram_model_dmp.c(487): 327 = ascii word strings read
INFO: ngram_search_fwdtree.c(99): 181 unique initial diphones
INFO: ngram_search_fwdtree.c(148): 0 root, 0 non-root channels, 50 single-phone words
INFO: ngram_search_fwdtree.c(186): Creating search tree
INFO: ngram_search_fwdtree.c(192): before: 0 root, 0 non-root channels, 50 single-phone words
INFO: ngram_search_fwdtree.c(326): after: max nonroot chan increased to 870
INFO: ngram_search_fwdtree.c(339): after: 181 root, 742 non-root channels, 49 single-phone words
INFO: ngram_search_fwdflat.c(157): fwdflat: min_ef_width = 4, max_sf_win = 25
2015-02-17 11:39:33.428 Prompt Smart[3085:990774] Restoring SmartCMN value of 31.927734
2015-02-17 11:39:33.428 Prompt Smart[3085:990774] Listening.
2015-02-17 11:39:33.430 Prompt Smart[3085:990774] Project has these words or phrases in its dictionary:
___REJ_AA
___REJ_AE
___REJ_AO
___REJ_AW
___REJ_B
___REJ_CH
___REJ_D
___REJ_DH
___REJ_EH
___REJ_F
___REJ_G
___REJ_HH
___REJ_IH
___REJ_IY
___REJ_JH
___REJ_K
___REJ_L
___REJ_M
___REJ_N
___REJ_NG
___REJ_OW
___REJ_OY
___REJ_P
___REJ_R
___REJ_S
___REJ_SH
___REJ_T
___REJ_TH
___REJ_UH
___REJ_UW
___REJ_V
…and 366 more.
2015-02-17 11:39:33.430 Prompt Smart[3085:990774] Recognition loop has started

February 17, 2015 at 12:16 pm #1024903 | Halle Winkler (Politepix)
Welcome,
What happens under the exact same circumstances if you are using the built-in mic or the Apple headset mic?

February 17, 2015 at 1:13 pm #1024904 | dima.nik (Participant)
Thanks for the response!
Under the same circumstances, but with the built-in mic or a wired headset connected, Pocketsphinx works fine.
I also noticed one thing: if I turn off the Bluetooth headset while OpenEars is listening, the audio route changes and I receive live hypotheses. When I turn it back on, I stop receiving hypotheses again.

February 17, 2015 at 1:21 pm #1024905 | Halle Winkler (Politepix)
OK, thank you for your report. I will enter this as a bug and look into it for the next version.
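As a diagnostic aside for the route-change behavior described above: it can help to log every AVAudioSession route change alongside the OpenEars output, to confirm whether the Bluetooth HFP input is actually active at the moment the live hypotheses stop arriving. The sketch below uses only standard AVFoundation notifications and is independent of OpenEars; where the observer is installed (for example when the presentation screen appears) is an assumption:

#import <AVFoundation/AVFoundation.h>

// Diagnostic sketch: log every audio route change with its reason and the
// resulting input/output ports. Keep the returned token if the observer
// should later be removed with -removeObserver:.
static id ObserveAudioRouteChanges(void) {
    return [[NSNotificationCenter defaultCenter]
        addObserverForName:AVAudioSessionRouteChangeNotification
                    object:[AVAudioSession sharedInstance]
                     queue:[NSOperationQueue mainQueue]
                usingBlock:^(NSNotification *note) {
            NSUInteger reason = [note.userInfo[AVAudioSessionRouteChangeReasonKey] unsignedIntegerValue];
            AVAudioSessionRouteDescription *route = [[AVAudioSession sharedInstance] currentRoute];
            NSLog(@"Route change (reason %lu) -> inputs: %@, outputs: %@",
                  (unsigned long)reason, route.inputs, route.outputs);
        }];
}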