stopListening hangs using iOS9



  • #1030260
    roche314
    Participant

    I have seen some posts about this in the forums, but couldn’t find an answer.

      The goal:

    After the user says a phrase, I want OpenEars to stop listening.

      The problem:

    When I call stopListening from pocketsphinxDidReceiveHypothesis, it takes a long time to return. With logging turned on, I see the following error reported numerous times:
    Unable to stop listening because because an utterance is still in progress; trying again.
    Note: this works fine on a device running iOS 8, but fails on multiple devices running iOS 9.

    To rule out anything else my app is doing, I modified the sample app and can reproduce the problem there.
    The only thing I changed was to make it use a grammar instead of a language model:

    NSDictionary *firstGrammar = @{OneOfTheseWillBeSaidOnce:@[
                                                                    @"MY LIBRARY",
                                                                    @"SETTINGS",
                                                                    @"HELP",
                                                                    @"LANGUAGE",
                                                                    @"SAVE",
                                                                    @"NEW DOCUMENT",
                                                                    @"LIST COMMANDS",
                                                                    @"STOP",
                                                                    @"CHANGE MODEL"]};

    Then, in pocketsphinxDidReceiveHypothesis, I call stopListening when the STOP command is recognized.
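    Concretely, the delegate change amounts to something like this sketch (the method signature is the OEEventsObserver delegate callback from the OpenEars sample app; the error-logging line is my own addition):

    ```objc
    // OEEventsObserver delegate callback (OpenEars 2.x sample app).
    // When the STOP phrase is recognized, ask the shared
    // OEPocketsphinxController to stop listening.
    - (void) pocketsphinxDidReceiveHypothesis:(NSString *)hypothesis
                             recognitionScore:(NSString *)recognitionScore
                                  utteranceID:(NSString *)utteranceID {
        NSLog(@"The received hypothesis is %@ with a score of %@ and an ID of %@",
              hypothesis, recognitionScore, utteranceID);

        if ([hypothesis isEqualToString:@"STOP"]) {
            NSLog(@"?????: CALLING: stopListening....");
            // On iOS 8 this returns promptly; on iOS 9 it loops with
            // "Unable to stop listening because because an utterance is
            // still in progress; trying again." (see the log below).
            NSError *error = [[OEPocketsphinxController sharedInstance] stopListening];
            if (error) NSLog(@"Error while stopping listening: %@", error);
        }
    }
    ```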

    #1030261
    roche314
    Participant

    Here is the complete log from app start to the point where the bug occurs:

    2016-05-05 11:10:23.624 OpenEarsSampleApp[449:87641] Starting OpenEars logging for OpenEars version 2.501 on 64-bit device (or build): iPhone running iOS version: 9.300000
    2016-05-05 11:10:23.625 OpenEarsSampleApp[449:87641] Creating shared instance of OEPocketsphinxController
    2016-05-05 11:10:23.638 OpenEarsSampleApp[449:87641] Since there is no cached version, loading the language model lookup list for the acoustic model called AcousticModelEnglish
    2016-05-05 11:10:23.666 OpenEarsSampleApp[449:87641] I'm done running performDictionaryLookup and it took 0.021279 seconds
    2016-05-05 11:10:23.693 OpenEarsSampleApp[449:87641] Returning a cached version of LanguageModelGeneratorLookupList.text
    2016-05-05 11:10:23.718 OpenEarsSampleApp[449:87641] The word BEELD was not found in the dictionary of the acoustic model /var/containers/Bundle/Application/5A0344CC-11CB-442F-8C8C-727CE76276BA/OpenEarsSampleApp.app/AcousticModelEnglish.bundle. Now using the fallback method to look it up. If this is happening more frequently than you would expect, likely causes can be that you are entering words in another language from the one you are recognizing, or that there are symbols (including numbers) that need to be spelled out or cleaned up, or you are using your own acoustic model and there is an issue with either its phonetic dictionary or it lacks a g2p file. Please get in touch at the forums for assistance with the last two possible issues.
    2016-05-05 11:10:23.718 OpenEarsSampleApp[449:87641] Using convertGraphemes for the word or phrase beeld which doesn't appear in the dictionary
    2016-05-05 11:10:23.725 OpenEarsSampleApp[449:87641] the graphemes "B IY L D" were created for the word BEELD using the fallback method.
    2016-05-05 11:10:23.749 OpenEarsSampleApp[449:87641] The word TAH was not found in the dictionary of the acoustic model /var/containers/Bundle/Application/5A0344CC-11CB-442F-8C8C-727CE76276BA/OpenEarsSampleApp.app/AcousticModelEnglish.bundle. Now using the fallback method to look it up. If this is happening more frequently than you would expect, likely causes can be that you are entering words in another language from the one you are recognizing, or that there are symbols (including numbers) that need to be spelled out or cleaned up, or you are using your own acoustic model and there is an issue with either its phonetic dictionary or it lacks a g2p file. Please get in touch at the forums for assistance with the last two possible issues.
    2016-05-05 11:10:23.749 OpenEarsSampleApp[449:87641] Using convertGraphemes for the word or phrase tah which doesn't appear in the dictionary
    2016-05-05 11:10:23.752 OpenEarsSampleApp[449:87641] the graphemes "T AA" were created for the word TAH using the fallback method.
    2016-05-05 11:10:23.753 OpenEarsSampleApp[449:87641] I'm done running performDictionaryLookup and it took 0.059339 seconds
    2016-05-05 11:10:23.762 OpenEarsSampleApp[449:87641] 
    
    Welcome to the OpenEars sample project. This project understands the words:
    BACKWARD,
    CHANGE,
    FORWARD,
    GO,
    LEFT,
    MODEL,
    RIGHT,
    TURN,
    and if you say "CHANGE MODEL" it will switch to its dynamically-generated model which understands the words:
    CHANGE,
    MODEL,
    MONDAY,
    TUESDAY,
    WEDNESDAY,
    THURSDAY,
    FRIDAY,
    SATURDAY,
    SUNDAY,
    QUIDNUNC
    2016-05-05 11:10:23.763 OpenEarsSampleApp[449:87641] Attempting to start listening session from startListeningWithLanguageModelAtPath:
    2016-05-05 11:10:23.765 OpenEarsSampleApp[449:87641] User gave mic permission for this app.
    2016-05-05 11:10:23.765 OpenEarsSampleApp[449:87641] setSecondsOfSilence wasn't set, using default of 0.700000.
    2016-05-05 11:10:23.765 OpenEarsSampleApp[449:87641] Successfully started listening session from startListeningWithLanguageModelAtPath:
    2016-05-05 11:10:23.765 OpenEarsSampleApp[449:87667] Starting listening.
    2016-05-05 11:10:23.766 OpenEarsSampleApp[449:87667] about to set up audio session
    2016-05-05 11:10:23.766 OpenEarsSampleApp[449:87667] Creating audio session with default settings.
    2016-05-05 11:10:23.805 OpenEarsSampleApp[449:87683] Audio route has changed for the following reason:
    2016-05-05 11:10:23.809 OpenEarsSampleApp[449:87683] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
    2016-05-05 11:10:24.008 OpenEarsSampleApp[449:87667] done starting audio unit
    2016-05-05 11:10:24.010 OpenEarsSampleApp[449:87683] This is not a case in which OpenEars notifies of a route change. At the close of this function, the new audio route is ---SpeakerMicrophoneBuiltIn---. The previous route before changing to this route was <AVAudioSessionRouteDescription: 0x14d572c60, 
    inputs = (
        "<AVAudioSessionPortDescription: 0x14d575670, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Front>"
    ); 
    outputs = (
        "<AVAudioSessionPortDescription: 0x14d576130, type = Speaker; name = Speaker; UID = Speaker; selectedDataSource = (null)>"
    )>.
    INFO: pocketsphinx.c(145): Parsed model-specific feature parameters from /var/containers/Bundle/Application/5A0344CC-11CB-442F-8C8C-727CE76276BA/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/feat.params
    Current configuration:
    [NAME]			[DEFLT]		[VALUE]
    -agc			none		none
    -agcthresh		2.0		2.000000e+00
    -allphone				
    -allphone_ci		no		no
    -alpha			0.97		9.700000e-01
    -ascale			20.0		2.000000e+01
    -aw			1		1
    -backtrace		no		no
    -beam			1e-48		1.000000e-48
    -bestpath		yes		yes
    -bestpathlw		9.5		9.500000e+00
    -ceplen			13		13
    -cmn			current		current
    -cmninit		8.0		40
    -compallsen		no		no
    -debug					0
    -dict					/var/mobile/Containers/Data/Application/A7977349-D883-4517-9A1A-657ABF5E5A06/Library/Caches/FirstOpenEarsDynamicLanguageModel.dic
    -dictcase		no		no
    -dither			no		no
    -doublebw		no		no
    -ds			1		1
    -fdict					/var/containers/Bundle/Application/5A0344CC-11CB-442F-8C8C-727CE76276BA/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/noisedict
    -feat			1s_c_d_dd	1s_c_d_dd
    -featparams				/var/containers/Bundle/Application/5A0344CC-11CB-442F-8C8C-727CE76276BA/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/feat.params
    -fillprob		1e-8		1.000000e-08
    -frate			100		100
    -fsg					
    -fsgusealtpron		yes		yes
    -fsgusefiller		yes		yes
    -fwdflat		yes		yes
    -fwdflatbeam		1e-64		1.000000e-64
    -fwdflatefwid		4		4
    -fwdflatlw		8.5		8.500000e+00
    -fwdflatsfwin		25		25
    -fwdflatwbeam		7e-29		7.000000e-29
    -fwdtree		yes		yes
    -hmm					/var/containers/Bundle/Application/5A0344CC-11CB-442F-8C8C-727CE76276BA/OpenEarsSampleApp.app/AcousticModelEnglish.bundle
    -input_endian		little		little
    -jsgf					/var/mobile/Containers/Data/Application/A7977349-D883-4517-9A1A-657ABF5E5A06/Library/Caches/FirstOpenEarsDynamicLanguageModel.gram
    -keyphrase				
    -kws					
    -kws_delay		10		10
    -kws_plp		1e-1		1.000000e-01
    -kws_threshold		1		1.000000e+00
    -latsize		5000		5000
    -lda					
    -ldadim			0		0
    -lifter			0		22
    -lm					
    -lmctl					
    -lmname					
    -logbase		1.0001		1.000100e+00
    -logfn					
    -logspec		no		no
    -lowerf			133.33334	1.300000e+02
    -lpbeam			1e-40		1.000000e-40
    -lponlybeam		7e-29		7.000000e-29
    -lw			6.5		1.000000e+00
    -maxhmmpf		30000		30000
    -maxwpf			-1		-1
    -mdef					/var/containers/Bundle/Application/5A0344CC-11CB-442F-8C8C-727CE76276BA/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/mdef
    -mean					/var/containers/Bundle/Application/5A0344CC-11CB-442F-8C8C-727CE76276BA/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/means
    -mfclogdir				
    -min_endfr		0		0
    -mixw					
    -mixwfloor		0.0000001	1.000000e-07
    -mllr					
    -mmap			yes		yes
    -ncep			13		13
    -nfft			512		512
    -nfilt			40		25
    -nwpen			1.0		1.000000e+00
    -pbeam			1e-48		1.000000e-48
    -pip			1.0		1.000000e+00
    -pl_beam		1e-10		1.000000e-10
    -pl_pbeam		1e-10		1.000000e-10
    -pl_pip			1.0		1.000000e+00
    -pl_weight		3.0		3.000000e+00
    -pl_window		5		5
    -rawlogdir				
    -remove_dc		no		no
    -remove_noise		yes		yes
    -remove_silence		yes		yes
    -round_filters		yes		yes
    -samprate		16000		1.600000e+04
    -seed			-1		-1
    -sendump				/var/containers/Bundle/Application/5A0344CC-11CB-442F-8C8C-727CE76276BA/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/sendump
    -senlogdir				
    -senmgau				
    -silprob		0.005		5.000000e-03
    -smoothspec		no		no
    -svspec					0-12/13-25/26-38
    -tmat					/var/containers/Bundle/Application/5A0344CC-11CB-442F-8C8C-727CE76276BA/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/transition_matrices
    -tmatfloor		0.0001		1.000000e-04
    -topn			4		4
    -topn_beam		0		0
    -toprule				
    -transform		legacy		dct
    -unit_area		yes		yes
    -upperf			6855.4976	6.800000e+03
    -uw			1.0		1.000000e+00
    -vad_postspeech		50		69
    -vad_prespeech		20		10
    -vad_startspeech	10		10
    -vad_threshold		2.0		2.000000e+00
    -var					/var/containers/Bundle/Application/5A0344CC-11CB-442F-8C8C-727CE76276BA/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/variances
    -varfloor		0.0001		1.000000e-04
    -varnorm		no		no
    -verbose		no		no
    -warp_params				
    -warp_type		inverse_linear	inverse_linear
    -wbeam			7e-29		7.000000e-29
    -wip			0.65		6.500000e-01
    -wlen			0.025625	2.562500e-02
    
    INFO: feat.c(715): Initializing feature stream to type: '1s_c_d_dd', ceplen=13, CMN='current', VARNORM='no', AGC='none'
    INFO: cmn.c(143): mean[0]= 12.00, mean[1..12]= 0.0
    INFO: acmod.c(164): Using subvector specification 0-12/13-25/26-38
    INFO: mdef.c(518): Reading model definition: /var/containers/Bundle/Application/5A0344CC-11CB-442F-8C8C-727CE76276BA/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/mdef
    INFO: mdef.c(531): Found byte-order mark BMDF, assuming this is a binary mdef file
    INFO: bin_mdef.c(336): Reading binary model definition: /var/containers/Bundle/Application/5A0344CC-11CB-442F-8C8C-727CE76276BA/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/mdef
    INFO: bin_mdef.c(516): 46 CI-phone, 168344 CD-phone, 3 emitstate/phone, 138 CI-sen, 6138 Sen, 32881 Sen-Seq
    INFO: tmat.c(206): Reading HMM transition probability matrices: /var/containers/Bundle/Application/5A0344CC-11CB-442F-8C8C-727CE76276BA/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/transition_matrices
    INFO: acmod.c(117): Attempting to use PTM computation module
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/5A0344CC-11CB-442F-8C8C-727CE76276BA/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/means
    INFO: ms_gauden.c(292): 1 codebook, 3 feature, size: 
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/5A0344CC-11CB-442F-8C8C-727CE76276BA/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/variances
    INFO: ms_gauden.c(292): 1 codebook, 3 feature, size: 
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(354): 0 variance values floored
    INFO: ptm_mgau.c(805): Number of codebooks doesn't match number of ciphones, doesn't look like PTM: 1 != 46
    INFO: acmod.c(119): Attempting to use semi-continuous computation module
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/5A0344CC-11CB-442F-8C8C-727CE76276BA/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/means
    INFO: ms_gauden.c(292): 1 codebook, 3 feature, size: 
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/5A0344CC-11CB-442F-8C8C-727CE76276BA/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/variances
    INFO: ms_gauden.c(292): 1 codebook, 3 feature, size: 
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(354): 0 variance values floored
    INFO: s2_semi_mgau.c(904): Loading senones from dump file /var/containers/Bundle/Application/5A0344CC-11CB-442F-8C8C-727CE76276BA/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/sendump
    INFO: s2_semi_mgau.c(928): BEGIN FILE FORMAT DESCRIPTION
    INFO: s2_semi_mgau.c(991): Rows: 512, Columns: 6138
    INFO: s2_semi_mgau.c(1023): Using memory-mapped I/O for senones
    INFO: s2_semi_mgau.c(1294): Maximum top-N: 4 Top-N beams: 0 0 0
    INFO: phone_loop_search.c(114): State beam -225 Phone exit beam -225 Insertion penalty 0
    INFO: dict.c(320): Allocating 4121 * 32 bytes (128 KiB) for word entries
    INFO: dict.c(333): Reading main dictionary: /var/mobile/Containers/Data/Application/A7977349-D883-4517-9A1A-657ABF5E5A06/Library/Caches/FirstOpenEarsDynamicLanguageModel.dic
    INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
    INFO: dict.c(336): 16 words read
    INFO: dict.c(358): Reading filler dictionary: /var/containers/Bundle/Application/5A0344CC-11CB-442F-8C8C-727CE76276BA/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/noisedict
    INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
    INFO: dict.c(361): 9 words read
    INFO: dict2pid.c(396): Building PID tables for dictionary
    INFO: dict2pid.c(406): Allocating 46^3 * 2 bytes (190 KiB) for word-initial triphones
    INFO: dict2pid.c(132): Allocated 51152 bytes (49 KiB) for word-final triphones
    INFO: dict2pid.c(196): Allocated 51152 bytes (49 KiB) for single-phone word triphones
    INFO: jsgf.c(691): Defined rule: <FirstOpenEarsDynamicLanguageModel.g00000>
    INFO: jsgf.c(691): Defined rule: PUBLIC <FirstOpenEarsDynamicLanguageModel.rule_0>
    INFO: fsg_model.c(215): Computing transitive closure for null transitions
    INFO: fsg_model.c(277): 0 null transitions added
    INFO: fsg_search.c(227): FSG(beam: -1080, pbeam: -1080, wbeam: -634; wip: -5, pip: 0)
    INFO: fsg_model.c(428): Adding silence transitions for <sil> to FSG
    INFO: fsg_model.c(448): Added 6 silence word transitions
    INFO: fsg_model.c(428): Adding silence transitions for <sil> to FSG
    INFO: fsg_model.c(448): Added 6 silence word transitions
    INFO: fsg_model.c(428): Adding silence transitions for [BREATH] to FSG
    INFO: fsg_model.c(448): Added 6 silence word transitions
    INFO: fsg_model.c(428): Adding silence transitions for [COUGH] to FSG
    INFO: fsg_model.c(448): Added 6 silence word transitions
    INFO: fsg_model.c(428): Adding silence transitions for [NOISE] to FSG
    INFO: fsg_model.c(448): Added 6 silence word transitions
    INFO: fsg_model.c(428): Adding silence transitions for [SMACK] to FSG
    INFO: fsg_model.c(448): Added 6 silence word transitions
    INFO: fsg_model.c(428): Adding silence transitions for [UH] to FSG
    INFO: fsg_model.c(448): Added 6 silence word transitions
    INFO: fsg_search.c(173): Added 3 alternate word transitions
    INFO: fsg_lextree.c(110): Allocated 564 bytes (0 KiB) for left and right context phones
    INFO: fsg_lextree.c(256): 117 HMM nodes in lextree (57 leaves)
    INFO: fsg_lextree.c(259): Allocated 16848 bytes (16 KiB) for all lextree nodes
    INFO: fsg_lextree.c(262): Allocated 8208 bytes (8 KiB) for lextree leafnodes
    2016-05-05 11:10:24.054 OpenEarsSampleApp[449:87667] Restoring SmartCMN value of 40.770752
    2016-05-05 11:10:24.054 OpenEarsSampleApp[449:87667] Listening.
    2016-05-05 11:10:24.055 OpenEarsSampleApp[449:87667] Project has these words or phrases in its dictionary:
    CHANGE
    COMMANDS
    DOCUMENT
    DOCUMENT(2)
    HELP
    LANGUAGE
    LANGUAGE(2)
    LIBRARY
    LIST
    MODEL
    MY
    NEW
    NEW(2)
    SAVE
    SETTINGS
    STOP
    2016-05-05 11:10:24.055 OpenEarsSampleApp[449:87667] Recognition loop has started
    2016-05-05 11:10:24.069 OpenEarsSampleApp[449:87641] Local callback: Pocketsphinx is now listening.
    2016-05-05 11:10:24.070 OpenEarsSampleApp[449:87641] Local callback: Pocketsphinx started.
    2016-05-05 11:10:24.496 OpenEarsSampleApp[449:87667] Speech detected...
    2016-05-05 11:10:24.497 OpenEarsSampleApp[449:87641] Local callback: Pocketsphinx has detected speech.
    2016-05-05 11:10:27.510 OpenEarsSampleApp[449:87668] End of speech detected...
    2016-05-05 11:10:27.511 OpenEarsSampleApp[449:87641] Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 40.77  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 38.24 -7.00 -6.37 -0.57 -5.08  3.29 -1.93 -2.60 -0.31 -4.48  0.15 -3.28 -0.11 >
    INFO: fsg_search.c(843): 314 frames, 10431 HMMs (33/fr), 25558 senones (81/fr), 785 history entries (2/fr)
    
    2016-05-05 11:10:27.518 OpenEarsSampleApp[449:87668] Pocketsphinx heard "STOP" with a score of (0) and an utterance ID of 0.
    2016-05-05 11:10:27.519 OpenEarsSampleApp[449:87641] Local callback: The received hypothesis is STOP with a score of 0 and an ID of 0
    2016-05-05 11:10:27.519 OpenEarsSampleApp[449:87641] ?????: CALLING: stopListening....
    2016-05-05 11:10:27.520 OpenEarsSampleApp[449:87641] Stopping listening.
    2016-05-05 11:10:27.783 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:27.797 OpenEarsSampleApp[449:87683] Audio route has changed for the following reason:
    2016-05-05 11:10:27.798 OpenEarsSampleApp[449:87683] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
    2016-05-05 11:10:27.800 OpenEarsSampleApp[449:87683] This is not a case in which OpenEars notifies of a route change. At the close of this function, the new audio route is ---SpeakerMicrophoneBuiltIn---. The previous route before changing to this route was <AVAudioSessionRouteDescription: 0x14d5a4920, 
    inputs = (
        "<AVAudioSessionPortDescription: 0x14d5a3f80, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Bottom>"
    ); 
    outputs = (
        "<AVAudioSessionPortDescription: 0x14d5a6410, type = Speaker; name = Speaker; UID = Speaker; selectedDataSource = (null)>"
    )>.
    2016-05-05 11:10:27.835 OpenEarsSampleApp[449:87641] Attempting to stop an unstopped utterance so listening can stop.
    2016-05-05 11:10:27.835 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    [The line "Unable to stop listening because because an utterance is still in progress; trying again." then repeats roughly every 50 ms, from 11:10:27.886 until the end of the captured log at 11:10:32.451.]
    2016-05-05 11:10:32.503 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:32.554 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:32.605 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:32.657 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:32.707 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:32.759 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:32.810 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:32.861 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:32.912 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:32.964 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:33.015 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:33.067 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:33.118 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:33.170 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:33.221 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:33.272 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:33.324 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:33.375 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:33.426 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:33.478 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:33.529 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:33.581 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:33.632 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:33.684 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:33.735 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:33.786 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:33.837 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:33.889 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:33.940 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:33.992 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:34.043 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:34.095 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:34.146 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:34.197 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:34.249 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:34.300 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:34.351 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:34.403 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:34.454 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:34.505 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:34.557 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:34.608 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:34.660 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:34.711 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:34.762 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:34.814 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:34.866 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:34.917 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:34.968 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:35.019 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:35.071 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:35.122 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:35.174 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:35.225 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:35.276 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:35.328 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:35.379 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:35.431 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:35.482 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:35.533 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:35.585 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:35.636 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:35.687 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:35.739 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:35.790 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:35.842 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:35.893 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:35.944 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:35.996 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:36.047 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:36.099 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:36.150 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:36.201 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:36.253 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:36.304 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:36.355 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:36.407 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:36.458 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:36.509 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:36.561 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:36.612 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:36.663 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:36.715 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:36.766 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:36.817 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:36.868 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:36.919 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:36.970 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:37.022 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:37.073 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:37.125 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:37.176 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:37.228 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:37.279 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:37.331 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:37.382 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:37.433 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:37.484 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:37.535 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:37.586 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:37.637 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:37.689 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:37.740 OpenEarsSampleApp[449:87641] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 11:10:37.792 OpenEarsSampleApp[449:87641] Because the utterance couldn't be stopped in a reasonable timeframe, we will break but prefer to let the decoder leak than force an exception by freeing it when it's unsafe. If you see this message regularly, it is a bug, so please report the specific circumstances under which you are regularly seeing it.
    2016-05-05 11:10:37.792 OpenEarsSampleApp[449:87641] No longer listening.
    2016-05-05 11:10:37.792 OpenEarsSampleApp[449:87641] ?????: RETURNED FROM: stopListening....error = NONE
    2016-05-05 11:10:37.796 OpenEarsSampleApp[449:87641] Local callback: Pocketsphinx has stopped listening.
    

    [/spoiler]

    #1030262
    roche314
    Participant

    HERE IS THE MODIFIED COPY OF ViewController.m
    [spoiler]

    //  ViewController.m
    //  OpenEarsSampleApp
    //
    //  ViewController.m demonstrates the use of the OpenEars framework. 
    //
    //  Copyright Politepix UG (haftungsbeschränkt) 2014. All rights reserved.
    //  https://www.politepix.com
    //  Contact at https://www.politepix.com/contact
    //
    //  This file is licensed under the Politepix Shared Source license found in the root of the source distribution.
    
    // **************************************************************************************************************************************************************
    // **************************************************************************************************************************************************************
    // **************************************************************************************************************************************************************
    // IMPORTANT NOTE: Audio driver and hardware behavior is completely different between the Simulator and a real device. It is not informative to test OpenEars' accuracy on the Simulator, and please do not report Simulator-only bugs since I only actively support 
    // the device driver. Please only do testing/bug reporting based on results on a real device such as an iPhone or iPod Touch. Thanks!
    // **************************************************************************************************************************************************************
    // **************************************************************************************************************************************************************
    // **************************************************************************************************************************************************************
    
    #import "ViewController.h"
    #import <OpenEars/OEPocketsphinxController.h>
    #import <OpenEars/OEFliteController.h>
    #import <OpenEars/OELanguageModelGenerator.h>
    #import <OpenEars/OELogging.h>
    #import <OpenEars/OEAcousticModel.h>
    #import <Slt/Slt.h>
    
    @interface ViewController()
    
    // UI actions, not specifically related to OpenEars other than the fact that they invoke OpenEars methods.
    - (IBAction) stopButtonAction;
    - (IBAction) startButtonAction;
    - (IBAction) suspendListeningButtonAction;
    - (IBAction) resumeListeningButtonAction;
    
    // Example for reading out the input audio levels without locking the UI using an NSTimer
    
    - (void) startDisplayingLevels;
    - (void) stopDisplayingLevels;
    
    // These three are the important OpenEars objects that this class demonstrates the use of.
    //? @property (nonatomic, strong) Slt *slt;
    
    @property (nonatomic, strong) OEEventsObserver *openEarsEventsObserver;
    @property (nonatomic, strong) OEPocketsphinxController *pocketsphinxController;
    //? @property (nonatomic, strong) OEFliteController *fliteController;
    
    // Some UI, not specifically related to OpenEars.
    @property (nonatomic, strong) IBOutlet UIButton *stopButton;
    @property (nonatomic, strong) IBOutlet UIButton *startButton;
    @property (nonatomic, strong) IBOutlet UIButton *suspendListeningButton;	
    @property (nonatomic, strong) IBOutlet UIButton *resumeListeningButton;	
    @property (nonatomic, strong) IBOutlet UITextView *statusTextView;
    @property (nonatomic, strong) IBOutlet UITextView *heardTextView;
    @property (nonatomic, strong) IBOutlet UILabel *pocketsphinxDbLabel;
    @property (nonatomic, strong) IBOutlet UILabel *fliteDbLabel;
    @property (nonatomic, assign) BOOL usingStartingLanguageModel;
    @property (nonatomic, assign) int restartAttemptsDueToPermissionRequests;
    @property (nonatomic, assign) BOOL startupFailedDueToLackOfPermissions;
    
    // Things which help us show off the dynamic language features.
    // @property (nonatomic, copy) NSString *pathToFirstDynamicallyGeneratedLanguageModel;
    @property (nonatomic, copy) NSString *pathToFirstDynamicallyGeneratedGrammar;
    @property (nonatomic, copy) NSString *pathToFirstDynamicallyGeneratedDictionary;
    //@property (nonatomic, copy) NSString *pathToSecondDynamicallyGeneratedLanguageModel;
    @property (nonatomic, copy) NSString *pathToSecondDynamicallyGeneratedGrammar;
    @property (nonatomic, copy) NSString *pathToSecondDynamicallyGeneratedDictionary;
    
    // Our NSTimer that will help us read and display the input and output levels without locking the UI
    @property (nonatomic, strong) 	NSTimer *uiUpdateTimer;
    
    @end
    
    @implementation ViewController
    
    #define kLevelUpdatesPerSecond 18 // We'll have the ui update 18 times a second to show some fluidity without hitting the CPU too hard.
    
    //#define kGetNbest // Uncomment this if you want to try out nbest
    #pragma mark - 
    #pragma mark Memory Management
    
    - (void)dealloc {
        [self stopDisplayingLevels];
    }
    
    #pragma mark -
    #pragma mark View Lifecycle
    
    - (void)viewDidLoad {
        [super viewDidLoad];
    //?    self.fliteController = [[OEFliteController alloc] init];
        self.openEarsEventsObserver = [[OEEventsObserver alloc] init];
        self.openEarsEventsObserver.delegate = self;
    //?    self.slt = [[Slt alloc] init];
        
        self.restartAttemptsDueToPermissionRequests = 0;
        self.startupFailedDueToLackOfPermissions = FALSE;
        
     [OELogging startOpenEarsLogging]; // OELogging is enabled for this report; it produces verbose logging about internal OpenEars operations such as audio settings.
    [OEPocketsphinxController sharedInstance].verbosePocketSphinx = TRUE; // Also enabled for this report; produces much more verbose speech recognition engine output.
        
        [self.openEarsEventsObserver setDelegate:self]; // Make this class the delegate of OpenEarsObserver so we can get all of the messages about what OpenEars is doing.
        
        [[OEPocketsphinxController sharedInstance] setActive:TRUE error:nil]; // Call this before setting any OEPocketsphinxController characteristics
        
        // This is the language model we're going to start up with. The only reason I'm making it a class property is that I reuse it a bunch of times in this example, 
        // but you can pass the string contents directly to OEPocketsphinxController:startListeningWithLanguageModelAtPath:dictionaryAtPath:languageModelIsJSGF:
    
        
    //?: CHANGED TO USE GRAMMAR
    //    NSArray *firstLanguageArray = @[@"BACKWARD",
    //                                    @"CHANGE",
    //                                    @"FORWARD",
    //                                    @"GO",
    //                                    @"LEFT",
    //                                    @"MODEL",
    //                                    @"RIGHT",
    //                                    @"TURN"];
        NSDictionary *firstGrammar = @{OneOfTheseWillBeSaidOnce:@[
                                                                    @"MY LIBRARY",
                                                                    @"SETTINGS",
                                                                    @"HELP",
                                                                    @"LANGUAGE",
                                                                    @"SAVE",
                                                                    @"NEW DOCUMENT",
                                                                    @"LIST COMMANDS",
                                                                    @"STOP",
                                                                    @"CHANGE MODEL"]};
        
        OELanguageModelGenerator *languageModelGenerator = [[OELanguageModelGenerator alloc] init]; 
        
        // languageModelGenerator.verboseLanguageModelGenerator = TRUE; // Uncomment me for verbose language model generator debug output.
    
    //?: CHANGED TO USE GRAMMAR
    //    NSError *error = [languageModelGenerator generateLanguageModelFromArray:firstLanguageArray withFilesNamed:@"FirstOpenEarsDynamicLanguageModel" forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]]; // Change "AcousticModelEnglish" to "AcousticModelSpanish" in order to create a language model for Spanish recognition instead of English.
    
        NSError *error = [languageModelGenerator generateGrammarFromDictionary:firstGrammar
                                                                withFilesNamed:@"FirstOpenEarsDynamicLanguageModel"
                                                        forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];
        
        if(error) {
            NSLog(@"Dynamic language generator reported error %@", [error description]);	
        } else {
            //?: CHANGED TO USE GRAMMAR
    //        self.pathToFirstDynamicallyGeneratedLanguageModel = [languageModelGenerator pathToSuccessfullyGeneratedLanguageModelWithRequestedName:@"FirstOpenEarsDynamicLanguageModel"];
            self.pathToFirstDynamicallyGeneratedGrammar = [languageModelGenerator pathToSuccessfullyGeneratedGrammarWithRequestedName:@"FirstOpenEarsDynamicLanguageModel"];
            
            self.pathToFirstDynamicallyGeneratedDictionary = [languageModelGenerator pathToSuccessfullyGeneratedDictionaryWithRequestedName:@"FirstOpenEarsDynamicLanguageModel"];
        }
        
        self.usingStartingLanguageModel = TRUE; // This is not an OpenEars thing, this is just so I can switch back and forth between the two models in this sample app.
        
        // Here is an example of dynamically creating an in-app grammar.
        
        // We want it to be able to response to the speech "CHANGE MODEL" and a few other things.  Items we want to have recognized as a whole phrase (like "CHANGE MODEL") 
        // we put into the array as one string (e.g. "CHANGE MODEL" instead of "CHANGE" and "MODEL"). This increases the probability that they will be recognized as a phrase. This works even better starting with version 1.0 of OpenEars.
        
    //?: CHANGED TO USE GRAMMAR
    //    NSArray *secondLanguageArray = @[@"SUNDAY",
    //                                     @"MONDAY",
    //                                     @"TUESDAY",
    //                                     @"WEDNESDAY",
    //                                     @"THURSDAY",
    //                                     @"FRIDAY",
    //                                     @"SATURDAY",
    //                                     @"QUIDNUNC",
    //                                     @"CHANGE MODEL"];
        NSDictionary *secondGrammar = @{OneOfTheseWillBeSaidOnce:@[
                                                @"CAMERA ROLL",
                                                @"TAKE PICTURE",
                                                @"TAH BEELD",
                                                @"FLASH ON",
                                                @"FLASH AUTO",
                                                @"FLASH OFF",
                                                @"MULTI PAGE",
                                                @"SINGLE PAGE",
                                                @"READ",
                                                @"LIST COMMANDS",
                                                @"STOP",
                                                @"CHANGE MODEL"]};
        
        
    // In the original sample's secondLanguageArray (commented out above), the last entry, quidnunc, is an example of a word which will not be found in the lookup dictionary and will be passed to the fallback method. The fallback method is slower,
    // so, for instance, creating a new language model from dictionary words will be pretty fast, but a model that has a lot of unusual names in it or invented/rare/recent-slang
    // words will be slower to generate. You can use this information to give your users good UI feedback about what the expectations for wait times should be.
        
        // I don't think it's beneficial to lazily instantiate OELanguageModelGenerator because you only need to give it a single message and then release it.
        // If you need to create a very large model or any size of model that has many unusual words that have to make use of the fallback generation method,
        // you will want to run this on a background thread so you can give the user some UI feedback that the task is in progress.
        
        // generateLanguageModelFromArray:withFilesNamed returns an NSError which will either have a value of noErr if everything went fine or a specific error if it didn't.
        
    //?: CHANGED TO USE GRAMMAR
    //    error = [languageModelGenerator generateLanguageModelFromArray:secondLanguageArray withFilesNamed:@"SecondOpenEarsDynamicLanguageModel" forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]]; // Change "AcousticModelEnglish" to "AcousticModelSpanish" in order to create a language model for Spanish recognition instead of English.
      
        error = [languageModelGenerator generateGrammarFromDictionary:secondGrammar withFilesNamed:@"SecondOpenEarsDynamicLanguageModel"
                                               forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];
        
        
        //    NSError *error = [languageModelGenerator generateLanguageModelFromTextFile:[NSString stringWithFormat:@"%@/%@",[[NSBundle mainBundle] resourcePath], @"OpenEarsCorpus.txt"] withFilesNamed:@"SecondOpenEarsDynamicLanguageModel" forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]]; // Try this out to see how generating a language model from a corpus works.
        
        
        if(error) {
            NSLog(@"Dynamic language generator reported error %@", [error description]);
        } else {
            
    //        self.pathToSecondDynamicallyGeneratedLanguageModel = [languageModelGenerator pathToSuccessfullyGeneratedLanguageModelWithRequestedName:@"SecondOpenEarsDynamicLanguageModel"]; // We'll set our new .languagemodel file to be the one to get switched to when the words "CHANGE MODEL" are recognized.
      
            self.pathToSecondDynamicallyGeneratedGrammar = [languageModelGenerator pathToSuccessfullyGeneratedGrammarWithRequestedName:@"SecondOpenEarsDynamicLanguageModel"];
            
            self.pathToSecondDynamicallyGeneratedDictionary = [languageModelGenerator pathToSuccessfullyGeneratedDictionaryWithRequestedName:@"SecondOpenEarsDynamicLanguageModel"]; // We'll set our new dictionary to be the one to get switched to when the words "CHANGE MODEL" are recognized.
            
            // Next, an informative message.
            
            NSLog(@"\n\nWelcome to the OpenEars sample project. This project understands the words:\nBACKWARD,\nCHANGE,\nFORWARD,\nGO,\nLEFT,\nMODEL,\nRIGHT,\nTURN,\nand if you say \"CHANGE MODEL\" it will switch to its dynamically-generated model which understands the words:\nCHANGE,\nMODEL,\nMONDAY,\nTUESDAY,\nWEDNESDAY,\nTHURSDAY,\nFRIDAY,\nSATURDAY,\nSUNDAY,\nQUIDNUNC");
            
            // This is how to start the continuous listening loop of an available instance of OEPocketsphinxController. We won't do this if the language generation failed since it will be listening for a command to change over to the generated language.
            
            [[OEPocketsphinxController sharedInstance] setActive:TRUE error:nil]; // Call this once before setting properties of the OEPocketsphinxController instance.
            
            //   [OEPocketsphinxController sharedInstance].pathToTestFile = [[NSBundle mainBundle] pathForResource:@"change_model_short" ofType:@"wav"];  // This is how you could use a test WAV (mono/16-bit/16k) rather than live recognition. Don't forget to add your WAV to your app bundle.
            
            if(![OEPocketsphinxController sharedInstance].isListening)
            {
    //?: CHANGED TO USE GRAMMAR
    //            [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:FALSE]; // Start speech recognition if we aren't already listening.
      
                [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedGrammar
                                        dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:TRUE];
            
            }
            // [self startDisplayingLevels] is not an OpenEars method, just a very simple approach for level reading
            // that I've included with this sample app. My example implementation does make use of two OpenEars
            // methods:	the pocketsphinxInputLevel method of OEPocketsphinxController and the fliteOutputLevel
            // method of fliteController. 
            //
            // The example is meant to show one way that you can read those levels continuously without locking the UI, 
            // by using an NSTimer, but the OpenEars level-reading methods 
            // themselves do not include multithreading code since I believe that you will want to design your own 
            // code approaches for level display that are tightly-integrated with your interaction design and the  
            // graphics API you choose. 
            
            [self startDisplayingLevels];
            
            // Here is some UI stuff that has nothing specifically to do with OpenEars implementation
            self.startButton.hidden = TRUE;
            self.stopButton.hidden = TRUE;
            self.suspendListeningButton.hidden = TRUE;
            self.resumeListeningButton.hidden = TRUE;
        }
    }
    
    #pragma mark -
    #pragma mark OEEventsObserver delegate methods
    
    // What follows are all of the delegate methods you can optionally use once you've instantiated an OEEventsObserver and set its delegate to self. 
    // I've provided some pretty granular information about the exact phase of the Pocketsphinx listening loop, the Audio Session, and Flite, but I'd expect 
    // that the ones that will really be needed by most projects are the following:
    //
    //- (void) pocketsphinxDidReceiveHypothesis:(NSString *)hypothesis recognitionScore:(NSString *)recognitionScore utteranceID:(NSString *)utteranceID;
    //- (void) audioSessionInterruptionDidBegin;
    //- (void) audioSessionInterruptionDidEnd;
    //- (void) audioRouteDidChangeToRoute:(NSString *)newRoute;
    //- (void) pocketsphinxDidStartListening;
    //- (void) pocketsphinxDidStopListening;
    //
    // It isn't necessary to have a OEPocketsphinxController or a OEFliteController instantiated in order to use these methods.  If there isn't anything instantiated that will
    // send messages to an OEEventsObserver, all that will happen is that these methods will never fire.  You also do not have to create a OEEventsObserver in
    // the same class or view controller in which you are doing things with a OEPocketsphinxController or OEFliteController; you can receive updates from those objects in
    // any class in which you instantiate an OEEventsObserver and set its delegate to self.
    
    // This is an optional delegate method of OEEventsObserver which delivers the text of speech that Pocketsphinx heard and analyzed, along with its accuracy score and utterance ID.
    - (void) pocketsphinxDidReceiveHypothesis:(NSString *)hypothesis recognitionScore:(NSString *)recognitionScore utteranceID:(NSString *)utteranceID {
        
        NSLog(@"Local callback: The received hypothesis is %@ with a score of %@ and an ID of %@", hypothesis, recognitionScore, utteranceID); // Log it.
        if([hypothesis isEqualToString:@"CHANGE MODEL"]) { // If the user says "CHANGE MODEL", we will switch to the alternate model (which happens to be the dynamically generated model).
            
            // Here is an example of language model switching in OpenEars. Deciding on what logical basis to switch models is your responsibility.
            // For instance, when you call a customer service line and get a response tree that takes you through different options depending on what you say to it,
            // the models are being switched as you progress through it so that only relevant choices can be understood. The construction of that logical branching and 
            // how to react to it is your job; OpenEars just lets you send the signal to switch the language model when you've decided it's the right time to do so.
            
            if(self.usingStartingLanguageModel)
            { // If we're on the starting model, switch to the dynamically generated one.
            
    //?: CHANGED TO USE GRAMMAR
    //            [[OEPocketsphinxController sharedInstance] changeLanguageModelToFile:self.pathToSecondDynamicallyGeneratedLanguageModel withDictionary:self.pathToSecondDynamicallyGeneratedDictionary];
      
                [[OEPocketsphinxController sharedInstance] changeLanguageModelToFile:self.pathToSecondDynamicallyGeneratedGrammar
                                                                      withDictionary:self.pathToSecondDynamicallyGeneratedDictionary];
                
                self.usingStartingLanguageModel = FALSE;
                
            } else { // If we're on the dynamically generated model, switch to the start model (this is an example of a trigger and method for switching models).
                
    //?: CHANGED TO USE GRAMMAR
    //            [[OEPocketsphinxController sharedInstance] changeLanguageModelToFile:self.pathToFirstDynamicallyGeneratedLanguageModel withDictionary:self.pathToFirstDynamicallyGeneratedDictionary];
    
                [[OEPocketsphinxController sharedInstance] changeLanguageModelToFile:self.pathToFirstDynamicallyGeneratedGrammar withDictionary:self.pathToFirstDynamicallyGeneratedDictionary];
    
                self.usingStartingLanguageModel = TRUE;
            }
        }
        else if([hypothesis isEqualToString:@"STOP"])
        {
            [self stopButtonAction];
        }
        self.heardTextView.text = [NSString stringWithFormat:@"Heard: \"%@\"", hypothesis]; // Show it in the status box.
        
        // This is how to use an available instance of OEFliteController. We're going to repeat back the command that we heard with the voice we've chosen.
    //?    [self.fliteController say:[NSString stringWithFormat:@"You said %@",hypothesis] withVoice:self.slt];
    }
    
    #ifdef kGetNbest   
    - (void) pocketsphinxDidReceiveNBestHypothesisArray:(NSArray *)hypothesisArray { // Pocketsphinx has an n-best hypothesis dictionary.
        NSLog(@"Local callback:  hypothesisArray is %@",hypothesisArray);   
    }
    #endif
    // An optional delegate method of OEEventsObserver which informs that there was an interruption to the audio session (e.g. an incoming phone call).
    - (void) audioSessionInterruptionDidBegin {
        NSLog(@"Local callback:  AudioSession interruption began."); // Log it.
        self.statusTextView.text = @"Status: AudioSession interruption began."; // Show it in the status box.
        NSError *error = nil;
        if([OEPocketsphinxController sharedInstance].isListening) {
            error = [[OEPocketsphinxController sharedInstance] stopListening]; // React to it by telling Pocketsphinx to stop listening (if it is listening) since it will need to restart its loop after an interruption.
            if(error) NSLog(@"Error while stopping listening in audioSessionInterruptionDidBegin: %@", error);
        }
    }
    
    // An optional delegate method of OEEventsObserver which informs that the interruption to the audio session ended.
    - (void) audioSessionInterruptionDidEnd {
        NSLog(@"Local callback:  AudioSession interruption ended."); // Log it.
        self.statusTextView.text = @"Status: AudioSession interruption ended."; // Show it in the status box.
        // We're restarting the previously-stopped listening loop.
        if(![OEPocketsphinxController sharedInstance].isListening)
        {
    //?: CHANGED TO USE GRAMMAR
    //        [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:FALSE]; // Start speech recognition if we aren't currently listening.
            
            [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedGrammar dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:YES];
            
        }
    }
    
    // An optional delegate method of OEEventsObserver which informs that the audio input became unavailable.
    - (void) audioInputDidBecomeUnavailable {
        NSLog(@"Local callback:  The audio input has become unavailable"); // Log it.
        self.statusTextView.text = @"Status: The audio input has become unavailable"; // Show it in the status box.
        NSError *error = nil;
        if([OEPocketsphinxController sharedInstance].isListening){
            error = [[OEPocketsphinxController sharedInstance] stopListening]; // React to it by telling Pocketsphinx to stop listening since there is no available input (but only if we are listening).
            if(error) NSLog(@"Error while stopping listening in audioInputDidBecomeUnavailable: %@", error);
        }
    }
    
    // An optional delegate method of OEEventsObserver which informs that the unavailable audio input became available again.
    - (void) audioInputDidBecomeAvailable {
        NSLog(@"Local callback: The audio input is available"); // Log it.
        self.statusTextView.text = @"Status: The audio input is available"; // Show it in the status box.
        if(![OEPocketsphinxController sharedInstance].isListening)
        {
    //?: CHANGED TO USE GRAMMAR
    
    //        [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:FALSE]; // Start speech recognition, but only if we aren't already listening.
      
            [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedGrammar dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:TRUE];
    
        }
    }
    // An optional delegate method of OEEventsObserver which informs that there was a change to the audio route (e.g. headphones were plugged in or unplugged).
    - (void) audioRouteDidChangeToRoute:(NSString *)newRoute {
        NSLog(@"Local callback: Audio route change. The new audio route is %@", newRoute); // Log it.
        self.statusTextView.text = [NSString stringWithFormat:@"Status: Audio route change. The new audio route is %@",newRoute]; // Show it in the status box.
        
        NSError *error = [[OEPocketsphinxController sharedInstance] stopListening]; // React to it by telling the Pocketsphinx loop to shut down and then start listening again on the new route
        
        if(error)NSLog(@"Local callback: error while stopping listening in audioRouteDidChangeToRoute: %@",error);
        
        if(![OEPocketsphinxController sharedInstance].isListening)
        {
    //?: CHANGED TO USE GRAMMAR
    //        [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:FALSE]; // Start speech recognition if we aren't already listening.
            [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedGrammar dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:YES];
        }
    }
    
    // An optional delegate method of OEEventsObserver which informs that the Pocketsphinx recognition loop has entered its actual loop.
    // This might be useful in debugging a conflict between another sound class and Pocketsphinx.
    - (void) pocketsphinxRecognitionLoopDidStart {
        
        NSLog(@"Local callback: Pocketsphinx started."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx started."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is now listening for speech.
    - (void) pocketsphinxDidStartListening {
        
        NSLog(@"Local callback: Pocketsphinx is now listening."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx is now listening."; // Show it in the status box.
        
        self.startButton.hidden = TRUE; // React to it with some UI changes.
        self.stopButton.hidden = FALSE;
        self.suspendListeningButton.hidden = FALSE;
        self.resumeListeningButton.hidden = TRUE;
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected speech and is starting to process it.
    - (void) pocketsphinxDidDetectSpeech {
        NSLog(@"Local callback: Pocketsphinx has detected speech."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx has detected speech."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected a second of silence, indicating the end of an utterance. 
    // This was added because developers requested being able to time the recognition speed without the speech time. The processing time is the time between 
    // this method being called and the hypothesis being returned.
    - (void) pocketsphinxDidDetectFinishedSpeech {
        NSLog(@"Local callback: Pocketsphinx has detected a second of silence, concluding an utterance."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx has detected finished speech."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx has exited its recognition loop, most 
    // likely in response to the OEPocketsphinxController being told to stop listening via the stopListening method.
    - (void) pocketsphinxDidStopListening {
        NSLog(@"Local callback: Pocketsphinx has stopped listening."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx has stopped listening."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop but it is not
    // going to react to speech until listening is resumed.  This can happen as a result of Flite speech being
    // in progress on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
    // or as a result of the OEPocketsphinxController being told to suspend recognition via the suspendRecognition method.
    - (void) pocketsphinxDidSuspendRecognition {
        NSLog(@"Local callback: Pocketsphinx has suspended recognition."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx has suspended recognition."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop and after recognition
    // having been suspended it is now resuming.  This can happen as a result of Flite speech completing
    // on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
    // or as a result of the OEPocketsphinxController being told to resume recognition via the resumeRecognition method.
    - (void) pocketsphinxDidResumeRecognition {
        NSLog(@"Local callback: Pocketsphinx has resumed recognition."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx has resumed recognition."; // Show it in the status box.
    }
    
    // An optional delegate method which informs that Pocketsphinx switched over to a new language model at the given URL in the course of
    // recognition. This does not imply that it is a valid file or that recognition will be successful using the file.
    - (void) pocketsphinxDidChangeLanguageModelToFile:(NSString *)newLanguageModelPathAsString andDictionary:(NSString *)newDictionaryPathAsString {
        NSLog(@"Local callback: Pocketsphinx is now using the following language model: \n%@ and the following dictionary: %@",newLanguageModelPathAsString,newDictionaryPathAsString);
    }
    
    // An optional delegate method of OEEventsObserver which informs that Flite is speaking, most likely to be useful if debugging a
    // complex interaction between sound classes. You don't have to do anything yourself in order to prevent Pocketsphinx from listening to Flite talk and trying to recognize the speech.
    - (void) fliteDidStartSpeaking {
        NSLog(@"Local callback: Flite has started speaking"); // Log it.
        self.statusTextView.text = @"Status: Flite has started speaking."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Flite is finished speaking, most likely to be useful if debugging a
    // complex interaction between sound classes.
    - (void) fliteDidFinishSpeaking {
        NSLog(@"Local callback: Flite has finished speaking"); // Log it.
        self.statusTextView.text = @"Status: Flite has finished speaking."; // Show it in the status box.
    }
    
    - (void) pocketSphinxContinuousSetupDidFailWithReason:(NSString *)reasonForFailure { // This can let you know that something went wrong with the recognition loop startup. Turn on [OELogging startOpenEarsLogging] to learn why.
        NSLog(@"Local callback: Setting up the continuous recognition loop has failed for the reason %@, please turn on [OELogging startOpenEarsLogging] to learn more.", reasonForFailure); // Log it.
        self.statusTextView.text = @"Status: Not possible to start recognition loop."; // Show it in the status box.	
    }
    
    - (void) pocketSphinxContinuousTeardownDidFailWithReason:(NSString *)reasonForFailure { // This can let you know that something went wrong with the recognition loop teardown. Turn on [OELogging startOpenEarsLogging] to learn why.
        NSLog(@"Local callback: Tearing down the continuous recognition loop has failed for the reason %@, please turn on [OELogging startOpenEarsLogging] to learn more.", reasonForFailure); // Log it.
        self.statusTextView.text = @"Status: Not possible to cleanly end recognition loop."; // Show it in the status box.	
    }
    
    - (void) testRecognitionCompleted { // A test file which was submitted for direct recognition via the audio driver is done.
        NSLog(@"Local callback: A test file which was submitted for direct recognition via the audio driver is done."); // Log it.
        NSError *error = nil;
        if([OEPocketsphinxController sharedInstance].isListening) { // If we're listening, stop listening.
            error = [[OEPocketsphinxController sharedInstance] stopListening];
            if(error) NSLog(@"Error while stopping listening in testRecognitionCompleted: %@", error);
        }
        
    }
    /** Pocketsphinx couldn't start because it has no mic permissions (will only be returned on iOS7 or later).*/
    - (void) pocketsphinxFailedNoMicPermissions {
        NSLog(@"Local callback: The user has never set mic permissions or denied permission to this app's mic, so listening will not start.");
        self.startupFailedDueToLackOfPermissions = TRUE;
        if([OEPocketsphinxController sharedInstance].isListening){
            NSError *error = [[OEPocketsphinxController sharedInstance] stopListening]; // Stop listening if we are listening.
            if(error) NSLog(@"Error while stopping listening in pocketsphinxFailedNoMicPermissions: %@", error);
        }
    }
    
    /** The user prompt to get mic permissions, or a check of the mic permissions, has completed with a TRUE or a FALSE result  (will only be returned on iOS7 or later).*/
    - (void) micPermissionCheckCompleted:(BOOL)result {
        if(result) {
            self.restartAttemptsDueToPermissionRequests++;
            if(self.restartAttemptsDueToPermissionRequests == 1 && self.startupFailedDueToLackOfPermissions) { // If we get here because there was an attempt to start which failed due to lack of permissions, and now permissions have been requested and they returned true, we restart exactly once with the new permissions.
    
                if(![OEPocketsphinxController sharedInstance].isListening) { // If there was no error and we aren't listening, start listening.
    //?: CHANGED TO USE GRAMMAR
    //                [[OEPocketsphinxController sharedInstance]
    //                 startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel
    //                 dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary 
    //                 acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] 
    //                 languageModelIsJSGF:FALSE]; // Start speech recognition.
                    [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedGrammar dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:YES];
                    
                    self.startupFailedDueToLackOfPermissions = FALSE;
                }
            }
        }
    }
    
    #pragma mark -
    #pragma mark UI
    
    // This is not OpenEars-specific stuff, just some UI behavior
    
    - (IBAction) suspendListeningButtonAction { // This is the action for the button which suspends listening without ending the recognition loop
        [[OEPocketsphinxController sharedInstance] suspendRecognition];	
        
        self.startButton.hidden = TRUE;
        self.stopButton.hidden = FALSE;
        self.suspendListeningButton.hidden = TRUE;
        self.resumeListeningButton.hidden = FALSE;
    }
    
    - (IBAction) resumeListeningButtonAction { // This is the action for the button which resumes listening if it has been suspended
        [[OEPocketsphinxController sharedInstance] resumeRecognition];
        
        self.startButton.hidden = TRUE;
        self.stopButton.hidden = FALSE;
        self.suspendListeningButton.hidden = FALSE;
        self.resumeListeningButton.hidden = TRUE;	
    }
    
    - (IBAction) stopButtonAction { // This is the action for the button which shuts down the recognition loop.
        NSError *error = nil;
        if([OEPocketsphinxController sharedInstance].isListening) { // Stop if we are currently listening.
            NSLog(@"?????: CALLING: stopListening....");
            error = [[OEPocketsphinxController sharedInstance] stopListening];
            if(error)NSLog(@"Error stopping listening in stopButtonAction: %@", error);
            NSLog(@"?????: RETURNED FROM: stopListening....error = %@", error ? error.description : @"NONE");
        }
        self.startButton.hidden = FALSE;
        self.stopButton.hidden = TRUE;
        self.suspendListeningButton.hidden = TRUE;
        self.resumeListeningButton.hidden = TRUE;
    }
    
    - (IBAction) startButtonAction { // This is the action for the button which starts up the recognition loop again if it has been shut down.
        if(![OEPocketsphinxController sharedInstance].isListening) {
    //?: CHANGED TO USE GRAMMAR
            
    //        [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:FALSE]; // Start speech recognition if we aren't already listening.
            [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedGrammar dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:YES];
        }
        self.startButton.hidden = TRUE;
        self.stopButton.hidden = FALSE;
        self.suspendListeningButton.hidden = FALSE;
        self.resumeListeningButton.hidden = TRUE;
    }
    
    #pragma mark -
    #pragma mark Example for reading out Pocketsphinx and Flite audio levels without locking the UI by using an NSTimer
    
    // What follows are not OpenEars methods, just an approach for level reading
    // that I've included with this sample app. My example implementation does make use of two OpenEars
    // methods:	the pocketsphinxInputLevel method of OEPocketsphinxController and the fliteOutputLevel
    // method of OEFliteController. 
    //
    // The example is meant to show one way that you can read those levels continuously without locking the UI, 
    // by using an NSTimer, but the OpenEars level-reading methods 
    // themselves do not include multithreading code since I believe that you will want to design your own 
    // code approaches for level display that are tightly-integrated with your interaction design and the  
    // graphics API you choose. 
    // 
    // Please note that if you use my sample approach, you should pay attention to the way that the timer is always stopped in
    // dealloc. This should prevent you from having any difficulties with deallocating a class due to a running NSTimer process.
    
    - (void) startDisplayingLevels { // Start displaying the levels using a timer
        [self stopDisplayingLevels]; // We never want more than one timer valid so we'll stop any running timers first.
        self.uiUpdateTimer = [NSTimer scheduledTimerWithTimeInterval:1.0/kLevelUpdatesPerSecond target:self selector:@selector(updateLevelsUI) userInfo:nil repeats:YES];
    }
    
    - (void) stopDisplayingLevels { // Stop displaying the levels by stopping the timer if it's running.
        if(self.uiUpdateTimer && [self.uiUpdateTimer isValid]) { // If there is a running timer, we'll stop it here.
            [self.uiUpdateTimer invalidate];
            self.uiUpdateTimer = nil;
        }
    }
    
    - (void) updateLevelsUI { // And here is how we obtain the levels.  This method includes the actual OpenEars methods and uses their results to update the UI of this view controller.
        
        self.pocketsphinxDbLabel.text = [NSString stringWithFormat:@"Pocketsphinx Input level:%f",[[OEPocketsphinxController sharedInstance] pocketsphinxInputLevel]];  //pocketsphinxInputLevel is an OpenEars method of the class OEPocketsphinxController.
        
    //?    if(self.fliteController.speechInProgress) {
    //?        self.fliteDbLabel.text = [NSString stringWithFormat:@"Flite Output level: %f",[self.fliteController fliteOutputLevel]]; // fliteOutputLevel is an OpenEars method of the class OEFliteController.
    //?    }
    }
    
    @end
    

    [/spoiler]

    #1030263
    Halle Winkler
    Politepix

    Welcome,

    Thanks for letting me know you are also experiencing this – it is a high priority to fix, but I haven’t had the same success reproducing it. I am currently trying to reproduce it from a submitted recording (I haven’t been able to reliably reproduce it with a local recording, which is a prerequisite for adding a test to prevent this generally). Is there any chance you could take a look at this post about replication cases with audio recordings, and consider making me a recording to go with your sample app that reliably causes the issue when run with pathToTestFile?

    https://www.politepix.com/forums/topic/how-to-create-a-minimal-case-for-replication/

    Note that this requires a) installing the SaveThatWave demo, b) getting a complete audio dump of the session in which this happens, c) adding the audio file to your sample app code above by using pathToTestFile, and d) making sure that the issue replicates when you run the app, so we’ll know it will replicate for me as well. If possible, this would be extremely helpful towards fixing this issue faster.
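For step (c), a minimal sketch of wiring a recording in via pathToTestFile, based on the commented-out line already in the sample app above (the filename "StopListeningBug" is illustrative — use whatever you named your recorded WAV):

```objc
// Call setActive: once first, as the sample app does, then point the
// recognizer at the test WAV (mono/16-bit/16k) added to the app bundle,
// before calling startListeningWithLanguageModelAtPath:...
[[OEPocketsphinxController sharedInstance] setActive:TRUE error:nil];
[OEPocketsphinxController sharedInstance].pathToTestFile = [[NSBundle mainBundle] pathForResource:@"StopListeningBug" ofType:@"wav"];
```

With that set, the app runs recognition against the file instead of live audio, so the hang should replicate deterministically on any machine.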

    #1030264
    roche314
    Participant

    Thanks for the quick response.
    I will attempt to make you an audio recording.

    #1030266
    roche314
    Participant

    Halle,

    I was able to reproduce this with a wav file.
    You can download it at: StopListeningBug.wav
    [spoiler]
    ViewController.m

    #if 1
    
    //  ViewController.m
    //  OpenEarsSampleApp
    //
    //  ViewController.m demonstrates the use of the OpenEars framework.
    //
    //  Copyright Politepix UG (haftungsbeschränkt) 2014. All rights reserved.
    //  https://www.politepix.com
    //  Contact at https://www.politepix.com/contact
    //
    //  This file is licensed under the Politepix Shared Source license found in the root of the source distribution.
    
    // **************************************************************************************************************************************************************
    // **************************************************************************************************************************************************************
    // **************************************************************************************************************************************************************
    // IMPORTANT NOTE: Audio driver and hardware behavior is completely different between the Simulator and a real device. It is not informative to test OpenEars' accuracy on the Simulator, and please do not report Simulator-only bugs since I only actively support
    // the device driver. Please only do testing/bug reporting based on results on a real device such as an iPhone or iPod Touch. Thanks!
    // **************************************************************************************************************************************************************
    // **************************************************************************************************************************************************************
    // **************************************************************************************************************************************************************
    
    // Some defines so I can switch between recording wav, and testing with wav
    //#define USING_SAVE_THAT_WAVE
    #define USE_DEBUG_WAV_FILE
    
    #import "ViewController.h"
    #import <OpenEars/OEPocketsphinxController.h>
    #import <OpenEars/OEFliteController.h>
    #import <OpenEars/OELanguageModelGenerator.h>
    #import <OpenEars/OELogging.h>
    #import <OpenEars/OEAcousticModel.h>
    #import <Slt/Slt.h>
    
    #ifdef USING_SAVE_THAT_WAVE
    #import <SaveThatWaveDemo/OEEventsObserver+SaveThatWave.h>
    #import <SaveThatWaveDemo/SaveThatWaveController.h>
    #endif
    
    @interface ViewController()
    
    // UI actions, not specifically related to OpenEars other than the fact that they invoke OpenEars methods.
    - (IBAction) stopButtonAction;
    - (IBAction) startButtonAction;
    - (IBAction) suspendListeningButtonAction;
    - (IBAction) resumeListeningButtonAction;
    
    // Example for reading out the input audio levels without locking the UI using an NSTimer
    
    - (void) startDisplayingLevels;
    - (void) stopDisplayingLevels;
    
    // These three are the important OpenEars objects that this class demonstrates the use of.
    //? @property (nonatomic, strong) Slt *slt;
    
    @property (nonatomic, strong) OEEventsObserver *openEarsEventsObserver;
    @property (nonatomic, strong) OEPocketsphinxController *pocketsphinxController;
    //? @property (nonatomic, strong) OEFliteController *fliteController;
    
    // Some UI, not specifically related to OpenEars.
    @property (nonatomic, strong) IBOutlet UIButton *stopButton;
    @property (nonatomic, strong) IBOutlet UIButton *startButton;
    @property (nonatomic, strong) IBOutlet UIButton *suspendListeningButton;
    @property (nonatomic, strong) IBOutlet UIButton *resumeListeningButton;
    @property (nonatomic, strong) IBOutlet UITextView *statusTextView;
    @property (nonatomic, strong) IBOutlet UITextView *heardTextView;
    @property (nonatomic, strong) IBOutlet UILabel *pocketsphinxDbLabel;
    @property (nonatomic, strong) IBOutlet UILabel *fliteDbLabel;
    @property (nonatomic, assign) BOOL usingStartingLanguageModel;
    @property (nonatomic, assign) int restartAttemptsDueToPermissionRequests;
    @property (nonatomic, assign) BOOL startupFailedDueToLackOfPermissions;
    
    // Things which help us show off the dynamic language features.
    // @property (nonatomic, copy) NSString *pathToFirstDynamicallyGeneratedLanguageModel;
    @property (nonatomic, copy) NSString *pathToFirstDynamicallyGeneratedGrammar;
    @property (nonatomic, copy) NSString *pathToFirstDynamicallyGeneratedDictionary;
    //@property (nonatomic, copy) NSString *pathToSecondDynamicallyGeneratedLanguageModel;
    @property (nonatomic, copy) NSString *pathToSecondDynamicallyGeneratedGrammar;
    @property (nonatomic, copy) NSString *pathToSecondDynamicallyGeneratedDictionary;
    
    // Our NSTimer that will help us read and display the input and output levels without locking the UI
    @property (nonatomic, strong) NSTimer *uiUpdateTimer;
    
    #ifdef USING_SAVE_THAT_WAVE
    @property (strong, nonatomic) SaveThatWaveController *saveThatWaveController;
    #endif
    
    @end
    
    @implementation ViewController
    
    #define kLevelUpdatesPerSecond 18 // We'll have the UI update 18 times a second to show some fluidity without hitting the CPU too hard.
    
    //#define kGetNbest // Uncomment this if you want to try out nbest
    #pragma mark -
    #pragma mark Memory Management
    
    - (void)dealloc {
        [self stopDisplayingLevels];
    }
    
    #pragma mark -
    #pragma mark View Lifecycle
    
    - (void)viewDidLoad {
        [super viewDidLoad];
    
    #ifdef USING_SAVE_THAT_WAVE
        self.saveThatWaveController = [[SaveThatWaveController alloc] init];
    #endif
        
        
        //?    self.fliteController = [[OEFliteController alloc] init];
        self.openEarsEventsObserver = [[OEEventsObserver alloc] init];
        self.openEarsEventsObserver.delegate = self;
        //?    self.slt = [[Slt alloc] init];
        
        
        self.restartAttemptsDueToPermissionRequests = 0;
        self.startupFailedDueToLackOfPermissions = FALSE;
        
        [OELogging startOpenEarsLogging]; // Uncomment me for OELogging, which is verbose logging about internal OpenEars operations such as audio settings. If you have issues, show this logging in the forums.
        [OEPocketsphinxController sharedInstance].verbosePocketSphinx = TRUE; // Uncomment this for much more verbose speech recognition engine output. If you have issues, show this logging in the forums.
        
        [self.openEarsEventsObserver setDelegate:self]; // Make this class the delegate of OpenEarsObserver so we can get all of the messages about what OpenEars is doing.
        
        [[OEPocketsphinxController sharedInstance] setActive:TRUE error:nil]; // Call this before setting any OEPocketsphinxController characteristics
        
        // This is the language model we're going to start up with. The only reason I'm making it a class property is that I reuse it a bunch of times in this example,
        // but you can pass the string contents directly to OEPocketsphinxController:startListeningWithLanguageModelAtPath:dictionaryAtPath:languageModelIsJSGF:
        
        
        //?: CHANGED TO USE GRAMMAR
        //    NSArray *firstLanguageArray = @[@"BACKWARD",
        //                                    @"CHANGE",
        //                                    @"FORWARD",
        //                                    @"GO",
        //                                    @"LEFT",
        //                                    @"MODEL",
        //                                    @"RIGHT",
        //                                    @"TURN"];
        NSDictionary *firstGrammar = @{OneOfTheseWillBeSaidOnce:@[
                                               @"MY LIBRARY",
                                               @"SETTINGS",
                                               @"HELP",
                                               @"LANGUAGE",
                                               @"SAVE",
                                               @"NEW DOCUMENT",
                                               @"LIST COMMANDS",
                                               @"STOP",
                                               @"CHANGE MODEL"]};
        
        OELanguageModelGenerator *languageModelGenerator = [[OELanguageModelGenerator alloc] init];
        
        // languageModelGenerator.verboseLanguageModelGenerator = TRUE; // Uncomment me for verbose language model generator debug output.
        
        //?: CHANGED TO USE GRAMMAR
        //    NSError *error = [languageModelGenerator generateLanguageModelFromArray:firstLanguageArray withFilesNamed:@"FirstOpenEarsDynamicLanguageModel" forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]]; // Change "AcousticModelEnglish" to "AcousticModelSpanish" in order to create a language model for Spanish recognition instead of English.
        
        NSError *error = [languageModelGenerator generateGrammarFromDictionary:firstGrammar
                                                                withFilesNamed:@"FirstOpenEarsDynamicLanguageModel"
                                                        forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];
        
        if(error) {
            NSLog(@"Dynamic language generator reported error %@", [error description]);
        } else {
            //?: CHANGED TO USE GRAMMAR
            //        self.pathToFirstDynamicallyGeneratedLanguageModel = [languageModelGenerator pathToSuccessfullyGeneratedLanguageModelWithRequestedName:@"FirstOpenEarsDynamicLanguageModel"];
            self.pathToFirstDynamicallyGeneratedGrammar = [languageModelGenerator pathToSuccessfullyGeneratedGrammarWithRequestedName:@"FirstOpenEarsDynamicLanguageModel"];
            
            self.pathToFirstDynamicallyGeneratedDictionary = [languageModelGenerator pathToSuccessfullyGeneratedDictionaryWithRequestedName:@"FirstOpenEarsDynamicLanguageModel"];
        }
        
        self.usingStartingLanguageModel = TRUE; // This is not an OpenEars thing, this is just so I can switch back and forth between the two models in this sample app.
        
        // Here is an example of dynamically creating an in-app grammar.
        
        // We want it to be able to response to the speech "CHANGE MODEL" and a few other things.  Items we want to have recognized as a whole phrase (like "CHANGE MODEL")
        // we put into the array as one string (e.g. "CHANGE MODEL" instead of "CHANGE" and "MODEL"). This increases the probability that they will be recognized as a phrase. This works even better starting with version 1.0 of OpenEars.
        
        //?: CHANGED TO USE GRAMMAR
        //    NSArray *secondLanguageArray = @[@"SUNDAY",
        //                                     @"MONDAY",
        //                                     @"TUESDAY",
        //                                     @"WEDNESDAY",
        //                                     @"THURSDAY",
        //                                     @"FRIDAY",
        //                                     @"SATURDAY",
        //                                     @"QUIDNUNC",
        //                                     @"CHANGE MODEL"];
        NSDictionary *secondGrammar = @{OneOfTheseWillBeSaidOnce:@[
                                                @"CAMERA ROLL",
                                                @"TAKE PICTURE",
                                                @"TAH BEELD",
                                                @"FLASH ON",
                                                @"FLASH AUTO",
                                                @"FLASH OFF",
                                                @"MULTI PAGE",
                                                @"SINGLE PAGE",
                                                @"READ",
                                                @"LIST COMMANDS",
                                                @"STOP",
                                                @"CHANGE MODEL"]};
        
        
        // The last entry, quidnunc, is an example of a word which will not be found in the lookup dictionary and will be passed to the fallback method. The fallback method is slower,
        // so, for instance, creating a new language model from dictionary words will be pretty fast, but a model that has a lot of unusual names in it or invented/rare/recent-slang
        // words will be slower to generate. You can use this information to give your users good UI feedback about what the expectations for wait times should be.
        
        // I don't think it's beneficial to lazily instantiate OELanguageModelGenerator because you only need to give it a single message and then release it.
        // If you need to create a very large model or any size of model that has many unusual words that have to make use of the fallback generation method,
        // you will want to run this on a background thread so you can give the user some UI feedback that the task is in progress.
        
        // generateLanguageModelFromArray:withFilesNamed returns an NSError which will either have a value of noErr if everything went fine or a specific error if it didn't.
        
        //?: CHANGED TO USE GRAMMAR
        //    error = [languageModelGenerator generateLanguageModelFromArray:secondLanguageArray withFilesNamed:@"SecondOpenEarsDynamicLanguageModel" forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]]; // Change "AcousticModelEnglish" to "AcousticModelSpanish" in order to create a language model for Spanish recognition instead of English.
        
        error = [languageModelGenerator generateGrammarFromDictionary:secondGrammar withFilesNamed:@"SecondOpenEarsDynamicLanguageModel"
                                               forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];
        
        
        //    NSError *error = [languageModelGenerator generateLanguageModelFromTextFile:[NSString stringWithFormat:@"%@/%@",[[NSBundle mainBundle] resourcePath], @"OpenEarsCorpus.txt"] withFilesNamed:@"SecondOpenEarsDynamicLanguageModel" forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]]; // Try this out to see how generating a language model from a corpus works.
        
        
        if(error) {
            NSLog(@"Dynamic language generator reported error %@", [error description]);
    } else {
            
            //        self.pathToSecondDynamicallyGeneratedLanguageModel = [languageModelGenerator pathToSuccessfullyGeneratedLanguageModelWithRequestedName:@"SecondOpenEarsDynamicLanguageModel"]; // We'll set our new .languagemodel file to be the one to get switched to when the words "CHANGE MODEL" are recognized.
            
            self.pathToSecondDynamicallyGeneratedGrammar = [languageModelGenerator pathToSuccessfullyGeneratedGrammarWithRequestedName:@"SecondOpenEarsDynamicLanguageModel"];
            
        self.pathToSecondDynamicallyGeneratedDictionary = [languageModelGenerator pathToSuccessfullyGeneratedDictionaryWithRequestedName:@"SecondOpenEarsDynamicLanguageModel"]; // We'll set our new dictionary to be the one to get switched to when the words "CHANGE MODEL" are recognized.
            
            // Next, an informative message.
            
            NSLog(@"\n\nWelcome to the OpenEars sample project. This project understands the words:\nBACKWARD,\nCHANGE,\nFORWARD,\nGO,\nLEFT,\nMODEL,\nRIGHT,\nTURN,\nand if you say \"CHANGE MODEL\" it will switch to its dynamically-generated model which understands the words:\nCHANGE,\nMODEL,\nMONDAY,\nTUESDAY,\nWEDNESDAY,\nTHURSDAY,\nFRIDAY,\nSATURDAY,\nSUNDAY,\nQUIDNUNC");
            
            // This is how to start the continuous listening loop of an available instance of OEPocketsphinxController. We won't do this if the language generation failed since it will be listening for a command to change over to the generated language.
            
            [[OEPocketsphinxController sharedInstance] setActive:TRUE error:nil]; // Call this once before setting properties of the OEPocketsphinxController instance.
            
    #ifdef USE_DEBUG_WAV_FILE
            [OEPocketsphinxController sharedInstance].pathToTestFile = [[NSBundle mainBundle] pathForResource:@"StopListeningBug" ofType:@"wav"];
            // This is how you could use a test WAV (mono/16-bit/16k) rather than live recognition. Don't forget to add your WAV to your app bundle.
    #endif
            if(![OEPocketsphinxController sharedInstance].isListening)
            {
                //?: CHANGED TO USE GRAMMAR
                //            [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:FALSE]; // Start speech recognition if we aren't already listening.
                
                [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedGrammar
                                                                                dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:TRUE];
                
    #ifdef USING_SAVE_THAT_WAVE
                [self.saveThatWaveController startSessionDebugRecord];
    #endif
            }
            
            
            
            // [self startDisplayingLevels] is not an OpenEars method, just a very simple approach for level reading
            // that I've included with this sample app. My example implementation does make use of two OpenEars
            // methods:	the pocketsphinxInputLevel method of OEPocketsphinxController and the fliteOutputLevel
            // method of fliteController.
            //
            // The example is meant to show one way that you can read those levels continuously without locking the UI,
            // by using an NSTimer, but the OpenEars level-reading methods
            // themselves do not include multithreading code since I believe that you will want to design your own
            // code approaches for level display that are tightly-integrated with your interaction design and the
            // graphics API you choose.
            
            [self startDisplayingLevels];
            
            // Here is some UI stuff that has nothing specifically to do with OpenEars implementation
            self.startButton.hidden = TRUE;
            self.stopButton.hidden = TRUE;
            self.suspendListeningButton.hidden = TRUE;
            self.resumeListeningButton.hidden = TRUE;
        }
    }
    
    #pragma mark -
    #pragma mark OEEventsObserver delegate methods
    
    // What follows are all of the delegate methods you can optionally use once you've instantiated an OEEventsObserver and set its delegate to self.
    // I've provided some pretty granular information about the exact phase of the Pocketsphinx listening loop, the Audio Session, and Flite, but I'd expect
    // that the ones that will really be needed by most projects are the following:
    //
    //- (void) pocketsphinxDidReceiveHypothesis:(NSString *)hypothesis recognitionScore:(NSString *)recognitionScore utteranceID:(NSString *)utteranceID;
    //- (void) audioSessionInterruptionDidBegin;
    //- (void) audioSessionInterruptionDidEnd;
    //- (void) audioRouteDidChangeToRoute:(NSString *)newRoute;
    //- (void) pocketsphinxDidStartListening;
    //- (void) pocketsphinxDidStopListening;
    //
    // It isn't necessary to have a OEPocketsphinxController or a OEFliteController instantiated in order to use these methods.  If there isn't anything instantiated that will
    // send messages to an OEEventsObserver, all that will happen is that these methods will never fire.  You also do not have to create a OEEventsObserver in
    // the same class or view controller in which you are doing things with a OEPocketsphinxController or OEFliteController; you can receive updates from those objects in
    // any class in which you instantiate an OEEventsObserver and set its delegate to self.
    
    // This is an optional delegate method of OEEventsObserver which delivers the text of speech that Pocketsphinx heard and analyzed, along with its accuracy score and utterance ID.
    - (void) pocketsphinxDidReceiveHypothesis:(NSString *)hypothesis recognitionScore:(NSString *)recognitionScore utteranceID:(NSString *)utteranceID {
        
        NSLog(@"Local callback: The received hypothesis is %@ with a score of %@ and an ID of %@", hypothesis, recognitionScore, utteranceID); // Log it.
        if([hypothesis isEqualToString:@"CHANGE MODEL"]) { // If the user says "CHANGE MODEL", we will switch to the alternate model (which happens to be the dynamically generated model).
            
            // Here is an example of language model switching in OpenEars. Deciding on what logical basis to switch models is your responsibility.
            // For instance, when you call a customer service line and get a response tree that takes you through different options depending on what you say to it,
            // the models are being switched as you progress through it so that only relevant choices can be understood. The construction of that logical branching and
            // how to react to it is your job; OpenEars just lets you send the signal to switch the language model when you've decided it's the right time to do so.
            
            if(self.usingStartingLanguageModel)
            { // If we're on the starting model, switch to the dynamically generated one.
                
                //?: CHANGED TO USE GRAMMAR
                //            [[OEPocketsphinxController sharedInstance] changeLanguageModelToFile:self.pathToSecondDynamicallyGeneratedLanguageModel withDictionary:self.pathToSecondDynamicallyGeneratedDictionary];
                
            [[OEPocketsphinxController sharedInstance] changeLanguageModelToFile:self.pathToSecondDynamicallyGeneratedGrammar
                                                                  withDictionary:self.pathToSecondDynamicallyGeneratedDictionary];
                
                self.usingStartingLanguageModel = FALSE;
                
            } else { // If we're on the dynamically generated model, switch to the start model (this is an example of a trigger and method for switching models).
                
                //?: CHANGED TO USE GRAMMAR
                //            [[OEPocketsphinxController sharedInstance] changeLanguageModelToFile:self.pathToFirstDynamicallyGeneratedLanguageModel withDictionary:self.pathToFirstDynamicallyGeneratedDictionary];
                
            [[OEPocketsphinxController sharedInstance] changeLanguageModelToFile:self.pathToFirstDynamicallyGeneratedGrammar withDictionary:self.pathToFirstDynamicallyGeneratedDictionary];
                
                self.usingStartingLanguageModel = TRUE;
            }
        }
        else if([hypothesis isEqualToString:@"STOP"])
        {
            [self stopButtonAction];
        }
        self.heardTextView.text = [NSString stringWithFormat:@"Heard: \"%@\"", hypothesis]; // Show it in the status box.
        
        // This is how to use an available instance of OEFliteController. We're going to repeat back the command that we heard with the voice we've chosen.
        //?    [self.fliteController say:[NSString stringWithFormat:@"You said %@",hypothesis] withVoice:self.slt];
    }
    
    #ifdef kGetNbest
    - (void) pocketsphinxDidReceiveNBestHypothesisArray:(NSArray *)hypothesisArray { // Pocketsphinx has an n-best hypothesis dictionary.
        NSLog(@"Local callback:  hypothesisArray is %@",hypothesisArray);
    }
    #endif
    // An optional delegate method of OEEventsObserver which informs that there was an interruption to the audio session (e.g. an incoming phone call).
    - (void) audioSessionInterruptionDidBegin {
        NSLog(@"Local callback:  AudioSession interruption began."); // Log it.
        self.statusTextView.text = @"Status: AudioSession interruption began."; // Show it in the status box.
        NSError *error = nil;
        if([OEPocketsphinxController sharedInstance].isListening) {
            error = [[OEPocketsphinxController sharedInstance] stopListening]; // React to it by telling Pocketsphinx to stop listening (if it is listening) since it will need to restart its loop after an interruption.
            if(error) NSLog(@"Error while stopping listening in audioSessionInterruptionDidBegin: %@", error);
        }
    }
    
    // An optional delegate method of OEEventsObserver which informs that the interruption to the audio session ended.
    - (void) audioSessionInterruptionDidEnd {
        NSLog(@"Local callback:  AudioSession interruption ended."); // Log it.
        self.statusTextView.text = @"Status: AudioSession interruption ended."; // Show it in the status box.
        // We're restarting the previously-stopped listening loop.
        if(![OEPocketsphinxController sharedInstance].isListening)
        {
            //?: CHANGED TO USE GRAMMAR
            //        [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:FALSE]; // Start speech recognition if we aren't currently listening.
            
        [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedGrammar dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:TRUE];
            
        }
    }
    
    // An optional delegate method of OEEventsObserver which informs that the audio input became unavailable.
    - (void) audioInputDidBecomeUnavailable {
        NSLog(@"Local callback:  The audio input has become unavailable"); // Log it.
        self.statusTextView.text = @"Status: The audio input has become unavailable"; // Show it in the status box.
        NSError *error = nil;
        if([OEPocketsphinxController sharedInstance].isListening){
            error = [[OEPocketsphinxController sharedInstance] stopListening]; // React to it by telling Pocketsphinx to stop listening since there is no available input (but only if we are listening).
            if(error) NSLog(@"Error while stopping listening in audioInputDidBecomeUnavailable: %@", error);
        }
    }
    
    // An optional delegate method of OEEventsObserver which informs that the unavailable audio input became available again.
    - (void) audioInputDidBecomeAvailable {
        NSLog(@"Local callback: The audio input is available"); // Log it.
        self.statusTextView.text = @"Status: The audio input is available"; // Show it in the status box.
        if(![OEPocketsphinxController sharedInstance].isListening)
        {
            //?: CHANGED TO USE GRAMMAR
            
            //        [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:FALSE]; // Start speech recognition, but only if we aren't already listening.
            
            [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedGrammar dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:TRUE];
            
        }
    }
    // An optional delegate method of OEEventsObserver which informs that there was a change to the audio route (e.g. headphones were plugged in or unplugged).
    - (void) audioRouteDidChangeToRoute:(NSString *)newRoute {
        NSLog(@"Local callback: Audio route change. The new audio route is %@", newRoute); // Log it.
        self.statusTextView.text = [NSString stringWithFormat:@"Status: Audio route change. The new audio route is %@",newRoute]; // Show it in the status box.
        
        NSError *error = [[OEPocketsphinxController sharedInstance] stopListening]; // React to it by telling the Pocketsphinx loop to shut down and then start listening again on the new route
        
        if(error)NSLog(@"Local callback: error while stopping listening in audioRouteDidChangeToRoute: %@",error);
        
        if(![OEPocketsphinxController sharedInstance].isListening)
        {
            //?: CHANGED TO USE GRAMMAR
            //        [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:FALSE]; // Start speech recognition if we aren't already listening.
        [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedGrammar dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:TRUE];
        }
    }
    
    // An optional delegate method of OEEventsObserver which informs that the Pocketsphinx recognition loop has entered its actual loop.
    // This might be useful in debugging a conflict between another sound class and Pocketsphinx.
    - (void) pocketsphinxRecognitionLoopDidStart {
        
        NSLog(@"Local callback: Pocketsphinx started."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx started."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is now listening for speech.
    - (void) pocketsphinxDidStartListening {
        
        NSLog(@"Local callback: Pocketsphinx is now listening."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx is now listening."; // Show it in the status box.
        
        self.startButton.hidden = TRUE; // React to it with some UI changes.
        self.stopButton.hidden = FALSE;
        self.suspendListeningButton.hidden = FALSE;
        self.resumeListeningButton.hidden = TRUE;
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected speech and is starting to process it.
    - (void) pocketsphinxDidDetectSpeech {
        NSLog(@"Local callback: Pocketsphinx has detected speech."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx has detected speech."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected a second of silence, indicating the end of an utterance.
    // This was added because developers requested being able to time the recognition speed without the speech time. The processing time is the time between
    // this method being called and the hypothesis being returned.
    - (void) pocketsphinxDidDetectFinishedSpeech {
        NSLog(@"Local callback: Pocketsphinx has detected a second of silence, concluding an utterance."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx has detected finished speech."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx has exited its recognition loop, most
    // likely in response to the OEPocketsphinxController being told to stop listening via the stopListening method.
    - (void) pocketsphinxDidStopListening {
        NSLog(@"Local callback: Pocketsphinx has stopped listening."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx has stopped listening."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop but it is not
    // going to react to speech until listening is resumed. This can happen as a result of Flite speech being
    // in progress on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
    // or as a result of the OEPocketsphinxController being told to suspend recognition via the suspendRecognition method.
    - (void) pocketsphinxDidSuspendRecognition {
        NSLog(@"Local callback: Pocketsphinx has suspended recognition."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx has suspended recognition."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop and,
    // after having been suspended, is now resuming recognition. This can happen as a result of Flite speech completing
    // on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
    // or as a result of the OEPocketsphinxController being told to resume recognition via the resumeRecognition method.
    - (void) pocketsphinxDidResumeRecognition {
        NSLog(@"Local callback: Pocketsphinx has resumed recognition."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx has resumed recognition."; // Show it in the status box.
    }
    
    // An optional delegate method which informs that Pocketsphinx switched over to a new language model at the given URL in the course of
    // recognition. This does not imply that it is a valid file or that recognition will be successful using the file.
    - (void) pocketsphinxDidChangeLanguageModelToFile:(NSString *)newLanguageModelPathAsString andDictionary:(NSString *)newDictionaryPathAsString {
        NSLog(@"Local callback: Pocketsphinx is now using the following language model: \n%@ and the following dictionary: %@",newLanguageModelPathAsString,newDictionaryPathAsString);
    }
    
    // An optional delegate method of OEEventsObserver which informs that Flite is speaking, most likely to be useful if debugging a
    // complex interaction between sound classes. You don't have to do anything yourself in order to prevent Pocketsphinx from listening to Flite talk and trying to recognize the speech.
    - (void) fliteDidStartSpeaking {
        NSLog(@"Local callback: Flite has started speaking"); // Log it.
        self.statusTextView.text = @"Status: Flite has started speaking."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Flite is finished speaking, most likely to be useful if debugging a
    // complex interaction between sound classes.
    - (void) fliteDidFinishSpeaking {
        NSLog(@"Local callback: Flite has finished speaking"); // Log it.
        self.statusTextView.text = @"Status: Flite has finished speaking."; // Show it in the status box.
    }
    
    - (void) pocketSphinxContinuousSetupDidFailWithReason:(NSString *)reasonForFailure { // This can let you know that something went wrong with the recognition loop startup. Turn on [OELogging startOpenEarsLogging] to learn why.
        NSLog(@"Local callback: Setting up the continuous recognition loop has failed for the reason %@, please turn on [OELogging startOpenEarsLogging] to learn more.", reasonForFailure); // Log it.
        self.statusTextView.text = @"Status: Not possible to start recognition loop."; // Show it in the status box.
    }
    
    - (void) pocketSphinxContinuousTeardownDidFailWithReason:(NSString *)reasonForFailure { // This can let you know that something went wrong with the recognition loop teardown. Turn on [OELogging startOpenEarsLogging] to learn why.
        NSLog(@"Local callback: Tearing down the continuous recognition loop has failed for the reason %@, please turn on [OELogging startOpenEarsLogging] to learn more.", reasonForFailure); // Log it.
        self.statusTextView.text = @"Status: Not possible to cleanly end recognition loop."; // Show it in the status box.
    }
    
    - (void) testRecognitionCompleted { // A test file which was submitted for direct recognition via the audio driver is done.
        NSLog(@"Local callback: A test file which was submitted for direct recognition via the audio driver is done."); // Log it.
        NSError *error = nil;
        if([OEPocketsphinxController sharedInstance].isListening) { // If we're listening, stop listening.
            error = [[OEPocketsphinxController sharedInstance] stopListening];
            if(error) NSLog(@"Error while stopping listening in testRecognitionCompleted: %@", error);
        }
        
    }
    /** Pocketsphinx couldn't start because it has no mic permissions (will only be returned on iOS7 or later).*/
    - (void) pocketsphinxFailedNoMicPermissions {
        NSLog(@"Local callback: The user has never set mic permissions or denied permission to this app's mic, so listening will not start.");
        self.startupFailedDueToLackOfPermissions = TRUE;
        if([OEPocketsphinxController sharedInstance].isListening){
            NSError *error = [[OEPocketsphinxController sharedInstance] stopListening]; // Stop listening if we are listening.
            if(error) NSLog(@"Error while stopping listening in pocketsphinxFailedNoMicPermissions: %@", error);
        }
    }
    
    /** The user prompt to get mic permissions, or a check of the mic permissions, has completed with a TRUE or a FALSE result  (will only be returned on iOS7 or later).*/
    - (void) micPermissionCheckCompleted:(BOOL)result {
        if(result) {
            self.restartAttemptsDueToPermissionRequests++;
            if(self.restartAttemptsDueToPermissionRequests == 1 && self.startupFailedDueToLackOfPermissions) { // If we get here because there was an attempt to start which failed due to lack of permissions, and now permissions have been requested and they returned true, we restart exactly once with the new permissions.
                
                if(![OEPocketsphinxController sharedInstance].isListening) { // If there was no error and we aren't listening, start listening.
                    // NOTE: CHANGED TO USE GRAMMAR
                    //                [[OEPocketsphinxController sharedInstance]
                    //                 startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel
                    //                 dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary
                    //                 acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]
                    //                 languageModelIsJSGF:FALSE]; // Start speech recognition.
                    [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedGrammar dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:YES];
                    
                    self.startupFailedDueToLackOfPermissions = FALSE;
                }
            }
        }
    }
    
    #pragma mark -
    #pragma mark UI
    
    // This is not OpenEars-specific stuff, just some UI behavior
    
    - (IBAction) suspendListeningButtonAction { // This is the action for the button which suspends listening without ending the recognition loop
        [[OEPocketsphinxController sharedInstance] suspendRecognition];
        
        self.startButton.hidden = TRUE;
        self.stopButton.hidden = FALSE;
        self.suspendListeningButton.hidden = TRUE;
        self.resumeListeningButton.hidden = FALSE;
    }
    
    - (IBAction) resumeListeningButtonAction { // This is the action for the button which resumes listening if it has been suspended
        [[OEPocketsphinxController sharedInstance] resumeRecognition];
        
        self.startButton.hidden = TRUE;
        self.stopButton.hidden = FALSE;
        self.suspendListeningButton.hidden = FALSE;
        self.resumeListeningButton.hidden = TRUE;
    }
    
    - (IBAction) stopButtonAction { // This is the action for the button which shuts down the recognition loop.
        NSError *error = nil;
        if([OEPocketsphinxController sharedInstance].isListening) { // Stop if we are currently listening.
            NSLog(@"?????: CALLING: stopListening....");
            error = [[OEPocketsphinxController sharedInstance] stopListening];
            if(error) NSLog(@"Error stopping listening in stopButtonAction: %@", error);
            NSLog(@"?????: RETURNED FROM: stopListening....error = %@", error ? error.description : @"NONE");
        }
        self.startButton.hidden = FALSE;
        self.stopButton.hidden = TRUE;
        self.suspendListeningButton.hidden = TRUE;
        self.resumeListeningButton.hidden = TRUE;
    }
    
    - (IBAction) startButtonAction { // This is the action for the button which starts up the recognition loop again if it has been shut down.
        if(![OEPocketsphinxController sharedInstance].isListening) {
            // NOTE: CHANGED TO USE GRAMMAR
            
            //        [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:FALSE]; // Start speech recognition if we aren't already listening.
            [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedGrammar dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:YES];
        }
        self.startButton.hidden = TRUE;
        self.stopButton.hidden = FALSE;
        self.suspendListeningButton.hidden = FALSE;
        self.resumeListeningButton.hidden = TRUE;
    }
    
    #pragma mark -
    #pragma mark Example for reading out Pocketsphinx and Flite audio levels without locking the UI by using an NSTimer
    
    // What follows are not OpenEars methods, just an approach for level reading
    // that I've included with this sample app. My example implementation does make use of two OpenEars
    // methods:	the pocketsphinxInputLevel method of OEPocketsphinxController and the fliteOutputLevel
    // method of OEFliteController.
    //
    // The example is meant to show one way that you can read those levels continuously without locking the UI,
    // by using an NSTimer, but the OpenEars level-reading methods
    // themselves do not include multithreading code since I believe that you will want to design your own
    // code approaches for level display that are tightly-integrated with your interaction design and the
    // graphics API you choose.
    //
    // Please note that if you use my sample approach, you should pay attention to the way that the timer is always stopped in
    // dealloc. This should prevent you from having any difficulties with deallocating a class due to a running NSTimer process.
    
    - (void) startDisplayingLevels { // Start displaying the levels using a timer
        [self stopDisplayingLevels]; // We never want more than one timer valid so we'll stop any running timers first.
        self.uiUpdateTimer = [NSTimer scheduledTimerWithTimeInterval:1.0/kLevelUpdatesPerSecond target:self selector:@selector(updateLevelsUI) userInfo:nil repeats:YES];
    }
    
    - (void) stopDisplayingLevels { // Stop displaying the levels by stopping the timer if it's running.
        if(self.uiUpdateTimer && [self.uiUpdateTimer isValid]) { // If there is a running timer, we'll stop it here.
            [self.uiUpdateTimer invalidate];
            self.uiUpdateTimer = nil;
        }
    }
    
    - (void) updateLevelsUI { // And here is how we obtain the levels.  This method includes the actual OpenEars methods and uses their results to update the UI of this view controller.
        
        self.pocketsphinxDbLabel.text = [NSString stringWithFormat:@"Pocketsphinx Input level:%f",[[OEPocketsphinxController sharedInstance] pocketsphinxInputLevel]];  //pocketsphinxInputLevel is an OpenEars method of the class OEPocketsphinxController.
        
        // NOTE: commented out for this test:
        //    if(self.fliteController.speechInProgress) {
        //        self.fliteDbLabel.text = [NSString stringWithFormat:@"Flite Output level: %f",[self.fliteController fliteOutputLevel]]; // fliteOutputLevel is an OpenEars method of the class OEFliteController.
        //    }
    }
    
    #ifdef USING_SAVE_THAT_WAVE
    - (void) wavWasSavedAtLocation:(NSString *)location
    {
        NSLog(@"???  WAV was saved at the path %@", location);
        
    }
    #endif
    
    @end
    #endif
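
    For reference, the hypothesis callback that triggers the hang looks roughly like this (a sketch, not a verbatim copy of my callback; the dispatch_async deferral is just one variation worth trying, and I have not confirmed that it avoids the hang):

    ```objc
    - (void) pocketsphinxDidReceiveHypothesis:(NSString *)hypothesis recognitionScore:(NSString *)recognitionScore utteranceID:(NSString *)utteranceID {
        NSLog(@"Local callback: The received hypothesis is %@ with a score of %@ and an ID of %@", hypothesis, recognitionScore, utteranceID);
        if([hypothesis isEqualToString:@"STOP"]) {
            // Calling stopListening synchronously right here is what hangs on iOS 9.
            // One unconfirmed variation: defer the call out of the delegate callback
            // so the in-progress utterance has a chance to wind down first.
            dispatch_async(dispatch_get_main_queue(), ^{
                if([OEPocketsphinxController sharedInstance].isListening) {
                    NSError *error = [[OEPocketsphinxController sharedInstance] stopListening];
                    if(error) NSLog(@"Error stopping listening after STOP command: %@", error);
                }
            });
        }
    }
    ```
    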

    LOG

    
    2016-05-05 21:01:00.975 OpenEarsSampleApp[733:193018] Starting OpenEars logging for OpenEars version 2.501 on 64-bit device (or build): iPhone running iOS version: 9.300000
    2016-05-05 21:01:00.976 OpenEarsSampleApp[733:193018] Creating shared instance of OEPocketsphinxController
    2016-05-05 21:01:00.991 OpenEarsSampleApp[733:193018] Since there is no cached version, loading the language model lookup list for the acoustic model called AcousticModelEnglish
    2016-05-05 21:01:01.013 OpenEarsSampleApp[733:193018] I'm done running performDictionaryLookup and it took 0.021142 seconds
    2016-05-05 21:01:01.042 OpenEarsSampleApp[733:193018] Returning a cached version of LanguageModelGeneratorLookupList.text
    2016-05-05 21:01:01.066 OpenEarsSampleApp[733:193018] The word BEELD was not found in the dictionary of the acoustic model /var/containers/Bundle/Application/1AC0B9E8-E6D7-4923-AF38-99FBE0E560A1/OpenEarsSampleApp.app/AcousticModelEnglish.bundle. Now using the fallback method to look it up. If this is happening more frequently than you would expect, likely causes can be that you are entering words in another language from the one you are recognizing, or that there are symbols (including numbers) that need to be spelled out or cleaned up, or you are using your own acoustic model and there is an issue with either its phonetic dictionary or it lacks a g2p file. Please get in touch at the forums for assistance with the last two possible issues.
    2016-05-05 21:01:01.066 OpenEarsSampleApp[733:193018] Using convertGraphemes for the word or phrase beeld which doesn't appear in the dictionary
    2016-05-05 21:01:01.072 OpenEarsSampleApp[733:193018] the graphemes "B IY L D" were created for the word BEELD using the fallback method.
    2016-05-05 21:01:01.096 OpenEarsSampleApp[733:193018] The word TAH was not found in the dictionary of the acoustic model /var/containers/Bundle/Application/1AC0B9E8-E6D7-4923-AF38-99FBE0E560A1/OpenEarsSampleApp.app/AcousticModelEnglish.bundle. Now using the fallback method to look it up. If this is happening more frequently than you would expect, likely causes can be that you are entering words in another language from the one you are recognizing, or that there are symbols (including numbers) that need to be spelled out or cleaned up, or you are using your own acoustic model and there is an issue with either its phonetic dictionary or it lacks a g2p file. Please get in touch at the forums for assistance with the last two possible issues.
    2016-05-05 21:01:01.096 OpenEarsSampleApp[733:193018] Using convertGraphemes for the word or phrase tah which doesn't appear in the dictionary
    2016-05-05 21:01:01.098 OpenEarsSampleApp[733:193018] the graphemes "T AA" were created for the word TAH using the fallback method.
    2016-05-05 21:01:01.099 OpenEarsSampleApp[733:193018] I'm done running performDictionaryLookup and it took 0.056759 seconds
    2016-05-05 21:01:01.110 OpenEarsSampleApp[733:193018] 
    
    Welcome to the OpenEars sample project. This project understands the words:
    BACKWARD,
    CHANGE,
    FORWARD,
    GO,
    LEFT,
    MODEL,
    RIGHT,
    TURN,
    and if you say "CHANGE MODEL" it will switch to its dynamically-generated model which understands the words:
    CHANGE,
    MODEL,
    MONDAY,
    TUESDAY,
    WEDNESDAY,
    THURSDAY,
    FRIDAY,
    SATURDAY,
    SUNDAY,
    QUIDNUNC
    2016-05-05 21:01:01.110 OpenEarsSampleApp[733:193018] A request was made to set the path to the test file to the following path: /var/containers/Bundle/Application/1AC0B9E8-E6D7-4923-AF38-99FBE0E560A1/OpenEarsSampleApp.app/StopListeningBug.wav
    2016-05-05 21:01:01.111 OpenEarsSampleApp[733:193018] Attempting to start listening session from startListeningWithLanguageModelAtPath:
    2016-05-05 21:01:01.113 OpenEarsSampleApp[733:193018] User gave mic permission for this app.
    2016-05-05 21:01:01.113 OpenEarsSampleApp[733:193018] setSecondsOfSilence wasn't set, using default of 0.700000.
    2016-05-05 21:01:01.113 OpenEarsSampleApp[733:193018] Successfully started listening session from startListeningWithLanguageModelAtPath:
    2016-05-05 21:01:01.114 OpenEarsSampleApp[733:193049] Starting listening.
    2016-05-05 21:01:01.114 OpenEarsSampleApp[733:193049] about to set up audio session
    2016-05-05 21:01:01.114 OpenEarsSampleApp[733:193049] Creating audio session with default settings.
    2016-05-05 21:01:01.153 OpenEarsSampleApp[733:193074] Audio route has changed for the following reason:
    2016-05-05 21:01:01.156 OpenEarsSampleApp[733:193074] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
    2016-05-05 21:01:01.159 OpenEarsSampleApp[733:193074] This is not a case in which OpenEars notifies of a route change. At the close of this function, the new audio route is ---SpeakerMicrophoneBuiltIn---. The previous route before changing to this route was <AVAudioSessionRouteDescription: 0x12760fe40, 
    inputs = (
        "<AVAudioSessionPortDescription: 0x1276820d0, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Front>"
    ); 
    outputs = (
        "<AVAudioSessionPortDescription: 0x1276a5710, type = Speaker; name = Speaker; UID = Speaker; selectedDataSource = (null)>"
    )>.
    2016-05-05 21:01:01.354 OpenEarsSampleApp[733:193049] done starting audio unit
    INFO: pocketsphinx.c(145): Parsed model-specific feature parameters from /var/containers/Bundle/Application/1AC0B9E8-E6D7-4923-AF38-99FBE0E560A1/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/feat.params
    Current configuration:
    [NAME]			[DEFLT]		[VALUE]
    -agc			none		none
    -agcthresh		2.0		2.000000e+00
    -allphone				
    -allphone_ci		no		no
    -alpha			0.97		9.700000e-01
    -ascale			20.0		2.000000e+01
    -aw			1		1
    -backtrace		no		no
    -beam			1e-48		1.000000e-48
    -bestpath		yes		yes
    -bestpathlw		9.5		9.500000e+00
    -ceplen			13		13
    -cmn			current		current
    -cmninit		8.0		40
    -compallsen		no		no
    -debug					0
    -dict					/var/mobile/Containers/Data/Application/CE91B684-7226-483F-B7FD-DB4AED84522A/Library/Caches/FirstOpenEarsDynamicLanguageModel.dic
    -dictcase		no		no
    -dither			no		no
    -doublebw		no		no
    -ds			1		1
    -fdict					/var/containers/Bundle/Application/1AC0B9E8-E6D7-4923-AF38-99FBE0E560A1/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/noisedict
    -feat			1s_c_d_dd	1s_c_d_dd
    -featparams				/var/containers/Bundle/Application/1AC0B9E8-E6D7-4923-AF38-99FBE0E560A1/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/feat.params
    -fillprob		1e-8		1.000000e-08
    -frate			100		100
    -fsg					
    -fsgusealtpron		yes		yes
    -fsgusefiller		yes		yes
    -fwdflat		yes		yes
    -fwdflatbeam		1e-64		1.000000e-64
    -fwdflatefwid		4		4
    -fwdflatlw		8.5		8.500000e+00
    -fwdflatsfwin		25		25
    -fwdflatwbeam		7e-29		7.000000e-29
    -fwdtree		yes		yes
    -hmm					/var/containers/Bundle/Application/1AC0B9E8-E6D7-4923-AF38-99FBE0E560A1/OpenEarsSampleApp.app/AcousticModelEnglish.bundle
    -input_endian		little		little
    -jsgf					/var/mobile/Containers/Data/Application/CE91B684-7226-483F-B7FD-DB4AED84522A/Library/Caches/FirstOpenEarsDynamicLanguageModel.gram
    -keyphrase				
    -kws					
    -kws_delay		10		10
    -kws_plp		1e-1		1.000000e-01
    -kws_threshold		1		1.000000e+00
    -latsize		5000		5000
    -lda					
    -ldadim			0		0
    -lifter			0		22
    -lm					
    -lmctl					
    -lmname					
    -logbase		1.0001		1.000100e+00
    -logfn					
    -logspec		no		no
    -lowerf			133.33334	1.300000e+02
    -lpbeam			1e-40		1.000000e-40
    -lponlybeam		7e-29		7.000000e-29
    -lw			6.5		1.000000e+00
    -maxhmmpf		30000		30000
    -maxwpf			-1		-1
    -mdef					/var/containers/Bundle/Application/1AC0B9E8-E6D7-4923-AF38-99FBE0E560A1/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/mdef
    -mean					/var/containers/Bundle/Application/1AC0B9E8-E6D7-4923-AF38-99FBE0E560A1/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/means
    -mfclogdir				
    -min_endfr		0		0
    -mixw					
    -mixwfloor		0.0000001	1.000000e-07
    -mllr					
    -mmap			yes		yes
    -ncep			13		13
    -nfft			512		512
    -nfilt			40		25
    -nwpen			1.0		1.000000e+00
    -pbeam			1e-48		1.000000e-48
    -pip			1.0		1.000000e+00
    -pl_beam		1e-10		1.000000e-10
    -pl_pbeam		1e-10		1.000000e-10
    -pl_pip			1.0		1.000000e+00
    -pl_weight		3.0		3.000000e+00
    -pl_window		5		5
    -rawlogdir				
    -remove_dc		no		no
    -remove_noise		yes		yes
    -remove_silence		yes		yes
    -round_filters		yes		yes
    -samprate		16000		1.600000e+04
    -seed			-1		-1
    -sendump				/var/containers/Bundle/Application/1AC0B9E8-E6D7-4923-AF38-99FBE0E560A1/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/sendump
    -senlogdir				
    -senmgau				
    -silprob		0.005		5.000000e-03
    -smoothspec		no		no
    -svspec					0-12/13-25/26-38
    -tmat					/var/containers/Bundle/Application/1AC0B9E8-E6D7-4923-AF38-99FBE0E560A1/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/transition_matrices
    -tmatfloor		0.0001		1.000000e-04
    -topn			4		4
    -topn_beam		0		0
    -toprule				
    -transform		legacy		dct
    -unit_area		yes		yes
    -upperf			6855.4976	6.800000e+03
    -uw			1.0		1.000000e+00
    -vad_postspeech		50		69
    -vad_prespeech		20		10
    -vad_startspeech	10		10
    -vad_threshold		2.0		2.000000e+00
    -var					/var/containers/Bundle/Application/1AC0B9E8-E6D7-4923-AF38-99FBE0E560A1/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/variances
    -varfloor		0.0001		1.000000e-04
    -varnorm		no		no
    -verbose		no		no
    -warp_params				
    -warp_type		inverse_linear	inverse_linear
    -wbeam			7e-29		7.000000e-29
    -wip			0.65		6.500000e-01
    -wlen			0.025625	2.562500e-02
    
    INFO: feat.c(715): Initializing feature stream to type: '1s_c_d_dd', ceplen=13, CMN='current', VARNORM='no', AGC='none'
    INFO: cmn.c(143): mean[0]= 12.00, mean[1..12]= 0.0
    INFO: acmod.c(164): Using subvector specification 0-12/13-25/26-38
    INFO: mdef.c(518): Reading model definition: /var/containers/Bundle/Application/1AC0B9E8-E6D7-4923-AF38-99FBE0E560A1/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/mdef
    INFO: mdef.c(531): Found byte-order mark BMDF, assuming this is a binary mdef file
    INFO: bin_mdef.c(336): Reading binary model definition: /var/containers/Bundle/Application/1AC0B9E8-E6D7-4923-AF38-99FBE0E560A1/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/mdef
    INFO: bin_mdef.c(516): 46 CI-phone, 168344 CD-phone, 3 emitstate/phone, 138 CI-sen, 6138 Sen, 32881 Sen-Seq
    INFO: tmat.c(206): Reading HMM transition probability matrices: /var/containers/Bundle/Application/1AC0B9E8-E6D7-4923-AF38-99FBE0E560A1/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/transition_matrices
    INFO: acmod.c(117): Attempting to use PTM computation module
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/1AC0B9E8-E6D7-4923-AF38-99FBE0E560A1/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/means
    INFO: ms_gauden.c(292): 1 codebook, 3 feature, size: 
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/1AC0B9E8-E6D7-4923-AF38-99FBE0E560A1/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/variances
    INFO: ms_gauden.c(292): 1 codebook, 3 feature, size: 
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(354): 0 variance values floored
    INFO: ptm_mgau.c(805): Number of codebooks doesn't match number of ciphones, doesn't look like PTM: 1 != 46
    INFO: acmod.c(119): Attempting to use semi-continuous computation module
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/1AC0B9E8-E6D7-4923-AF38-99FBE0E560A1/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/means
    INFO: ms_gauden.c(292): 1 codebook, 3 feature, size: 
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/1AC0B9E8-E6D7-4923-AF38-99FBE0E560A1/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/variances
    INFO: ms_gauden.c(292): 1 codebook, 3 feature, size: 
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(354): 0 variance values floored
    INFO: s2_semi_mgau.c(904): Loading senones from dump file /var/containers/Bundle/Application/1AC0B9E8-E6D7-4923-AF38-99FBE0E560A1/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/sendump
    INFO: s2_semi_mgau.c(928): BEGIN FILE FORMAT DESCRIPTION
    INFO: s2_semi_mgau.c(991): Rows: 512, Columns: 6138
    INFO: s2_semi_mgau.c(1023): Using memory-mapped I/O for senones
    INFO: s2_semi_mgau.c(1294): Maximum top-N: 4 Top-N beams: 0 0 0
    INFO: phone_loop_search.c(114): State beam -225 Phone exit beam -225 Insertion penalty 0
    INFO: dict.c(320): Allocating 4121 * 32 bytes (128 KiB) for word entries
    INFO: dict.c(333): Reading main dictionary: /var/mobile/Containers/Data/Application/CE91B684-7226-483F-B7FD-DB4AED84522A/Library/Caches/FirstOpenEarsDynamicLanguageModel.dic
    INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
    INFO: dict.c(336): 16 words read
    INFO: dict.c(358): Reading filler dictionary: /var/containers/Bundle/Application/1AC0B9E8-E6D7-4923-AF38-99FBE0E560A1/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/noisedict
    INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
    INFO: dict.c(361): 9 words read
    INFO: dict2pid.c(396): Building PID tables for dictionary
    INFO: dict2pid.c(406): Allocating 46^3 * 2 bytes (190 KiB) for word-initial triphones
    INFO: dict2pid.c(132): Allocated 51152 bytes (49 KiB) for word-final triphones
    INFO: dict2pid.c(196): Allocated 51152 bytes (49 KiB) for single-phone word triphones
    INFO: jsgf.c(691): Defined rule: <FirstOpenEarsDynamicLanguageModel.g00000>
    INFO: jsgf.c(691): Defined rule: PUBLIC <FirstOpenEarsDynamicLanguageModel.rule_0>
    INFO: fsg_model.c(215): Computing transitive closure for null transitions
    INFO: fsg_model.c(277): 0 null transitions added
    INFO: fsg_search.c(227): FSG(beam: -1080, pbeam: -1080, wbeam: -634; wip: -5, pip: 0)
    INFO: fsg_model.c(428): Adding silence transitions for <sil> to FSG
    INFO: fsg_model.c(448): Added 6 silence word transitions
    INFO: fsg_model.c(428): Adding silence transitions for <sil> to FSG
    INFO: fsg_model.c(448): Added 6 silence word transitions
    INFO: fsg_model.c(428): Adding silence transitions for [BREATH] to FSG
    INFO: fsg_model.c(448): Added 6 silence word transitions
    INFO: fsg_model.c(428): Adding silence transitions for [COUGH] to FSG
    INFO: fsg_model.c(448): Added 6 silence word transitions
    INFO: fsg_model.c(428): Adding silence transitions for [NOISE] to FSG
    INFO: fsg_model.c(448): Added 6 silence word transitions
    INFO: fsg_model.c(428): Adding silence transitions for [SMACK] to FSG
    INFO: fsg_model.c(448): Added 6 silence word transitions
    INFO: fsg_model.c(428): Adding silence transitions for [UH] to FSG
    INFO: fsg_model.c(448): Added 6 silence word transitions
    INFO: fsg_search.c(173): Added 3 alternate word transitions
    INFO: fsg_lextree.c(110): Allocated 564 bytes (0 KiB) for left and right context phones
    INFO: fsg_lextree.c(256): 117 HMM nodes in lextree (57 leaves)
    INFO: fsg_lextree.c(259): Allocated 16848 bytes (16 KiB) for all lextree nodes
    INFO: fsg_lextree.c(262): Allocated 8208 bytes (8 KiB) for lextree leafnodes
    2016-05-05 21:01:01.375 OpenEarsSampleApp[733:193049] Listening.
    2016-05-05 21:01:01.375 OpenEarsSampleApp[733:193049] Project has these words or phrases in its dictionary:
    CHANGE
    COMMANDS
    DOCUMENT
    DOCUMENT(2)
    HELP
    LANGUAGE
    LANGUAGE(2)
    LIBRARY
    LIST
    MODEL
    MY
    NEW
    NEW(2)
    SAVE
    SETTINGS
    STOP
    2016-05-05 21:01:01.375 OpenEarsSampleApp[733:193049] Recognition loop has started
    2016-05-05 21:01:01.388 OpenEarsSampleApp[733:193018] Local callback: Pocketsphinx is now listening.
    2016-05-05 21:01:01.389 OpenEarsSampleApp[733:193018] Local callback: Pocketsphinx started.
    2016-05-05 21:01:01.817 OpenEarsSampleApp[733:193049] Speech detected...
    2016-05-05 21:01:01.821 OpenEarsSampleApp[733:193018] Local callback: Pocketsphinx has detected speech.
    2016-05-05 21:01:03.634 OpenEarsSampleApp[733:193049] End of speech detected...
    2016-05-05 21:01:03.635 OpenEarsSampleApp[733:193018] Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 40.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 43.97 -5.00 -12.58  9.52 16.53  9.09 -1.09  3.41 -1.10  8.24 -9.69 -10.98 -0.01 >
    INFO: fsg_search.c(843): 187 frames, 9914 HMMs (53/fr), 24051 senones (128/fr), 695 history entries (3/fr)
    
    ERROR: "fsg_search.c", line 913: Final result does not match the grammar in frame 187
    2016-05-05 21:01:03.642 OpenEarsSampleApp[733:193049] Pocketsphinx heard "" with a score of (0) and an utterance ID of 0.
    2016-05-05 21:01:03.642 OpenEarsSampleApp[733:193049] Hypothesis was null so we aren't returning it. If you want null hypotheses to also be returned, set OEPocketsphinxController's property returnNullHypotheses to TRUE before starting OEPocketsphinxController.
    2016-05-05 21:01:04.004 OpenEarsSampleApp[733:193049] Speech detected...
    2016-05-05 21:01:04.004 OpenEarsSampleApp[733:193018] Local callback: Pocketsphinx has detected speech.
    INFO: cmn_prior.c(99): cmn_prior_update: from < 43.97 -5.00 -12.58  9.52 16.53  9.09 -1.09  3.41 -1.10  8.24 -9.69 -10.98 -0.01 >
    INFO: cmn_prior.c(116): cmn_prior_update: to   < 52.22 -3.71 -18.04  3.93  7.25 12.96  3.76 -1.31  9.14  2.66 -9.46 -5.11 -5.77 >
    INFO: cmn_prior.c(99): cmn_prior_update: from < 52.22 -3.71 -18.04  3.93  7.25 12.96  3.76 -1.31  9.14  2.66 -9.46 -5.11 -5.77 >
    INFO: cmn_prior.c(116): cmn_prior_update: to   < 53.42 -3.52 -21.11  0.57  3.92 13.89  4.79 -0.94 14.91  2.10 -7.24 -0.58 -5.02 >
    INFO: cmn_prior.c(99): cmn_prior_update: from < 53.42 -3.52 -21.11  0.57  3.92 13.89  4.79 -0.94 14.91  2.10 -7.24 -0.58 -5.02 >
    INFO: cmn_prior.c(116): cmn_prior_update: to   < 53.36 -2.42 -22.17  0.01  3.68 14.88  6.11 -0.73 16.90  0.72 -8.96  0.24 -5.94 >
    2016-05-05 21:01:17.177 OpenEarsSampleApp[733:193049] End of speech detected...
    2016-05-05 21:01:17.178 OpenEarsSampleApp[733:193018] Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 53.36 -2.42 -22.17  0.01  3.68 14.88  6.11 -0.73 16.90  0.72 -8.96  0.24 -5.94 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 53.08 -2.87 -22.50 -0.25  3.49 15.61  6.18 -1.21 17.03 -0.40 -9.44  0.36 -7.65 >
    INFO: fsg_search.c(843): 1325 frames, 13256 HMMs (10/fr), 36815 senones (27/fr), 1846 history entries (1/fr)
    
    2016-05-05 21:01:17.182 OpenEarsSampleApp[733:193049] Pocketsphinx heard "STOP" with a score of (0) and an utterance ID of 1.
    2016-05-05 21:01:17.182 OpenEarsSampleApp[733:193018] Local callback: The received hypothesis is STOP with a score of 0 and an ID of 1
    2016-05-05 21:01:17.183 OpenEarsSampleApp[733:193018] ?????: CALLING: stopListening....
    2016-05-05 21:01:17.183 OpenEarsSampleApp[733:193018] Stopping listening.
    2016-05-05 21:01:17.416 OpenEarsSampleApp[733:193018] Unable to stop listening because because an utterance is still in progress; trying again.
    2016-05-05 21:01:17.426 OpenEarsSampleApp[733:193074] Audio route has changed for the following reason:
    2016-05-05 21:01:17.429 OpenEarsSampleApp[733:193074] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
    2016-05-05 21:01:17.430 OpenEarsSampleApp[733:193074] This is not a case in which OpenEars notifies of a route change. At the close of this function, the new audio route is ---SpeakerMicrophoneBuiltIn---. The previous route before changing to this route was <AVAudioSessionRouteDescription: 0x127687dd0, 
    inputs = (
        "<AVAudioSessionPortDescription: 0x1276a2b20, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Bottom>"
    ); 
    outputs = (
        "<AVAudioSessionPortDescription: 0x12761cb90, type = Speaker; name = Speaker; UID = Speaker; selectedDataSource = (null)>"
    )>.
    2016-05-05 21:01:17.467 OpenEarsSampleApp[733:193018] Attempting to stop an unstopped utterance so listening can stop.
    2016-05-05 21:01:17.467 OpenEarsSampleApp[733:193018] Unable to stop listening because because an utterance is still in progress; trying again.
    [...the same "Unable to stop listening" message repeats roughly every 50 ms until 21:01:22.709, about five and a half seconds after stopListening was called...]
    
    

    [/spoiler]

    #1030268
    Halle Winkler
    Politepix

    Hi,

    Thanks very much for taking the time. I ran your file and view controller code (on an iPhone 5S with 9.3.1) and the issue didn’t replicate for me yet – with which device do you reliably experience this result? Is there anything interesting about the device that could lead to it being more prone towards this result (hard to imagine, but maybe something like having lots of apps loaded, conceivably an audio app backgrounded, very little disk space available, jailbroken, anything of note)?

    The logs are intriguing because it actually doesn’t look like there’s an utterance in progress at the time that it complains that there is. In the other report of this issue there is a slow search happening while stopping is being attempted, but in this one the hypothesis is received and it doesn’t even look like listening is resumed before the shutdown starts. I’m suspicious of that route change, which looks kind of extraneous and which also pops up in the other report right before the bad shutdown.

    #1030270
    roche314
    Participant

    Halle,

    Phone info:
    Model: A1688 iPhone 6s
    iOS: 9.3.1
    Storage: 128GB (55GB free space)

    Nothing special about the phone: it isn't jailbroken, and it has a fair number of apps but plenty of free space.
    I also tried a fresh reboot of the phone and reran the sample app, and the bug still occurred.

    A colleague is also experiencing the problem. (I believe he has an iPhone 6+ with 9.3.1)

    Let me know if you want me to run any more tests or need more information about my device.
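    For reference, the change I described amounts to a delegate callback roughly like this (a sketch against the stock OpenEars sample app's OEEventsObserverDelegate method; only the STOP check and the stopListening call were added, and the error handling shown here is illustrative):

    ```objc
    // OEEventsObserverDelegate callback from the stock sample app, modified to
    // stop listening when the STOP command is heard.
    - (void) pocketsphinxDidReceiveHypothesis:(NSString *)hypothesis
                             recognitionScore:(NSString *)recognitionScore
                                  utteranceID:(NSString *)utteranceID {

        NSLog(@"Local callback: The received hypothesis is %@ with a score of %@ and an ID of %@",
              hypothesis, recognitionScore, utteranceID);

        if ([hypothesis isEqualToString:@"STOP"]) {
            NSLog(@"?????: CALLING: stopListening....");
            // On iOS 8 this returns promptly; on iOS 9 it loops on
            // "Unable to stop listening because ... an utterance is still in progress".
            NSError *error = [[OEPocketsphinxController sharedInstance] stopListening];
            if (error) NSLog(@"Error while stopping listening: %@", error);
        }
    }
    ```
    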

    thanks
    -steve

    #1030326
    Halle Winkler
    Politepix

    Hi Steve,

    Just an update that I am now seeing this replicate on a 6S with 9.3.1 and I’m investigating the cause. Thank you for the test case.

    #1030329
    sathishkrishna
    Participant

    Hi,

    I'm having what looks like the same issue: stopListening hangs for about 10 seconds. Here are the details:

    OS: 9.3.2
    Device: iPad Air 2 (model MGKM2LL/A)
    Storage: 55 GB (53 GB available)

    -----------------------------
    LOGS are below
    -----------------------------
    [spoiler]
    May 16 12:52:03 USERs-iPad MyApp.x Debug[2808] <Warning>: Attempting to start listening session from startListeningWithLanguageModelAtPath:
    May 16 12:52:03 USERs-iPad MyApp.x Debug[2808] <Warning>: User gave mic permission for this app.
    May 16 12:52:03 USERs-iPad MyApp.x Debug[2808] <Warning>: Valid setSecondsOfSilence value of 0.400000 will be used.
    May 16 12:52:03 USERs-iPad MyApp.x Debug[2808] <Warning>: Successfully started listening session from startListeningWithLanguageModelAtPath:
    May 16 12:52:03 USERs-iPad MyApp.x Debug[2808] <Warning>: Starting listening.
    May 16 12:52:03 USERs-iPad MyApp.x Debug[2808] <Warning>: about to set up audio session
    May 16 12:52:03 USERs-iPad MyApp.x Debug[2808] <Warning>: Creating audio session with default settings.
    May 16 12:52:03 USERs-iPad MyApp.x Debug[2808] <Warning>: audioMode is correct, we will leave it as it is.
    May 16 12:52:03 USERs-iPad mediaserverd[25] <Error>: 12:52:03.946 ERROR: [0x16e12f000] >va> 505: Error 'what' getting client format for physical format [ 16/16000/1; flags: 0xc; bytes/packet: 2; frames/packet: 1; bytes/frame: 2; ]
    May 16 12:52:04 USERs-iPad MyApp.x Debug[2808] <Warning>: Audio route has changed for the following reason:
    May 16 12:52:04 USERs-iPad MyApp.x Debug[2808] <Warning>: There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
    May 16 12:52:04 USERs-iPad MyApp.x Debug[2808] <Warning>: This is not a case in which OpenEars notifies of a route change. At the close of this function, the new audio route is ---SpeakerMicrophoneBuiltIn---. The previous route before changing to this route was <AVAudioSessionRouteDescription: 0x12e62e5c0, 
    inputs = (
        "<AVAudioSessionPortDescription: 0x12e786190, type = MicrophoneBuiltIn; name = iPad Microphone; UID = Built-In Microphone; selectedDataSource = Front>"
    ); 
    outputs = (
        "<AVAudioSessionPortDescription: 0x12e7434f0, type = Speaker; name = Speaker; UID = Speaker; selectedDataSource = (null)>"
    )>.
    May 16 12:52:04 USERs-iPad MyApp.x Debug[2808] <Warning>: done starting audio unit
    May 16 12:52:04 USERs-iPad MyApp.x Debug[2808] <Warning>: Restoring SmartCMN value of 21.918213
    May 16 12:52:04 USERs-iPad MyApp.x Debug[2808] <Warning>: Listening.
    May 16 12:52:04 USERs-iPad MyApp.x Debug[2808] <Warning>: Project has these words or phrases in its dictionary:
    !exclamation-point
    "close-quote
    "double-quote
    "END-OF-QUOTE
    "end-QUOTE
    "in-quotes
    "quote
    "unquote
    #sharp-sign
    %percent
    &ampersand
    'bout
    'cause
    'course
    'cuse
    'em
    'end-inner-quote
    'end-quote
    'frisco
    'gain
    'inner-quote
    'kay
    'm
    'n
    'quote
    'round
    's
    'single-quote
    'til
    'tis
    'twas
    …and 133407 more.
    May 16 12:52:04 USERs-iPad MyApp.x Debug[2808] <Warning>: Recognition loop has started
    May 16 12:52:06 USERs-iPad MyApp.x Debug[2808] <Warning>: Speech detected…
    May 16 12:52:07 USERs-iPad MyApp.x Debug[2808] <Warning>: End of speech detected…
    May 16 12:52:07 USERs-iPad MyApp.x Debug[2808] <Warning>: Pocketsphinx heard “read my news” with a score of (0) and an utterance ID of 1.
    May 16 12:52:07 USERs-iPad MyApp.x Debug[2808] <Warning>: Stopping listening.
    May 16 12:52:07 USERs-iPad mediaserverd[25] <Error>: 12:52:07.419 ERROR: [0x16e35f000] >va> 505: Error 'what' getting client format for physical format [ 16/44100/2; flags: 0xc; bytes/packet: 4; frames/packet: 1; bytes/frame: 4; ]
    May 16 12:52:07 USERs-iPad mediaserverd[25] <Error>: 12:52:07.442 EXCEPTION: [0x16e0a3000] >va> 470: kAudioHardwareUnknownPropertyError: "unknown property [rtcf/glob/0]."
    May 16 12:52:07 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:07 USERs-iPad MyApp.x Debug[2808] <Warning>: Audio route has changed for the following reason:
    May 16 12:52:07 USERs-iPad MyApp.x Debug[2808] <Warning>: There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
    May 16 12:52:07 USERs-iPad MyApp.x Debug[2808] <Warning>: This is not a case in which OpenEars notifies of a route change. At the close of this function, the new audio route is ---SpeakerMicrophoneBuiltIn---. The previous route before changing to this route was <AVAudioSessionRouteDescription: 0x12e55fa60, 
    inputs = (
        "<AVAudioSessionPortDescription: 0x13093bc10, type = MicrophoneBuiltIn; name = iPad Microphone; UID = Built-In Microphone; selectedDataSource = Right>"
    ); 
    outputs = (
        "<AVAudioSessionPortDescription: 0x12e6298a0, type = Speaker; name = Speaker; UID = Speaker; selectedDataSource = (null)>"
    )>.
    May 16 12:52:07 USERs-iPad MyApp.x Debug[2808] <Warning>: Attempting to stop an unstopped utterance so listening can stop.
    May 16 12:52:07 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:07 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    [...the same "Unable to stop listening" message repeats roughly every 50 ms through at least 12:52:10...]
    May 16 12:52:10 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:10 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:10 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:11 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:11 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:11 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:11 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:11 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:11 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:11 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:11 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:11 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:11 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:11 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:11 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:11 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:11 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:11 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:11 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:11 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:11 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:11 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:11 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:12 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:12 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:12 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:12 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:12 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:12 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:12 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:12 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:12 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:12 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:12 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:12 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:12 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:12 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:12 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:12 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:12 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:12 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:12 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:13 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:13 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:13 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:13 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:13 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:13 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:13 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:13 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:13 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:13 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:13 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:13 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:13 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:13 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:13 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:13 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:13 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:13 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:13 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:13 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:14 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:14 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:14 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:14 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:14 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:14 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:14 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:14 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:14 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:14 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:14 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:14 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:14 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:14 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:14 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:14 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:14 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:14 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:14 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:15 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:15 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:15 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:15 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:15 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:15 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:15 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:15 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:15 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:15 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:15 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:15 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:15 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:15 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:15 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:15 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:15 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:15 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:15 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:16 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:16 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:16 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:16 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:16 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:16 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:16 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:16 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:16 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:16 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:16 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:16 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:16 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:16 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:16 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:16 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:16 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:16 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:16 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:16 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:17 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:17 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:17 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:17 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:17 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:17 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:17 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:17 USERs-iPad MyApp.x Debug[2808] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 12:52:17 USERs-iPad MyApp.x Debug[2808] <Warning>: Because the utterance couldn’t be stopped in a reasonable timeframe, we will break but prefer to let the decoder leak than force an exception by freeing it when it’s unsafe. If you see this message regularly, it is a bug, so please report the specific circumstances under which you are regularly seeing it.
    May 16 12:52:17 USERs-iPad MyApp.x Debug[2808] <Warning>: No longer listening.
    [/spoiler]

    #1030330
    Halle Winkler
    Politepix

    Welcome,

This actually looks like a different issue, caused by a conflicting audio session being used at the same time (there’s a session error during the shutdown process if you look further up). But the initial issue has been replicated, so there should be a fix for it shortly.

    #1030331
    sathishkrishna
    Participant

Thanks for your quick response, but the same code works fine on an older version of iOS on an older device.

    regards,
    sathishkrishna

    #1030332
    Halle Winkler
    Politepix

    Hi,

Unfortunately, audio session coexistence isn’t supported, so even if the returned error is new to this OS version, I can’t necessarily help with it. But the hang might be fixed incidentally once I test against the first case, now that I can replicate it (let’s hope so).
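In the meantime, a minimal workaround sketch (not an official fix, and assuming the standard OpenEars 2.x delegate and `OEPocketsphinxController` APIs): instead of calling stopListening synchronously inside the hypothesis callback, defer it to the main queue so the callback returns before the shutdown begins. This may give the in-progress utterance a chance to finish first.

```objc
// Hedged workaround sketch, assuming OpenEars 2.x APIs; not an official fix.
// Defer stopListening so it is not invoked synchronously from inside the
// hypothesis delegate callback while the utterance is still being finalized.
- (void) pocketsphinxDidReceiveHypothesis:(NSString *)hypothesis
                         recognitionScore:(NSString *)recognitionScore
                              utteranceID:(NSString *)utteranceID {
    if ([hypothesis isEqualToString:@"STOP"]) {
        dispatch_async(dispatch_get_main_queue(), ^{
            // stopListening returns an NSError on failure in OpenEars 2.x.
            NSError *error = [[OEPocketsphinxController sharedInstance] stopListening];
            if (error) NSLog(@"stopListening returned error: %@", error);
        });
    }
}
```

This only changes when stopListening runs relative to the callback; whether it avoids the retry loop on iOS 9 would need to be verified against the fix.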

    #1030333
    sathishkrishna
    Participant

I’ll wait for your fix for the first issue.

    Thanks!
    sathish

    #1030334
    sathishkrishna
    Participant

    Hi,

Another log trace, which shows that the 10-second delay still occurs without any second audio session.
    [spoiler]
    May 16 16:55:00 USERs-iPad MyApp.x Debug[2985] <Warning>: Attempting to start listening session from startListeningWithLanguageModelAtPath:
    May 16 16:55:00 USERs-iPad MyApp.x Debug[2985] <Warning>: User gave mic permission for this app.
    May 16 16:55:00 USERs-iPad MyApp.x Debug[2985] <Warning>: Valid setSecondsOfSilence value of 0.400000 will be used.
    May 16 16:55:00 USERs-iPad MyApp.x Debug[2985] <Warning>: Successfully started listening session from startListeningWithLanguageModelAtPath:
    May 16 16:55:00 USERs-iPad MyApp.x Debug[2985] <Warning>: Starting listening.
    May 16 16:55:00 USERs-iPad MyApp.x Debug[2985] <Warning>: about to set up audio session
    May 16 16:55:00 USERs-iPad MyApp.x Debug[2985] <Warning>: Creating audio session with default settings.
    May 16 16:55:00 USERs-iPad MyApp.x Debug[2985] <Warning>: audioMode is incorrect, we will change it.
    May 16 16:55:00 USERs-iPad MyApp.x Debug[2985] <Warning>: audioMode is now on the correct setting.
    May 16 16:55:00 USERs-iPad MyApp.x Debug[2985] <Warning>: done starting audio unit
    May 16 16:55:00 USERs-iPad MyApp.x Debug[2985] <Warning>: Restoring SmartCMN value of 25.314453
    May 16 16:55:00 USERs-iPad MyApp.x Debug[2985] <Warning>: Listening.
    May 16 16:55:01 USERs-iPad MyApp.x Debug[2985] <Warning>: Project has these words or phrases in its dictionary:
    !exclamation-point
    “close-quote
    “double-quote
    “END-OF-QUOTE
    “end-QUOTE
    “in-quotes
    “quote
    “unquote
    #sharp-sign
    %percent
    &ampersand
    ’bout
    ’cause
    ‘course
    ‘cuse
    ’em
    ‘end-inner-quote
    ‘end-quote
    ‘frisco
    ‘gain
    ‘inner-quote
    ‘kay
    ‘m
    ‘n
    ‘quote
    ’round
    ‘s
    ‘single-quote
    ’til
    ’tis
    ’twas
    …and 133407 more.
    May 16 16:55:01 USERs-iPad MyApp.x Debug[2985] <Warning>: Recognition loop has started
    May 16 16:55:01 USERs-iPad MyApp.x Debug[2985] <Warning>: Speech detected…
    May 16 16:55:02 USERs-iPad MyApp.x Debug[2985] <Warning>: End of speech detected…
    May 16 16:55:02 USERs-iPad MyApp.x Debug[2985] <Warning>: Pocketsphinx heard “read emails” with a score of (0) and an utterance ID of 0.
    May 16 16:55:02 USERs-iPad MyApp.x Debug[2985] <Warning>: Stopping listening.
    May 16 16:55:02 USERs-iPad mediaserverd[25] <Error>: 16:55:02.605 EXCEPTION: [0x16e12f000] >va> 470: kAudioHardwareUnknownPropertyError: “unknown property [rtcf/glob/0].”
    May 16 16:55:02 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:02 USERs-iPad MyApp.x Debug[2985] <Warning>: Audio route has changed for the following reason:
    May 16 16:55:02 USERs-iPad MyApp.x Debug[2985] <Warning>: There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
    May 16 16:55:02 USERs-iPad MyApp.x Debug[2985] <Warning>: This is not a case in which OpenEars notifies of a route change. At the close of this function, the new audio route is —SpeakerMicrophoneBuiltIn—. The previous route before changing to this route was <AVAudioSessionRouteDescription: 0x13de52860,
    inputs = (
    “<AVAudioSessionPortDescription: 0x1406612b0, type = MicrophoneBuiltIn; name = iPad Microphone; UID = Built-In Microphone; selectedDataSource = Right>”
    );
    outputs = (
    “<AVAudioSessionPortDescription: 0x1406211b0, type = Speaker; name = Speaker; UID = Speaker; selectedDataSource = (null)>”
    )>.
    May 16 16:55:02 USERs-iPad MyApp.x Debug[2985] <Warning>: Attempting to stop an unstopped utterance so listening can stop.
    May 16 16:55:02 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:02 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:02 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    [the same warning repeats many times per second, continuously from 16:55:02 onward]
    May 16 16:55:05 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:05 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:05 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:05 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:05 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:05 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:05 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:05 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:05 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:05 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:05 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:05 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:05 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:05 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:05 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:05 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:05 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:06 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:06 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:06 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:06 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:06 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:06 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:06 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:06 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:06 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:06 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:06 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:06 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:06 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:06 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:06 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:06 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:06 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:06 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:06 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:07 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:07 USERs-iPad syslogd[22] <Notice>: ASL Sender Statistics
    May 16 16:55:07 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:07 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:07 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:07 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:07 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:07 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:07 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:07 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:07 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:07 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:07 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:07 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:07 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:07 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:07 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:07 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:07 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:07 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:07 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:08 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:08 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:08 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:08 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:08 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:08 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:08 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:08 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:08 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:08 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:08 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:08 USERs-iPad syncdefaultsd[2986] <Notice>: (Note ) SYDAccount: no account
    May 16 16:55:08 USERs-iPad syncdefaultsd[2986] <Notice>: (Note ) SYDBootAccount: no account (null)
    May 16 16:55:08 USERs-iPad syncdefaultsd[2986] <Notice>: (Note ) SYDPIMAccount: no account (null)
    May 16 16:55:08 USERs-iPad syncdefaultsd[2986] <Notice>: (Note ) SYDAlwaysOnAccount: no account (null)
    May 16 16:55:08 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:08 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:08 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:08 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:08 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:08 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:08 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:08 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:09 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:09 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:09 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:09 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:09 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:09 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:09 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:09 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:09 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:09 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:09 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:09 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:09 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:09 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:09 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:09 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:09 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:09 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:09 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:09 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:10 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:10 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:10 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:10 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:10 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:10 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:10 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:10 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:10 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:10 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:10 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:10 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:10 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:10 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:10 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:10 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:10 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:10 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:10 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:11 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:11 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:11 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:11 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:11 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:11 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:11 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:11 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:11 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:11 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:11 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:11 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:11 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:11 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:11 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:11 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:11 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:11 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:11 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:12 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:12 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:12 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:12 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:12 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:12 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:12 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:12 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:12 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:12 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:12 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:12 USERs-iPad MyApp.x Debug[2985] <Warning>: Unable to stop listening because because an utterance is still in progress; trying again.
    May 16 16:55:12 USERs-iPad MyApp.x Debug[2985] <Warning>: Because the utterance couldn’t be stopped in a reasonable timeframe, we will break but prefer to let the decoder leak than force an exception by freeing it when it’s unsafe. If you see this message regularly, it is a bug, so please report the specific circumstances under which you are regularly seeing it.
    May 16 16:55:12 USERs-iPad MyApp.x Debug[2985] <Warning>: No longer listening.
    [/spoiler]

    #1030336
    Halle Winkler
    Politepix

    OK, thanks for the additional report of the issue.

    #1030340
    sathishkrishna
    Participant

    Yeah sure. Another observation: does this problem exist only on arm64-based devices? I couldn’t test on an iOS 8.x device with arm64 to confirm my suspicion, since all of my devices are iOS 9.x with arm64. Hope that helps in your debugging.

    Thanks!
    sathish

    #1030346
    Halle Winkler
    Politepix

    OK, this issue is fixed for the upcoming version (it is a race condition between the shutdown process and a new utterance starting that only manifests on very fast devices), no more info needed here. No estimated delivery date for the next version, but it is only dependent on one more bug fix and testing time, so it shouldn’t be too long.
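
    The pattern visible in the log above — repeated retries followed by a deliberate give-up rather than an unsafe teardown — can be sketched as a bounded retry loop. This is an illustration only, not OpenEars source code; `utterance_in_progress` is a hypothetical stand-in for whatever flag the decoder uses to signal that decoding is still underway.

    ```python
    import time

    def stop_listening(utterance_in_progress, timeout=10.0, interval=0.05):
        """Retry until the utterance ends, or give up after a timeout.

        `utterance_in_progress` is a callable returning True while an
        utterance is still being decoded (hypothetical stand-in).
        Returns True if listening stopped cleanly, False if we gave up.
        """
        deadline = time.monotonic() + timeout
        while utterance_in_progress():
            if time.monotonic() >= deadline:
                # Mirrors the log: prefer leaking the decoder to freeing
                # it while it is unsafe to do so.
                return False
            # "Unable to stop listening ... trying again."
            time.sleep(interval)
        return True
    ```

    The ~10-second hang reported in this thread corresponds to the timeout branch: the in-progress flag never clears, so the loop retries until the deadline and then breaks.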

    #1030347
    sathishkrishna
    Participant

    That’s awesome, Halle!
    Meanwhile, could I have a beta version of your fix? I’ll test it in my application and report back with any issues.

    Thanks again!
    sathish

    #1030348
    Halle Winkler
    Politepix

    No, sorry.

    #1030553
    Halle Winkler
    Politepix

    OK, the new version of OpenEars 2.502 (http://changelogs.politepix.com) should fix this issue.

    #1031461
    davestrand
    Participant

    Hi, I too was noticing the 10-second delay in my app, but after your latest update that issue is resolved. However, now I am noticing some other strange issues, and was wondering if you could check whether your 2.504 version experiences anything like them. Keep in mind that I hadn’t touched this application in a while until the error with the 10-second delay, so these issues could be related to new iOS releases rather than OpenEars.

    1.) My application uses background audio, which allows it to keep playing when minimized or when the screen is dimmed. Previously, if I minimized my application and played video or audio from a different application, iOS would stop my app’s audio playback and switch over to the other application. Now certain applications fail entirely: for example, playing a regular recorded video inside the default Photos application on iPhone will not work at all, and the play button does not function. Curiously, the only way for me to get the Photos application working again is to launch Pandora, which seems to wake up the audio/video processes. If I then go back to my OpenEars-enabled app, play audio, and minimize, the same playback issue is replicated. It could certainly be a problem with how I am handling background services, but I thought I’d ask whether you are noticing anything similar in your latest version.

    2.) My app’s audio playback will be silent after stopListening unless I pause, clear the queue, add audio, pause again, and then try playback again. It’s very strange. I wrote this code a long time ago, and it’s messy, so it could easily be a bug in my logic, which I am looking into. This is probably my problem, not yours, but I thought I’d mention it just in case.

    My application is a choose-your-own-adventure audiobook: it plays an mp3, waits for you to respond with a voice command, and then plays the next section.
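
    The flow just described — play a clip, listen for a command, branch to the next clip — can be sketched as a simple table-driven loop. This is illustrative only; `play`, `listen`, and the `STORY` table are hypothetical stand-ins for the app’s audio playback and OpenEars recognition, not its actual code.

    ```python
    # Each story node maps a clip to the commands that lead onward.
    STORY = {
        "intro": {"clip": "intro.mp3", "choices": {"LEFT": "cave", "RIGHT": "forest"}},
        "cave": {"clip": "cave.mp3", "choices": {}},
        "forest": {"clip": "forest.mp3", "choices": {}},
    }

    def run_story(play, listen, start="intro"):
        """Play each node's clip, then branch on the recognized command.

        `play(clip)` and `listen(options)` are hypothetical stand-ins for
        audio playback and speech recognition. Returns the visited nodes.
        """
        node, visited = start, []
        while node:
            visited.append(node)
            entry = STORY[node]
            play(entry["clip"])
            choices = entry["choices"]
            if not choices:
                break  # leaf node: the story ends here
            command = listen(list(choices))
            node = choices.get(command)  # unrecognized command ends the story
        return visited
    ```

    In an app like the one described, the transition from `play` to `listen` is exactly where a slow stopListening/startListening handoff would be felt as a stall between sections.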

    #1031462
    Halle Winkler
    Politepix

    Hi,

    Sorry, there have been no recent changes that would lead to these issues. These are situations where minor iOS or device changes can bring out new behavior, so I would check against other iOS versions and file a radar if the behavior is undesirable, or spend more time simplifying the messy code so a clear correlation emerges that would be possible to replicate and troubleshoot. I don’t actually support background mode, so I wouldn’t be able to help much with that aspect generally, sorry.

    I will close this since it is pretty far afield of the topic, but if you have issues happening in the foreground, and you can create a reduced and simple example which is demonstrably related to OpenEars according to this process:

    https://www.politepix.com/forums/topic/how-to-create-a-minimal-case-for-replication/

    then it’s totally fine to open a new topic just for that report and I’ll be happy to take a look, thanks.

Viewing 23 posts - 1 through 23 (of 23 total)
  • The topic ‘stopListening hangs using iOS9’ is closed to new replies.