I’m having one challenge that I’d love to get feedback from the community on:
When I start listening, there’s a bit of a lag, so I typically prefer to start the system before any speech input is actually needed. Unfortunately, that means it jumps straight into recognition mode, so stray speech can trigger commands before I actually want input.
I’ve worked around this in the past by gating results behind a boolean flag, but that feels inefficient, and words still frequently end up queued in the hypothesis, which makes the first real recognition error-prone.
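For reference, here’s a minimal sketch of the boolean-gating workaround I mean. The `GatedRecognizer` class and `on_recognized` callback are hypothetical stand-ins for whatever callback your engine actually invokes; the point is just that results are dropped until the gate is armed:

```python
import threading
import queue

class GatedRecognizer:
    """Boolean-gated wrapper: the (hypothetical) engine runs continuously,
    but recognition results are discarded until arm() is called."""

    def __init__(self):
        self._armed = threading.Event()
        self.results = queue.Queue()

    def arm(self):
        # From this point on, recognition results are accepted.
        self._armed.set()

    def disarm(self):
        self._armed.clear()

    def on_recognized(self, phrase):
        # Engine callback (hypothetical name): drop anything that
        # arrives while we're warmed up but not yet "really" listening.
        if self._armed.is_set():
            self.results.put(phrase)

# Simulated usage: the engine fires a hypothesis during warm-up,
# then a real command after we arm the gate.
rec = GatedRecognizer()
rec.on_recognized("stale phrase during warm-up")  # discarded
rec.arm()
rec.on_recognized("open menu")                    # accepted
```

This keeps the engine warm without acting on early input, but as noted it doesn’t stop the engine from queuing partial hypotheses internally, which is the part I’d like to avoid.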
Is there a way to “soft start” the engine so that there isn’t a lag between first request and “ready” state, while not starting the actual recognition process?
Thanks!