This topic has 6 replies, 2 voices, and was last updated 10 years, 5 months ago by Halle Winkler.
November 4, 2013 at 9:30 pm (steve100)
Hi,
I want to play the audio while the user is speaking. I noticed the OpenEars framework uses an Audio Unit. How can I change the code to turn on audio output?
Another question I have is how to use outputAudio. What will happen if I set it to true?
Thanks,
Steve
November 5, 2013 at 7:54 am (steve100)
Basically, I want to play back whatever audio my app receives, run speech recognition on it, and save the audio to a file if the engine recognizes anything.
What is the best approach to do all this? Can the SaveThatWave plugin save all recognized speech to files?
Thanks,
Steve
November 5, 2013 at 10:39 am (Halle Winkler, Politepix)
Welcome Steve,
I’m not completely clear on the question yet, but maybe you can clarify it for me a bit. Are you intending to play back the user’s own voice while listening to their voice?
November 5, 2013 at 7:48 pm (steve100)
Yes. I want to play back the user’s own voice, or any voice around the user, just like a hearing aid.
Another question I have is how close a voice normally needs to be to the device to get accurate recognition. If the user talks to another person at a distance of one meter, can the engine recognize the other person fairly accurately?
Thanks,
Steve
November 5, 2013 at 7:53 pm (Halle Winkler, Politepix)
Yes. I want to play back the user’s own voice, or any voice around the user, just like a hearing aid.
OK, I think you could use SaveThatWave for this. It saves a WAV file of recognized speech that you can then play back using an AVAudioPlayer.
Another question I have is how close a voice normally needs to be to the device to get accurate recognition. If the user talks to another person at a distance of one meter, can the engine recognize the other person fairly accurately?
For open speech into the built-in mic, I usually test from about a meter away, so I would expect that to work pretty well. A bigger issue is going to be cross-talk between two different speakers.
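For reference, the playback side of this suggestion could look like the following minimal Swift sketch. The class and method names are hypothetical, and the wiring that delivers the saved-WAV path from the plugin is assumed rather than shown; only AVAudioPlayer itself is the real API under discussion.

```swift
import AVFoundation

// Minimal playback helper (hypothetical names; not part of OpenEars or
// SaveThatWave). The plugin reports the path of each saved WAV; we wrap
// that path in a file URL and hand it to an AVAudioPlayer. Keep a strong
// reference to the player, or playback stops when it is deallocated.
final class RecordingPlayer {
    private var player: AVAudioPlayer?

    // Turn the path string reported for a saved WAV into a file URL.
    func fileURL(forSavedWavPath path: String) -> URL {
        return URL(fileURLWithPath: path)
    }

    // Play the WAV at the given path; throws if the file can't be opened.
    func play(wavAtPath path: String) throws {
        let url = fileURL(forSavedWavPath: path)
        player = try AVAudioPlayer(contentsOf: url)
        player?.prepareToPlay()
        player?.play()
    }
}
```

In practice you would call `play(wavAtPath:)` from whatever callback tells you a new WAV file has been written, so each recognized utterance is played back as soon as it is saved.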
November 5, 2013 at 8:03 pm (steve100)
Thank you for the info. Can I also get the data from the buffer and play it from there? I also noticed there is an openAudioDevice function inside ContinuousAudioUnit.mm. That function uses the Remote I/O audio unit. Can I also use that to play back the voice?
If I purchase SaveThatWave, do I also get the source code for it?
Thanks,
Steve
November 5, 2013 at 8:09 pm (Halle Winkler, Politepix)
You’re welcome.
Can I also get the data from the buffer and play it from there? I also noticed there is an openAudioDevice function inside ContinuousAudioUnit.mm. That function uses the Remote I/O audio unit. Can I also use that to play back the voice?
Nope, you can’t play back audio using ContinuousAudioUnit.mm. If you’re well-acquainted with audio unit programming, there’s nothing standing in the way of modifying it to do that, but it’s outside the support I can give here, since it isn’t part of the framework’s functionality and playback is already supported via AVAudioPlayer.
If I purchase SaveThatWave, do I also get the source code for it?
No, SaveThatWave is a compiled plugin.
Best,
Halle