Welcome,
Can you describe the intended implementation and results in a bit more detail? OpenEars isn’t designed to do comparisons between audio files so I think I’m not quite understanding the goal yet.
The aspect I’m least clear about is the time or ordering relationship between:

1) using SaveThatWave to make a recording, which by definition doesn’t happen in real time, since the recording can’t exist until the utterance is fully complete (and presumably this isn’t the recording from the realtime session, because that one is intended to be compared against the WAV produced in this phase),

2) the realtime recognition, which happens while a mic utterance is in progress and therefore doesn’t use the WAV recording, and

3) running WAV-based recognition on the original WAV, which happens in a third time period altogether, since it requires a fully recorded WAV and requires that realtime recognition isn’t in progress.
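Just to be concrete about the ordering I mean, here is a very rough Swift sketch of how those three phases would have to follow one another. The class and method spellings (OEPocketsphinxController, OEEventsObserver, OESaveThatWaveController, wavWasSavedAtLocation:, runRecognitionOnWavFileAtPath:…) are written from memory and the Swift bridging is approximate, and the model paths are placeholders, so please treat this as an illustration of the sequencing rather than working code and check it against the current headers:

    import Foundation
    // Rough sequencing sketch only. Assumes OpenEars and SaveThatWave are exposed
    // through the bridging header; method spellings are approximate.
    class SequencingSketch: NSObject, OEEventsObserverDelegate {

        let observer = OEEventsObserver()
        let saveThatWave = OESaveThatWaveController()

        // Placeholder paths -- substitute the paths to your own generated files.
        let lmPath = "/path/to/LanguageModel.DMP"
        let dicPath = "/path/to/LanguageModel.dic"
        let acousticModelPath = "/path/to/AcousticModelEnglish.bundle"

        func start() {
            observer.delegate = self
            saveThatWave.start() // phase 1: WAV capture runs alongside listening...
            try? OEPocketsphinxController.sharedInstance().setActive(true)
            OEPocketsphinxController.sharedInstance().startListeningWithLanguageModel(
                atPath: lmPath,
                dictionaryAtPath: dicPath,
                acousticModelAtPath: acousticModelPath,
                languageModelIsJSGF: false) // ...phase 2: realtime recognition of the live mic utterance
        }

        // Phase 1 only finishes here: the WAV can't exist until the utterance is complete.
        func wavWasSavedAtLocation(_ location: String) {
            OEPocketsphinxController.sharedInstance().stopListening() // realtime recognition must not be in progress...
            OEPocketsphinxController.sharedInstance().runRecognitionOnWavFile(
                atPath: location,
                usingLanguageModelAtPath: lmPath,
                dictionaryAtPath: dicPath,
                acousticModelAtPath: acousticModelPath,
                languageModelIsJSGF: false) // ...before phase 3: WAV-based recognition of the saved utterance
        }

        // Both the realtime pass and the later WAV-based pass report their results here.
        func pocketsphinxDidReceiveHypothesis(_ hypothesis: String, recognitionScore: String, utteranceID: String) {
            print("Hypothesis: \(hypothesis) (score \(recognitionScore))")
        }
    }

So by the time the WAV-based pass can run, the realtime pass over the same utterance has already finished, which is why I’m unsure what the comparison between the two is meant to accomplish.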
I’ll be happy to give some advice if I can, once I understand the goal a little better. Or maybe there is a simpler way to accomplish the underlying speech interface task, if you’d like to elaborate on that.