Forum Replies Created

Viewing 3 posts - 1 through 3 (of 3 total)

  • Bruno

    Thanks for the explanation. I’m generating the models each time I need them now and it’s working as expected.


    I think that the main issue you’re encountering is that the demo version of Rejecto doesn’t write out a file – it can only be used to dynamically generate models.

    I get the model files saved to my app’s Caches folder, so that means I’m not using the demo version, right? But if I reuse them in a different app session (application run), I always get the rejected phonemes back as hypotheses in the OEEventsObserver callbacks.

    The algorithm I’m using to get the model paths needed to call startRealtimeListeningWithLanguageModelAtPath is:

    • Call pathToSuccessfullyGeneratedXXXXWithRequestedName: for each path needed (model and dictionary in my case)
    • If I get paths to existing files, I use those to call startRealtimeListening...
    • If I don’t get a valid path, I call generateRejectingLanguageModelFromArray and start over

    Does that make sense?
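    For what it’s worth, the steps above look roughly like this in my code (just a sketch of the pattern, not a definitive implementation — the word list and model name here are placeholders, and the OELanguageModelGenerator / OEPocketsphinxController method names are the ones from the Rejecto and RapidEars samples as I understand them):

    ```swift
    // Reuse previously generated Rejecto files if they still exist on disk,
    // otherwise regenerate them with Rejecto and look the paths up again.
    func startListening() {
        let name = "MyModel" // requested name used when the files were generated
        let generator = OELanguageModelGenerator()
        let acousticModel = OEAcousticModel.pathToModel("AcousticModelEnglish")

        var lmPath = generator.pathToSuccessfullyGeneratedLanguageModelWithRequestedName(name)
        var dicPath = generator.pathToSuccessfullyGeneratedDictionaryWithRequestedName(name)

        // If either file is missing, regenerate and start over.
        let fm = NSFileManager.defaultManager()
        if !fm.fileExistsAtPath(lmPath) || !fm.fileExistsAtPath(dicPath) {
            generator.generateRejectingLanguageModelFromArray(
                ["HELLO", "GOODBYE"],      // placeholder word list
                withFilesNamed: name,
                withOptionalExclusions: nil,
                usingVowelsOnly: false,
                withWeight: nil,
                forAcousticModelAtPath: acousticModel)
            lmPath = generator.pathToSuccessfullyGeneratedLanguageModelWithRequestedName(name)
            dicPath = generator.pathToSuccessfullyGeneratedDictionaryWithRequestedName(name)
        }

        OEPocketsphinxController.sharedInstance().startRealtimeListeningWithLanguageModelAtPath(
            lmPath,
            dictionaryAtPath: dicPath,
            acousticModelAtPath: acousticModel)
    }
    ```
    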

    To briefly clarify, in order to not have Rejecto phonemes returned, it is necessary to use Rejecto to dynamically generate your models at runtime…

    Does that mean that if I don’t want to receive rejected phonemes I always need to generate the language model? That is what we are doing so far and it works well; my intention is to avoid regenerating the model files each time I start listening, mostly because the input parameters never change.



    I’m working for a client that integrated the plugins into the code base some time ago, so I don’t have the email address used to request the plugins. Sorry.

    I updated the OpenEars framework a couple of weeks ago but didn’t get the latest Rejecto and RapidEars plugins. Could that be an issue?

    Also, I’m using Swift 2 and Xcode 7.

    Thanks for your quick answer. Nice that you remember me :)

