Forum Replies Created

Viewing 4 posts - 1 through 4 (of 4 total)

  • Author: lory


    Can you confirm that I can’t use SoundSystem when OpenEars is started?
    Before the startListening call, I can play my sound with:

    Then it doesn’t work, even if I suspend recognition. There is no error, but I can’t hear the sound. Do you have a trick? Do I have to use the AVAudioPlayer approach?

    I’m not using PlayAndRecord, just simple recognition.
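
    In case it helps, the AVAudioPlayer fallback I have in mind would look something like this (just a sketch — the file name “beep.wav” is illustrative, not from my project):

    ```objectivec
    #import <AVFoundation/AVFoundation.h>

    // Load a short bundled sound (assumed file name) and play it.
    NSURL *soundURL = [[NSBundle mainBundle] URLForResource:@"beep" withExtension:@"wav"];
    NSError *error = nil;
    AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:soundURL error:&error];
    if (player && !error) {
        [player prepareToPlay];
        [player play]; // keep a strong reference to player while it plays
    }
    ```

    I’d call this after suspending recognition, then resume once playback finishes.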

    in reply to: Rejecto questions #1020891

    The latest iPad Air

    in reply to: Rejecto questions #1020858

    This is just my JSON; I pass a real array to the generator, and the hypothesis can output these 8 different words.

    Something like this:

    NSMutableArray *words = [[NSMutableArray alloc] init];
    // Collect the uppercased candidate phrases for the language model generator.
    for (NSDictionary *dict in [self.currentSentenceDictionnary valueForKey:@"words"]) {
        [words addObject:[[dict valueForKey:@"word"] uppercaseString]];
    }
    I have an accent, but sometimes it recognizes something very different from what I say. For example, if I say “left hand” it often returns “it and”. My pronunciation of “left” is nowhere near “it”. In contrast, if I say “test it” it works every time. “Help me” has never worked, but I have already gotten a “half hand help” output.

    I’ll explain what I want to do; maybe I’m not using the best approach for it.

    I made an app for learning English with music. In my controller, an extract of a music video clip is played and you have to repeat what you have heard.
    For this example, the sentence you hear is “I wanna be your left hand man”.
    In the UIView I display “I wanna be your … … man” and the user must say “left hand” to succeed. But I can’t put only those two words in the array, because then it’s too easy and “left hand” is always recognized. So I add some “close” sentences to check that the user really understood what they heard, but even I, who know the answer, can’t succeed every time, which is a bit problematic.
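
    To make the success check concrete, this is roughly how I decide whether a hypothesis counts (a sketch using my existing dictionary; the method name is illustrative — the words were uppercased when fed to the generator, so I compare against the uppercased spelling):

    ```objectivec
    // Return YES only when the hypothesis is the phrase flagged "match" : true in the JSON.
    - (BOOL)isCorrectHypothesis:(NSString *)hypothesis {
        for (NSDictionary *dict in [self.currentSentenceDictionnary valueForKey:@"words"]) {
            NSString *candidate = [[dict valueForKey:@"word"] uppercaseString];
            if ([hypothesis isEqualToString:candidate]) {
                return [[dict valueForKey:@"match"] boolValue];
            }
        }
        return NO; // hypothesis matched none of the candidate phrases
    }
    ```

    So “left hand” succeeds, while the decoy phrases like “half and” fail.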

    in reply to: Rejecto questions #1020855

    Thanks for the fast response, it was the flag issue.

    It seems to work better with Rejecto now. At the beginning it was still a bit random, but when I speak faster the right sentence is spotted more often.

    I used these sentences:

    "words" : [
    { "word" : "half and", "match" : false },
    { "word" : "left hand", "match" : true },
    { "word" : "help me", "match" : false },
    { "word" : "test it", "match" : false }

    If I say “left hand” rapidly it’s OK, but when I say “left hand” slowly the hypothesis returns only “and”.

    But you are right, I have a French accent, and that may be the problem! Still, I really think I pronounce the sentence the same way each time, so getting different results is a bit strange.
