Sign2me is based on AI & Deep Learning
How To Use It
The app uses artificial intelligence to translate sign language into speech through images. As you film yourself signing, your phone picks up the signs; by means of artificial intelligence, the app translates them into speech, which comes out as audio from your phone’s speaker. It also works the other way around, translating spoken words into signs: speak into your phone’s microphone, and the app translates the audio into signs performed by an avatar on your screen.
The app is based on deep learning, a technique from artificial intelligence. Deep learning makes the computer – in this case your phone – capable of analyzing, categorizing, and recognizing visual patterns: here, the videos you record as you use sign language. Over time, the computer learns to analyze the semantic meaning of the signs, remembering them and becoming able to translate them into speech.
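As a rough illustration of what "recognizing visual patterns" means, the sketch below shows the shape of a neural network's forward pass: a flattened video frame goes in, and a score for each sign comes out. The sign vocabulary, network size, and random weights are all stand-ins; in a real deep learning model the weights would be learned from many example videos, not drawn at random.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sign vocabulary -- a stand-in for what a real model would cover.
SIGNS = ["hello", "thanks", "yes"]

# Toy two-layer network. In a trained model these weights encode the
# visual patterns of each sign; here they are random placeholders.
W1 = rng.normal(size=(64, 16))          # 64 pixel features -> 16 hidden units
W2 = rng.normal(size=(16, len(SIGNS)))  # hidden units -> one score per sign

def classify_frame(frame):
    """Map a flattened 8x8 grayscale frame to the most likely sign."""
    hidden = np.maximum(frame @ W1, 0.0)  # ReLU activation
    scores = hidden @ W2                  # one score per sign
    return SIGNS[int(np.argmax(scores))]

frame = rng.random(64)  # stand-in for one video frame
print(classify_frame(frame))
```

A production system would run a much deeper network over whole video sequences, but the pipeline is the same: pixels in, sign label out, then text-to-speech turns the label into audio.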
Therefore, when you first download the app, its memory is empty. As you feed it examples of sign language and speech, you supply the data its learning algorithm uses to improve over time.
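The idea of a memory that starts empty and grows with each example can be sketched with a simple nearest-neighbor lookup. This is a deliberate simplification, not the app's actual deep learning method: `SignMemory`, its frame vectors, and the labels are all hypothetical, chosen only to show how fed examples become usable knowledge.

```python
import numpy as np

class SignMemory:
    """Starts empty; every (frame, label) example fed in becomes
    knowledge the translator can draw on later."""

    def __init__(self):
        self.frames = []
        self.labels = []

    def feed(self, frame, label):
        """Store one example of a sign and its meaning."""
        self.frames.append(np.asarray(frame, dtype=float))
        self.labels.append(label)

    def translate(self, frame):
        """Return the label of the most similar stored example."""
        if not self.frames:
            return None  # fresh install: nothing learned yet
        frame = np.asarray(frame, dtype=float)
        dists = [np.linalg.norm(frame - f) for f in self.frames]
        return self.labels[int(np.argmin(dists))]

memory = SignMemory()
print(memory.translate([1.0, 0.0]))  # None -- empty memory
memory.feed([1.0, 0.0], "hello")
memory.feed([0.0, 1.0], "thanks")
print(memory.translate([0.9, 0.1]))  # "hello" -- closest stored example
```

The real app generalizes rather than just memorizing, but the principle holds: more examples mean better translations.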