Apple's Animoji may gain more ways to interact with its face-tracking animated characters, with a patent application describing appropriate sound effects for each avatar that are triggered when users either make specific expressions or say certain words during a recording session.

Launched alongside the iPhone X, Animoji became an addition to iOS notable enough to spawn commercials. While Apple has since added more interactivity to the face-tracked creations, including tongue detection and user-created Memoji, it seems the company has more ideas of where Animoji can go.

During the recording phase of an Animoji, the software captures the subject's facial movements and audio. Usually the face tracking is mapped directly onto the character, with the user's movements mirrored as closely as possible, while the audio track is whatever the iPhone picked up during the recording.

For example, a person using a dog avatar could say the word "bark," resulting in the playback of an audio file of a dog barking, with the character's mouth shapes altered to match. In some cases the voice recording could be replaced in its entirety by a synthesized voice, with voice recognition detecting individual words, pitch, and cadence that can then be reproduced in a character-specific voice.

While the filing of a patent application is no guarantee that the concept will appear in an Apple product or service, in this case it has a good chance of being implemented in a future iOS update. Animoji already takes advantage of the facial tracking of TrueDepth camera-equipped iPhones, making it feasible for Apple to add emotion recognition, and existing voice recognition capabilities lend themselves to the audio-manipulation elements of the patent application.
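To make the described mechanics concrete, here is a rough, hypothetical sketch of how a trigger-word sound effect could be wired up using Apple's public Speech and AVFoundation frameworks. This is not Apple's implementation: the trigger word "bark," the class name, and the sound file name are assumptions for illustration, and expression-based triggers would presumably key off the blend shape values ARKit reports from the TrueDepth camera in a similar way.

```swift
import AVFoundation
import Speech

// A minimal sketch of a trigger-word sound effect, not Apple's implementation.
// The trigger word "bark" and the bundled file "dog_bark.caf" are illustrative
// assumptions. Microphone and speech-recognition permission prompts, plus audio
// session configuration, are omitted for brevity.
final class TriggerWordSoundEffect {
    private let recognizer = SFSpeechRecognizer()
    private let audioEngine = AVAudioEngine()
    private var request: SFSpeechAudioBufferRecognitionRequest?
    private var task: SFSpeechRecognitionTask?
    private var player: AVAudioPlayer?
    private var didTrigger = false

    func start() throws {
        let request = SFSpeechAudioBufferRecognitionRequest()
        request.shouldReportPartialResults = true
        self.request = request

        // Feed live microphone audio into the speech recognition request.
        let input = audioEngine.inputNode
        let format = input.outputFormat(forBus: 0)
        input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            request.append(buffer)
        }
        audioEngine.prepare()
        try audioEngine.start()

        // Watch the running transcription for the trigger word and play the
        // avatar's sound effect the first time it appears.
        task = recognizer?.recognitionTask(with: request) { [weak self] result, _ in
            guard let self = self, !self.didTrigger,
                  let text = result?.bestTranscription.formattedString.lowercased(),
                  text.contains("bark") else { return }
            self.didTrigger = true
            self.playEffect(named: "dog_bark")
        }
    }

    private func playEffect(named name: String) {
        guard let url = Bundle.main.url(forResource: name, withExtension: "caf") else { return }
        player = try? AVAudioPlayer(contentsOf: url)
        player?.play()
    }
}
```

Partial results are enabled so the effect can fire as soon as the word is heard rather than after the recording ends; the `didTrigger` flag simply keeps repeated partial transcriptions from replaying the sound.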