A Facebook AI Research project is developing technology that turns people captured on video into controllable virtual characters. The neural network is trained on real videos of musicians and athletes performing various actions. The team taught the algorithm to recognize a moving subject in a video, separate it from the scene, swap the background, and control its movements. As a result, a person captured on video can be steered with a keyboard or joystick.
To train the neural network, the researchers used short video clips of people performing specific actions: dancing, playing tennis, and other activities. The system learned to analyze the behavior of the subjects in these videos and the actions they perform. The algorithm then creates a virtual character modeled on the real person, which can dance, play sports, or do something else, and places that character into the appropriate environment. The result resembles an updated version of a 1990s computer game, but built around a 3D model of a real person.
The system is built on two interconnected neural networks, Pose2Pose and Pose2Frame, each responsible for a different stage of the process. Alongside them, the technology uses the DensePose algorithm, which maps a 2D image onto a 3D body model. In the first step, DensePose identifies the subject and builds its 3D counterpart.
Pose2Pose then takes over: it isolates the selected person, along with any objects they are holding, and transfers them to a new scene with a blank background. The process ends with Pose2Frame, which renders the final avatar and composites it onto the desired background. The resulting model can be controlled in real time with a keyboard or joystick.
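The control loop described above can be sketched as a three-stage pipeline. This is a minimal illustrative sketch only: the function names mirror the article's terminology, but the bodies are stand-ins, not the actual Facebook models, and the data types are placeholders for real image and pose tensors.

```python
from dataclasses import dataclass

# Placeholder data types standing in for real tensors.
@dataclass
class Frame:
    pixels: list   # stand-in for a raw video frame / rendered image

@dataclass
class Pose:
    joints: dict   # stand-in for the 2D-to-3D body mapping DensePose produces

def dense_pose(frame: Frame) -> Pose:
    """Stage 1 (stand-in): detect the person and build a 3D body mapping."""
    return Pose(joints={"detected": True})

def pose2pose(pose: Pose, control: str) -> Pose:
    """Stage 2 (stand-in): given a control input (keyboard/joystick),
    predict the next pose, isolated against a blank background."""
    return Pose(joints={**pose.joints, "action": control})

def pose2frame(pose: Pose, background: Frame) -> Frame:
    """Stage 3 (stand-in): render the avatar from the pose and
    composite it onto the desired background."""
    return Frame(pixels=[background.pixels, pose.joints])

def step(current: Frame, control: str, background: Frame) -> Frame:
    """One tick of the control loop: frame -> pose -> next pose -> new frame."""
    pose = dense_pose(current)
    next_pose = pose2pose(pose, control)
    return pose2frame(next_pose, background)
```

In the real system each stage is a trained neural network operating on images; here the stages only pass labeled placeholders through, to make the data flow of the pipeline explicit.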
Beyond its main task, the system can filter out extraneous objects and other people in the frame, and it can compensate for different camera angles. The movements of the virtual players and dancers do not yet look completely natural: they resemble gliding across ice, an artifact sometimes characteristic of 3D models, and the characters' range of motion is still limited.
In the future, this technology, in which the Facebook neural network creates realistic simulations of 3D characters, could transform computer games and make them more personal. Any character captured on video (including yourself) could be placed into a game, bringing more realistic figures to games and augmented reality.