Cyan Puppets Technology will release version 1.50 of Cyanpuppets, its algorithm model for generating 3D animation from 2D video, this Friday. A landmark update, version 1.50 is trained on the company's largest data set to date and is its most versatile release yet, letting users create 3D dance content in real time with just two webcams. Cyanpuppets is built on convolutional and deep neural network algorithms, with a self-developed AI model architecture that enables collaboration between the virtual and real worlds. The company's CYAN.AI platform combines this architecture with NVIDIA GPU computing power to generate 3D motion data from 2D video, providing users with wearable-free motion capture, full-body interaction for virtual social applications, 3D animation production tools, and more.
The release of version 1.50 marks significant progress for Cyan Puppets Technology in AI-driven 3D animation production, and its convenience and efficiency promise users a new creative experience worth looking forward to.