The rapid development of deepfake technology poses a serious cybersecurity threat, and false information is spreading unchecked. Deepfake-related fraud has grown quickly enough to become a problem that cannot be ignored. Technology companies have responded with a range of countermeasures; among them, Meta's Video Seal tool has drawn particular attention. The tool combats deepfakes by adding imperceptible watermarks to AI-generated videos, and Meta is open-sourcing the technology to encourage broader collaboration and development.
Deepfakes are now pervasive. As generative AI has gained popularity, false content on the web has exploded. According to statistics from the identity-verification platform Sumsub, the number of deepfakes worldwide quadrupled from 2023 to 2024. In 2024, deepfakes were involved in 7% of all fraud, from identity impersonation and account takeover to complex social-engineering attacks.
To combat deepfakes more effectively, Meta recently released a tool that adds imperceptible watermarks to AI-generated video clips. The tool, called Meta Video Seal, was announced as open source on Thursday and is designed to be integrated into existing software. It joins Meta's other watermarking tools, Watermark Anything (re-released the same day under a permissive license) and Audio Seal, to form a complete watermarking solution.
"We developed Video Seal to provide a more effective video watermarking solution, especially when it comes to detecting AI-generated videos and protecting originality," Meta AI research scientist Pierre Fernandez told TechCrunch.
Video Seal is not the first technology of its kind. DeepMind's SynthID can add watermarks to videos, and Microsoft also has its own video watermarking method.
But Fernandez believes many existing methods fall short.
"While other watermarking tools exist, they are not robust enough to video compression (which is very common when content is shared on social platforms), do not run efficiently enough for large-scale use, are not open or reproducible enough, or are derived from image watermarks, which are not the best option for video," Fernandez said.
In addition to the watermark itself, Video Seal can embed a hidden message in a video that can later be recovered to reveal its source. Meta claims Video Seal withstands common editing operations such as blurring and cropping, as well as widely used compression algorithms.
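To make the idea of a recoverable hidden message concrete, here is a deliberately naive sketch that hides bits in the least significant bit of pixel values. This is an illustration of the general concept only, not Meta's method: Video Seal uses trained neural networks to embed and extract its marks, which is what gives it robustness to compression and editing that a scheme like this lacks.

```python
import numpy as np

def embed_message(frame: np.ndarray, bits: list) -> np.ndarray:
    """Hide each bit in the least significant bit of one pixel channel (toy scheme)."""
    wm = frame.copy()
    flat = wm.reshape(-1)  # view into the copy, so writes modify wm
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b  # clear the LSB, then set it to the message bit
    return wm

def extract_message(frame: np.ndarray, n_bits: int) -> list:
    """Read the hidden bits back out of the first n_bits pixel channels."""
    return [int(v & 1) for v in frame.reshape(-1)[:n_bits]]

# Demo on a random 64x64 RGB frame
frame = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
msg = [1, 0, 1, 1, 0, 0, 1, 0]
watermarked = embed_message(frame, msg)
assert extract_message(watermarked, len(msg)) == msg
# Each pixel changes by at most 1/255 -- invisible to the eye -- but unlike
# Video Seal, this naive scheme would not survive compression or cropping.
```

The per-pixel change is imperceptible, which captures the "invisible" half of the problem; the hard half, surviving re-encoding and edits, is exactly what Meta's learned approach is designed to address.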
Fernandez acknowledged that Video Seal has limitations, chiefly a trade-off between how perceptible the watermark is and its resistance to manipulation. Heavy compression or heavy editing, he added, can alter the watermark or make it unrecoverable.
Of course, the bigger problem for Video Seal is that developers and industry have little incentive to adopt it, especially those already using proprietary solutions. To address this, Meta is launching a public leaderboard, Meta Omni Seal Bench, designed to compare the performance of different watermarking methods. Meta will also organize a workshop on watermarking at this year's ICLR (International Conference on Learning Representations), a major AI conference.
"We hope that more and more AI researchers and developers will integrate some form of watermarking into their work," Fernandez said. "We also hope to work with industry and academia to advance the field more quickly."
Meta's move offers a new approach to combating the proliferation of deepfakes, and open source and open collaboration may prove an effective way to tackle this thorny problem. Still, Video Seal's future depends on whether the industry adopts it widely and on how well it actually holds up against deepfakes.
All in all, Meta's Video Seal is a fresh attempt to combat deepfakes, but its success remains to be seen. Effectively addressing the deepfake problem will require continued technical innovation and industry cooperation. Open source and a collaborative spirit are key, but they will take time and sustained effort.