Recently, the editor of Downcodes learned that, to cope with the growing challenge of AI-generated content and improve users' ability to judge the authenticity of what they see, Google announced a new feature for its search results. The feature is designed to help users better understand where images were created and how they were edited, thereby curbing the spread of misinformation and deepfake images. The launch follows Google's decision to join the Coalition for Content Provenance and Authenticity (C2PA), whose members include technology giants such as Amazon, Adobe, and Microsoft, and which is dedicated to combating online disinformation.
Google said the new feature, which will roll out gradually over the coming months, uses the existing Content Credentials standard (provenance information embedded in an image's metadata) to label AI-generated or AI-edited images in search results, increasing transparency for users.
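For readers curious about what this provenance metadata actually looks like inside a file: the C2PA specification embeds Content Credentials as JUMBF boxes, which in JPEG files are carried in APP11 marker segments. The Python sketch below is a hedged illustration of that mechanism, not Google's implementation or a full C2PA validator; it simply walks a JPEG's marker segments and reports whether any APP11 payload mentions the "c2pa" label. Real verification also requires parsing the manifest and checking its cryptographic signatures, which the open-source C2PA tooling is designed to do.

```python
import struct
import sys

def jpeg_has_c2pa_manifest(path: str) -> bool:
    """Heuristic check: walk the JPEG marker segments and look for an
    APP11 (0xFFEB) segment whose payload contains the 'c2pa' JUMBF label.
    This is a simplified sketch, not a full C2PA validator."""
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":              # SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                  # lost sync with the marker stream
            break
        marker = data[i + 1]
        if marker == 0xFF:                   # fill byte; skip it
            i += 1
            continue
        if marker in (0xD8, 0xD9) or 0xD0 <= marker <= 0xD7 or marker == 0x01:
            i += 2                           # standalone marker, no length field
            continue
        if marker == 0xDA:                   # SOS: entropy-coded data follows,
            break                            # metadata segments all precede it
        (seg_len,) = struct.unpack(">H", data[i + 2:i + 4])
        if seg_len < 2:                      # malformed length; stop walking
            break
        payload = data[i + 4:i + 2 + seg_len]
        if marker == 0xEB and b"c2pa" in payload:
            return True                      # APP11 segment carrying a C2PA box
        i += 2 + seg_len
    return False

if __name__ == "__main__":
    print(jpeg_has_c2pa_manifest(sys.argv[1]))
```

A label like the one Google describes can only surface information that is actually present in such metadata, which is why adoption by cameras and editing software (discussed below) matters so much.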
Users will be able to check whether an image was AI-generated by clicking the three dots above it and selecting "About this image." The feature will also be available through Google Lens and Android's "Circle to Search." It is worth noting, however, that the label is not very conspicuous: users must take extra steps to confirm an image's origin.
In recent years, as AI technology has developed, the problems posed by deepfake videos and AI-generated images have grown increasingly serious. For example, Trump once posted a fictional image of Taylor Swift supporting his campaign, causing widespread misunderstanding and controversy. Taylor Swift has also been targeted by malicious AI-generated images, raising further doubts about the authenticity of AI imagery.
While Google's rollout of this feature is a good start, concerns remain about whether such a hidden label is effective enough. Many users may not know the "About this image" feature exists and therefore may not take full advantage of the new tool. In addition, only a few camera models and some software currently implement Content Credentials, which also limits the system's effectiveness.
According to a study by the University of Waterloo, only 61% of people can tell an AI-generated image from a real one, which means that if Google's labeling system is not used effectively, it will be difficult to provide genuine transparency to users.
Highlights:
Google will launch a new feature to label AI-generated and AI-edited images, increasing transparency for users.
Users can check whether an image was generated by AI through the "About this image" feature, but the label is not conspicuous.
Research shows that only 61% of people can distinguish AI images from real ones; Google needs to enhance the visibility of its labels.
All in all, while Google's move is an important step in combating disinformation, its effectiveness remains to be seen. Increasing the visibility of the labels and broadening adoption of Content Credentials will be key directions for future improvement. The editor of Downcodes will continue to follow the development of this feature and bring readers more relevant information.