Recently, the editor of Downcodes learned that a group of artists who participated in testing Sora, OpenAI's new text-to-video AI model, publicly protested and leaked early access to the model out of dissatisfaction with how OpenAI worked with them. They argue that OpenAI uses artists for free R&D and public relations, treating them merely as "free bug testers, publicity tools, training data, or validation tokens." The incident has sparked broad industry concern about artists' rights in the development of AI models.
The artists published an open letter on the Hugging Face platform detailing their grievances, pointing out that OpenAI offers only "artistic credit" without corresponding material compensation, and imposes strict content-review requirements on Sora. OpenAI responded that participation in the test is voluntary and pledged continued support for participating artists. Still, the incident has prompted reflection on the ethics and business models behind AI development, and on how artists' rights can be better protected.
The artists are not opposed to applying AI technology to artistic creation, but they strongly condemn OpenAI's use of its early access program to exploit their labor. They believe OpenAI should give artists fair pay and respect rather than treat them as mere instruments of the company's goals.
In the open letter, the artists wrote: "Dear corporate AI overlords, when we were given access to Sora, we were promised to be early testers and creative partners. However, we believe we were simply lured into 'art washing' to promote Sora." They stressed that artists are not the company's "free bug testers, promotional tools, training data, or validation tokens."
They criticized OpenAI for offering only "artistic credit" on projects without compensating their work. They particularly objected to OpenAI's moderation requirements for Sora, under which every piece of generated content had to be approved by the OpenAI team before it could be shared.
When contacted by the media, OpenAI did not confirm the authenticity of the Sora leak, but emphasized that artists in the "research preview" participated voluntarily and were not required to provide feedback or use the tool. An OpenAI spokesperson said: "Sora is still in the research phase, and we are working hard to balance creativity with safety measures. The participation of hundreds of artists helps us prioritize new features and safeguards." OpenAI also pledged to continue supporting participating artists through grants, events, and other means.
Earlier, Mira Murati, OpenAI's former chief technology officer, said that Sora was expected to be released before the end of the year, but that OpenAI would not release anything unless it was confident about the model's impact.
In a recent Reddit Q&A, chief product officer Kevin Weil said Sora had not yet been released because the company needed to expand its capabilities, ensure safety, and prevent impersonation issues.
This incident once again highlights the importance of ethics in the development of artificial intelligence, and reminds us to attend to the human concerns behind technological progress. How to balance technological innovation with artists' rights will be a question that demands serious consideration as AI develops. The editor of Downcodes will continue to follow developments in this story.