OpenAI announced its "Media Manager" tool in May 2024, promising to let creators control whether their works are used as AI training data and to head off intellectual property disputes. Seven months later, the tool has still not shipped, and by several accounts its development has slowed or been shelved. This article examines the reasons behind the delay, the legal challenges OpenAI faces, and the company's copyright strategy going forward.
When OpenAI unveiled Media Manager in May 2024, it described a tool that would let creators identify their copyrighted text, images, audio and video and control whether that material is included in AI training data, thereby helping the company avoid intellectual property disputes. Seven months on, the feature has yet to appear, and people familiar with the matter say the project is not treated as a priority inside OpenAI.
Slow progress on Media Manager
OpenAI originally planned to have Media Manager in place by 2025, but that target now looks unlikely to be met. According to people familiar with the matter, development has been slow, and some former employees say they do not recall the project being actively worked on at all. Although the company claimed in its May announcement that Media Manager would "set the standard for the entire AI industry," OpenAI has published no updates on the tool's progress since.
The tool was meant to give creators a more convenient way to manage how their copyrighted content is used, but the opt-out mechanisms OpenAI offers today are fragmented and incomplete. Creators have criticized the existing flagging and removal process as cumbersome and ineffective, particularly for content such as written works and videos.
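To make the idea of an opt-out mechanism concrete, here is a purely hypothetical Python sketch of how a training-data pipeline might consult an opt-out registry before ingesting files. Nothing here reflects OpenAI's actual implementation; the registry format, the `content_hash` fingerprinting and the `load_registry` helper are invented for illustration.

```python
import hashlib
import json
from pathlib import Path


def load_registry(path: str) -> set[str]:
    """Load a hypothetical opt-out registry: a JSON list of SHA-256 hashes
    of works whose creators have asked to be excluded from training."""
    with open(path, "r", encoding="utf-8") as f:
        return set(json.load(f))


def content_hash(data: bytes) -> str:
    """Fingerprint a document so it can be matched against the registry."""
    return hashlib.sha256(data).hexdigest()


def filter_corpus(corpus_dir: str, registry_path: str) -> list[Path]:
    """Return only the files that are NOT listed in the opt-out registry."""
    opted_out = load_registry(registry_path)
    kept = []
    for file in Path(corpus_dir).rglob("*"):
        if file.is_file() and content_hash(file.read_bytes()) not in opted_out:
            kept.append(file)
    return kept


if __name__ == "__main__":
    # Hypothetical paths; any real pipeline would look very different.
    kept_files = filter_corpus("raw_corpus", "opt_out_registry.json")
    print(f"{len(kept_files)} files remain after applying the opt-out list")
```

Even this toy version hints at why critics call such schemes brittle: exact-hash matching misses any re-encoded, cropped or excerpted copy of a work, which is part of the content-identification problem discussed below.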
Intellectual property issues remain serious
OpenAI faces a wave of lawsuits, including class actions, from artists, writers, news organizations and others who allege that the company trained on their work without permission. Plaintiffs include the writers Sarah Silverman and Ta-Nehisi Coates, as well as media outlets such as The New York Times and the Canadian Broadcasting Corporation. Although OpenAI has struck licensing agreements with some partners, not all creators are satisfied with those terms.
One of OpenAI's central challenges is how to draw on creators' works without infringing their copyrights. The company argues that what its models produce is transformative, but many creators counter that AI-generated content often closely reproduces their work and amounts to unauthorized use.
Legal Challenges and Creator Protection
Experts broadly agree that even if Media Manager eventually ships, it is unlikely to resolve the underlying legal questions around artificial intelligence and intellectual property. Intellectual property lawyer Adrian Cyhan points out that content identification at scale is an enormous and complex undertaking: even platforms as large as YouTube and TikTok have not solved it completely. On top of that, OpenAI must navigate differing legal requirements and creators' rights across jurisdictions worldwide.
Ed Newton-Rex, founder of Fairly Trained, argues that a tool like Media Manager shifts the burden of control onto creators, forcing them to actively manage whether their work is used for AI training even though many may never learn the tool exists. In his view, this opt-out approach could enable large-scale exploitation of creators' works without their authorization.
OpenAI’s copyright strategy and future
Despite these legal challenges, OpenAI maintains its "fair use" position and continues to argue that using copyrighted material when training AI models is unavoidable; the company has publicly stated that building a competitive AI model without such material is all but impossible. If courts ultimately rule in OpenAI's favor in the copyright suits, Media Manager may carry little legal significance for the company anyway.
For now, OpenAI has begun deploying filters intended to prevent its models from reproducing copyright-restricted content, although these measures are imperfect. If a future court finds that its AI-generated output serves a "transformative purpose," as courts ultimately concluded in Google's book digitization case, OpenAI may be able to continue its AI training efforts without relying heavily on Media Manager.
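As a rough illustration of what an output filter of this kind might involve, the toy Python example below flags generated text that shares long verbatim word n-grams with a small set of protected reference passages. This is an assumption-laden sketch, not a description of OpenAI's actual safeguards; a real system would need fuzzier matching and vastly larger indexes.

```python
def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Split text into word-level n-grams used as a crude fingerprint."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def overlaps_protected(output: str, protected_texts: list[str],
                       n: int = 8, threshold: int = 1) -> bool:
    """Return True if the model output shares at least `threshold`
    verbatim n-grams with any protected reference passage."""
    out_grams = ngrams(output, n)
    for ref in protected_texts:
        if len(out_grams & ngrams(ref, n)) >= threshold:
            return True
    return False


# Example: flag an output that regurgitates part of a protected passage.
protected = ["it was the best of times it was the worst of times "
             "it was the age of wisdom it was the age of foolishness"]
candidate = ("The opening reads: it was the best of times it was the "
             "worst of times, a line everyone recognizes.")
if overlaps_protected(candidate, protected):
    print("Potential verbatim reuse detected; suppressing output.")
```

The gap between this kind of exact matching and near-copies that are paraphrased, translated or reformatted is precisely why creators remain skeptical that filtering alone settles the copyright question.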
The delays to OpenAI's "Media Manager" project underscore how complex copyright issues have become in the development of artificial intelligence. How OpenAI balances AI model training against the protection of creators' rights will be key to its sustainable development.