Over 10,000 people from literary and artistic circles sign a letter warning AI companies: unauthorized use of creative works poses a major and unfair threat to creators' livelihoods
Author: Eve Cole
Update Time: 2025-01-22 15:12:01
On October 22, more than 10,500 people in the creative industries around the world signed a statement warning artificial intelligence companies against using their works without permission, calling the practice a "significant and unfair threat" to artists' livelihoods. The statement comes amid a series of legal disputes over copyright between creative-industry figures and technology companies.

"Dehumanization" and the "fair use" of data

According to a Guardian report on the 22nd, figures from the literary, music, film, theater and television industries expressed support for the statement, including Björn Ulvaeus of Sweden's ABBA, American actress Julianne Moore, Radiohead frontman Thom Yorke and Nobel laureate in literature Kazuo Ishiguro. Creative-industry organizations and companies including the American Federation of Musicians, SAG-AFTRA, the European Writers' Council and Universal Music Group also signed the statement. The 29-word statement reads: "The unauthorized use of creative works to train generative artificial intelligence is a significant and unfair threat to the livelihoods of the creators behind these works and will never be tolerated."

Ed Newton-Rex, the British composer, former artificial intelligence executive, and founder and CEO of the non-profit organization Fairly Trained, who organized the letter, said he is "very worried" about intellectual-property protection for people who make a living from creative work. "Generative AI companies need three key resources to build AI models: people, compute, and data. They spend huge sums on the first two, sometimes over $1 million on a single engineer and up to $1 billion on a single model, but they want to get training data for free." Newton-Rex believes that when artificial intelligence companies call these works "training data," they dehumanize them; behind that data lies people's artistic creation.
Newton-Rex was head of audio at the technology company Stability AI, but resigned last year after the company took the position that using copyrighted content to train an artificial intelligence model without permission constituted "fair use." Technology companies such as OpenAI need text, images, videos and other material to train the algorithms behind artificial intelligence systems such as chatbots. This data is often scraped from the internet without consent, compensation or attribution. The companies argue the practice is protected as "fair use" under copyright law, but content owners and publishers are increasingly fighting back, claiming in lawsuits and requests to regulators that AI developers who use their work without permission are infringing copyright. In the United States, a group of writers including John Grisham, Jodi Picoult and George R.R. Martin has sued OpenAI for alleged copyright infringement, and major record labels including Sony Music, Universal Music Group and Warner Music Group are suing the artificial intelligence music companies Suno and Udio.

A seemingly optional "opt-out" plan

The relationship between content publishers and technology companies is not entirely adversarial: some publishers sign agreements granting access to their data in exchange for payment or other benefits. On government-level regulation, however, Newton-Rex, who believes generative artificial intelligence "exploits creators," issued a warning. Last month, Google called on the UK to relax restrictions on text and data mining (TDM), which currently permits copying of copyright-protected works only for non-commercial purposes such as academic research. The UK government is consulting on a plan that would allow artificial intelligence companies to use content from artists and publishers unless the latter "opt out" of the process, the Financial Times reports.
Newton-Rex, who has implemented opt-out schemes at artificial intelligence companies, said the "opt-out" option is flawed: most people do not know such schemes exist, and even a perfectly designed one would be missed by most creators. "It's completely unfair to put the onus on creators to opt out of AI training. If the government really thought this was a good thing for creators, it would create an opt-in scheme."