The Australian Senate Select Committee recently released a report accusing Amazon, Google and Meta of giving ambiguous answers about how they use Australian data to train their AI products, drawing widespread attention. In the report, Senator Sheldon sharply criticized the technology giants for being evasive during the hearing, likening them to "pirates" plundering Australia's culture, data and creativity. The editor of Downcodes explains the report in detail, along with its implications for Australia's AI regulatory policy and creative industries.
The Senate Select Committee's inquiry report found that Amazon, Google and Meta (formerly Facebook) were disappointingly vague about how they use Australian data to train their artificial intelligence products.
Image note: This image was generated by AI; image licensing provider: Midjourney
The inquiry's chair, Labor Senator Tony Sheldon, voiced strong dissatisfaction, saying the multinational companies repeatedly dodged direct questions during the hearing, as if performing a cheap magic show that, in the end, revealed nothing.
After the report's release, Sheldon said these technology companies act like "pirates", plundering Australia's culture, data and creativity and leaving Australians empty-handed. He noted that Amazon refused to disclose how it uses data collected from Alexa, Kindle and Audible devices to train its AI, and that Google likewise declined to explain how user data feeds its AI products. Meta acknowledged that it has been scraping data from Australian Facebook and Instagram users dating back to 2007 for use in future AI models, yet could not explain how users in 2007 could have consented to their data being used for a purpose that did not yet exist.
The report also highlights the risk that artificial intelligence will seriously undermine the livelihoods of creative workers. It recommends establishing payment mechanisms to compensate creators when AI-generated work draws on their original material. It further recommends that companies developing AI models be transparent about the provenance of copyrighted works in their training datasets, and that all such works be licensed and paid for accordingly.
One of the report's 13 recommendations calls for standalone artificial intelligence legislation targeting AI models deemed "high risk". AI applications that affect human rights would be classified as high risk and would require consultation, collaboration and representation before deployment.
However, two Coalition senators on the committee argued that AI poses a far greater threat to Australia's cyber security, national security and democratic institutions than to its creative economy. In their view, regulation should protect the opportunities AI offers rather than suppress them.
The report has also spurred further debate over Australia's AI regulatory policy, with many calling for alignment with regulatory measures in the United Kingdom, Europe, California and elsewhere to meet the challenges posed by rapidly advancing artificial intelligence.
The report not only exposes the opacity of the technology giants' data practices but also prompts deeper reflection on AI ethics and oversight. Australia's path to AI regulation remains long, and it will take joint effort from government, industry and the public to strike the right balance between innovation and risk. The editor of Downcodes will continue to follow developments in this story.