The Australian Senate's select committee recently released a report sharply criticising Amazon, Google and Meta for their lack of transparency and accountability in using Australian data to train AI products. The report likened the tech giants to "pirates" who plunder Australia's culture, data and creativity without offering compensation in return, prompting widespread public concern about data sovereignty and AI ethics. The committee's chair voiced strong frustration at the companies' evasiveness during hearings and called for stronger AI regulation to protect the rights and interests of creative workers.
An inquiry report by the Australian Senate's select committee found that Amazon, Google and Meta (formerly Facebook) gave disappointingly vague answers about how they use Australian data to train their artificial intelligence products.
Image note: AI-generated image; licensing service provider: Midjourney.
Labor Senator Tony Sheldon, who chaired the inquiry, expressed strong dissatisfaction, saying the multinationals repeatedly dodged direct questions during the hearings, performing what amounted to a cheap magic trick that ultimately revealed nothing.
After the report's release, Sheldon said the companies behave like "pirates", plundering Australia's culture, data and creativity while leaving Australians empty-handed. He noted that Amazon refused to disclose how it uses data collected through Alexa, Kindle and Audible devices to train AI, and that Google likewise declined to explain how user data feeds into its AI products. Meta acknowledged that it had been scraping data from Australian Facebook and Instagram users since 2007 for use in future AI models, but could not explain how users in 2007 could have consented to purposes that did not yet exist.
The report also warns that artificial intelligence poses a serious risk to the livelihoods of creative workers. It recommends establishing payment mechanisms to compensate creators when AI-generated output draws on their original material. It further recommends that companies developing AI models be transparent about the provenance of copyrighted works in their training datasets, and that all such works be properly licensed and paid for.
One of the report's 13 recommendations calls for standalone artificial intelligence legislation targeting AI models deemed "high risk". AI applications affecting human rights would be classified as high risk and would require consultation, collaboration and representation before deployment.
However, two Coalition senators on the committee argued that AI poses a far greater threat to Australia's cyber security, national security and democratic institutions than to the creative economy, and that mechanisms should be designed to safeguard the opportunities AI offers rather than suppress them.
The report has also prompted further debate over Australia's AI regulatory policy, with many calling for alignment with regulatory approaches in the United Kingdom, Europe and California to meet the challenges of rapidly advancing AI technology.
Highlights:
**Tech giants accused of plundering Australian culture and data**: The inquiry report criticises Amazon, Google and Meta for their evasiveness about how they use Australian data.
**Creative workers face serious risks**: The report highlights AI's threat to the creative industries and recommends establishing a compensation mechanism.
**Call for standalone AI legislation**: The report proposes dedicated legislation for high-risk AI to protect human rights and the rights of creative workers.
The report has sparked heated discussion about the ethics and oversight of AI data use, and suggests Australia may introduce stricter AI laws and regulations to better protect national interests and citizens' rights. Other countries may draw lessons from this episode in strengthening AI oversight and promoting the healthy development of the technology.