Australia recently released new online safety standards that specifically target the use of generative artificial intelligence in relation to child safety and terrorist material, a move that has drawn concern from technology giants including Microsoft and Meta. These companies argue that the standards could weaken the protective capabilities of AI systems, constrain model training, and reduce the accuracy of content moderation, thereby hampering the technology's development and application.
The Australian government's move aims to strengthen online safety and protect users from harmful content, but its impact on the artificial intelligence industry remains to be seen. Continued communication and consultation between technology companies and the government should help strike a balance, ensuring that safety and innovation can advance together.