On January 1, 2025, an incident that shocked the world occurred in Las Vegas: a man detonated a Tesla Cybertruck outside the Trump International Hotel, causing serious property damage and deepening public concern about the abuse of artificial intelligence. What set the incident apart was that, before the explosion, the man had used the AI chatbot ChatGPT to plan the attack in detail.
In a subsequent investigation, the Las Vegas Metropolitan Police Department disclosed that the man involved, Matthew Livelsberger, posed at least 17 questions to ChatGPT in the days before the incident. The questions covered specific details ranging from obtaining explosive materials to related legal matters, and even how to use a firearm to detonate the explosives. Livelsberger interacted with ChatGPT in plain English, asking, among other things, whether fireworks are legal in Arizona, where to buy guns in Denver, and what kind of gun would effectively detonate explosives.
Assistant Sheriff Dori Koren confirmed at a press conference that ChatGPT's answers played a key role in Livelsberger's bombing plan. ChatGPT provided detailed information on the muzzle velocity of ammunition, which helped Livelsberger carry out his plan. Although the final blast was weaker than he expected and some of the explosive material failed to ignite, the incident still alarmed law enforcement.
"We already knew AI will change our lives at some point, but this is the first time I have seen someone use ChatGPT to build such a dangerous plan," said Las Vegas Police Chief Kevin McGill. ” He noted that there is currently no government supervision mechanism that can mark these queries related to explosives and firearms. This incident highlights the loopholes in AI technology in security supervision.
Although Las Vegas police have not disclosed ChatGPT's specific responses, the questions presented at the press conference were relatively straightforward and did not rely on traditional "jailbreak" techniques. Notably, this usage clearly violates OpenAI's usage policies and terms, but it is unclear whether OpenAI's safety measures intervened while Livelsberger was using the tool.
In response, OpenAI said it is committed to having users use its tools "responsibly" and aims to make its AI tools refuse harmful instructions. "In this incident, ChatGPT responded only with information already published on the internet and also provided warnings against harmful or illegal activities. We have been working to make AI smarter and more responsible," OpenAI said, adding that it is working with law enforcement agencies to support their investigation.
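As an illustration of the kind of guardrail OpenAI describes, the sketch below screens a user prompt with OpenAI's publicly documented Moderation API before it would reach a chat model. The refusal message and the decision to rely solely on the `flagged` field are assumptions made for this sketch, not OpenAI's actual production pipeline.

```python
# A minimal sketch, assuming an OPENAI_API_KEY in the environment, of how
# a provider-side guardrail might screen a prompt before the model answers.
# Threshold logic and the refusal text are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt should be refused."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=prompt,
    ).results[0]
    # `flagged` is True when any moderation category (e.g. violence,
    # illicit behavior) exceeds the API's internal thresholds.
    return result.flagged

user_message = "example user message"
if screen_prompt(user_message):
    print("Request refused: content violates usage policies.")
else:
    print("Prompt passed moderation; forwarding to the model.")
```

In practice such checks run alongside model-level refusal training, so a prompt can be declined even when it never trips the standalone moderation endpoint.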
The incident has not only sparked public concern about the abuse of artificial intelligence but also prompted law enforcement and technology companies to re-examine existing safety measures. As AI technology develops rapidly, how to sustain technological progress while preventing its use for illegal activities has become an urgent problem.
Image note: the image is AI-generated; licensed image provider: Midjourney.
Key points:
The incident happened on January 1, 2025, when a man detonated a Tesla Cybertruck outside the Trump International Hotel in Las Vegas.
In the days before the explosion, the man used ChatGPT to plan the attack, including how to obtain explosives and firearms.
Police said this was the first known case in the United States of an individual using ChatGPT to plan such a dangerous act, and that no effective government regulatory mechanism currently exists.