The rapid development of artificial intelligence has brought great convenience to people's lives, but it has also created new challenges. In cybersecurity in particular, AI is being exploited by criminals to devise new kinds of fraud, inflicting heavy financial losses and security risks on society. The editor of Downcodes takes you through how AI technology is being used for fraud and how to respond to these new security threats.
While you are still worrying about whether ChatGPT will one day replace your job, and thinking about how to use artificial intelligence to boost your productivity, one group of people has already made a fortune from this new technology.
They are... scammers.
Scammed out of 4.3 million yuan in 10 minutes:
The first people to get rich from AI turned out to be scammers
If one day you received a WeChat video call from a friend, and the person on camera looked and sounded exactly like the friend you remember, what would you do when he asked to borrow 4.3 million yuan as a deposit for a project bid?
Recently, Mr. Guo, the legal representative of a technology company in Fuzhou, faced exactly this situation. Trusting what he saw, he transferred 4.3 million yuan to his friend's account. Only when he phoned the friend afterwards did he discover that a scammer had hijacked the friend's WeChat account and used AI face-swapping and voice-cloning technology to defraud him.
Across the ocean, similar cases are growing explosively.
According to CNN, in April this year, Jennifer DeStefano, who lives in Arizona, received a strange phone call. The voice on the line was that of her daughter Brianna, who was away preparing for a ski race, crying for help. Seconds later, a deep male voice threatened: "Listen, your daughter is in my hands. I want a ransom of $1 million. Call the police or tell anyone, and you will never see her again."
After Jennifer said she could not afford $1 million, the man on the phone lowered the ransom to $50,000. Desperate for her daughter's safety, Jennifer ignored her friends' and husband's attempts to dissuade her and began discussing how to pay. It was only when Brianna herself called to say she was safe that the loss was averted.
In March of this year, the Washington Post also reported a fraud case with almost the same modus operandi, except that the victims were an elderly couple over 70 years old.
The elderly victims (photo source: The Washington Post)
In May, the U.S. Federal Trade Commission (FTC) issued a warning that criminals were using AI voice technology to fake emergencies and defraud people of money or information. Impersonating a victim's relatives or friends is nothing new, but AI has undeniably made it trivially easy to clone a person's voice and fake a person's video. Over the past year, such scams in the United States have surged by 70%, with victims losing as much as $2.6 billion.
If this trend continues, the first people to achieve financial freedom through AI may well be a group of scammers hiding behind their screens.
The dark side of artificial intelligence
If faking a person's voice and video still requires some technical skill, the arrival of ChatGPT has made AI fraud easier still.
According to the overseas cybersecurity platform GBHackers, ChatGPT has attracted large numbers of online fraudsters because of its high productivity and extremely low barrier to use.
For example, scammers use ChatGPT to conduct a "fake romance": self-introductions, chat scripts and carefully crafted love letters can all be produced quickly by AI, and they can be personalized by feeding in details about the target, so that the person on the other side of the screen falls for them faster. ChatGPT can then help the scammer write payment-collection programs or phishing websites that steal the victim's bank card information, completing the fraud.
If you ask ChatGPT outright to write a phishing program, it refuses; but claim that you are a teacher who wants to show students what phishing software looks like, and it will obligingly write the website for you.
What is even scarier is how hard it is to tell whether there is a human or a machine on the other side of the screen. McAfee, one of the world's largest security technology companies, once used AI to generate a love letter and sent it to 5,000 users around the world. Even after being told the letter might have been generated by artificial intelligence, 33% of respondents were still willing to believe it was written by a real person.
In fact, carrying on a "fake romance" with a victim via ChatGPT is only entry-level fraud. More skilled hackers have begun using artificial intelligence to mass-produce ransomware and malicious code.
To make it easier to build applications on top of its GPT models, OpenAI provides an application programming interface (API) for developers. Hackers use this interface to connect the GPT model to external applications, bypassing the safety moderation built into the official chat interface and using the model to write criminal programs.
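To make that "low barrier" concrete, here is a minimal sketch of what calling the developer API looks like, assuming OpenAI's official Python SDK (v1.x) and an API key in the environment; the model name is illustrative, not something the article specifies, and the prompt here is deliberately benign.

```python
# Minimal sketch: reaching a GPT model through OpenAI's developer API.
# Assumptions: the official `openai` Python SDK (v1.x) is installed and
# an API key is set in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "List common warning signs of a phishing email."},
    ],
)

print(response.choices[0].message.content)
```

A handful of lines like these are enough to embed the model in any external application, which is precisely why third-party wrappers that strip away the official interface's moderation have proliferated.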
Programs that bypass moderation in this way are openly sold on the dark web, and they are cheap: a few dollars is enough to buy one. What buyers can do with them is alarming: steal source code and users' private information, and generate attack tools and ransomware.
The Financial Times recently reported on a SIM-swap attack script generated with the help of ChatGPT. Scammers can use it to bypass mobile carriers' controls over phone numbers, transfer a number from the owner's SIM card to one controlled by the attacker, and thereby take over the victim's phone.
"Although ChatGPT is currently just a content generation tool and is not directly involved in crime, this marks that people are starting to use artificial intelligence to invade others, and criminals with lower technical levels will obtain more powerful criminal means." An artificial intelligence practitioner expressed his concerns to the Financial Times.
Can Pandora's box be closed?
As artificial intelligence's growing social influence mixes with its criminal potential, ChatGPT's various security vulnerabilities have left people increasingly uneasy. "How to regulate ChatGPT" has become a focus of debate in many countries.
IBM's Global Ethics Institute has published a document urging companies to put ethics and responsibility at the top of their AI agendas, and tech leaders including Musk have signed an open letter calling for a shared set of safety protocols, reviewed and supervised by outside experts, to be developed before anyone trains an AI system more powerful than GPT-4.
Legislators in various countries have also begun voicing concerns about ChatGPT in public and weighing whether to bring it into their legislative oversight systems. More than the safety of AI itself, what worries many government staff is legislators' lagging understanding of the technology.
The Associated Press has observed that for the past 20 years, tech giants have led technological innovation in the United States, so the government has long been reluctant to regulate Big Tech for fear of becoming a killer of ideas. As a result, by the time officials resolve to strengthen oversight of emerging technologies, a considerable number of them know little about the technologies they mean to regulate.
After all, the last time the U.S. Congress enacted legislation to regulate technology was the Children’s Online Privacy Protection Act of 1998.
According to Reuters, many countries have begun drafting rules for AI of the kind OpenAI represents. In March this year, Italy briefly banned ChatGPT over data-security concerns, restoring access a month later. In May, an Italian government official told Reuters that the government would hire AI experts to oversee its regulated use.
Facing these doubts, OpenAI Chief Technology Officer Mira Murati told Reuters that "the company welcomes all parties, including regulators and governments, to get involved." But with artificial intelligence evolving this fast, how legislators can keep pace with the technology remains an open question.
The only thing that is certain is that once Pandora's box is opened, it cannot be closed so easily.
Artificial intelligence is a double-edged sword: it brings convenience, but also risk. We need stronger regulation and greater public security awareness to make good use of AI while preventing its abuse for crime. Only then can the benefits of artificial intelligence be maximized and its potential risks effectively contained.