Recently, AI programming assistants have been favored by programmers, but their security has raised concerns. A Stanford University study shows that using AI programming assistants may reduce code security and even lead programmers to misjudge how secure their code is. The study tested 47 programmers, and the results showed that those using an AI assistant wrote significantly less secure code than those who did not.
Recently, AI programming assistants have become very popular, claiming to help programmers write code and improve efficiency. Many programmers even regard them as a "savior" and can hardly bear to write a line of code without one. However, a study from Stanford University poured cold water on these enthusiastic fans: AI programming assistants may be a "security nightmare"!
Researchers at Stanford University recruited 47 programmers to complete five security-related programming tasks, covering three languages: Python, JavaScript, and C. It turned out that programmers who wrote code with an AI assistant produced significantly less secure code!
This is not alarmist. The AI programming assistant is like an "unreliable intern": it can write code that looks correct, but it knows little about security. For example, in the encryption and decryption task, the code generated by the AI assistant could correctly encrypt the information, but it did not return the necessary authentication tag. This is like installing a lock on a safe without handing over the key, which greatly reduces security.
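To make the authentication-tag point concrete, here is a minimal sketch (not the study's actual task code) of AES-GCM encryption using Python's `cryptography` library. The insecure pattern the study describes corresponds to dropping `encryptor.tag` from the return value, leaving the ciphertext unauthenticated:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt(key: bytes, plaintext: bytes):
    """Encrypt with AES-GCM; key must be 16, 24, or 32 bytes."""
    nonce = os.urandom(12)  # a fresh random nonce for every message
    encryptor = Cipher(algorithms.AES(key), modes.GCM(nonce)).encryptor()
    ciphertext = encryptor.update(plaintext) + encryptor.finalize()
    # The insecure pattern: returning only (nonce, ciphertext) and silently
    # dropping encryptor.tag, so tampering with the message goes undetected.
    return nonce, ciphertext, encryptor.tag

def decrypt(key: bytes, nonce: bytes, ciphertext: bytes, tag: bytes) -> bytes:
    decryptor = Cipher(algorithms.AES(key), modes.GCM(nonce, tag)).decryptor()
    # finalize() raises InvalidTag if the ciphertext or tag was altered
    return decryptor.update(ciphertext) + decryptor.finalize()
```

Without the tag, a receiver has no way to detect that the ciphertext was modified in transit, which is exactly the kind of subtle flaw a casual review of "working" code tends to miss.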
What's even more serious is that programmers who used an AI assistant were more likely to believe the code they wrote was secure, as if lulled into a false sense of security that made them turn a blind eye to vulnerabilities in the code. This is not a good thing: overconfidence often leads to more serious security problems.
The researchers also found that the prompts programmers give the AI assistant directly affect the security of the resulting code. If programmers describe the task clearly and provide some helper functions, the code written by the AI assistant will be more secure, as in the sketch below. But if programmers blindly rely on the AI assistant, even pasting its generated code in directly, it is equivalent to copying and pasting a "security vulnerability" into their own code, and the result can be imagined.
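As a hypothetical illustration (invented for this article, not taken from the paper), compare a vague prompt with a more specific one that names the primitive, the library, and the required outputs; the stub and docstring below are the kind of context a careful programmer might hand to the assistant:

```python
# Vague prompt:    "Write a function to encrypt a string."
# Specific prompt: the stub below, given to the assistant as context.

def encrypt_message(key: bytes, plaintext: bytes):
    """Encrypt `plaintext` with AES-GCM via the `cryptography` library.

    Requirements: generate a fresh random 12-byte nonce per call and
    return (nonce, ciphertext, tag) so the receiver can verify integrity.
    """
    ...  # the AI assistant fills this in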
So, can the AI programming assistant be used?
The answer is: yes, but with caution! Programmers cannot treat it as a "panacea", nor blindly trust it. When using an AI assistant, programmers should stay vigilant and carefully review the generated code for security vulnerabilities.
Paper address: https://arxiv.org/pdf/2211.03622
In short, AI programming assistants are not omnipotent. Programmers should use them with caution and always be vigilant about code security. Over-reliance on AI assistants can pose serious security risks, so code review and security awareness are still crucial.