In recent years, AI programming assistants have quickly grown popular for their promise of improved efficiency and have become a powerful tool for many programmers. However, a recent study from Stanford University reveals the potential risks of AI-assisted programming and reminds us not to rely on these tools blindly. The editor of Downcodes walks you through the study's findings and conclusions.
AI programming assistants claim to help programmers write code and improve efficiency, and many developers treat them as a "savior", reaching for them every day. A study from Stanford University, however, pours cold water on these enthusiastic fans: an AI programming assistant may be a "security nightmare"!
Researchers at Stanford University asked 47 programmers to complete five security-related programming tasks covering three languages: Python, JavaScript, and C. The result: participants who used an AI assistant wrote significantly less secure code than those who did not!
This is not alarmist. An AI programming assistant is like an unreliable intern: it can produce code that looks correct while knowing nothing about security. In the encryption and decryption task, for example, the code generated by the AI assistant encrypted the message correctly but did not return the necessary authentication tag. That is like sealing a package but giving the recipient no way to check whether the seal is intact: the data can still be decrypted, but tampering can no longer be detected, which greatly reduces security.
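To make the flaw concrete, here is a minimal sketch in Python using the cryptography library's AES-GCM API. It illustrates the kind of mistake the study describes rather than reproducing code from the paper; the function names are my own. The insecure variant silently drops the 16-byte authentication tag, while the secure variant keeps it so decryption can verify integrity:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_insecure(key: bytes, plaintext: bytes) -> bytes:
    # The flaw described in the study: encryption itself works, but the
    # 16-byte GCM authentication tag is discarded, so the receiver can
    # never verify that the ciphertext was not tampered with.
    nonce = os.urandom(12)
    ct_with_tag = AESGCM(key).encrypt(nonce, plaintext, None)
    return nonce + ct_with_tag[:-16]  # tag dropped: looks fine, is not

def encrypt_secure(key: bytes, plaintext: bytes) -> bytes:
    # Correct pattern: keep the nonce and ciphertext+tag together so that
    # decryption can check the tag and reject a modified ciphertext.
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_secure(key: bytes, blob: bytes) -> bytes:
    nonce, ct_with_tag = blob[:12], blob[12:]
    # Raises cryptography.exceptions.InvalidTag on tampering.
    return AESGCM(key).decrypt(nonce, ct_with_tag, None)
```

With `key = AESGCM.generate_key(bit_length=128)`, `decrypt_secure` round-trips the output of `encrypt_secure`, and changing even a single byte of the blob raises `InvalidTag`. The output of `encrypt_insecure` can never be verified at all.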
What's even worse, programmers who used an AI assistant were more likely to believe the code they wrote was secure. The tool lulls them into a false sense of safety, so they turn a blind eye to the vulnerabilities in their code, and that overconfidence often leads to more serious security problems.
The researchers also found that the prompts programmers give the AI assistant directly affect the security of the resulting code. If the programmer describes the task precisely and provides some vetted helper functions, the code the AI assistant produces is safer. But a programmer who blindly relies on the assistant, or pastes its output directly into the project, is effectively copying security vulnerabilities into their own code, with predictable results. The sketch below illustrates the difference.
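As a hypothetical illustration (the prompts, function names, and comments here are my own, not taken from the paper), compare a vague request with one that pins down the security requirement and hands the assistant a vetted building block. Python's Fernet API bundles encryption with authentication, so steering the assistant toward it avoids the missing-tag flaw shown above:

```python
from cryptography.fernet import Fernet, InvalidToken

# Vague prompt: "write code to encrypt a message" -- the assistant may
# reach for a raw, unauthenticated cipher, as in the study's findings.
#
# Precise prompt: "encrypt `message` with `key`; the result MUST be
# authenticated so decryption can detect tampering; use the helper below."

def encrypt_message(key: bytes, message: bytes) -> bytes:
    # Fernet combines AES-CBC with an HMAC, so every token it produces
    # carries a built-in integrity check.
    return Fernet(key).encrypt(message)

def decrypt_message(key: bytes, token: bytes) -> bytes:
    # Raises InvalidToken if the token was modified -- exactly the
    # integrity check the insecure AI-generated solutions were missing.
    return Fernet(key).decrypt(token)

if __name__ == "__main__":
    key = Fernet.generate_key()
    token = encrypt_message(key, b"meet at dawn")
    assert decrypt_message(key, token) == b"meet at dawn"
    tampered = bytearray(token)
    tampered[-1] ^= 1  # flip one bit of the token
    try:
        decrypt_message(key, bytes(tampered))
    except InvalidToken:
        print("tampering detected")
```

The point is not this particular API but the habit: name the security property you need in the prompt, supply reviewed helpers where you can, and verify the behavior yourself instead of trusting the assistant's output on sight.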
So, should you still use an AI programming assistant?
The answer: yes, but with care! Programmers cannot treat it as a panacea, let alone trust it blindly. When using an AI assistant, stay vigilant and review the generated code carefully to catch security vulnerabilities before they ship.
Paper address: https://arxiv.org/pdf/2211.03622
All in all, the AI programming assistant is a double-edged sword: it can improve efficiency, but it can also introduce security risks. Programmers should use it with caution, stay vigilant at all times, and avoid over-reliance, so as to reap its benefits without inheriting its risks. The editor of Downcodes reminds you: secure coding always comes first!