Google Brain co-founder Andrew Ng recently conducted an interesting experiment to test how ChatGPT responds to extreme instructions. He tried to induce the model to trigger global thermonuclear war and to drastically reduce carbon emissions, probing its safety and ethical boundaries. The result: ChatGPT could not be tricked into carrying out either task, a finding that prompts further reflection on the safety of artificial intelligence.

Describing the process, Ng said he attempted to get GPT-4 to perform these potentially lethal missions but ultimately failed to deceive it. In his view, worrying that AI poses this kind of danger is unrealistic.
Ng's results suggest that, at least at this stage, ChatGPT has meaningful safety mechanisms for handling extreme instructions. This offers a useful reference point for research on AI safety and a reminder to remain cautiously optimistic about the technology's development; deeper research and stricter safety measures will still be needed going forward.