Artificial intelligence technology is developing rapidly, but its reliability and security are drawing increasing scrutiny. Eerke Boiten, a professor of cybersecurity at De Montfort University, has questioned the reliability of current AI systems, arguing that they pose risks in critical applications. Generative AI and large language models built on large neural networks, such as ChatGPT, are so complex, he points out, that their behavior is difficult to predict and verify, which makes them risky in any application where accountability matters.
Professor Boiten's position is blunt: existing AI systems have fundamental shortcomings in manageability and reliability, and they should not be used in serious applications.
Professor Boiten notes that most current AI systems rely on large neural networks, particularly generative AI and large language models such as ChatGPT. These systems work in a deceptively simple way: although each neuron's behavior is determined by a precise mathematical formula, the behavior of the network as a whole is unpredictable. This "emergent" behavior makes such systems hard to manage and verify effectively.
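The contrast is easy to make concrete. The minimal Python sketch below (the two-layer architecture, weights, and inputs are all hypothetical, chosen only for illustration) shows that each neuron is just a weighted sum passed through an activation function, yet nothing in those formulas tells us what a network of billions of such neurons will do on a given input:

```python
import math

def neuron(inputs, weights, bias):
    """One neuron: a precise, fully specified formula.
    output = sigmoid(w . x + b)"""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# A tiny two-layer network built from such neurons.
# Every individual step is exact and deterministic...
hidden = [neuron([0.5, -1.2], w, b)
          for w, b in [([0.8, 0.3], 0.1), ([-0.4, 0.9], -0.2)]]
output = neuron(hidden, [1.1, -0.7], 0.05)
print(output)  # ...but with billions of neurons and learned weights,
               # the network's overall behavior cannot be read off
               # from the formulas; it must be observed empirically.
```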
From a software engineering perspective, Professor Boiten emphasizes that AI systems lack compositionality and cannot be developed modularly the way traditional software is. With no clear internal structure, developers cannot divide and manage complexity, develop incrementally, or test components in isolation. Verification of an AI system is therefore limited to testing it as a whole, which is extremely difficult because the input and state spaces are far too large to cover.
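To illustrate the point (all functions here are hypothetical stand-ins, not anything from Boiten's own work), a traditional pipeline decomposes into pieces that can each be verified separately, while a neural model exposes no such seams:

```python
# Traditional software composes from parts that can be verified separately.
def normalize(text: str) -> str:
    return " ".join(text.lower().split())

def tokenize(text: str) -> list[str]:
    return text.split(" ")

def pipeline(text: str) -> list[str]:
    return tokenize(normalize(text))

# Each unit can be tested in isolation, and correctness composes:
assert normalize("  Hello   World ") == "hello world"
assert tokenize("hello world") == ["hello", "world"]

# A neural model offers no such decomposition. There is only one
# observable interface, so every test is a whole-system test:
def model(prompt: str) -> str:   # stand-in for an opaque LLM call
    ...

# assert model("2+2=") == "4"   # holds for this one input; says nothing
#                               # about the astronomically many others
```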
In addition, erroneous behavior in AI systems is difficult to predict and to fix. Even when errors are found during training, retraining does not guarantee that they will be corrected, and it may introduce new problems. Professor Boiten therefore argues that current AI systems should be avoided in any application where responsibility matters.
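One practical consequence, sketched below under assumed interfaces (the `old_model` and `new_model` functions and the test prompts are purely illustrative), is that after retraining a team can at best check empirically for regressions on known cases; it cannot prove that a fix holds in general:

```python
# A regression check after retraining: the best available substitute
# for a proof that a fix worked. All names and prompts are illustrative.
known_cases = {
    "2+2=": "4",                   # a previously fixed error
    "capital of France?": "Paris",
}

def check_regressions(old_model, new_model):
    for prompt, expected in known_cases.items():
        if old_model(prompt) == expected and new_model(prompt) != expected:
            print(f"regression on {prompt!r}: retraining broke a fixed case")
    # Passing this check only covers the listed prompts; the retrained
    # model may still behave differently anywhere outside them.
```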
Professor Boiten has not given up hope entirely, however. Although current generative AI may have reached a bottleneck, he believes that combining symbolic AI with intuition-based (neural) AI could still yield more reliable systems in the future. Such hybrid systems might produce explicit knowledge models or confidence levels, enhancing AI's reliability in practical applications.
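As a rough sketch of how a confidence level might be used in practice (the `predict` interface and the 0.9 threshold are assumptions for illustration, not a specific proposal from Boiten), a system that reports its own confidence can abstain rather than answer when it is unsure:

```python
# A hybrid system that reports how sure it is and abstains below
# a threshold, deferring to a human or a symbolic check instead.
from typing import Optional

def predict(prompt: str) -> tuple[str, float]:
    """Hypothetical hybrid model returning (answer, confidence)."""
    return "Paris", 0.97

def answer_or_abstain(prompt: str, threshold: float = 0.9) -> Optional[str]:
    answer, confidence = predict(prompt)
    if confidence >= threshold:
        return answer      # confident enough to act on
    return None            # abstain and escalate

print(answer_or_abstain("capital of France?"))  # -> "Paris"
```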
Professor Boiten's views have prompted serious reflection on the reliability and appropriate scope of artificial intelligence, and they point toward a direction for its future development. As we pursue advances in AI technology, we must take its security and reliability seriously, so that it can be properly controlled and managed in real applications and its potential risks kept in check.