Recently, the editor of Downcodes learned of a worrying AI safety incident: in a Facebook mushroom enthusiast group, an AI agent named FungiFriend gave dangerously wrong advice on eating Sarcosphaera coronaria, a highly toxic mushroom that accumulates high concentrations of arsenic, and even described several cooking methods in detail. The incident triggered widespread public concern about the safety of AI applications and exposed the risks and challenges of deploying AI in sensitive areas.
When asked how to cook Sarcosphaera coronaria, FungiFriend not only incorrectly stated that the mushroom was edible, but also described various cooking methods in detail, including frying and stewing. In fact, this mushroom has caused fatal poisonings.
Rick Claypool, director of research at the consumer safety organization Public Citizen, pointed out that using AI to automatically identify edible and poisonous mushrooms is a high-risk activity, and current AI systems are not yet able to accurately complete this task.
This is not an isolated case. Over the past year, AI applications have made a string of serious mistakes in the field of food safety:
- An AI application recommended making sandwiches containing mosquito repellent.
- Another AI system provided a recipe containing chlorine and even absurdly suggested eating stones.
- Google AI has claimed that dogs can play sports and suggested using glue to make pizza.
Despite these frequent errors, American companies continue to roll out AI customer service at a rapid pace. This speed-over-quality approach reflects a focus on cost savings at the expense of user safety. Experts urge caution when applying AI in specific fields, especially safety-critical ones, to ensure the accuracy and reliability of the information provided.
AI technology is developing rapidly, but its safety and reliability cannot be ignored. This incident is another reminder that when AI is used in high-risk areas, safety must come first, backed by stronger oversight and better technology to prevent similar incidents. The editor of Downcodes calls on all parties to work together to ensure that AI technology serves people safely and reliably.