The rapid development of artificial intelligence has brought many conveniences, but it also poses serious risks. Recently, an AI agent named "FungiFriend" gave dangerous mushroom-cooking advice in a Facebook mushroom enthusiast group, narrowly averting a disaster and highlighting the safety risks of applying AI in high-stakes areas. The incident has renewed public concern about AI safety and prompted a reexamination of where the technology should and should not be used.
The incident, which occurred in a Facebook mushroom enthusiast group, has once again raised concerns about the safety of AI applications. According to 404Media, an AI agent named "FungiFriend" appeared in the "Northeast Mushroom Identification and Discussion" group, which has 13,000 members, and offered false advice carrying potentially fatal risks.
When asked how to cook Sarcosphaera coronaria, a mushroom that accumulates high concentrations of arsenic, FungiFriend not only wrongly described it as "edible" but also suggested a variety of cooking methods, including frying, stir-frying, and stewing. In fact, this mushroom has been linked to fatal poisonings.
Rick Claypool, research director at the consumer advocacy organization Public Citizen, pointed out that using AI to automate the identification of edible versus toxic mushrooms is a "high-risk activity," and that current AI systems are not capable of performing this task accurately.
This is not an isolated case. Over the past year, AI applications have repeatedly made serious mistakes involving food safety:

- One AI application recommended a sandwich recipe containing mosquito repellent; another generated a recipe involving chlorine; one even suggested eating stones.
- Google's AI once claimed that "dogs can exercise" and suggested using glue to make pizza.
Despite these frequent AI errors, American companies continue to roll out AI customer service at a rapid pace. This "speed over quality" approach reflects an excessive focus on cost savings and insufficient regard for user safety. Experts have called for caution in deploying AI in specific domains, especially those involving safety, to ensure the accuracy and reliability of the information provided.
The "FungiFriend" incident is a wake-up call: while enjoying the convenience of AI, we must remain alert to its safety risks. Going forward, AI technology should be tested and regulated before deployment to ensure its reliability and security and to prevent similar incidents from recurring. Only then can the benefits of AI be fully realized without it becoming a source of harm.