New York City recently ran a pilot program aimed at improving subway safety using AI weapon scanners supplied by Evolv. The results of the one-month pilot were disappointing: the system detected no guns, flagged 12 knives that led to no arrests, and generated 118 false alarms that subjected passengers to unnecessary searches and delays. The outcome has raised public doubts about the technology's reliability and practical value, and highlights the challenges and risks of applying AI in public safety.
The recently released results are striking: over the course of the subway trial, the scanning equipment from the technology company Evolv did not find a single gun, though it did flag 12 knives.
Image note: AI-generated illustration, licensed from Midjourney.
The trial began at the end of July and lasted just one month, with disappointing results. According to CBS New York, the 12 knife detections led to zero arrests, which drew criticism from Diane Ackerman, an attorney with the Legal Aid Society. She said the absence of arrests suggests the knives were likely legal to carry, and that the searches they triggered were unnecessary.
The scanners also produced 118 false alarms, subjecting many passengers to unnecessary stops and searches. Even a CBS reporter triggered false alarms while walking through the scanners, once in 2022 and again in 2023. The system's reliability is clearly in question.
Despite the poor results, the city has tried to defend the project. An NYPD spokesman noted that no shootings occurred at subway stations using Evolv technology during the trial. That may be true, but according to New York Times reporting, violent crime on the subway is rare to begin with, occurring roughly once per million rides. Moreover, the technology was deployed for only one month at 20 of the system's 472 stations, making any deterrent effect difficult to demonstrate.
The failure of this AI scanning pilot has not only disappointed residents but also exposed real problems with applying such technology to public safety. Whether the city will continue to invest in it remains an open question.
Key points:
No guns were detected, though 12 knives were flagged, none of which led to an arrest.
118 false alarms subjected passengers to unnecessary stops and searches.
City officials argued the technology has a deterrent effect, but the data offer little support for that claim.
The pilot's failure exposed flaws in the technology itself and prompted broader reflection on the use of AI in public safety. Future deployments of similar systems will need to be more cautious, with closer attention to reliability and accuracy, to avoid unnecessary disruption and harm to the public.