In recent years, generative AI has developed rapidly, bringing many conveniences to daily life but also introducing new security risks. Recently, IBM researchers revealed a new fraud technique that uses generative AI tools to hijack voice calls, and it has attracted widespread attention. The scheme relies on low-cost AI tools to imitate another person's voice with high fidelity, making it easy to win a victim's trust and carry out fraud. It poses a serious threat to financial institutions and other organizations that rely on phone-based identity verification and deserves close attention.
According to the researchers, the method is relatively simple. Using inexpensive AI tools, scammers can convincingly impersonate another person's voice and hijack an ongoing conversation to steal funds and other sensitive information. This has alarmed organizations that verify identity over the phone, such as financial institutions. The report advises anyone who receives a suspicious call to paraphrase and repeat back what was said in their own words in order to verify its accuracy.
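To make the threat concrete, the short Python sketch below simulates the hijacking logic on plain-text transcripts only. It is an illustration under assumed details, not IBM's actual tooling: the trigger phrase, the hijack_utterance function, and the substituted account number are all hypothetical, and a real attack of the kind described would chain speech-to-text, a generative model, and voice cloning over live audio rather than operating on text.

import re

# Illustrative only: a real attack would operate on live audio, chaining
# speech-to-text, a generative language model, and voice cloning. This
# sketch works on plain-text transcripts to show the interception logic.

TRIGGER = "bank account"                  # hypothetical trigger phrase
ATTACKER_ACCOUNT = "0000 1111 2222 3333"  # made-up substitute number

def hijack_utterance(transcript: str) -> str:
    """Pass benign utterances through; rewrite ones that mention the trigger.

    When the trigger phrase appears, any long digit sequence in the utterance
    is replaced with the attacker-controlled account number. In a real attack
    the rewritten text would be re-synthesized in the speaker's cloned voice.
    """
    if TRIGGER not in transcript.lower():
        return transcript
    return re.sub(r"\d[\d\s-]{6,}\d", ATTACKER_ACCOUNT, transcript)

if __name__ == "__main__":
    utterances = [
        "Sure, I can stay on the line.",
        "Please send the refund to my bank account 4421 9083 7765 1204.",
    ]
    for said in utterances:
        print("speaker said  :", said)
        print("listener hears:", hijack_utterance(said))

This also shows why the suggested countermeasure helps: a pipeline that merely swaps a fixed phrase cannot respond coherently when the listener paraphrases what was said and asks for the details to be repeated.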
Facing the new challenges brought by AI technology, greater vigilance and stronger safeguards are needed. The public should raise their security awareness, treat any suspicious call with caution, and learn to recognize and guard against these new scams. At the same time, the institutions concerned should explore and adopt more advanced security technologies to address the risks introduced by AI and protect user information. Only in this way can we make good use of AI technology while effectively avoiding its potential risks.