A Quarter of Survey Participants Reported Fraud Attempts
A British bank has issued an urgent warning that millions of people around the world could fall victim to AI-driven scams that clone their voices.
In a statement, Starling Bank, an online-only lender, explained that fraudsters are using artificial intelligence to mimic a person’s voice using as little as three seconds of audio, often sourced from videos posted online.
The bank stated that scammers can then identify the victim's friends and family members and use the AI-cloned voice to stage phone calls asking for money, a tactic with the potential to defraud millions.
According to a survey conducted last month with over 3,000 adults by Starling Bank and Mortar Research, more than a quarter of participants reported being targeted by AI voice cloning scams in the past 12 months.
The survey also revealed that 46% of respondents were unaware of such scams, while 8% said they would send money if a friend or family member asked, even if the call seemed suspicious.
Lisa Grahame, Chief Information Security Officer at Starling Bank, stated: “People often share content online that contains their voice recordings, never imagining it could make them vulnerable to fraudsters.”
The bank encourages customers to set up a safe phrase with their contacts—a simple, random phrase that’s easy to remember but distinct from other passwords—which can be used to verify their identity during phone calls.
The bank advises against sharing the safe phrase by text message, which could expose it to fraudsters; if it must be shared that way, the message should be deleted as soon as it has been read.
As AI technology becomes increasingly adept at imitating human voices, concerns are rising about its potential to harm people, particularly by assisting criminals in gaining access to bank accounts and spreading misinformation.
Earlier this year, OpenAI, the company behind the AI chatbot ChatGPT, revealed Voice Engine, a tool that can replicate a person’s voice from a short audio sample. However, it withheld the tool from public release, citing concerns about the potential misuse of AI-generated voices.