This article was published on September 2, 2019

Fraudsters deepfake CEO’s voice to trick manager into transferring $243,000

It’s already getting tough to discern real text from fake, and genuine video from deepfake. Now, it appears the use of fake voice tech is on the rise too.

That’s according to the Wall Street Journal, which reported what is thought to be the first case of AI-based voice fraud, a new spin on vishing (short for “voice phishing”), that cost a company $243,000.

In a sign that audio deepfakes are becoming eerily accurate, criminals used commercially available voice-generating AI software to impersonate the boss of a German parent company that owns a UK-based energy firm.

They then tricked the UK firm’s chief executive into urgently wiring the $243,000 to a Hungarian supplier within an hour, with assurances that the transfer would be reimbursed immediately.

The UK CEO, hearing his boss’s familiar slight German accent and speech patterns, reportedly suspected nothing.

Not only was the money never reimbursed; the fraudsters also posed as the German CEO to request a second urgent transfer. This time, however, the British CEO refused to make the payment.

As it turns out, the funds the CEO transferred to Hungary were eventually moved to Mexico and other locations. Authorities have yet to identify the culprits behind the cybercrime operation.

The firm was insured by Euler Hermes Group, which covered the entire cost of the payment. The incident reportedly happened in March; the names of the companies and individuals involved were withheld, citing the ongoing investigation.

AI-based impersonation attacks are just the beginning of what could be major headaches for businesses and organizations in the future.

In this case, the voice-generation software successfully imitated the German CEO’s voice. But this is unlikely to remain an isolated instance of crime perpetrated using AI.

On the contrary, such attacks are bound to increase in frequency if social engineering of this kind proves successful.

As voice-mimicking tools become more realistic, so does the likelihood of criminals using them to their advantage. By feigning an identity over the phone, a threat actor can easily access otherwise private information and exploit it for ulterior motives.

Back in July, the Israel National Cyber Directorate issued a warning about a “new type of cyber attack” that leverages AI to impersonate senior enterprise executives, instructing employees to perform transactions such as money transfers and other malicious activity on the network.

The fact that an AI-enabled crime of this nature has already claimed its first victim in the wild should be cause for concern, as it complicates matters for businesses that are ill-equipped to detect such attacks.

Last year, Pindrop, a cybersecurity firm that designs anti-fraud voice software, reported a 350 percent jump in voice fraud from 2013 through 2017, with 1 in 638 calls found to be synthetically created.

To safeguard companies from economic and reputational fallout, it’s crucial that voice instructions be verified via a follow-up email or another independent channel.
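As a rough illustration of what that out-of-band check could look like inside an internal payments tool, here is a minimal Python sketch. Everything in it (the PaymentRequest class, the channel names, the approval rule) is hypothetical and not drawn from the article; a real system would hook into actual email confirmation, a callback to a known number, or a second approver.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a payment requested by voice is held until it is
# confirmed over at least one channel independent of the original request.
@dataclass
class PaymentRequest:
    amount: float
    beneficiary: str
    requested_via: str                       # e.g. "phone"
    confirmations: set = field(default_factory=set)

    def confirm(self, channel: str) -> None:
        """Record a confirmation received over some channel."""
        self.confirmations.add(channel)

    def is_approved(self) -> bool:
        # A voice instruction alone is never enough: require a matching
        # confirmation from a different channel (email, callback, etc.).
        other_channels = self.confirmations - {self.requested_via}
        return len(other_channels) >= 1

request = PaymentRequest(amount=243_000, beneficiary="Hungarian supplier",
                         requested_via="phone")
print(request.is_approved())   # False: only the voice instruction so far

request.confirm("email")       # follow-up email from the known address
print(request.is_approved())   # True: verified out of band
```

The design choice here is simply that no single channel, however convincing the voice on it sounds, can authorize a transfer on its own.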

The rise of AI-based tools has its upsides and downsides. On one hand, they open room for exploration and creativity. On the other, they enable crime, deception, and (unfortunately) remarkably competent fraud.
