
As payment fraud proliferates, governments, banks and tech companies disagree on who should cover consumer losses

Financial Times »

Audio, video and images generated by AI — so-called deepfakes — are one of the factors behind the rise in payment fraud. Accounting and consulting firm Deloitte estimates that AI-generated content contributed to more than $12bn in fraud losses in the US last year, and could reach $40bn by 2027.

As the problem has grown in a range of countries, so has the debate between government, banks and technology companies over who should foot the bill when the money cannot be recovered.

In the UK, the government ruled that banks are liable for up to £85,000 in losses. In Australia, more of the blame may be pinned on tech companies.

In the US, the question of who must pay remains unanswered — and is becoming politically fraught. Some senior Democrats want the banks to take more responsibility, and the Consumer Financial Protection Bureau is investigating Zelle, an account-to-account payments system owned by a consortium of large US banks which has been used by scammers.

 

What are deepfakes and how to defend against generative AI deception

Rapid advances in artificial intelligence (AI) have unleashed a new threat: deepfakes. As powerful and effective models become more easily accessible, deepfakes become a present danger for corporations, small businesses, and individuals alike.

IBM security guru Jeff Crume explores the technology and its risks, and offers a few mitigation strategies to help us stay on top of this rapidly evolving landscape.



A deepfaked CFO on a video call convinced an employee to wire US$25 million to an attacker

London-based architecture and design firm Arup Group was defrauded of some US$25 million (HK$200m) after scammers used AI-generated “deepfakes” to pose as the group’s CFO and request transfers from an employee to bank accounts in Hong Kong.

Arup, which employs about 18,000 people globally, has annual revenues of more than £2bn.

Cheng Leng and Chan Ho-him, writing for Financial Times »

The case highlights the threat posed by deepfakes — hyper-realistic video, audio or other material generated using artificial intelligence — when used by cyber criminals to target companies or governments.

“We can confirm that fake voices and images were used,” the company said, declining to give details because the incident was still being investigated. “Our financial stability and business operations were not affected and none of our internal systems were compromised,” it said.

Hong Kong police acting senior superintendent Baron Chan told local media in February that a member of staff at the targeted company had received a message purporting to be from the UK-based chief financial officer regarding a “confidential transaction”.

CNN »

According to police, the worker had initially suspected he had received a phishing email from the company’s UK office, as it specified the need for a secret transaction to be carried out. However, the worker put aside his doubts after the video call because other people in attendance had looked and sounded just like colleagues he recognized.

He subsequently agreed to send a total of 200 million Hong Kong dollars — about $25.6 million. The amount was sent across 15 transactions, Hong Kong public broadcaster RTHK reported, citing police.

“Deepfake” normally refers to fake videos that have been created using artificial intelligence (AI) and look extremely realistic.

Elsewhere » Fortune | CFO | CFO Dive | Architects’ Journal

Social Engineering » How bad guys hack users

Humans are often the weakest link in a security system. So why would the bad guys attempt to hack into a complex system when they can go after the weakest link – you?

Jeff Crume, IBM security guy and distinguished engineer, describes the many methods that hackers use that you should know about so you can protect yourself.



Fraudster used deepfake AI voice cloning to convince a bank manager to transfer US$35 million

Thomas Brewster, writing for Forbes »

In early 2020, a branch manager of a Japanese company in Hong Kong received a call from a man whose voice he recognized—the director of his parent business. The director had good news: the company was about to make an acquisition, so he needed to authorize some transfers to the tune of $35 million. A lawyer named Martin Zelner had been hired to coordinate the procedures and the branch manager could see in his inbox emails from the director and Zelner, confirming what money needed to move where. The manager, believing everything appeared legitimate, began making the transfers.

What he didn’t know was that he’d been duped as part of an elaborate swindle, one in which fraudsters had used “deep voice” technology to clone the director’s speech.

“Audio and visual deep fakes represent the fascinating development of 21st century technology yet they are also potentially incredibly dangerous posing a huge threat to data, money and businesses,” says Jake Moore, a former police officer with the Dorset Police Department in the U.K. and now a cybersecurity expert at security company ESET. “We are currently on the cusp of malicious actors shifting expertise and resources into using the latest technology to manipulate people who are innocently unaware of the realms of deep fake technology and even their existence.

“Manipulating audio, which is easier to orchestrate than making deep fake videos, is only going to increase in volume and without the education and awareness of this new type of attack vector, along with better authentication methods, more businesses are likely to fall victim to very convincing conversations.”

Elsewhere » Gizmodo

© 2024 Downshift
