Account takeover scams saw a dramatic increase of 250 per cent last year. Victims of these scams not only face financial losses, averaging about $180 per incident; 40 per cent also suffer subsequent identity theft. The use of deepfake technology and generative AI is also on the rise, compounding the threat landscape.
In the contemporary digital world, the proliferation of deepfake technology and generative AI heralds an era fraught with online scam challenges, notably within the financial sector in Asia. Economic ramifications of deepfake scams: the global impact of impersonation scams can be far-reaching and expensive.
Crooks are circulating AI-generated deepfake videos of Prince William and UK prime minister Keir Starmer on Facebook and Instagram to dupe viewers into scam cryptocurrency investments.
Singapore authorities are warning businesses of a rise in scam video calls which use deepfake AI to impersonate business bosses with the aim of duping employees into transferring funds to criminal accounts.
“Using AI-driven tactics such as deepfake selfies and synthetic identities, organised fraudsters are testing traditional security measures like never before.” It says future-proofing means adopting AI-driven validation and multi-layer defences to combat deepfakes, synthetic identities, and emerging threats.
The business email compromise (BEC) scam continues to rear its ugly head at the enterprise, with the global pandemic creating even more avenues through which cyber attackers can steal company money. At the heart of BEC and other scams is impersonation.
In recent years, the banking sector in the Association of Southeast Asian Nations (ASEAN) has witnessed a significant surge in scams and fraud activities. DBS isn’t the only bank making news for phishing scams. Losses exceeded S$13 million (US$9.59 million).
In fact, they’ve demonstrated a level of creativity and problem solving that has made digitally enabled crimes, such as scams, harder and harder to stop.
As many as 53 per cent of Brits have either never heard of the term deepfake or misunderstood its meaning, British bank Santander has revealed, as part of a new initiative to warn consumers about the dangers of AI deepfake scams.
Fraudsters are exploiting GenAI tools such as large language models (LLMs), voice cloning, and deepfakes to carry out increasingly sophisticated scams that are harder to detect and prevent. Romance fraud: Deepfake images and videos help fraudsters create convincing personas to manipulate victims emotionally and financially.
In the wrong hands, however, it can be incredibly harmful, especially with the emergence of deepfakes. A new survey from iProov, the biometric solutions provider, has revealed attitudes on the threat of generative AI and deepfakes across the UK, US, Brazil, Australia, New Zealand and Singapore.
This includes a global, fourfold increase in AI-driven deepfake scams. The integration of Sumsub’s compliance solutions will help financial institutions deal with the growing threat of fraud and financial crime.
In its sixth edition, the 2025 Identity Fraud Report found that attacks involving deepfakes happened every five minutes in 2024, and digital document forgeries increased by 244 per cent year over year. Another way fraudsters are abusing financial firms is through AI-assisted deepfakes.
Sophisticated scams dominate the fraud landscape: BioCatch’s report uncovers a startling surge in financial cybercrime in Asia-Pacific. Scams account for 54 percent of all cases, and voice scams rose 200 percent from the previous year.
In response to the growing threat of deepfake technologies, Ant International has also developed an advanced anti-deepfake electronic Know Your Customer (e-KYC) tool. To ensure reliability, Ant International maintains safeguards with human intervention ready to address discrepancies, avoiding over-reliance on AI.
The UK’s increasing fraud and scam problem has prompted new regulations mandating automatic reimbursement for APP fraud victims. The problem is not going away: with a wealth of AI and deepfake technology at their fingertips, even the most novice of criminals can perpetrate sophisticated fraud.
In Myanmar and other Southeast Asian countries, cyber scam rings target victims with fraudulent schemes like fake jobs or investments. Fraud networks, however small they may seem right now, will gain prominence, just like AI-powered deepfakes. The damage of fraud rings is much more significant than that of individual scammers.
As fraud continues to rise, especially with the emergence of AI-powered scams, is this new regulation enough to tackle the ever-evolving threat of financial fraud? As scams become more advanced, even the most cautious consumers are falling victim. AI is a double-edged sword in fraud prevention.
However, regulators have been expanding accountability beyond financial institutions to the digital channels used in fraud scams, such as telecommunications and social media. The use of deepfakes and chatbots by fraudsters poses significant challenges, while real-time fraud detection systems are being enhanced to counter these threats.
Banks are coming under an increasingly intense barrage of cybersecurity attacks, and many of these now use deepfakes and generative AI to make the initial breach. As deepfakes proliferate, a trickle of lawsuits has the potential to become a flood – and one which absolutely could sink the banks, says Michael Marcotte, CEO of artius.iD.
You can find Part 1 on impersonation scams here and Part 2 on money mules here. This is the third piece in an ongoing conversation between BioCatch Global Advisor Seth Ruden and BioCatch Threat Analyst Justin Hochmuth about how various fraud trends impact smaller financial institutions.
Bots and deepfakes: The rise of bots using deepfake technology to create convincing fake profiles poses an additional challenge. “Bots and deepfake tech are enabling criminals to automate complex scams that used to take a lot more effort, making them incredibly efficient and dangerous,” said Dan Yerushalmi, CEO of AU10TIX.
Slipping through the cracks: Increasingly sophisticated tactics, such as fake invoices, deepfake media, and phishing scams – driven by generative AI – have made attacks more convincing and harder to detect.
Getting a grip on identity fraud: Sumsub reports that the growing prevalence of AI-driven deepfakes, digital forgeries and identity ‘spoofing’ to obtain valuable personal and business data is impacting industries across the board.
Another ethical concern outlined in the report is the potential of gen AI to produce convincing deepfake content, which can lead to misinformation. Furthermore, gen AI may be used for malicious purposes, and may amplify the scale, speed and sophistication of cyberattacks and scams.
From high-profile ransomware attacks and terrorist financing to scams that wiped out millions in savings, global crypto crime has become an urgent concern. In Asia, investment scams, Ponzi schemes, and romance fraud (also known as “pig butchering” scams) continue to target unsuspecting retail investors.
The UAE Cyber Security Council recently reported that more than 50% of individuals fell victim to a phishing website, and nearly one in five were targeted with a social media scam. There is no escape.
A subsidiary of automaker Toyota has lost millions of dollars to a vendor payment scam, according to SC Media UK. It’s unclear exactly how much money the company lost, and cyber experts say it’s not certain whether such “deepfake” tactics will become the norm.
Though the tactic is unusual, cybersecurity experts warn the strategy could signal an increasing use of so-called “deepfake” technology to infiltrate systems and steal company cash. Australia is now seeking comment from industry experts to address modern cyber threats in both the private and public sector.
The PYMNTS January 2020 Digital Identity Tracker examines the issues, as well as advances in mobile biometric security, “deepfake” detection and more. Into the fray comes new technology like “liveness detection” that supercharges facial recognition biometrics, making selfie scams and the like increasingly difficult.
With inherent portability and connectivity, mobile devices are prime targets for exploitation, and contemporary cybercriminals are leveraging these characteristics to perpetrate more fraud than ever before, including phishing scams, malicious apps and social engineering tactics explicitly tailored for mobile platforms.
Corsound AI Corsound AI utilizes innovative technology to verify customers’ identities for financial institutions, leveraging over 200 patents to detect AI scams and voice fraud.
For instance, fraudsters now leverage innovative technologies to create deepfakes, bypassing traditional identity verification methods like document ID checks and biometrics. Additionally, customer education remains a hurdle, with some customers perceiving newer security measures, like selfie verification, as potential scams.
Other AI applications include enhancements to traditional scams, such as phishing and social engineering attacks. In one reported case, a company was duped into paying for a purported new headquarters building in an email scam. By personalizing approaches and crafting tailored messages, fraudsters can appear more convincing than ever to unsuspecting targets.
Leflambe continues: “In parallel, fraudsters leverage new technology very quickly (for instance, using deepfakes to circumvent liveness checks) and compliance teams must remain very vigilant about new controls not being outdated as a result. Keeping up with this can be a true headache.”
Risks include deepfakes, sophisticated cyber hacks, and the use of generative AI to create malware. AI: Fighting the emerging threat Two-thirds (66 per cent) of financial industry respondents think the use of AI by fraudsters and other criminals poses a growing cybersecurity threat.
How can the technology driving novel and increasingly sophisticated fraud activities be harnessed to combat scams and cyber risks cost-efficiently and effectively? AI can generate synthetic identities, create deepfakes and drive other persuasive and difficult-to-detect social engineering tactics.
Companies have deployed liveness checks and other robust measures to help verify identities and combat fraud, and they will have to continue to upgrade their defenses to stay ahead as scams like deepfakes become more prominent, Dinh predicted.
Is it even a regular Tuesday if someone in Singapore hasn’t fallen for a scam? It’s starting to feel like scams are as common as bubble tea outlets in this country: they’re everywhere, always popping up in new flavours, and somehow, people just keep going back for more. And let’s be real for a second.
Since the pandemic, fraud and scams have surged significantly, with mature markets like Singapore and Hong Kong facing increasingly complex challenges, including authorised push payment fraud and deepfakes. In countries like Cambodia, Laos and Myanmar, organised criminal groups, primarily from China, operate cyber scam centres.
Fighting deepfakes and fraudulent identities: Jumio delivers adaptive verification as AI fraud is projected to hit US$40 billion, part of its holistic approach to building identity trust. Hong Kong police recently arrested 27 individuals linked to a deepfake scam that swindled victims out of $46 million.
“In deploying AI in merchant services, enhanced fraud detection and prevention against new scamming schemes like deepfaking is as important as, if not more important than, increasing operational efficiency,” says Yang Peng, Chief Executive Officer of Ant International.