The report highlights a 76 per cent surge in phishing attempts and a 42 per cent increase in fake IDs and suspicious biometric transaction detections, signaling a shift towards more sophisticated fraud methods. Account takeover scams saw a dramatic increase of 250 per cent last year.
“Using AI-driven tactics such as deepfake selfies and synthetic identities, organised fraudsters are testing traditional security measures like never before.” The report says future-proofing means adopting AI-driven validation and multi-layer defences to combat deepfakes, synthetic identities, and emerging threats.
In recent years, the banking sector in the Association of Southeast Asian Nations (ASEAN) has witnessed a significant surge in scams and fraud activities. Recent phishing and fraudulent cases involving multiple banks in Singapore have highlighted the severity of this issue and the urgent need for banks to enhance their security measures.
Fraudsters are exploiting GenAI tools such as large language models (LLMs), voice cloning, and deepfakes to carry out increasingly sophisticated scams that are harder to detect and prevent. One example is romance fraud, where deepfake images and videos help fraudsters create convincing personas to manipulate victims emotionally and financially.
In the wrong hands, however, it can be incredibly harmful, especially with the emergence of deepfakes. A new survey from iProov, the biometric solutions provider, has revealed attitudes towards the threat of generative AI and deepfakes across the UK, US, Brazil, Australia, New Zealand and Singapore.
In its sixth edition, the 2025 Identity Fraud Report found that attacks involving deepfakes happened every five minutes in 2024, and that digital document forgeries increased by 244 per cent year over year. AI-assisted deepfakes are another way fraudsters are abusing financial firms.
Sophisticated scams dominate the fraud landscape. BioCatch’s report uncovers a startling surge in financial cybercrime in Asia-Pacific: scams account for 54 per cent of all cases, and voice scams rose 200 per cent from the previous year.
Whether through phishing emails and text messages or social media, the biggest cyber threat facing banking customers today is the rise of sophisticated social engineering frauds. There is no escape.
As fraud continues to rise, especially with the emergence of AI-powered scams, is this new regulation enough to tackle the ever-evolving threat of financial fraud? As scams become more advanced, even the most cautious consumers are falling victim. AI is a double-edged sword in fraud prevention.
Slipping through the cracks: increasingly sophisticated tactics, such as fake invoices, deepfake media, and phishing scams – driven by generative AI – have made attacks more convincing and harder to detect.
From high-profile ransomware attacks and terrorist financing to scams that wiped out millions in savings, global crypto crime has become an urgent concern. A majority of these funds were stolen through private key compromises, phishing schemes, and decentralised finance (DeFi) exploits. Asia-based crypto exchanges have not been immune.
With inherent portability and connectivity, mobile devices are prime targets for exploitation, and contemporary cybercriminals are leveraging these characteristics to perpetrate more fraud than ever before, including phishing scams, malicious apps and social engineering tactics explicitly tailored for mobile platforms.
Other AI applications include enhancements to traditional scams, such as phishing and social engineering attacks; in one case, an email scam defrauded a company of millions for a purported new headquarters building. By personalizing approaches and crafting tailored messages, fraudsters can appear more convincing than ever to unsuspecting targets.
Risks include deepfakes, sophisticated cyber hacks, and the use of generative AI to create malware. As for fighting the emerging threat, two-thirds (66 per cent) of financial industry respondents think the use of AI by fraudsters and other criminals poses a growing cybersecurity threat.
How can the technology driving novel and increasingly sophisticated fraud activities be harnessed to combat scams and cyber risks cost-efficiently and effectively? AI capabilities, such as machine learning algorithms, are being trained to identify and exploit vulnerabilities, automate phishing attacks and bypass traditional security measures.
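On the defensive side of that double-edged sword, one common building block is a lightweight classifier that scores incoming URLs for phishing risk. The sketch below is purely illustrative and not drawn from any report cited here; the features, training samples and example URLs are all hypothetical assumptions, using scikit-learn's logistic regression.

```python
# Illustrative sketch only: scoring URLs for phishing risk with a few
# hand-crafted lexical features and logistic regression. All feature
# choices and training samples below are hypothetical.
import re
from urllib.parse import urlparse

import numpy as np
from sklearn.linear_model import LogisticRegression

def url_features(url: str) -> list[float]:
    """Extract simple lexical signals that often correlate with phishing."""
    parsed = urlparse(url)
    host = parsed.netloc
    return [
        float(len(url)),                              # very long URLs are suspicious
        float(host.count(".")),                       # stacked subdomains
        float(bool(re.search(r"\d", host))),          # digits in the hostname
        float("@" in url),                            # '@' redirection trick
        float(parsed.scheme != "https"),              # no TLS
        float(any(w in url.lower()                    # common lure keywords
                  for w in ("login", "verify", "update"))),
    ]

# Tiny hypothetical training set: (url, label), 1 = phishing.
samples = [
    ("https://www.example-bank.com/accounts", 0),
    ("https://github.com/some/repo", 0),
    ("http://secure-login.example-bank.verify-id42.com/update", 1),
    ("http://203.0.113.7/login@confirm", 1),
]
X = np.array([url_features(u) for u, _ in samples])
y = np.array([label for _, label in samples])

model = LogisticRegression().fit(X, y)
risk = model.predict_proba(
    np.array([url_features("http://bank-verify.example.net/login")]))[0, 1]
print(f"phishing risk score: {risk:.2f}")  # closer to 1.0 = more suspicious
```

A real deployment would train on large labelled URL feeds and combine such lexical signals with reputation and behavioural data; this toy version only shows the shape of the approach.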
Online romance scams reach new levels of deceit with artificial intelligence (AI), turning hopeful hearts into targets for sophisticated fraud. Fraudsters can use AI to develop highly convincing scams, deceiving even the savviest people and businesses. So, how can a criminal so convincingly create a fake identity?
Understanding AI-enabled fraud As fraudsters invest in more sophisticated technology, including deepfakes, synthetic identities and automated phishing, the need for payments firms to adopt better AI-driven fraud detection systems grows.
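As a minimal sketch of what "AI-driven fraud detection" can mean in practice – a sketch only, with hypothetical features and synthetic data rather than anything drawn from the firms mentioned here – an unsupervised anomaly detector can flag transactions that deviate from a customer's normal behaviour:

```python
# Minimal sketch (hypothetical features, synthetic data): flagging anomalous
# transactions with an unsupervised IsolationForest, one common building
# block of AI-driven fraud detection.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic history of normal behaviour: [amount, hour_of_day, new_payee_flag]
normal = np.column_stack([
    rng.normal(80, 30, 500).clip(min=1),   # typical small amounts
    rng.integers(8, 22, 500),              # daytime activity
    rng.binomial(1, 0.05, 500),            # rarely pays someone new
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A large 3 a.m. transfer to a brand-new payee should be isolated quickly.
suspicious = np.array([[5000.0, 3, 1]])
print(detector.predict(suspicious))  # -1 = flagged for review, 1 = normal
```

Production systems layer many such models with device, biometric and network signals – the "multi-layer defences" the reports above call for.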
A clear example is the attack on WPP’s CEO, where the attackers used WhatsApp to build trust, Microsoft Teams for further interaction, and an AI-generated deepfake voice call to try to extract sensitive information and money.
With tools like deepfake identity generators, fake KYC documents, and automated phishing scams, cybercriminals are launching sophisticated attacks at scale. Generative AI has created a new breed of fraudsters who don’t need years of experience to manipulate financial systems.