The report highlights a 76 per cent surge in phishing attempts and a 42 per cent increase in detections of fake IDs and suspicious biometric transactions, signaling a shift towards more sophisticated fraud methods. The use of deepfake technology and generative AI is also on the rise, compounding the threat landscape.
“Using AI-driven tactics such as deepfake selfies and synthetic identities, organised fraudsters are testing traditional security measures like never before.” The report says future-proofing means adopting AI-driven validation and multi-layer defences to combat deepfakes, synthetic identities and emerging threats.
AI-generated deepfakes, synthetic identities and hyper-targeted phishing attacks are just some of the cyberthreats on the rise. As a result of the partnership, SEON will proactively detect deepfake KYC attempts, synthetic identities, and mass-registration fraud before accounts are created through Intergiro.
Fraudsters are exploiting GenAI tools such as large language models (LLMs), voice cloning, and deepfakes to carry out increasingly sophisticated scams that are harder to detect and prevent. Romance fraud: Deepfake images and videos help fraudsters create convincing personas to manipulate victims emotionally and financially.
In the wrong hands, however, it can be incredibly harmful, especially with the emergence of deepfakes. A new survey from iProov, the biometric solutions provider, has revealed attitudes towards the threat of generative AI and deepfakes across the UK, US, Brazil, Australia, New Zealand and Singapore.
In its sixth edition, the 2025 Identity Fraud Report found that attacks involving deepfakes happened every five minutes in 2024, and that digital document forgeries increased by 244 per cent year over year. Another way fraudsters are abusing financial firms is through AI-assisted deepfakes.
Meanwhile, in a separate incident in Singapore, the cybersecurity landscape saw a significant surge in phishing attempts targeting local entities in 2022. Over 80 percent of these phishing campaigns mimicked banks or financial services, including institutions based in China, as well as local services like Singpass and SingPost.
The financial sector is facing an unprecedented surge in AI-driven fraud, with deepfake-related attacks increasing by a staggering 2,137% over the past three years. According to the research, 42.5% of all fraud attempts detected in the financial sector now involve AI-generated forgeries, with deepfakes leading the charge.
Recent phishing and fraud cases involving multiple banks in Singapore have highlighted the severity of this issue and the urgent need for banks to enhance their security measures. DBS isn’t the only bank making news for phishing scams.
US-based identity platform Persona has partnered with Okta to deliver an automated identity verification solution and support organisations to safeguard against phishing and deepfakes.
Whether through phishing emails and text messages or social media, the biggest cyber threat facing banking customers today is the rise of sophisticated social engineering frauds. No matter where you live, scammers will come for you.
Finally, it highlights the rise of AI-powered cyberattacks, with deepfakes and email phishing attacks increasing substantially. Sumsub’s 2023 Identity Fraud Report reveals a tenfold increase in the number of deepfakes detected globally from 2022 to 2023, with the crypto and fintech sectors accounting for 96% of these cases.
Slipping through the cracks: increasingly sophisticated tactics, such as fake invoices, deepfake media and phishing scams – driven by generative AI – have made attacks more convincing and harder to detect.
Deepfakes and threats of AI: such activity could lead to identity theft, which IDnow believes should be a concern to the UK public, especially given the rise of deepfake technology. Developments in generative artificial intelligence (AI) mean deepfake technology can now be used to create hyper-realistic fake documents.
This increase was driven by fraudsters leveraging sophisticated techniques, such as fraud networks, phishing scams and AI-generated deepfakes, to trick victims into authorising payments. While fraudsters use it to create deepfakes and manipulate unsuspecting victims, financial institutions can also harness AI to combat these threats.
A majority of these funds were stolen through private key compromises, phishing schemes, and decentralised finance (DeFi) exploits. They are currently leveraging AI-driven chatbots and deepfake technology to create more convincing frauds. Asia-based crypto exchanges have not been immune.
Fraudsters are leveraging artificial intelligence (AI) tools like voice cloning and deepfakes to create highly convincing fake audio or video messages to deceive victims into transferring money or revealing sensitive information. This leaves many vulnerable to sophisticated AI-powered attacks like deepfakes and voice cloning.
“With new technology, however, come elevated risks, which will lead financial institutions to adapt their strategies to counter issues such as new types of fraud, deepfakes and advanced phishing schemes.”
In fact, 71 per cent of respondents named it as the number one issue, particularly in automated attacks and deepfake technologies. Seventy-three per cent of respondents expect an uptick in these types of fraud: phishing, synthetic fraud, identity theft, account takeovers (ATO) and money-laundering.
With inherent portability and connectivity, mobile devices are prime targets for exploitation, and contemporary cybercriminals are leveraging these characteristics to perpetrate more fraud than ever before, including phishing scams, malicious apps and social engineering tactics explicitly tailored for mobile platforms.
Verizon Business estimates that 74% of data breaches come down to individuals being absent-minded or tricked by social engineering attacks such as phishing emails. So what can organisations do to counter the sequence of actions phishing attackers take in social engineering campaigns? The answer is security awareness training.
Entrust would also be able to leverage biometric, phishing-resistant authentication for use in high-value transactions. “Deepfakes and synthetic identity are driving a global need for a powerful level of identity assurance that facilitates crucial digital journeys in banking, finance, government, travel, and more,” Wilkinson said.
Risks include deepfakes, sophisticated cyber hacks, and the use of generative AI to create malware. AI: Fighting the emerging threat Two-thirds (66 per cent) of financial industry respondents think the use of AI by fraudsters and other criminals poses a growing cybersecurity threat.
AI capabilities such as machine learning algorithms are being trained to identify and exploit vulnerabilities, automate phishing attacks and bypass traditional security measures. AI can also generate synthetic identities, create deepfakes and power other persuasive, difficult-to-detect social engineering tactics.
Other AI applications include enhancements to traditional scams, such as phishing and social engineering attacks. By personalizing approaches and crafting tailored messages, fraudsters can appear more convincing than ever to unsuspecting targets.
On top of this, industries that continue to rely on checks use them precisely because a physical paper trail means transactions cannot be interfered with digitally, for example through hacking or phishing emails.
“Investments in cybersecurity will increase to tackle this emerging threat while hackers are having a field day leveraging it for everything from phishing attacks and password cracking to producing malware and deepfakes.”
Inside APAC’s identity fraud trends: from tutorials to industrialised fraud. For example, many beginner fraudsters turn to dark web forums, fraudster communities or private chats to learn the basics of hacking, phishing or financial fraud. Tutorials and guides are often sold, or shared for free, on these platforms.
Understanding AI-enabled fraud As fraudsters invest in more sophisticated technology, including deepfakes, synthetic identities and automated phishing, the need for payments firms to adopt better AI-driven fraud detection systems grows.
Here are some examples of how criminals are exploiting Gen AI for their gain: Deepfake voice and video scams: AI-generated deepfake voices and videos are being used to impersonate executives, bank representatives, or family members, convincing victims to transfer money or reveal sensitive information.
A clear example is the attack on WPP’s CEO, where the attackers combined WhatsApp to build trust, Microsoft Teams for further interaction, and an AI-generated deepfake voice call to extract sensitive information and money.
With tools like deepfake identity generators, fake KYC documents and automated phishing scams, cybercriminals are launching sophisticated attacks at scale. Her words landed with weight: generative AI has created a new breed of fraudsters who don’t need years of experience to manipulate financial systems.
Automated laser phishing: the hyper-targeted use of malicious AI to mimic trustworthy entities, compelling targets to act in ways they otherwise would not, including releasing secrets.