The FBI issued an advisory on December 3 warning the public that threat actors are using generative AI to create fraudulent messaging more quickly and efficiently, echoing earlier warnings issued by Trustwave SpiderLabs.
The FBI noted that publicly available tools assist criminals with content creation and can correct human errors that might otherwise serve as warning signs of fraud. This effectively removes one of the easiest ways to identify a phishing email: poor sentence structure, grammar, and spelling.
Threat actors can use AI to create several types of deceptive messages, including text, video, and audio, all of which fall under the category of deepfakes.
Trustwave SpiderLabs Senior Consultant Jose Luis Riveros, who researched and wrote about how these deepfakes are created, noted that threat actors have access to a wide range of freeware, as well as more advanced software, for creating video deepfakes, and that these can be deployed in a variety of attacks.
Riveros’ conclusions were supported by Ed Williams, VP, SpiderLabs at Trustwave, in his recent 2025 Predictions blog. Williams highlighted how AI-enhanced phishing and social engineering capabilities will allow cybercriminals to craft highly convincing phishing emails, social media posts, and even deepfake content, making it increasingly difficult to distinguish between legitimate and malicious communications. With AI-driven social engineering, the stakes for user awareness training will be higher than ever.
The FBI agreed, saying “criminals use AI-generated text to appear believable to a reader in furtherance of social engineering, spear phishing, and financial fraud schemes such as romance, investment, and other confidence schemes or to overcome common indicators of fraud schemes.”
To make their fake personas appear as "real" as possible, criminals use generative AI to create large numbers of fictitious social media profiles designed to trick victims into sending money.
Criminals can also leverage AI to expand their reach, quickly generating believable messages that resonate with a larger audience. AI likewise helps them overcome language barriers that would otherwise limit their ability to target individuals in regions around the globe, and they use it to produce content for fraudulent websites, particularly those supporting cryptocurrency investment schemes and other financial scams.
Criminals use AI-generated images to create believable social media profile photos, identification documents, and other images supporting their fraud schemes. This tactic is particularly dangerous because checking a social media profile, which often includes multiple images, is one of the common ways to verify whether a person is real.
Criminals thus create realistic images for fictitious social media profiles used in social engineering, spear phishing, romance schemes, confidence fraud, and investment fraud. They also use generative AI to produce photos to share with victims in private communications, convincing them they are speaking with a real person, a tactic that is particularly effective in romance schemes.
Fraudsters can take this a step further and generate fake identification documents, such as driver's licenses or law enforcement, government, or banking credentials, for use in identity fraud and impersonation schemes.
Malicious actors also play on people's emotions by creating false images of natural disasters and conflicts to elicit donations to fraudulent charities.
Criminals can use AI-generated audio to impersonate well-known public figures or personal relations to elicit payments. They can generate short audio clips of a loved one's voice to impersonate a close relative in a crisis, asking for immediate financial assistance or demanding a ransom, and they can even gain access to bank accounts using AI-generated audio clips that impersonate the account holder.
The last category the FBI covered was video. Criminals use AI-generated videos to create believable depictions of public figures to bolster their fraud schemes.
The FBI advisory noted that criminals generate videos for use in real-time video chats, posing as company executives, law enforcement officers, or other authority figures. These videos can also serve as part of a larger attack, "proving" to a victim that their online contact is a "real person."