How AI Is Making Life Easier for Cybercriminals

AI | Cybercrime | Hackers | Data security | ID theft | January 21, 2026

This Week's Quote:

"Dream big and dare to fail." 
     - Norman Vaughan

Thanks to rapid advances in artificial intelligence, the cybercriminals seeking to dupe you into handing over your retirement funds or revealing company secrets are getting smarter and stronger.

Just as AI can personalize the ads you see online, bad actors are using it to glean personal information that enables them to create custom-tailored scams quickly and on a large scale. AI companies including Anthropic, OpenAI and Google say cybercriminals are using their technology to conduct elaborate phishing schemes, build malware and carry out other cyberattacks. Experts say similar AI tools are being used to create deepfake audio and video of corporate executives to try to pry information out of unwitting employees.

Corporations and government agencies could soon face armies of AI agents that are learning how to identify vulnerabilities in computer networks, then plan and execute attacks with almost no human intervention.

How are criminals using AI? How advanced and autonomous have their cyberbots become? Here are answers to these and other questions.

How is AI changing cybercriminals’ capabilities?

AI is making cybercriminals more efficient, allowing them to scale up operations. Anthropic, the company behind the AI agent Claude, says AI can amplify the speed, reach and automation of attacks. “The real change is scope and scale,” says Alice Marwick, director of research at Data & Society, an independent nonprofit tech think tank. “Scams are bigger, more targeted, more convincing.”

Half to three-quarters of global spam and phishing are now AI-generated, says Brian Singer, a Ph.D. candidate at Carnegie Mellon University who researches the use of large language models for cyberattacks and defenses.

What are some examples?

Phishing attacks are becoming more credible. For instance, AI trained on a company’s communications can now draft thousands of fluent, on-brand messages that imitate an executive’s tone or reference current events pulled from public data.

AI can also be used to clean up grammar and language, helping foreign scammers overcome language barriers that might have made their phishing attempts seem less credible in the past. It can also help criminals impersonate others through deepfakes and voice cloning, and even use the same fake persona to target multiple people.

The biggest change is “credibility at scale,” says John Hultquist, chief analyst at Google Threat Intelligence Group.

Criminals also are getting better at finding vulnerable targets. For example, they can deploy AI to scan social media to identify people who are going through big life changes—divorce, a death in the family, a job loss—that might make them more likely to fall for a romance, investment or job scam.

Has AI made it easier for people to get into cybercrime?

Absolutely. There are now markets on the dark web where less tech-savvy people can rent or purchase AI tools for criminal operations for as little as $90 a month. “Developers sell subscriptions to attack platforms with tiered pricing and customer support,” says Nicolas Christin, head of Carnegie Mellon’s software and societal systems department. With names like WormGPT, FraudGPT, and DarkGPT, these tools can be used to create malware and phishing campaigns, and some even offer hacking tutorials.

“You don’t need to know how to code—just where to find the tool,” says Margaret Cunningham, vice president of security and AI strategy at Darktrace, a cybersecurity firm.

A newer trend, called vibe-coding or vibe-hacking, could enable wannabe cybercriminals to use AI to build their own malicious programs instead of buying them on the dark web. Anthropic revealed earlier this year that it had thwarted several instances of its AI Claude being used to create ransomware by “criminals with few technical skills.”

How is AI changing criminal organizations?

Even before the rise of AI, cybercrime operated like a marketplace, experts say. A typical ransomware attack involved separate actors: access brokers who penetrated corporate networks and sold entry points, intrusion teams who moved through systems and stole data, and ransomware-as-a-service operators who deployed the malware, handled negotiations and split the profits.

What has changed with AI is the speed, scale and accessibility of that ecosystem. Tasks once done by humans and requiring technical skill can now be automated, enabling these organizations to downsize, minimize risk and maximize profit. “Think of it as the next layer of industrialization. AI increases throughput without requiring more skilled labor,” says Christin, the Carnegie Mellon professor.

Can AI launch a cyberattack autonomously?

The short answer: not yet. Experts compare it to the race to build completely self-driving cars. The first 95% has been accomplished, but the last stretch that would enable the vehicle to reliably drive itself anywhere, anytime remains elusive. Researchers are testing AI’s hacking capabilities in laboratory settings, and a team at Carnegie Mellon, backed by Anthropic, earlier this year replicated the infamous Equifax data breach using AI. “It’s a big leap,” says Singer, the Ph.D. candidate who led the project at Carnegie Mellon’s CyLab Security and Privacy Institute.

While researchers like Singer have shown that AI is capable of planning and carrying out an attack on its own in a lab, most experts don’t think the technology has advanced to that point in the real world. Still, Anthropic recently revealed that its AI Claude was used to carry out an attack almost on its own, which suggests AI is closing in on autonomy.

“Within two or three years, cybersecurity will be AI versus AI, because humans won’t be able to keep up,” says Singer.

Do we need to change the way we defend ourselves?

Hackers may be exploiting AI to do evil, but AI companies say organizations can use the same technology to shore up their cyber defenses. Call it the new AI arms race. Anthropic and OpenAI, for example, are developing AI models that can autonomously and constantly inspect software code to find vulnerabilities that criminals might use to gain access, though humans still have to approve any changes. Recently an AI bot created by Stanford researchers outperformed some human testers in looking for security flaws in a network.

Even AI won’t be able to prevent all breaches, however, which is why organizations need to focus on building resilient networks that can continue to perform even when under attack, says Hultquist, the Google analyst.

For office workers who routinely handle email and documents, or for anyone at home on a personal computer, a healthy dose of skepticism and some good online habits may still be the best defense. Don't open suspicious attachments unless you have verified the sender independently of the original email. Multifactor authentication is very effective, experts say. And if you receive a voicemail, email or video of your boss or a family member asking you to transfer money or pay a bill you weren't aware of, check with them first to make sure it's real.

“Generative AI makes fakes so convincing that skepticism is your best defense,” says Marwick, the consumer-data expert.

Credit goes to William Boston, Wall Street Journal, December 26, 2025.

Thank you for all of your questions, comments and suggestions for future topics. As always, they are much appreciated. We also welcome and appreciate anyone who wishes to write a Tax Tip of the Week for our consideration. We may be reached in our Dayton office at 937-436-3133 or in our Xenia office at 937-372-3504. Or, visit our website.
 
This Week’s Author, Mark Bradstreet
