
AI Can Outsmart You, and Cybercriminals Know It

by Aditya Visweswaran, February 19th, 2025

Too Long; Didn't Read

The AI revolution has given rise to an arms race in cybersecurity. Novel attack vectors such as model poisoning have emerged. Old attack vectors such as phishing have been supercharged through AI. Organizations need to think like adversaries and evaluate all the links in the product chain to be well-defended. In this article, I explain the most pressing threats and how to defend against them.



Cybersecurity and artificial intelligence share the foundational concept of an ‘adversary’.


In cybersecurity, the adversary is a threat actor - a lone hacker, organized crime group, or even a nation-state - seeking to exploit vulnerabilities in a system. In AI, an adversary is a mechanism designed to manipulate models into making incorrect decisions.

The two adversaries converge in the ongoing AI revolution, where a silent arms race is underway - attackers leverage advanced AI to craft hyper-personalized scams, poison training data, and fool real-world systems, while defenders scramble to harden algorithms and infrastructure.


To stay protected in this new era, organizations must adopt a holistic and adversarial view of their systems - tightening every link in the chain and turning the attackers' own tools into defenses. In this article, I discuss the most pressing novel cybersecurity threats and how to defend against them.

1. Hyper-personalized phishing

In February 2024, a Hong Kong finance worker transferred $25 million to fraudsters, persuaded of the legitimacy of the transaction by a video call where all the other attendees were deepfake representations of the company staff. A similar attack, impersonating WPP’s CEO in an audio message, was foiled in May 2024. Darktrace reports a 135% surge in novel social engineering attacks. Attackers are using AI in increasingly creative ways for phishing attacks. AI impersonation technology today is the weakest it’s ever going to be - and it’s already scarily good. How should organizations guard against this?


  1. Better verification and authentication protocols - no communication through unofficial channels should be trusted.
  2. Consider using decentralized identifiers (DIDs) as a self-identification mechanism, so that recipients can robustly and securely verify that communications really originate from you (see the sketch after this list).
  3. Train employees to recognize phishing attempts. A novel strategy - using sophisticated AI-powered ‘phishing drills’ to inoculate employees against suspicious requests.
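
To make the DID idea concrete, here is a minimal Python sketch using the cryptography library: the sender publishes a public key as part of their identifier, signs every sensitive request, and the recipient rejects anything that does not verify. The did:example identifier and the plain dict standing in for a resolved DID document are simplifications for illustration, not a real DID resolver.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# A DID document maps an identifier to public keys; a plain dict stands in
# for a resolved document here, purely for illustration.
private_key = Ed25519PrivateKey.generate()
did_document = {
    "id": "did:example:finance-team",
    "publicKey": private_key.public_key(),
}

message = b"Approve wire transfer #4821"
signature = private_key.sign(message)  # the sender signs via their official channel

def verify_sender(doc: dict, message: bytes, signature: bytes) -> bool:
    """Accept a request only if its signature verifies against the DID's key."""
    try:
        doc["publicKey"].verify(signature, message)
        return True
    except InvalidSignature:
        return False

assert verify_sender(did_document, message, signature)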

2. Real-world exploits

It is a mistake to assume that AI attacks exist only in the digital world. Researchers have shown that it's possible to alter medical scans in imperceptibly small ways so that a tumor's classification flips from malignant to benign, and to use innocuous stickers to fool self-driving cars into reading a stop sign as a 45 mph speed limit sign. These attacks work because AI models operate in a high-dimensional space where changes that look insignificant to us can push the model into 'uncharted territory' and cause incorrect decisions.


If you’re building an AI product, it's important to put it to the test adversarially - your cyber adversaries will certainly do the same. Large companies often deploy strong 'adversarial models' that search for inputs that break the target model, then use those inputs to retrain it and make it more resilient; a minimal sketch of this idea follows below.
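
One of the simplest ways to generate such 'breaking' inputs is the fast gradient sign method (FGSM). The PyTorch sketch below assumes an image classifier with inputs scaled to [0, 1]; the model, labels, and epsilon value are placeholders, and real red-teaming pipelines are considerably more elaborate.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Generate an adversarial input: nudge every pixel slightly in the
    direction that most increases the model's loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()   # small step along the loss gradient's sign
    return x_adv.clamp(0, 1).detach()     # keep pixels in the valid range

# During adversarial training, these perturbed inputs are fed back to the
# model alongside clean ones so it learns to classify both correctly.
```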


Attackers can also target the physical components of your product. If you’re operating in a high-risk product space (such as medicine or autonomous robotics), you cannot assume that the hardware has not been compromised. Trusted Platform Modules (TPMs) provide hardware-level security guarantees and are increasingly deployed in self-driving cars. Think adversarially about what an attacker could do with full access to the product hardware - could they compromise it in a way that is hard to detect yet has disastrous consequences?

3. Data poisoning

Research has shown that contaminating even a small fraction of a training dataset (less than 1%) can profoundly alter the quality of the final model. In a world where training data is often scraped from the internet, it is easy for attackers to sneak in poisoned data that effectively builds backdoors into the models trained on it.


Some essential preventative measures:

  1. Evaluate the sourcing of your data carefully - rely on trusted data suppliers wherever possible.
  2. Think of creative ways to filter out poisoned data before it reaches your model. For instance, if your data is text, consider using LLMs to pre-validate the corpus for contamination and inappropriate content (a rough sketch follows this list).
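
Even a simple pre-filter catches the most obvious contamination. The Python sketch below is a minimal, heuristic example; the regex patterns are illustrative placeholders, and in practice you would combine rules like these with an LLM-based classifier that judges each document before it enters the training corpus.

```python
import re

# Crude patterns that often indicate injected or poisoned text; a real
# pipeline would pair heuristics like these with an LLM-based check.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"<script\b", re.IGNORECASE),
]

def looks_poisoned(doc: str) -> bool:
    """Flag documents that match any known contamination pattern."""
    return any(p.search(doc) for p in SUSPICIOUS_PATTERNS)

def filter_corpus(corpus: list[str]) -> tuple[list[str], list[str]]:
    """Split a corpus into clean documents and quarantined ones for review."""
    clean, quarantined = [], []
    for doc in corpus:
        (quarantined if looks_poisoned(doc) else clean).append(doc)
    return clean, quarantined
```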

4. Code vulnerability identification

The coding prowess of LLMs can be put to use in nefarious ways. One emerging attack vector is deploying agents that continually search accessible code bases for exploitable vulnerabilities. The cost of discovering zero-day exploits has never been lower. Implement the following protections for your code base:


  1. Use LLMs to comb over any code for security vulnerabilities before it can be submitted (a minimal pre-commit sketch follows this list).

  2. Be careful when taking on any third-party dependencies - your system is only as secure as its weakest link.
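
One lightweight way to wire this in is a pre-commit hook that passes the staged diff to an LLM for a security pass. The Python sketch below is an assumed setup rather than any particular tool's API: review_with_llm is a hypothetical placeholder you would connect to your own LLM provider.

```python
import subprocess
import sys

def staged_diff() -> str:
    """Collect the changes that are about to be committed."""
    return subprocess.run(
        ["git", "diff", "--cached"],
        capture_output=True, text=True, check=True,
    ).stdout

def review_with_llm(diff: str) -> list[str]:
    """Hypothetical hook: send the diff to whichever LLM you use and return
    a list of flagged issues (hard-coded secrets, injection risks, unsafe
    deserialization, and so on)."""
    prompt = (
        "Review this git diff for security vulnerabilities and list each "
        "finding on its own line:\n\n" + diff
    )
    # Replace this stub with a real API call; returning [] means 'no findings'.
    return []

if __name__ == "__main__":
    findings = review_with_llm(staged_diff())
    if findings:
        print("Potential security issues:\n- " + "\n- ".join(findings))
        sys.exit(1)  # a non-zero exit blocks the commit when run as a pre-commit hook
```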


Soon enough, LLM-based security review will be so common that it is considered as essential to software developers as hand-washing is to medical professionals.

Think adversarially and holistically

The advent of generative and agentic AI promises a time of great upheaval and change. The internet was an enormous boon to human productivity, but it also gave malicious actors new tools and landscapes with which to harm others.


The AI revolution will be no different. Companies looking to stay ahead of novel security threats should think adversarially - leveraging the same tools as the attackers to make their systems more secure and resilient. They should also think holistically, looking beyond the digital realm and examining every link in their product chain as a potential target.