
Cybersecurity and artificial intelligence share the foundational concept of an ‘adversary’.
In cybersecurity, the adversary is a threat actor (a lone hacker, an organized crime group, or even a nation-state) seeking to exploit vulnerabilities in a system. In AI, an adversary is a mechanism designed to manipulate models into making incorrect decisions.
The two adversaries converge in the ongoing AI revolution, where a silent arms race is underway - attackers leverage advanced AI to craft hyper-personalized scams, poison training data, and fool real-world systems, while defenders scramble to harden algorithms and infrastructure.
To stay protected in this new era, organizations must take a holistic and adversarial view of their systems - hardening every link in the chain and turning the attackers’ own tools to defensive use. In this article, I discuss novel cybersecurity threats and how to defend against them.
In February 2024, a Hong Kong finance worker was duped into transferring roughly US$25 million to fraudsters after joining a video call in which the company’s chief financial officer and several colleagues turned out to be AI-generated deepfakes. Hyper-personalized, AI-assisted social engineering of this kind is no longer hypothetical.
It is a mistake to assume that AI attacks exist only in the digital world. Researchers have shown that it’s possible to fool deployed vision systems with physical perturbations - for example, placing small stickers on a stop sign so that an autonomous vehicle’s classifier misreads it as a speed-limit sign, or wearing specially patterned glasses that defeat facial recognition.
If you’re building an AI product, it’s important to put it to the test adversarially - your cyber adversaries will certainly do the same. Large companies often deploy dedicated ‘adversarial models’ that search for inputs that break the core model, then use those inputs to make the target model more resilient.
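As an illustration, here is a minimal sketch of that loop using the fast gradient sign method (FGSM), one common way to generate adversarial examples. It assumes a PyTorch image classifier `model` and a `DataLoader` of correctly labeled images; the perturbation budget `epsilon` is a placeholder you would tune for your domain.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """Generate adversarial examples with the fast gradient sign method.

    x:       a batch of inputs scaled to [0, 1]
    label:   the true labels for x
    epsilon: maximum per-pixel perturbation
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Nudge every pixel in the direction that most increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

def find_failures(model, loader, epsilon=0.03):
    """Collect adversarial inputs the model misclassifies.

    Each (x_adv, label) pair returned is a candidate for adversarial
    (re)training, which is what makes the target model more resilient.
    """
    failures = []
    for x, label in loader:
        x_adv = fgsm_attack(model, x, label, epsilon)
        preds = model(x_adv).argmax(dim=1)
        failures.extend(
            (xa, y) for xa, y, p in zip(x_adv, label, preds) if p != y
        )
    return failures
```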
Attackers can also target the physical components of your product. If you’re operating in a high-risk product space (such as medicine or autonomous robotics), you cannot assume that the hardware has not been compromised. Trusted Platform Modules (TPMs) provide hardware-level security guarantees: they keep cryptographic keys in dedicated hardware and can attest that a device booted with unmodified firmware and software, so tampering becomes detectable.
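The attestation itself happens in hardware, but the same principle can be applied at the application layer. The sketch below checks firmware and model artifacts against a digest allowlist pinned at release time; the file names and digest values are placeholders, and a real deployment would anchor the allowlist in TPM-sealed or signed storage rather than in source code.

```python
import hashlib
import hmac

# Placeholder allowlist of SHA-256 digests, pinned at build/release time.
# In practice this would live in signed or TPM-sealed storage, not in code.
KNOWN_GOOD_DIGESTS = {
    "firmware.bin": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    "model.onnx":   "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def verify_artifact(name: str, data: bytes) -> bool:
    """Return True only if the artifact matches its pinned digest."""
    expected = KNOWN_GOOD_DIGESTS.get(name)
    if expected is None:
        return False
    actual = hashlib.sha256(data).hexdigest()
    # Constant-time comparison avoids leaking how much of the digest matched.
    return hmac.compare_digest(actual, expected)

if __name__ == "__main__":
    with open("firmware.bin", "rb") as f:
        assert verify_artifact("firmware.bin", f.read()), "integrity check failed"
```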
Research has shown that contaminating even a small fraction of a training dataset (less than 1%) can profoundly alter the behavior of the final model. In a world where training data is often scraped from the internet, it is easy for attackers to slip in poisoned samples that effectively build backdoors into any model trained on them.
Some essential preventative measures:
Vet and track the provenance of every data source, preferring curated or first-party data over blind web scrapes.
Pin and hash dataset snapshots so that silent tampering is detectable.
Screen newly ingested data for anomalous or duplicated samples before it reaches training, and hold suspicious samples for review (see the sketch after this list).
Validate each newly trained model against a trusted, held-out benchmark and watch for unexpected behavior.
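One lightweight way to implement the anomaly screening above is loss-based filtering: samples that a trusted reference model finds surprisingly hard are disproportionately likely to be mislabeled or poisoned. The sketch below assumes a PyTorch classifier trained only on vetted data and a hypothetical loss threshold; it is a heuristic for triaging data for human review, not a complete defense.

```python
import torch
import torch.nn.functional as F

def flag_suspicious_samples(reference_model, dataset, threshold=6.0):
    """Return indices of samples with anomalously high loss under a trusted model.

    reference_model: classifier trained only on vetted data
    dataset:         iterable of (x, label) pairs from the newly scraped corpus,
                     where x is an input tensor and label a scalar class tensor
    threshold:       placeholder loss value above which a sample is held for review
    """
    reference_model.eval()
    suspicious = []
    with torch.no_grad():
        for i, (x, label) in enumerate(dataset):
            logits = reference_model(x.unsqueeze(0))
            loss = F.cross_entropy(logits, label.unsqueeze(0))
            if loss.item() > threshold:
                suspicious.append(i)
    return suspicious
```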
The coding prowess of LLMs can also be put to nefarious use. One emerging attack vector is deploying agents that continually search code bases for vulnerabilities to exploit; the cost of discovering zero-day exploits has never been lower. Implement the following protections for your code base:
Use LLMs to comb through any code for security vulnerabilities before it can be submitted (a minimal pre-commit sketch follows this list).
Be careful when taking on any third-party dependencies - your system is only as secure as its weakest link.
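As a concrete example of the first protection, here is a minimal pre-commit hook sketch that sends the staged diff to an LLM for a security pass. It assumes the OpenAI Python SDK with an API key in the environment; the model name and the review prompt are placeholders you would adapt to your own stack, and a real setup would also handle rate limits, large diffs, and false positives.

```python
#!/usr/bin/env python3
"""Pre-commit hook sketch: ask an LLM to review the staged diff for security issues."""
import subprocess
import sys

from openai import OpenAI  # assumes the OpenAI Python SDK and OPENAI_API_KEY in the env

client = OpenAI()

def staged_diff() -> str:
    """Return the diff of everything currently staged for commit."""
    return subprocess.run(
        ["git", "diff", "--cached"], capture_output=True, text=True, check=True
    ).stdout

def review(diff: str) -> str:
    """Ask the model for a security review of the diff."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whichever reviewer model you standardize on
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a security reviewer. Flag injection risks, unsafe "
                    "deserialization, hard-coded secrets, and missing input "
                    "validation. Reply with exactly 'LGTM' if you find nothing."
                ),
            },
            {"role": "user", "content": diff},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    diff = staged_diff()
    if not diff:
        sys.exit(0)
    report = review(diff)
    print(report)
    # Block the commit unless the reviewer found nothing.
    sys.exit(0 if report.strip().startswith("LGTM") else 1)
```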
Soon enough, the use of LLMs as security reviewers will be considered as essential to software developers as hand-washing is to medical professionals.
The advent of generative and agentic AI promises a time of great upheaval and change. The internet was an enormous boon to human productivity, but it also handed malicious actors new tools and new terrain on which to harm others.
The AI revolution will be no different. Companies looking to stay ahead of novel security threats should think adversarially - leveraging the same tools as the attackers to make their systems more secure and resilient. They should also think holistically, examining every link in their product chain as a potential target rather than limiting themselves to the digital realm.