Let me start with a statement. I am a huge advocate of the opportunities that Artificial Intelligence will bring.
I am also deeply conscious of the negative impacts that AI could have if it’s used improperly or by malicious actors.
In order to clarify my thinking, and to get your opinions, this article looks at some of the threats and risks of AI through a security lens. I offer a number of credible scenarios where the malign use of AI could impact an individual, a company, or even a country.
What is GrAI?
We are all aware of a quite binary argument for and against AI. On one hand, it will bring ‘vast technological advantages’ and ‘save the world.’ On the other, it will ‘take lots of people’s jobs’, ‘create autonomous killer robots’ and ‘destroy the world.’ OK, I am being a little flippant, but I do think there is quite an expanse between those two outlooks. Call it the grey space if you will.
Pronounced ‘Gray-i’, I see GrAI as AI tools developed and manipulated by an individual or group to gain an unfair advantage or benefit over another individual or group.
It’s not the global destruction scenario, but it is everything from malign influence to actual physical harm as you will see below.
The Threat of GrAI
Before we talk about the risk of such technologies, let’s explore the threat of GrAI. In security speak we break that down into capability and intent. Simply put, you assess whether someone wants to do something nasty and whether they have the ability to actually do it.
With bleeding edge technologies, the capability part of the equation is currently a problem for most people. That’s especially true with the throttles and controls that the major AI companies put on their technology for the mass consumer. However, as we are seeing, more and more of those tools are becoming available, the costs are reducing, and the difficulty levels are considerably lower. Powerful tools are well within reach of governments, many companies, and some oligarchs.
So let’s look at the intent part of the equation. Who is the threat actor and what are they trying to do? Is it an individual criminal, looking to use AI to generate wealth or cause harm? Is it a company looking to gain competitive advantage? Or is it a nation state, looking to use AI to analyse massive data sets? With that knowledge it can do everything from controlling its population to influencing election outcomes. It could even damage an unfriendly country’s energy infrastructure.
If someone has good capability, and is really intent on attacking a particular target, we say that it is ‘likely’ to happen. That takes us to risk.
What’s the Risk of GrAI?
Simply put, risk combines how likely someone is to attack a target with the impact or damage if they do. That impact could be injury or death. It could be reputational, environmental or political.
Any one risk might only impact one of those, or it could be designed to impact all of them at the same time.
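The capability-and-intent framing above can be sketched as a simple scoring model. This is purely illustrative: the 1–5 scales, the weighting, and the idea of taking the weaker of capability and intent are my assumptions for demonstration, not a formal risk methodology.

```python
# Illustrative risk-scoring sketch (assumed 1-5 scales, not a formal method).

def likelihood(capability: int, intent: int) -> int:
    """Likelihood is capped by the weaker of capability and intent.

    A highly capable actor with no intent, or a determined actor with no
    capability, both yield a low likelihood of attack.
    """
    return min(capability, intent)

def risk_score(capability: int, intent: int, impact: int) -> int:
    """Risk as likelihood x impact, giving a score out of 25."""
    return likelihood(capability, intent) * impact

# Example: a hypothetical nation state (high capability, moderate intent)
# targeting energy infrastructure (severe impact).
print(risk_score(capability=5, intent=3, impact=5))  # -> 15
```

The point the model makes is the same one the article makes in prose: a low score on either capability or intent drags the whole risk down, however severe the potential impact.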
This is best explored with some scenarios.

GrAI uses against an individual
Extortion
A small criminal gang uses AI to create deepfake videos of a celebrity. Their intent is to extort money in return for not releasing the videos and damaging the celebrity’s reputation.
Currently this is technically very easy and is already happening.
Identity Theft
An AI algorithm is set to automatically scan social media and the internet looking for sufficient data on an individual to steal their identity. That data is then used to create fake bank accounts for criminal use, or for the application and receipt of loans in the fake identity’s name.
Currently this is technically very easy and is already happening.
Assassination
AI is used to help a malicious actor conduct a cyber attack against an electric car brand. One of those cars happens to be driven by the targeted victim. Whilst the victim is driving, the AI assumes control of the car, accelerates it out of control, and causes a fatal accident.
Currently this is technically possible.

GrAI uses against a company
Phishing / Cyber
AI is used to develop and run a mass cyber campaign using phishing emails. That creates a pathway into a company’s IT servers. The real goal is the theft of intellectual property for competitive advantage. However, it’s hidden behind a ransomware disguise and a demand for cryptocurrency.
Currently this is technically easy.
Automated Disinformation Campaign
AI is used to create disinformation about a company’s products at scale. Media articles, product reviews, and social media accounts are generated persistently with consistent negative messaging. The brand damage is enormous, resulting in long-term impact and lost sales.
Currently this is technically possible.

GrAI uses against a country
Energy Infrastructure
A hostile foreign state uses AI as part of a large-scale cyber-attack against a country’s energy infrastructure. By doing something as simple as causing every electric car charger to switch on or off simultaneously, and repeatedly, the electricity grid could be massively overloaded, causing extensive hardware damage.
Currently this is technically possible.
Botnets
AI is used to manage a social media botnet army to influence an election outcome. Not only does the system create vast amounts of content pushing for a particular candidate, it also runs disinformation campaigns against other candidates. The system self-propagates, supporting, liking, and commenting on posts from other members of the botnet. All that activity plays the social media algorithms at their own game, moving faster than the social media firms can close down the content. Assuming, of course, that they have the will to do so. The impact is direct interference in a political outcome. Depending on the scale, or the level of surprise, civil unrest and a breakdown of the rule of law could result in fatalities and property damage.
Currently this is technically possible and has been detected several times.
Mass Surveillance and Social Control
A government turns such a system on its own population. AI is combined with countrywide video monitoring and facial recognition, then augmented with social media monitoring, banking records and other data sources to track its citizens. The state uses that data to exercise absolute control. The impact is the loss of freedom of speech, mass arrests, and the detention of dissenters.
Currently this is technically possible and is likely already in place in some countries.
And Finally
Let’s come back to that assassination scenario. Imagine it applied, all at once, to every car of a specific brand, in every country deemed unfriendly to the threat actor. Aside from the hundreds of thousands of individual impacts, the car brand is destroyed, and the country dealing with the aftermath is inundated with hospitalised victims, temporarily paralysed logistically, and politically branded a failure for not protecting its citizens. That is an opening move for war.
This evolved scenario is probably not technically possible at the moment, but it is perhaps only a matter of time.
Let’s bring things back down a notch though. Remember, the very first thing we discussed was capability and intent. Just because a country might have the capability to do something does not mean it intends to do it.
So where is the grey line, and how easily is it crossed? Well, that is the debate, isn’t it? What might be unacceptable to some people in times of peace might be acceptable in times of conflict. But once the genie is unleashed, how does it get put back in the bottle?
What are the credible AI scenarios that you worry about? Contribute to the debate on the LinkedIn article here.
What can I read next?
If you want to take this concept a little further, then subscribe to the Reepaman Newsletter on LinkedIn. It’s a combination of thought provoking articles and fictional short stories that explore the misuse of AI and drone technologies. It’s published monthly. Alternatively, the Reepaman website is here.
Note:
AI has not been used to create the text of this article. However, AI has been used to create the images. Credit: ChatGPT.
Want to read more by Rob Phayre? – www.robphayre.com