The job interview is the new front line: North Korean AI spy imposters are infiltrating British businesses
Reports that a “mini army” of North Korean IT operatives are using AI to pose as remote workers should be a wake-up call for UK businesses.
What sounds like the plot of a spy thriller is quickly becoming a reality. After taking hold in the US – where hundreds of companies have already been infiltrated – there are now growing signs the tactic is spreading into Europe, with North Korean agents setting up operations in the UK.
These operations often rely on so-called “laptop farms”: rooms filled with company-issued laptops, all controlled remotely, allowing individuals overseas to log in and appear to be legitimate UK-based employees.
This isn’t about breaking into systems in the traditional sense. The goal is to get hired. Instead of forcing their way in, these operatives are being welcomed through the front door, joining organisations as seemingly legitimate employees.
Insider threats are nothing new and have historically been categorised into three groups: malicious insiders, negligent employees and compromised accounts.
What’s changing is that the “insider” may not be an employee at all, but a state-backed actor hiding in plain sight.
Unsurprisingly, AI is making this easier. Convincing CVs, deepfake video interviews and realistic digital footprints can now be generated en masse, making it far harder for hiring teams to distinguish genuine candidates from sophisticated imposters.
With hybrid working now the norm, many of the traditional checks that relied on face-to-face interaction are being stretched or lost altogether.
The impact on UK businesses is already significant. Recent research shows nearly a third (29%) have reported a rise in malicious insider-related incidents over the past year.
The financial toll is substantial, with incidents costing an average of £9.78 million and occurring around six times a month.
Organisations need to rethink how people are hired, managed and offboarded. HR and security teams should work more closely together, ensuring identity checks, access controls and behavioural monitoring are aligned from the outset. Detection also needs to go beyond credentials.
Behaviour matters, and small inconsistencies – unusual working patterns, say, or unexpected access requests – can be early warning signs.
There is, however, a balance to strike. Overly intrusive monitoring risks damaging culture and morale, creating the very conditions in which insider risk can thrive.
As AI makes deception easier, businesses must raise the bar for verification without losing sight of the human element – because the next insider threat may not be a disgruntled employee, but someone who was never real in the first place.
Adenike Cosgrove is a Cyber Security Strategist and CMO at Mimecast. She has close to 20 years’ experience in the cybersecurity industry, with expertise across human risk management, AI, data privacy and compliance.
LBC Opinion provides a platform for diverse opinions on current affairs and matters of public interest.
The views expressed are those of the authors and do not necessarily reflect the official LBC position.
To contact us email opinion@lbc.co.uk