Tackling the cyber skills gap with AI

Why the future of cyber security could be fully autonomous, with AI working independently

Sponsored Feature The cybersecurity sector, it is now routinely attested, is in the midst of a long-term skills crisis.

According to the recent and widely cited (ISC)² Cybersecurity Workforce Study, the global cybersecurity workforce reached a record 4.7 million people in 2022. Which sounds like plenty until you read that this is still 3.4 million fewer than the organization believes is needed to meet current demand.

While countries paying higher wages were in slightly better shape, it's clear that closing even modest gaps in workforce numbers is not going to happen quickly or easily anywhere. This raises the question of what, if anything, can be done to address a shortfall that has become a structural drag on a sector whose fortunes matter considerably to everyone.

(ISC)²'s solution is to train more people, and to take long-term planning for skills and qualifications more seriously. This is logical but will take years to achieve in a tech sector that needs people now. And then there's the fact that skills shortages in IT date back to the 1980s. The industry has had around four decades to solve its skills shortfalls and seems no nearer to a solution today.

Clever machines

What caused the shortfall? Undoubtedly, the tech sector hasn't been training enough people for the job, and has been drawing on too small a cross-section of the working population when it did. The economic signals and incentives were weak. However, there is a second and more controversial view, which is that many aspects of cybersecurity can no longer be efficiently done by people alone. There is simply too much to know - knowledge which takes years to acquire - and too many decisions to make, too quickly.

This has led to the rise of cybersecurity automation, which is the idea that machines are now needed as helpmates across a wide range of tasks. A symptom of this is the way that 'AI' features have become a standard part of every cybersecurity platform spec sheet. This, it is claimed, saves human effort – skills if you like – for other more important tasks.

It's an open secret where this might be heading: AI will eventually become a primary cybersecurity system that not only helps out but performs threat detection and response without human intervention, taking over threat triage in a way that matches or even surpasses what human SOC teams can do.

Clever machines are not a new idea, of course. One British company founded in 2013 to realize their potential is Cambridge-based Darktrace. For its founders, the potential of AI wasn't simply about improving established technologies but eventually replacing them. Even a decade ago it was possible to see that threats were outpacing defensive technologies and the availability of skilled people to maintain them.

Today, skills are being consumed as much by a huge rise in cybersecurity complexity as by the threats themselves, argues Darktrace Director of Enterprise Security, Asia Pacific and Japan, Tony Jarvis. Organizations have become overloaded with tools, with predictable results.

"Every additional tool they put in makes their life more difficult rather than easier because now you've got to manage training and resources to run it and having that integrate with all the other tools they've got."

If you're a small company, you crave simplicity. If you're an enterprise, you crave the same thing but don't believe you will ever achieve it. When a threat is detected, SOC teams must follow it as it manifests across multiple systems, chasing ghosts and desperately trying to work out what is real and what is a false positive.

"Not everything unusual is going to be malicious but pretty much everything malicious is going to show up eventually as some sort of unusual behavior." says Jarvis.

Human confirmation

While Darktrace's AI-driven approach has sometimes been dismissed as unproven, the sudden ascent of AI to respectability during 2022 appears to have changed this perception. Now, the company can legitimately claim to be a pioneer of something others have been compelled to follow.

"We were formed from the ground up as an AI organization that wanted to marry threat intelligence specialists with people from academia and mathematics," explains Jarvis. "The belief was always that the current way of doing things was not working."

Since 2013, Darktrace has been building a collection of AI-based tools that has matured into a fully fledged feedback loop covering prevention, detection, response, and recovery from cyber incidents across network, email, cloud, SaaS, endpoints, and operational technology (OT). The basis of this platform is advanced anomaly detection. It sounds simple enough: instead of trying to spot threats by detecting their signature or attack pattern, look for the anomalies and unusual events that correspond to threat behavior.

The basic concept of anomaly detection has been around for more than 20 years but has always struggled with underlying limitations. First, you need to establish a baseline, which is harder than it sounds. There are simply too many data points to sift through, and assessing them all without generating false positives is difficult. What is an anomaly in one context might not be in another, or at another time. Finally, there's the fact that attackers now target legitimate accounts through credential compromise to impersonate genuine users. That access might not show up as an anomaly at all.
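To make the baseline problem concrete, here is a minimal sketch of per-device baselining of the kind described above. It is an illustration only, not Darktrace's method; the metric, window size, and threshold are arbitrary assumptions.

```python
# Minimal sketch of baseline-driven anomaly detection (illustrative only,
# not Darktrace's method). Each device gets its own rolling baseline of a
# single metric, and new observations are flagged when they deviate too far.
from collections import defaultdict, deque
from statistics import mean, stdev


class BaselineDetector:
    def __init__(self, window: int = 100, threshold: float = 3.0):
        self.window = window          # number of past observations kept per device
        self.threshold = threshold    # how many standard deviations counts as "unusual"
        self.history = defaultdict(lambda: deque(maxlen=window))

    def observe(self, device: str, value: float) -> bool:
        """Record a metric (e.g. bytes sent per minute) and return True if anomalous."""
        past = self.history[device]
        anomalous = False
        if len(past) >= 30:  # need enough data before the baseline means anything
            mu, sigma = mean(past), stdev(past)
            # A flat baseline (sigma == 0) makes any change look infinite,
            # one of the false-positive traps mentioned above.
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        past.append(value)
        return anomalous


detector = BaselineDetector()
for minute in range(200):
    detector.observe("laptop-42", 100.0 + (minute % 7))   # normal chatter
print(detector.observe("laptop-42", 5_000.0))             # sudden spike -> True
```

Even this toy version shows why context matters: the same 5 GB upload that trips the threshold for one device might be routine for another with a different baseline.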

However, Jarvis believes that there is always an anomaly somewhere if you are able to find it. The key is to look far and deep enough, which is where AI comes in.

"We start looking for a number of red flags such as users doing things that are unusual. We correlate this with other unusual things that happened before this was observed. It might even be across different timelines, but this is where we start joining the dots."

Find the anomaly, find the threat

Cybersecurity has been able to detect behavioral anomalies for a long time, he says, but it has always been heavily scripted. That is, based on assumptions about what anomalies look like. Unfortunately, today's best attacks are much more sophisticated than that and increasingly avoid known behaviors.

The Darktrace system, by contrast, uses mathematical models that take account of large numbers of metrics to score anomalies, spotting novel patterns conventional systems miss. Doing this would simply be impossible without AI, Jarvis maintains. In theory, AI used in this way should make SOC analysis quicker, leading to more accurate detections. But does it remove the need to throw people and skills at every problem? To do that, an AI-powered cybersecurity platform would have to carry out the tasks done by traditional SOC analysts, freeing up the humans to make other, bigger decisions.
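As a rough sketch of what scoring anomalies across many metrics can look like in general (an assumption for illustration, not Darktrace's actual models), the snippet below maps each metric's deviation from its baseline to a 0-1 score and combines them so that several mildly unusual signals add up to a strong one.

```python
# Illustrative sketch of combining many weak signals into one anomaly score
# (assumed structure; Darktrace's actual models are not public).
import math


def metric_score(value: float, baseline_mean: float, baseline_std: float) -> float:
    """Map one metric's deviation from its baseline onto a 0-1 'unusualness' score."""
    if baseline_std <= 0:
        return 0.0
    z = abs(value - baseline_mean) / baseline_std
    return 1.0 - math.exp(-0.5 * z)   # small deviations score near 0, large near 1


def composite_score(observations: dict[str, tuple[float, float, float]]) -> float:
    """Combine per-metric scores: high only when several metrics look odd at once."""
    combined = 1.0
    for value, mu, sd in observations.values():
        # Probabilistic OR: one mildly odd metric barely moves the needle,
        # several together push the score toward 1.
        combined *= (1.0 - metric_score(value, mu, sd))
    return 1.0 - combined


# (observed value, baseline mean, baseline std) for a few hypothetical metrics
print(composite_score({
    "logins_per_hour":   (3.0, 2.5, 1.0),     # normal
    "bytes_uploaded_mb": (900.0, 40.0, 25.0), # very unusual
    "new_admin_actions": (4.0, 0.2, 0.5),     # unusual
}))
```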

Launched in 2019, Darktrace's Cyber AI Analyst addresses this issue head on. On the one hand it saves effort by automating numerous triage tasks; on the other, the company claims it can ask the same sorts of questions as a human analyst to test a hypothesis. The data set it uses to make these judgments is built from a database of real-world human cyber-analyst behaviors.

"We're going to find things a lot faster. We're not relying on eyeballs looking at screens," says Jarvis, who points to an internal Darktrace study that estimates that the time saved compared to a manual approach is 92%.

Importantly, the system is not a black box, he emphasizes, and falls into the category of 'explainable AI': it can document how it came to a particular conclusion and the data it used to do so.

"Just as with a human SOC analyst, you can see the process it followed," says Jarvis. "Our AI can also be used to figure out how an attacker might penetrate your systems. You can then give this information to a pen tester to assess more deeply."

Slow takeover

Where does Jarvis see Darktrace's AI going in terms of autonomous decision making, and is the ability to dispense with human decision making imminent? Currently there are three modes: a detect-only mode, which recommends actions to be taken manually but will not act on those recommendations itself; a human confirmation mode, where the system offers to fix a threat but waits for a human to OK the decision; and a fully autonomous mode, where the AI works independently.
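For illustration only, the snippet below shows how those three modes might gate an automated response; the mode names, function, and messages are hypothetical and not Darktrace's API.

```python
# Hypothetical illustration of how the three modes described above might gate
# an automated response; everything here is invented for this sketch.
from enum import Enum, auto


class ResponseMode(Enum):
    DETECT_ONLY = auto()         # recommend actions, never act
    HUMAN_CONFIRMATION = auto()  # propose actions, act only once approved
    AUTONOMOUS = auto()          # act immediately without waiting


def handle_detection(threat: str, action: str, mode: ResponseMode,
                     approved_by_human: bool = False) -> str:
    if mode is ResponseMode.DETECT_ONLY:
        return f"Alert raised for {threat}: recommended action '{action}' (manual)"
    if mode is ResponseMode.HUMAN_CONFIRMATION and not approved_by_human:
        return f"Awaiting approval to run '{action}' against {threat}"
    return f"Executing '{action}' against {threat}"


print(handle_detection("ransomware on host-17", "isolate host", ResponseMode.AUTONOMOUS))
```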

Right now, most organizations are somewhere in the middle but contemplating a future that looks more autonomous. Jarvis believes that autonomous AI will initially be deployed to counter extreme threats where time-to-action is critical. One example of an extreme threat might be ransomware infections that are spreading faster than manual detection and response methodologies are able to contain them.

Autonomous AI will also see greater take-up in smaller organizations that lack dedicated cybersecurity headcount and struggle to triage advanced threats. For these organizations (and their managed service providers), having an AI system to manage complex tasks could prove liberating. In larger organizations, adoption will take longer as they re-tool their processes around machines rather than people.

"It's something we're going to become more comfortable with as we see how it can be used," predicts Jarvis. "But we're facing AI threats and the only way we can keep up is AI cybersecurity."

Sponsored by Darktrace.
