What are AI and machine learning adding to threat intelligence – brains, brawn or both?

As with any apparently game-changing technology, the benefits and drawbacks of artificial intelligence (AI) should be qualified by an accurate definition of what AI means. The term has become so ubiquitous in the materials that describe cyber vendors’ products, and so quickly, that the only logical conclusion is that the bar for what constitutes AI is set rather low.

For instance, none of the so-called AI cyber technologies is fully autonomous, self-aware or otherwise ‘intelligent’ in comparison to the human mind. Rather, they follow complex algorithms and apply enormous computational power to process information ‘intelligently’. But that hasn’t stopped AI solutions from becoming increasingly prominent in cybersecurity.

The AI arms race in cybersecurity

AI and machine learning play active roles on both sides of the cybersecurity struggle, enabling both attackers and defenders to operate at new magnitudes of speed and scale.

On the attack side, the rise of ‘adversarial AI’ has included relatively lightweight machine learning algorithms used to devastating effect in spear phishing attacks. Here, by harvesting open-source intelligence and studying correspondence gleaned from a compromised account in an automated and ‘intelligent’ fashion, the human cyber attacker can deploy effective social engineering techniques with a high likelihood of success and almost zero effort.

Other examples include deepfake attacks that use AI to mimic the voice and appearance of individuals in audio and video files. These have grown amid the Covid-19 pandemic, and there are fears of the influence they may bring to significant future events such as the upcoming US Presidential Election. IBM’s DeepLocker proof of concept is one of many showing how AI will spur on the development of advanced forms of malware.

According to the Capgemini Research Institute, in its 2019 survey of 850 senior executives at billion-dollar organizations, 73% are actively testing use cases for AI in cybersecurity, and 69% believe they will be unable to respond to future cyberattacks without it. Adoption of AI in cybersecurity is poised to skyrocket, from less than 20% pre-2019 to 63% in 2020. This rate of growth is borne out in analyst predictions that size the market for AI in cybersecurity at over $38bn by 2026.

AI in Threat Intelligence

Artificial intelligence and machine learning are critical to effective threat intelligence in two major ways: coping with overwhelming data volume and ensuring freshness of that data.

Volumes are extraordinarily large and growing all the time. Blueliv’s broad range of threat sources generate an evolving landscape of over 150m qualified threat items. Processing information at this scale for use in real-time decision making is impossible without advanced automation software. Algorithm-driven sensors, crawlers, sinkholes and honeypots can massively expand the discovery and categorization of threat data, and sift through it all at machine speed to detect anomalous activity.
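The principle behind machine-speed sifting can be illustrated with a deliberately minimal sketch. The example below is purely illustrative (it is not Blueliv’s implementation, and the data and threshold are invented): it flags event counts that deviate sharply from the statistical baseline, the simplest form of the anomaly detection that real platforms perform with far richer models.

```python
from statistics import mean, stdev

def anomalous_events(event_counts, threshold=2.0):
    """Flag event counts that deviate sharply from the baseline.

    A toy z-score filter: real threat platforms use far richer
    models, but the principle -- machine-speed triage of bulk
    telemetry -- is the same.
    """
    mu = mean(event_counts)
    sigma = stdev(event_counts)
    if sigma == 0:
        return []
    return [(i, c) for i, c in enumerate(event_counts)
            if abs(c - mu) / sigma > threshold]

# Hourly event counts with one obvious spike (invented data)
counts = [102, 98, 110, 95, 104, 930, 101, 99]
print(anomalous_events(counts))  # → [(5, 930)]
```

At this toy scale a human could eyeball the spike; the point is that the same logic runs unchanged over millions of items per hour, which no analyst team could.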

Capgemini’s research supports the value of this approach, finding that 60% of respondents believe AI drives higher efficiency for in-house cyber analysts and 74% say AI enables a faster response to breaches.

However, for organizations that rely too heavily upon AI in these processes, there is a real risk of losing accurate, contextual understanding of threats. This, in turn, leads to too many false positives and an overly automated, playbook-oriented defense strategy that quickly falls out of step with the threats it faces.

This is because freshness of information is everything. And while AI-based approaches will optimize the fast, large-scale collection and categorization of threats, there is limited evidence that they can rapidly convert this raw data into accurate, actionable intelligence without significant human help.

Complementing human intellect and experience

We know that cyber skills are in short supply around the world, with as many as 3.5m job vacancies presently unfilled. This shortage is increasing the pressure to operate an AI-driven cyber strategy that requires minimal human intervention.

Good threat intelligence goes beyond envisaging human analysts as mere supervisors of automation. It recognizes the value-added wisdom of experienced people who can break the mould, think outside the box and apply much-needed context to the ‘almost-finished’ product supplied solely through AI and machine learning processes.

Blueliv’s analyst, research and reverse-engineering group (‘Labs’) is pivotal to the Blueliv threat intelligence proposition, providing added tactical and strategic context to threats, discovering new data sources and delivering detailed insights and analysis to support both individual customers and the wider cyber community. They ‘team’ with our automated systems to optimize threat intelligence performance, with a number of them actively working on new patent-pending machine learning technologies and pioneering entity extraction techniques.
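To make ‘entity extraction’ concrete, here is a deliberately simplistic sketch of the idea: pulling candidate indicators of compromise (IOCs) out of free-text reporting. The patterns and sample text are invented for illustration; the techniques referenced above go well beyond regular expressions, but the input and output of the task look like this.

```python
import re

# Naive patterns for two common indicator types; production entity
# extraction uses statistical and ML-based techniques far beyond
# simple regexes.
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
MD5 = re.compile(r"\b[a-fA-F0-9]{32}\b")

def extract_iocs(text):
    """Pull candidate indicators of compromise out of free text."""
    return {
        "ipv4": IPV4.findall(text),
        "md5": MD5.findall(text),
    }

report = ("The dropper beacons to 203.0.113.57 and writes a payload "
          "with hash 9e107d9d372bb6826bd81d3542a419d6.")
print(extract_iocs(report))
```

The hard part, and the reason human analysts remain essential, is not matching the strings but judging which extracted entities are meaningful in context.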

Human/machine teaming is also central to another of AI’s contributions to cyber defense: simulating relevant scenarios. The ability of these technologies to help predict and prevent new attacks explains their increasing significance to the ethical hacking toolkit.


AI plays a growing role in both cyber attack and defense, yet neither side achieves its objectives when it relies upon AI completely. Just as threat actors reap the greatest rewards when they add human intellect to the increasingly sophisticated logic and industry of machines, so this has emerged as the optimum formula for security teams.

Nothing beats the unique ability of humans to think, at least not yet. Only people can add the final 10% – the missing piece of the puzzle that means the whole makes sense – and make the kinds of decisive judgement calls that business leaders would prefer not to entrust to a machine. Working in harmony together, they make the best possible team.