Article by ThreatQuotient director Anthony Stitt.
Understanding an organisation’s threat landscape requires both the right threat data sources and the right prioritisation to derive actionable threat intelligence.
When analysing the threat data an organisation has access to, it’s natural to question the quality of the data received for the money, time and people invested in actioning it.
Threat data may come ‘free-of-charge’ with an existing security solution. However, it’s unlikely that there is zero cost to its implementation.
The typical security operations centre (SOC) is staffed with analysts, who are themselves an extremely limited resource. Minimising their chances of missing an indicator of compromise (IoC) or other important warning, while avoiding time wasted chasing false positives, should be a priority.
A recent paper by researchers from Germany and the Netherlands found that the false-positive rate is one of the critical buying criteria organisations consider when evaluating threat data services.
The mythical black box of intelligence that will foresee every threat to come unfortunately does not exist, according to the research. The findings showed that overlap among threat sources averages 2.5% to 4.0%, which means even if a SOC had an unlimited budget to buy every source available, it would be difficult to achieve anything close to full visibility.
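The overlap figure can be made concrete with a simple set intersection between two feeds' indicator sets. The following is an illustrative sketch, not a real measurement; the feed contents and the choice of denominator (the smaller feed) are assumptions:

```python
# Sketch: measuring pairwise overlap between two threat feeds'
# indicator sets. Feed contents are illustrative, not real data.

def overlap_pct(feed_a: set[str], feed_b: set[str]) -> float:
    """Percentage of the smaller feed's indicators also present in the other."""
    if not feed_a or not feed_b:
        return 0.0
    shared = feed_a & feed_b
    return 100.0 * len(shared) / min(len(feed_a), len(feed_b))

feed_a = {"203.0.113.7", "198.51.100.22", "evil.example.com", "bad.example.net"}
feed_b = {"203.0.113.7", "c2.example.org", "198.18.0.9", "drop.example"}

print(f"{overlap_pct(feed_a, feed_b):.1f}% overlap")  # 25.0% overlap
```

Even this toy example shows why buying more feeds yields diminishing coverage: each new source mostly adds indicators the others do not have.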
However, that does not mean the threat data is not valuable. With the average data breach in Australia costing AU$3.35 million, a threat data management system only needs to save an organisation once to pay for itself multiple times over.
This is why organisations utilise an average of eight threat data sources, as highlighted in the research. Having this many sources may seem excessive, until you consider the range of source types available: paid, open, community, governments and CERTs, industry groups or Information Sharing & Analysis Centers (ISACs), vendor sources (provided with products such as endpoint protection), trust relationships with partner organisations, and even internal sources.
With so many types of threat data sources, this raises the question: why is overlap among sources so low?
Whilst complete overlap is unlikely, it is fair to expect the amount of overlap to be higher than a few per cent. From a behavioural perspective, however, this stacks up: adversaries are known for adapting their behaviour to maintain an advantage over defenders, and the research demonstrated that what commonality did exist between any two sources was a month or more out of synchronisation. This alone may explain the lack of overlap, and it underlines why timeliness matters when actioning threat intelligence.
It may be that threat intelligence vendors are actively trying to maintain differentiation from their competitors, including open communities. Each source or vendor tends to specialise in certain areas and on specific threat groups.
This differentiation extends even to adversary naming, with each source using different names for the same threat groups. This complicates merging data from multiple sources, muddies analysis, and ultimately impairs security teams’ ability to action the intelligence.
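A common mitigation is a locally maintained alias table that maps each vendor's name for a group to one canonical label before merging. The aliases below are well-known public names for a single group (APT28), but the table itself is a simplified sketch of what a team would maintain:

```python
# Sketch: normalising vendor-specific adversary names to one canonical
# label before merging feeds. The alias table is illustrative and would
# be maintained and extended locally.

ALIASES = {
    "fancy bear": "APT28",
    "sofacy": "APT28",
    "strontium": "APT28",
    "sednit": "APT28",
    "apt28": "APT28",
}

def canonical_name(vendor_name: str) -> str:
    """Map a vendor's adversary name to the canonical label, if known."""
    return ALIASES.get(vendor_name.strip().lower(), vendor_name)

print(canonical_name("Fancy Bear"))  # APT28
print(canonical_name("Sofacy"))      # APT28
```

Unknown names pass through unchanged, so the table can be grown incrementally as new vendor naming schemes appear.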
The threat data sources chosen by the organisation should be based on how relevant the data is to its industry, places of operation, the types of assets owned, and the organisation’s adversaries.
Each source includes metadata to help with prioritisation and turning data into intelligence, which reduces the frequency of false positives a security team has to deal with as it actions this intelligence and supports longer-term defensive planning.
The most relevant and timely intelligence may come from attacks or breaches of the organisation’s own environment. As a result, companies are increasingly shifting their requirements towards internal intelligence collection, with phishing email analysis, malware sandboxing and DNS investigation as examples of quick wins.
In fact, the latest Gartner market guide for Security Orchestration, Automation and Response (SOAR) lists phishing email analysis as the top use case cited by organisations implementing SOAR.
Internally gathered threat data may lack details such as adversary attribution, which this research found was valuable for situational awareness, informing business decision making, and enriching internal intelligence.
This is why a threat intelligence platform (TIP) is beneficial, as it will automatically find any relationships that exist in the data, for example, matching an IoC from an internal source with the same IoC from an external source and merging related intelligence like adversary, campaign and TTPs.
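The matching-and-merging step can be sketched as follows. This is a minimal illustration, not a real TIP's data model; the record shapes, field names and sources are all assumptions:

```python
# Sketch: matching an internally observed IoC against an external feed
# record and merging the surrounding context (adversary, campaign).
from collections import defaultdict

internal = [{"ioc": "evil.example.com", "source": "phishing triage"}]
external = [{"ioc": "evil.example.com", "source": "paid feed",
             "adversary": "APT28", "campaign": "spearphish-campaign"}]

# One merged entry per indicator value, accumulating sources and context.
merged: dict[str, dict] = defaultdict(lambda: {"sources": set()})
for record in internal + external:
    entry = merged[record["ioc"]]
    entry["sources"].add(record["source"])
    for key in ("adversary", "campaign"):
        if key in record:
            entry[key] = record[key]

print(merged["evil.example.com"]["sources"])    # both sources linked
print(merged["evil.example.com"]["adversary"])  # APT28
```

The value of the merge is that an internally sighted indicator inherits the external context, turning a bare observation into intelligence an analyst can act on.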
Frameworks like MITRE ATT&CK also hold a wealth of adversary information, mapped down to the tactics, techniques and procedures (TTPs) that each adversary uses, and a TIP can link these attributes to the lower-level indicators that ATT&CK deliberately avoids because of how often they change.
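That linkage can be as simple as indexing volatile indicators under stable technique IDs. The technique IDs below are real ATT&CK identifiers (T1566 Phishing, T1071 Application Layer Protocol), but the indicator links are purely illustrative:

```python
# Sketch: grouping short-lived indicators under stable ATT&CK technique
# IDs, so indicators can churn while the technique-level view persists.

TECHNIQUE_NAMES = {"T1566": "Phishing", "T1071": "Application Layer Protocol"}

# (indicator, ATT&CK technique ID) links, e.g. produced by a TIP.
links = [("evil.example.com", "T1566"),
         ("198.51.100.22", "T1071")]

by_technique: dict[str, list[str]] = {}
for ioc, technique_id in links:
    by_technique.setdefault(technique_id, []).append(ioc)

for technique_id, iocs in by_technique.items():
    print(f"{technique_id} ({TECHNIQUE_NAMES[technique_id]}): {iocs}")
```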
TIPs utilise scoring to prioritise threat data based on the policies set by the organisation, which allows the security team to broaden its collection of sources without being overloaded or forced to filter through noise.
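A policy-driven score might weight an indicator by its source and demote it as it ages, echoing the synchronisation lag noted earlier. The weights, field names and age threshold below are assumptions for illustration; a real TIP exposes its own scoring model:

```python
# Sketch: policy-driven scoring of indicators. Source weights and the
# staleness threshold are illustrative policy choices, not defaults of
# any real product.
from datetime import datetime, timedelta, timezone

POLICY = {
    "source_weight": {"internal": 50, "paid feed": 30, "open": 10},
    "max_age_days": 30,
}

def score(indicator: dict) -> int:
    """Higher score = higher priority; stale indicators are demoted."""
    s = POLICY["source_weight"].get(indicator["source"], 0)
    age = datetime.now(timezone.utc) - indicator["first_seen"]
    if age > timedelta(days=POLICY["max_age_days"]):
        s //= 2  # demote stale intelligence
    return s

fresh = {"source": "internal",
         "first_seen": datetime.now(timezone.utc) - timedelta(days=1)}
stale = {"source": "internal",
         "first_seen": datetime.now(timezone.utc) - timedelta(days=90)}

print(score(fresh), score(stale))  # 50 25
```

Scoring thresholds like these are what let a team decide, per policy, which indicators are trusted enough to feed automated blocking and which need human review first.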
This is why Gartner includes threat intelligence platforms as a core capability for implementing SOAR: effective threat intelligence scoring enables confident automation, with network defence automation cited by 93% of respondents in the research as the top use case for threat intelligence.
An organisation may not have eight threat data sources to manage right now. However, after a closer look, there is probably more than one source that security analysts are responsible for actioning and managing.
Specifically, think about what internal sources the organisation is not currently leveraging and how these might reduce security risk. It might be time to consider the bigger picture about how best to turn threat data into threat intelligence for confident automation.