In the previous two blog entries, Part 1 and Part 2, we covered the entries from the earlier OWASP Top 10 lists that had remained static, moved or been merged, along with the reasons behind those changes. So, first up, a reminder of the top 10 for 2017 vs 2013.
The New Entries
The OWASP 2017 top 10 contains three new entries: A4 – XML External Entities, A8 – Insecure Deserialization and A10 – Insufficient Logging and Monitoring. Let's review each of these in turn, in the order they appear.
A4 – XML External Entities (XXE):
Older XML processors often allow the specification of an external entity – a URI that is dereferenced and evaluated during XML processing. Attackers can trick the processor into running hostile content, exploiting vulnerable code, dependencies or integrations.
For example, a billion laughs attack can be used to flood the XML parser and create a denial-of-service scenario. The idea is to define ten entities, each consisting of ten instances of the previous entity, with the document containing a single instance of the largest entity – often "lol" or a derivative thereof – resulting in one billion instances of the first entity being created. This effectively floods the parser by consuming, or exceeding, the memory available to process the XML. Other forms of XXE can expose protected information or files – such as passwords – to the attacker.
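The arithmetic behind the attack is easy to verify. The snippet below is a standalone sketch of the expansion maths, not an actual parser exploit:

```python
# Simulate the expansion of a "billion laughs" payload without parsing XML.
# The DTD defines 10 entities; each one repeats the previous entity 10 times,
# and the document body references only the last (largest) entity.
ENTITIES = 10
COPIES_PER_ENTITY = 10
SEED = "lol"  # the text of the first, innermost entity

# Entity 1 holds 1 copy of the seed, entity 2 holds 10 ... entity 10 holds 10^9.
instances = COPIES_PER_ENTITY ** (ENTITIES - 1)
expanded_bytes = instances * len(SEED)

print(f"{instances:,} instances, ~{expanded_bytes / 1e9:.0f} GB of memory")
# → 1,000,000,000 instances, ~3 GB of memory
```

A document of a few hundred bytes thus forces the parser to materialize gigabytes of text.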
DAST scanners are often unable to detect XXE-based vulnerabilities, so a more traditional manual testing approach is needed to uncover them. It's worth pointing out that many penetration testers are not familiar with the methodology needed to identify these flaws, so organizations should ensure that the companies they work with are fully capable of testing for XXE as part of their overall testing methodology.
Preventing and mitigating XXE-based attacks can be fairly straightforward. It starts with training the development teams so they understand both how the attack works and how to mitigate it in their code, alongside adopting less complex data formats such as JSON where possible, and updating and patching any XML processors and libraries to the most recent versions. Finally, use validation methods such as XSD to ensure incoming XML is in the correct form, and reject it if it is not.
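One simple belt-and-braces guard, assuming the application never needs DTDs at all, is to reject any incoming document that declares one before it ever reaches the parser. A minimal Python sketch (the function name and the string-matching rule are illustrative, not a library API):

```python
import xml.etree.ElementTree as ET

def parse_untrusted_xml(text: str) -> ET.Element:
    """Parse XML after rejecting DTD/entity declarations outright.

    DTDs are where both external and internal entities are declared, and
    most applications never need one, so refusing them up front blocks
    XXE and entity-expansion payloads before the parser sees them.
    """
    lowered = text.lstrip().lower()
    if "<!doctype" in lowered or "<!entity" in lowered:
        raise ValueError("documents with DTD or entity declarations are rejected")
    return ET.fromstring(text)

# A plain document parses normally; an entity-bearing payload is refused.
order = parse_untrusted_xml("<order><item>widget</item></order>")
```

Hardening libraries such as defusedxml implement the same idea more robustly, and XSD validation can then confirm that the document has the expected shape.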
A8 – Insecure Deserialization:
To understand deserialization, let's first address serialization. Serialization is the process of taking objects and converting them into a data format that can be easily stored and used later, often to reduce the amount of storage needed or to transmit the objects as part of a communication process.
Deserialization is the reverse: taking that data and turning it back into objects. Today the most common data format for this is JSON, and before that XML. However, many programming languages provide native capabilities for serializing objects, often with more scope and flexibility than JSON (or XML). The downside is that attackers can abuse these deserialization processes, which can result in remote code execution or data-tampering attacks.
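Python's pickle module illustrates the risk concretely: an object controls its own reconstruction via `__reduce__`, so a malicious payload can name any callable to be invoked during deserialization. In this sketch a harmless stand-in function takes the place of something dangerous like `os.system` (the class and function names are illustrative):

```python
import pickle

def attacker_controlled(command: str) -> str:
    # Stand-in for a dangerous call such as os.system(command).
    return f"would have executed: {command}"

class Exploit:
    def __reduce__(self):
        # pickle stores this (callable, args) pair; loads() invokes it.
        return (attacker_controlled, ("rm -rf /",))

payload = pickle.dumps(Exploit())   # what an attacker would send over the wire
result = pickle.loads(payload)      # the attacker's callable runs right here
print(result)  # → would have executed: rm -rf /
```

This is why the pickle documentation itself warns never to unpickle data from untrusted sources; the native serializers in Java, PHP and .NET have been abused in the same way.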
The impact of deserialization attacks varies with the criticality of the targeted application, but remote code execution attacks are some of the most dangerous, so organizations should be vigilant in both detection and remediation. Remediation, as is often the case, starts with developer education: ensuring that applications are designed not to accept serialized objects from untrusted sources or, when that isn't possible, to restrict them to the most primitive data types possible. Additionally, isolating deserialization code to run with low privileges, logging successes and failures of the deserialization process, and restricting incoming and outgoing network connectivity to and from servers and containers performing deserialization will all help to minimize the risks.
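When serialized input must be accepted, restricting it to primitive data types can be done by parsing with a format that only ever produces primitives and then validating the shape explicitly. A hedged sketch using JSON (the field names are hypothetical):

```python
import json

def load_order(data: str) -> dict:
    # json.loads can only ever produce dicts, lists, strings, numbers,
    # booleans and None -- never live objects carrying attacker-chosen code.
    obj = json.loads(data)
    if not isinstance(obj, dict):
        raise ValueError("expected a JSON object")
    # Validate the expected shape explicitly rather than trusting the wire.
    if not isinstance(obj.get("item"), str) or not isinstance(obj.get("qty"), int):
        raise ValueError("malformed order")
    return obj

order = load_order('{"item": "widget", "qty": 3}')
```

Unlike a native object deserializer, nothing in this path can execute code on the attacker's behalf; the worst a malformed payload can do is be rejected.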
A10 – Insufficient Logging and Monitoring:
Organizations are bombarded by logs; a whole segment of the security industry exists around parsing and aggregating log data to give analysts a better view of their world. And yet, at A10 in the OWASP top 10 2017, we have an entry for logging and monitoring. Too much information often leads to analysis paralysis – difficulty seeing the wood for the trees – and by the time the team discovers the attack it's too late. Or certain types of information are simply ignored, such as failed login attempts from users. Worse still, some logs are simply not captured or generated, such as logs from an in-flight penetration test or the automated DAST scanning that occurs each week.
Attackers rely on this flood (or lack thereof) of log information to mask their actions, passing unseen into an organization's infrastructure and achieving their ultimate goal. With the average time to detect a breach in 2016 being 191 days, this leaves a huge window of opportunity for attackers.
Mitigation here is beyond a traditional DAST (or SAST) scanner. Rather, organizations need to ensure their security analysts have a full view of the risk intelligence and are not discounting attacks because 'it's scan time' or 'there's a penetration test happening soon'. They should ensure that applications – especially critical business apps – log failures, transactions and other events, and that those logs can be sent to central repositories and correlated with other logs for wider context and understanding. Finally, there should be a robust incident response process in place that analysts can call upon should they deem it appropriate.
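As a sketch of the application-side half of this, the snippet below (logger name, function name and fields are all illustrative) routes authentication events through a dedicated security logger that could just as easily be pointed at a syslog or SIEM-forwarding handler:

```python
import logging

# A dedicated logger for security-relevant events. In production its handler
# would forward to a central log repository (syslog, a SIEM agent, etc.).
security_log = logging.getLogger("app.security")
security_log.setLevel(logging.INFO)

def record_login_attempt(username: str, success: bool, source_ip: str) -> None:
    """Log every authentication attempt, successes and failures alike."""
    if success:
        security_log.info("login ok user=%s ip=%s", username, source_ip)
    else:
        # Failed attempts are exactly the events that too often go ignored.
        security_log.warning("login FAILED user=%s ip=%s", username, source_ip)
```

The point is not the logging call itself but that the events land somewhere central, at a severity an analyst can alert on, rather than vanishing into a local file.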
Although there are some changes in the 2017 OWASP top 10, looking back over the past 10 years we consistently see the same attacks on the list, especially at the top. Injection-based attacks at the number one spot remain the biggest source of successful attacks and breaches, despite the abundance of training documents, coding best practices and even scanners available to help organizations address the challenge.
Ultimately the challenges facing organizations are, in many ways, self-perpetuating. Business pressures, time pressures and budgetary pressures all affect how applications are brought to market. Single point-in-time penetration testing, whilst a great start, is only as relevant as the moment the test was performed; if a company develops the application further afterwards, it has no way to spot newly introduced issues.
Likewise with application scanning: as we have seen, some of the top 10 cannot easily be detected by scanners. However, applying human intelligence and automated scanning in tandem, as well as educating the development team, can help to reduce the risks. Ultimately, as we move into 2018 these risks remain a big challenge, and it is up to each of us – vendors, developers and organizations – to make it as difficult as possible for attackers to gain a foothold in the business through weak application development processes.