Software used to be hand-crafted bits and bytes, but over the years it matured into modern programming languages, ready-made classes and libraries, and, later, an ever-growing dependency on the work of others – the wider community. A developer today can build things, at speed and at scale, that were hard to imagine even a decade ago. But to keep our solutions functioning – to keep fixing bugs and issues in the parts that were in fact made by someone else – we must depend on, and keep pulling in, updates to the components that form their foundation.
This was evident in Heartbleed, a vulnerability that hit so wide and so hard precisely because the flawed component was reused in so many other solutions. It can also be seen in Log4j, where it is still a challenge to predict where the vulnerability will surface next, as attackers find new vectors to trigger it deep inside others' applications.
This is a supply chain issue that will never end. Just as buying software from a small company with automatic updates poses a risk – imagine a ransomware group placing a legitimate bid and simply purchasing a widely deployed browser plugin, or some other small piece of software of low value but high distribution. An uncomfortable thought.
We have also seen this trend take hold within the open-source community recently, showcasing exactly how trusted relationships can go completely and utterly wrong.
Snyk has an excellent writeup here. In short, a creator set up a piece of malicious code which, if it finds itself in the wrong hands – such as in Belarus or Russia – triggers destructive routines that corrupt files. This was then released and made available to an unknowing general public. After some time it was included as a dependency in a more popular module; that module was in turn reused in many other projects, and the chain of events was in motion.
Essentially, a creator has full control over their own code – it is theirs, after all. It is not their fault if you, or anyone else, relies on it. It is not their fault if you opt to auto-update based on their submissions. But one can easily argue, and I would agree, that it is their fault if they intentionally hide a destructive update and abuse the trust of others to propagate malicious code across the ecosystem's dependency chains. However, it is only possible because of the rather important security and design decisions made by others.
Quite clearly, this was deliberate. Quite clearly, this has caused a great stir in the open-source community and garnered strong opinions. So how would you defend against this?
Firstly, decide whether you can afford to trust someone else with running code in the context of your application. Most of those looking into open-source components will already have answered yes here. Secondly, if you do, decide whether you consider the code safe, including its dependencies. This is hard, but integral to the use of open source. Thirdly, once you are happy, decide whether you are prepared to auto-update dependencies, or whether you want to stay in control of updates by pinning dependencies – in which case you must track those dependencies yourself, check whether bugs have been addressed, and then decide when to roll out the updates.
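As a minimal sketch of that last option – assuming an npm-based project, with purely illustrative package names – the difference between a floating and a pinned dependency is a single character in `package.json`:

```json
{
  "dependencies": {
    "some-widely-used-lib": "2.1.4",
    "another-lib": "^3.0.1"
  }
}
```

Here `2.1.4` is an exact pin: nothing changes until you edit it deliberately. The caret in `^3.0.1` accepts any compatible 3.x release, so a malicious patch version can walk straight in on the next install. Committing the lockfile and installing with `npm ci` rather than `npm install` extends the same reproducibility to transitive dependencies, and `npm outdated` and `npm audit` let you review and adopt updates on your own schedule.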
To sum up, this is a concern, but it is neither the first nor the last time malicious libraries will be made available to users. It is not long since we saw similar scandals surrounding ‘colors’ and ‘faker’ – and, interestingly, colors is one of the dependencies resurfacing in this bundle.