Edition 4: The mad maze of supply chain attacks and what it means for AppSec
There is some consensus on how to handle security defects in the software we write. We have less luck managing vulnerabilities in the 3rd party software we use. This edition outlines the challenges.
I excitedly picked up Gregory Rasner’s latest book “Cybersecurity and Third-Party Risk”, hoping to understand more about supply chain attacks. Since the onset of the pandemic, we have all watched in horror as supply chain attacks have risen. From SolarWinds to VGCA to Zyxel, there have been enough examples of hapless organizations being attacked through defects in 3rd party software they used. The book seemed like a good way to learn more.
The book starts out by laying out the problem. The author documents recent attacks, categorizes them, explains the possible motives and, in general, makes the reader smarter. Excited by my newfound baseline, I quickly jumped to Chapter 12, which promised to explain how to evaluate and audit the 3rd party software we use.
Unfortunately, the excitement was short-lived. The chapter contained the usual tropes of secure SDLC, pen testing, static analysis and software composition analysis. Don’t get me wrong, none of the points made were incorrect or unimportant. It’s just that it told us a bunch of things “we should do” (input) instead of spelling out where we should be (output & outcome) at the end of those activities.
The cliché goes that security is a cat and mouse game. The bad guys find an exploitable bug in your code; you fix it and deploy. They find a new way to send a phishing email; you improve your spam filters. If you have a more mature AppSec program, you find these issues in your SDLC and fix them even before the code is in prod. Essentially, for most areas of AppSec, there are reasonably well-defined problem statements, reasonably well-defined solutions and enough success stories to go around (many horror stories too, but that’s not the point :) ). But the more I read about supply chain attacks in the context of AppSec, the more “finding needles in a haystack” feels like the better cliché.
While there is reasonable consensus (as Rasner’s book also laid out) on “what” needs to be done, the tradeoffs needed to implement it are too damn high!
What we are supposed to do, and why it fails
For simplicity, let’s narrow the scope to the introduction of risk due to the usage of open source software in the software your organization writes. Here is what you usually hear about what needs to be done:
Make an SBOM - Ensure you have an inventory of all the open source software you use and where it is used. Also track additional metadata such as version numbers and licensing information.
Scan your code for vulnerable dependencies - Use one of many SCA tools to scan your repo and look for vulnerable versions. Ensure the data flows back into the SBOM.
Ensure the vulnerable dependencies are fixed - You could do this in one of three ways:
Upgrade the library to a non-vulnerable version
Replace the library with a comparable (and hopefully non-vulnerable) library
Write a wrapper around it to mitigate the vulnerability (this works in theory only. But more on that later)
Rinse and repeat - Steps 1-3 are not one-time activities. They need to be done all the time, with every release and with every new app you build.
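The loop above can be sketched in a few lines. This is a toy illustration only: the app, package names, versions, and the vulnerability feed are all made up, and a real setup would use an SCA tool and a standard SBOM format (e.g. CycloneDX or SPDX) rather than hand-rolled dictionaries.

```python
# A minimal sketch of the SBOM + scan loop described above.
# All package names, versions, and vulnerability data are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    name: str
    version: str
    license: str

# Step 1: the SBOM is, at its core, an inventory of components per app,
# plus metadata like versions and licenses.
sbom = {
    "payments-service": [
        Component("log4j-core", "2.14.1", "Apache-2.0"),
        Component("jackson-databind", "2.13.4", "Apache-2.0"),
    ],
}

# Step 2: a stand-in for an SCA vulnerability feed: (name, version) pairs
# known to be vulnerable.
vulnerable = {("log4j-core", "2.14.1")}

# Step 3: flag every (app, component) pair that needs an upgrade,
# a replacement, or a mitigating wrapper.
def scan(sbom, vulnerable):
    findings = []
    for app, components in sbom.items():
        for c in components:
            if (c.name, c.version) in vulnerable:
                findings.append((app, c.name, c.version))
    return findings

print(scan(sbom, vulnerable))
# → [('payments-service', 'log4j-core', '2.14.1')]
```

“Rinse and repeat” then just means re-running this scan on every release and every new app, against a continuously refreshed vulnerability feed.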
Assuming you can convince your dev teams to follow along with the above <sarcasm> masterplan </sarcasm>, get ready to deal with 3 serious unseen effects of implementing these changes. Here’s a list:
Transitive dependencies: Your microservice S has a dependency on open source component O. O, in turn, has a dependency on T, which has a critical bug. Now, you can’t really upgrade to the latest version of T yourself, given that T is not a direct dependency of your code. Your only option is to *hope* O releases a version that uses a non-vulnerable version of T. Not convoluted at all, right? :|
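The S → O → T situation can be made concrete with a tiny dependency-graph walk. The graph below is hypothetical; the point is that the only path to the vulnerable package runs through an edge (O → T) that you don’t own, so the fix has to come from a new release of O.

```python
# Hypothetical dependency graph: service S -> library O -> library T.
# You control only S's direct dependencies (the "S" entry).
graph = {
    "S": ["O"],
    "O": ["T==1.0"],   # O pins the vulnerable version of T, not you
    "T==1.0": [],
}
vulnerable = {"T==1.0"}

def paths_to_vulnerable(node, graph, vulnerable, path=()):
    """Return every dependency path from `node` to a vulnerable package."""
    path = path + (node,)
    if node in vulnerable:
        return [path]
    hits = []
    for dep in graph.get(node, []):
        hits.extend(paths_to_vulnerable(dep, graph, vulnerable, path))
    return hits

print(paths_to_vulnerable("S", graph, vulnerable))
# → [('S', 'O', 'T==1.0')]
```

Every path the walk finds that is longer than two nodes is a transitive dependency you cannot patch directly, only wait (or lobby) for the intermediate maintainer to bump.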
Unclear security impact: In a vast majority of cases, the impact of using a vulnerable component is very hard to determine (note: this is not a corner case). Sure, NVD tells you it’s a “critical” defect, but there are many caveats to work through before that criticality makes sense for you:
Do we use the vulnerable function of that dependency in our code? How do we even determine that (other than non-scalable approaches like manually reviewing code)?
Even if we use the vulnerable function, what other factors need to be true to exploit this defect? Are those factors present in our app? How do we determine that at scale?
In certain frontend frameworks, it’s common to use dependencies while coding that never make it to prod (e.g., devDependencies in node.js). Should such corner cases just be ignored?
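To give a flavor of what the first question involves, here is a very rough reachability check: does our code ever call the vulnerable function at all? The module and function names (`badlib.parse`) are made up, and this sketch misses aliased imports, dynamic dispatch, and indirect calls, which is exactly why doing this accurately at scale is hard.

```python
# A toy "do we even call the vulnerable function?" check using Python's
# ast module. `badlib.parse` is a hypothetical vulnerable function.
import ast

SOURCE = """
import badlib

def handler(payload):
    return badlib.parse(payload)   # the hypothetically vulnerable call
"""

def calls_function(source, module, func):
    """Return True if `source` contains a direct call to module.func()."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            f = node.func
            if (isinstance(f, ast.Attribute) and f.attr == func
                    and isinstance(f.value, ast.Name) and f.value.id == module):
                return True
    return False

print(calls_function(SOURCE, "badlib", "parse"))  # → True
print(calls_function(SOURCE, "badlib", "eval"))   # → False
```

Commercial SCA tools that advertise “reachability analysis” are attempting a much more thorough version of this, and even they struggle with dynamic languages and convoluted call chains.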
Opportunity cost: If you are the security team, the opportunity cost of determining impact is too high (imagine hundreds of apps using thousands of dependencies). “Just fix the damn thing” seems like the right advice; I’ve often heard arguments like “updating the go.mod takes less time than arguing about impact”. If you are the dev team, you have to update the code, run all your tests (and those tests are never “complete”, are they?), and ensure nothing breaks, all for an update whose security impact isn’t even obvious.
These complications are often too nuanced to explain “simply” to all stakeholders (we need to get better at this) and create unproductive drama between security and dev teams. This unfortunately increases the chances of chasing silver bullet solutions that don’t seem to exist.
While I have personally not seen an AppSec program that addresses all these problems well, companies like Netflix seem to have come up with an approach that works for them.
Vulnerabilities in open source libraries are only the tip of the iceberg. If you are an AppSec team, you need to think about the APIs you integrate with, the SaaS and COTS products your teams use, and so on. The road to maturity in defending against software supply chain attacks is a long one.
That’s it for today! Have you seen an AppSec program that handles software supply chain risk well (especially open source risk)? How does it solve the 3 problems outlined above? Drop me a line on Twitter, LinkedIn or email.
If you find this newsletter useful, do share it with a friend, colleague or on your social media feed.