Edition 3: "What AppSec assessment type are you?"
Among the oldest problems in AppSec is making tradeoffs between assessment types (SAST, DAST, IAST and so on). This edition attempts to build a framework to help you evaluate what works best for you.
Alright, I know there are already a ton of articles, magic quadrants and analyst “waves” which answer this question (or some variant of it). So, why are we talking about this again? Mostly because those frameworks don’t work for me. The irony of creating another framework because there are too many frameworks isn’t lost on me (HT xkcd), but let’s try this anyway :) . To be clear, I am not suggesting one type is “better” than another. But given we all have limited resources (time & budget), tradeoffs are inevitable. I hope this framework helps you make those tradeoffs.
So whether you are a small company figuring out where to start your AppSec journey, a growing company that needs to scale security, or a large enterprise figuring out how to reach the next level of maturity, my assertion is that the framework is broad enough to be used as a starting point for all assessment types. As with all frameworks YMMV, so feel free to modify it to suit your needs.
How to categorize assessments?
Before we get to the questions, it may help to be on the same page on how we categorize assessments.
While we can easily go into a long list of acronyms (SAST, DAST, SCA, IAST, RASP, VAPT and on and on), I think of assessments as any activity that finds security defects. Most assessments fall into one of three buckets:
Analyze the design of an application (Threat modeling, design review etc.)
Review code or config (manual code review, static analysis, linting, software composition analysis and so on)
Simulate an attacker against a running application (penetration testing, vulnerability assessment, red teaming etc.)
OK. Now we are ready to go :)
The key questions
There are three lines of questioning I would use before making a determination. Note that some of these areas overlap, and that’s fine. The list of seven questions in the diagram below may change for your org, but I think the three areas will remain the same.
How Customizable is it?
While Dr. Gary McGraw once (somewhat) famously said there is no special snowflake in software security, each of us has slightly different requirements. Most tools and methodologies are built for specific use cases and then generalized (think STRIDE, built for Microsoft). Given that, it’s important to evaluate whether the tool/methodology in question can be extended to meet your specific needs, even if that takes a bit of work. Modern AppSec tools do a really good job at this. Burp Suite’s extensions or Semgrep’s framework for adding new languages to their SAST are good examples of where customization works. On the contrary, a comprehensive threat modeling framework which takes weeks to run cannot be miniaturized to fit a thrice-a-day release schedule.
A simpler form of customization is the ability to discover custom defects, or to define custom ways to find known defects. Most SAST/DAST tools have some kind of custom rule engine, but the really good ones make it genuinely easy to write effective custom rules (think Semgrep, ZAP etc.).
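To make that concrete, here’s a toy sketch of the kind of check a custom rule lets you express. This is not Semgrep’s or ZAP’s actual rule syntax; it’s a hypothetical standalone Python script that flags calls passing shell=True (a common command-injection smell):

```python
import ast
import sys

# Toy "custom rule": flag any call that passes shell=True
# (in practice, subprocess.run/call/Popen). Real engines like
# Semgrep express this declaratively; the idea is the same.
class ShellTrueFinder(ast.NodeVisitor):
    def __init__(self):
        self.findings = []

    def visit_Call(self, node):
        for kw in node.keywords:
            if (kw.arg == "shell"
                    and isinstance(kw.value, ast.Constant)
                    and kw.value.value is True):
                self.findings.append(node.lineno)
        self.generic_visit(node)

path = sys.argv[1]
finder = ShellTrueFinder()
finder.visit(ast.parse(open(path).read()))
for lineno in finder.findings:
    print(f"{path}:{lineno}: shell=True (possible command injection)")
```

The point isn’t this specific check; it’s that a good engine lets you encode your org’s known-bad patterns this cheaply.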
Finally, most manual assessments shine at customization. For instance, a skilled penetration tester will be able to modify her methodology to meet your requirements with ease. Whether you can “scale” these efforts is another matter altogether (see next section).
How Scalable is it?
It’s a cliché to say “we are never 100% secure” (and like most clichés, that statement has a hint of truth in it). An important corollary is that we can never do anything once (or twice) and be “secure”. Meeting your security goals means you will have to do certain things (assessments are a great example) repeatedly, without compromising on the quality of your work.
That’s a roundabout way of saying scalability is important. In a tradeoff, consider choosing tools/techniques which will scale over time, as opposed to approaches with diminishing returns.
To me, scale essentially means two things:
Can some parts of the task (if not all of it) be automated? (See the sketch after this list.)
Can these assessments be performed by non-security folks too? This may seem like a “detail”, but enabling devs/DevOps and other groups is key to a successful AppSec program.
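On the automation question, the bar is often just “does it run on every change without a human driving it?”. Here’s a minimal CI-gate sketch, assuming the Semgrep CLI is installed and that rules/ is a hypothetical directory holding your custom rules; the JSON field names reflect Semgrep’s report format as I understand it, so verify them against your version:

```python
import json
import subprocess
import sys

# Run Semgrep against the repo with a local rules directory
# ("rules/" is a hypothetical path to your org's custom rules).
scan = subprocess.run(
    ["semgrep", "--config", "rules/", "--json", "."],
    capture_output=True,
    text=True,
)
findings = json.loads(scan.stdout).get("results", [])

for f in findings:
    print(f"{f['path']}:{f['start']['line']}: {f['check_id']}")

# Fail the pipeline on findings, so issues surface before merge
# instead of as a dramatic Slack message weeks later.
sys.exit(1 if findings else 0)
```

The same wrapper pattern works for most scanners that emit machine-readable output, which is exactly what makes them scale in a way a purely manual assessment can’t.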
How much Drama does it cause?
Security teams often cause a L.O.T. of drama in organizations. Picture this: you are an engineering manager who has all her team’s tasks planned out for the next sprint. Things are going well, you are on track, and BAM, a Slack message from Security tells you there’s a Critical defect in an app you maintain (the dev who wrote the code is long gone). You’ve got to fix this in 48 hours or escalations will reach the entire org! And just like that, your sprint plan goes for a toss (and probably your quarterly plan too).
That seemed bad, but at least it was a legitimate defect. Now imagine tools which always tell you there are 987 open Critical defects (I am looking at you, old-school big-box SAST tools). That causes FUD and takes us nowhere good.
Drama is not always bad though. If you are dealing with a team that is hard to convince about the importance of Security (where you need to “sell” security), a red team finding which shows a dump of their database is a lot more convincing than your static analysis tool politely asking them to parameterize that query.
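For what it’s worth, the polite tool is right. Here’s a minimal sketch of the fix it’s asking for, using Python’s built-in sqlite3 purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
user_input = "alice' OR '1'='1"

# Vulnerable: string concatenation lets user input rewrite the query.
# rows = conn.execute("SELECT * FROM users WHERE name = '" + user_input + "'")

# Parameterized: the driver treats user input strictly as data.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))
print(rows.fetchall())
```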
My point is: we rarely consider the impact assessments have on the humans in the team/company. Adding this to your evaluation criteria will go a long way toward making the right tradeoffs.
What next?
So, that’s the framework. What’s the best way to use it? I’d follow the steps below:
Define your customization and scalability requirements. I find it useful to think about what the requirements will look like 24 months from now and build for that, but your timelines may vary.
Think about your org culture and determine what kind of drama helps versus what kind of drama hurts. If your leadership and dev teams are already sold on Security, focus on assessments which enforce best practices and can operate at scale. If you still need to convince your stakeholders that Security is important, nothing works better than a well-targeted penetration test or a red team exercise by a skilled third party.
The framework leaves out "type of defects discovered" on purpose. Given how quickly new kinds of defects show up, you are better off choosing something that is customizable and scalable, instead of (say) optimizing for coverage. This breaks down, however, if you are in a truly specialized industry; in that case, add a fourth dimension to the framework. Let's call it "applicability to your domain".
Now, evaluate each assessment method in scope against the framework. Sprinkle in budget and bandwidth constraints and you have a framework that (hopefully) works!
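If it helps to see the mechanics, here’s a hypothetical scoring sketch in Python. The assessment types, criteria names and scores are all made up for illustration; the point is only that once you score each method on the three areas, the ranking (and the tradeoff) falls out:

```python
# Hypothetical 1-5 scores per criterion; yours will differ.
# "drama_fit" means how well the drama profile suits your culture
# (per the discussion above, drama can help or hurt).
criteria = ("customizable", "scalable", "drama_fit")
assessments = {
    "threat modeling":        (4, 2, 4),
    "SAST with custom rules": (4, 5, 3),
    "third-party pen test":   (5, 1, 5),
}

# Rank by total score; swap in weights that match your org's needs.
for name, scores in sorted(assessments.items(), key=lambda kv: -sum(kv[1])):
    detail = ", ".join(f"{c}={s}" for c, s in zip(criteria, scores))
    print(f"{name}: total={sum(scores)} ({detail})")
```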
OK. That’s it for today. Are there other factors you would consider which don’t fit the framework? Would you like to see this framework applied to a fictional company (to see what the results look like)? HMU! You can drop me a line on Twitter, LinkedIn or email. If you find this newsletter useful, do share it with a friend, colleague or on your social media feed.