Edition 13: SAST primer - Goals of a SAST program
Third in a 4-part primer on SAST. This edition talks about what a successful SAST program looks like.

Back from a break
Boring AppSec is back from a 3-week Deepavali break. After some quality time with the family and a few extra inches around the waist, regular programming returns: newsletters will drop every weekend.
Before we go ahead
Two notes before we get started:
While parts 1 and 2 of the SAST primer presented facts, parts 3 and 4 are opinionated guides. There is no one set way of building a SAST program. Based on your company’s culture, some or all of the points made in this edition may not work for you.
SAST can also be used by Security teams as a part of security assessments. I would not say that's part of a "SAST program". Security teams need to use everything they can to find security defects and SAST tools are one of them. This post focuses on the program which uses SAST to find security issues as early as possible in the SDLC.
Goal of a SAST program
A mature SAST program must identify all violations of secure coding practices and provide actionable guidance to remediate the violations. Furthermore, the program must provide enough data to measure the mean time to detect (MTTD) and mean time to remediate (MTTR) for each violation.
Let’s break this down.
Identifying violations of secure coding practices v/s finding defects
Finding defects and proving exploitability is a lofty goal for a SAST tool. By chasing it, you give up a key advantage of SAST: it is automated and scalable (more in part 1 of this primer). You are better off convincing developers to fix all violations of secure coding guidelines than asking for proof of exploitation.
But this raises the question: are secure coding guidelines (SCGs) a prerequisite for implementing a SAST program? Not quite. You can get to a lot of low-hanging fruit (e.g.: secrets checked into code) with SAST without these guidelines, but building a mature SAST program without SCGs is hard. Building SCGs effectively is a program of its own, but that’s for a different post.
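To make the low-hanging fruit point concrete, here is a minimal sketch of a secrets check. The patterns and file selection are illustrative only; real secret scanners ship far more thorough, better-tested rule sets.

```python
import re
import sys
from pathlib import Path

# Toy patterns for illustration only; real secret scanners cover many more
# credential formats and have far better precision.
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Hardcoded password assignment": re.compile(r"""password\s*=\s*['"].+['"]""", re.IGNORECASE),
}

def scan_file(path: Path) -> list[str]:
    """Return one finding string per line that matches a secret pattern."""
    findings = []
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return findings
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append(f"{path}:{lineno}: possible {name}")
    return findings

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    # Toy file selection; a real scanner walks everything, including git history.
    for glob in ("*.py", "*.yaml", "*.json"):
        for file in root.rglob(glob):
            for finding in scan_file(file):
                print(finding)
```

Even a check this crude catches real mistakes, and it needs no secure coding guideline to justify it: nobody disputes that credentials don’t belong in source control.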
Actionable guidance
Most remediation guidance is canned and lacks context. Once you have SCGs, it becomes much simpler to write meaningful remediation guidance. This isn't uniformly applicable to all rules though (e.g.: it's hard to give "actionable" advice when someone is using a symmetric encryption algorithm incorrectly. You could say "just use algorithm Y", but that's not actionable).
In addition to setting context, it may also be useful to provide a sample “secure” code snippet. Ideally, it should be possible to simply replace the insecure snippet with the secure one. Taking this to its logical conclusion, an interesting development in SAST tools is their attempt to auto-fix code. It’s a fascinating idea, but for now, it’s just an experiment.
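As an example of what a “drop-in” secure snippet could look like, here is a minimal sketch for a rule flagging unsafe YAML parsing in Python (assuming PyYAML; your SCGs and tooling will dictate the actual rules and wording):

```python
import yaml

# Insecure: yaml.load with the full Loader can construct arbitrary Python
# objects from attacker-controlled input.
def parse_config_insecure(raw: str):
    return yaml.load(raw, Loader=yaml.Loader)

# Secure drop-in replacement: safe_load only builds plain data types
# (dicts, lists, strings, numbers), which is all config parsing needs.
def parse_config_secure(raw: str):
    return yaml.safe_load(raw)
```

The remediation guidance for a rule like this can then be a one-line swap, which is exactly the kind of fix developers will actually make.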
Measure MTTD and MTTR
There are many other numbers that are important to track while you build your program (e.g.: tool adoption across departments), but once you have a mature program, the only 2 pieces of information you need are MTTD and MTTR for each violation your SAST tool finds. Everything else is downstream of these 2 numbers (more details in Edition 6 of the newsletter).
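As a sketch of what measuring these two numbers could look like, here is one reasonable definition in code. The Finding fields are hypothetical; map them to whatever timestamps your scanner and ticketing system actually record.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class Finding:
    # Hypothetical fields; map these to your own scanner/tracker records.
    introduced_at: datetime            # when the violating code landed
    detected_at: datetime              # when the SAST tool flagged it
    remediated_at: Optional[datetime]  # when the fix shipped (None if still open)

def mttd(findings: list[Finding]) -> timedelta:
    """Mean time to detect: violation introduced -> SAST tool flags it."""
    deltas = [f.detected_at - f.introduced_at for f in findings]
    return sum(deltas, timedelta()) / len(deltas)

def mttr(findings: list[Finding]) -> timedelta:
    """Mean time to remediate: SAST finding -> fix shipped (closed findings only)."""
    deltas = [f.remediated_at - f.detected_at for f in findings if f.remediated_at]
    return sum(deltas, timedelta()) / len(deltas)

if __name__ == "__main__":
    findings = [
        Finding(datetime(2022, 11, 1), datetime(2022, 11, 2), datetime(2022, 11, 9)),
        Finding(datetime(2022, 11, 3), datetime(2022, 11, 3), None),
    ]
    print("MTTD:", mttd(findings))  # average detection lag across all findings
    print("MTTR:", mttr(findings))  # average remediation lag, closed findings only
```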
While “goals” are desirable, measurable outcomes of a program, “rules” are the inputs we need to provide to get to those desired outcomes (a.k.a. goals).
Rules to follow and traps to avoid
We will talk about how to achieve the goals in the next edition of this newsletter. Before we get there, here are some rules to follow as you build your SAST program.
Rule 1: Ensure every line of code change goes through SAST with a relevant set of rules turned on
This can be automated in the build process (e.g.: integrate into your CI pipeline) or done on a "regular" basis (e.g.: weekly scans on all changed code).
The “relevant set of rules” is more interesting. In this case, it is more useful to focus on high-confidence rules with a low false positive rate (again, we will talk about how in the next edition).
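Here is a minimal sketch of a CI step that scans only the code changed on a branch with a curated, high-confidence rule set. It assumes Semgrep as the scanner and a hypothetical ci-rules.yaml rule file; swap in your own tool and flags.

```python
"""Run a SAST scan on only the files changed in this branch (CI-step sketch)."""
import subprocess
import sys

def changed_files(base_ref: str = "origin/main") -> list[str]:
    # Files modified on this branch relative to the base branch.
    # --diff-filter=d excludes deleted files, which can't be scanned.
    out = subprocess.run(
        ["git", "diff", "--name-only", "--diff-filter=d", f"{base_ref}...HEAD"],
        capture_output=True, text=True, check=True,
    )
    # Toy filter; adjust to the languages you actually scan.
    return [f for f in out.stdout.splitlines() if f.endswith(".py")]

def main() -> int:
    files = changed_files()
    if not files:
        print("No changed files to scan.")
        return 0
    # --config points at the curated, high-confidence rule set (hypothetical file);
    # --error makes the step fail the build when findings exist.
    result = subprocess.run(
        ["semgrep", "--config", "ci-rules.yaml", "--error", *files]
    )
    return result.returncode

if __name__ == "__main__":
    sys.exit(main())
```

Scanning only changed code keeps the feedback loop fast; the existing “debt” still needs its own sweep, as the side note below says.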
Side note: When you begin your SAST program, you will have a lot of “debt” to deal with. It’s important to pay off that debt, in addition to scanning all the new code.
Rule 2: Ensure developers look at each result and take appropriate action
To repeat what’s now a cliché on this newsletter: your security posture does not improve when you find defects, it improves when you fix them. It’s important to ensure developers look at the violations and fix them. Nudging and incentivizing developers is a better way to do it than diktats and ultimatums (e.g.: use MTTR scores as an incentive. More next week).
Rule 3: Have sensors across your AppSec program and use them to improve SAST
Given SAST is the most scalable of all your AppSec assessment methods, there is value in finding as many violations as possible through SAST, while still adhering to Rules 1 and 2. To do this, it’s important to constantly think about how to find more violations through SAST. Examples of how you can do this include:
Write custom rules for defects found in penetration testing and apply them to relevant applications. Do this only if you can translate the defect into an SCG; force-fitting defects into custom rules is a trap (see Trap 1). A toy sketch of such a custom rule follows this list.
Write rules for new kinds of vulnerabilities published for the tech you use (e.g.: Heartbleed is out, write a SAST rule to detect it)
Phase out noisy rules with a higher chance of false positives. Nothing erodes confidence in SAST quicker than noisy rules. You can continue to run them as part of security assessments, but they shouldn’t be part of your SAST program.
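To make the custom-rule idea concrete, here is a toy sketch that translates an SCG like “no dynamic code evaluation” into an automated check using Python’s ast module. A real tool’s rule language (Semgrep, CodeQL, etc.) expresses this far more compactly; the SCG name and the check are illustrative.

```python
import ast
import sys

class EvalCallFinder(ast.NodeVisitor):
    """Toy custom rule: flag every call to the built-in eval()."""

    def __init__(self, filename: str):
        self.filename = filename
        self.findings: list[str] = []

    def visit_Call(self, node: ast.Call) -> None:
        # Match direct calls to the name `eval`; a real rule would also handle
        # aliases, exec(), and taint from user input.
        if isinstance(node.func, ast.Name) and node.func.id == "eval":
            self.findings.append(
                f"{self.filename}:{node.lineno}: eval() violates SCG 'no dynamic code evaluation'"
            )
        self.generic_visit(node)

def check(filename: str) -> list[str]:
    with open(filename) as f:
        tree = ast.parse(f.read(), filename=filename)
    finder = EvalCallFinder(filename)
    finder.visit(tree)
    return finder.findings

if __name__ == "__main__":
    for path in sys.argv[1:]:
        for finding in check(path):
            print(finding)
```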
Trap 1: Use SAST to find all security defects possible
Moving the focus away from SCGs to finding defects is a recipe for disaster. The nature of SAST makes it easy to write custom rules that lead to false positives. Forcing SAST tools to “find critical defects” (worse, measuring their success by the number of critical defects found) will lead to too many false positives, making adoption harder.
Trap 2: Force developers to fix all violations before deployment
This one’s a little controversial, as following this trap can sometimes lead you to other traps (a “meta-trap”? :) ). Security gates should certainly be a discussion, and you need to arrive at a conclusion based on your company's culture (e.g.: if you release code once every 3 months, gates are absolutely needed). In general though, for most modern SDLCs, strict gates are counterproductive. There are security benefits to gates, but they add a lot of drama and anxiety to the release process. Given deployments are not all or nothing anymore (limiting releases to a few users only is commonplace), there is a lot of room for nuanced discussions on remediation timelines.
That’s it for today! In the next edition, I will talk about how to get to the goals defined in this post. Do these goals and rules make sense? Are there others you’d like to add? Hit me up! You can comment here or drop me a line on Twitter, LinkedIn or email. If you find this newsletter useful, do share it with a friend or colleague, post it on your social media feed, or simply forward this email.