Edition 18: The diminishing returns of DAST
This edition argues that if your software development relies on continuous integration and deployment (CI/CD), DAST should be avoided as an assessment methodology.
If, like me, you started working in AppSec in the 2010s, you’ll remember Dynamic Application Security Testing (DAST) as the go-to assessment methodology for web applications. At that time, the OWASP Top 10 was new(ish), most existing security tools did not focus on web applications (think nmap, wireshark, etc.), and static analysis tools (SAST) were so hard to use that it felt like you needed to take Snape’s Defence Against the Dark Arts class to master them. Here’s why DAST tools stood out:
DAST tools could discover defects from the OWASP Top 10.
DAST tools completed assessments in roughly 30 minutes to 4 hours, even for large apps. Contrast this with SAST tools, which at the time took far longer for large apps (I distinctly remember letting IBM AppScan Source run all night on a Java app with a million LOC).
DAST tools made it simpler to triage results. They had nifty features like a “screenshot” of the evidence, let you mark defects as false positives right in the tool, and so on. On average, it would take only a few hours to triage a few hundred results (say, 2 hours per 100 defects for a skilled Pentester).
If you are new to AppSec, you are probably reading the above in horror. 4 hours for a scan? 2 hours to triage defects? Reliance on “skilled” Pentesters to even know if the defects are real?
Hypothesis
What were once advantages for DAST tools are now liabilities. This is not because DAST tools have degraded over time, but because the way we build software has changed.
Specifically, here are the reasons why DAST is now a liability:
Most AppSec tooling is now part of the CI/CD pipeline. This means even a 30-minute scan dramatically slows the pace of development.
In theory, you could scan *only* the changes made as part of the pull request (PR) being deployed and reduce scan time, but the efficacy of such a scan is very low. If your software is built using microservices, it’s hard for DAST to find anything beyond low-hanging fruit (e.g., missing headers) with high confidence.
Increasingly, the results of AppSec tools are reviewed by devs and QA teams. While these teams are trained in fixing security defects, triaging results is not a skill most of them possess. This means tolerance for false positives has gone down dramatically.
Looking for security defects in applications is now a primary skill set for most Pentesters. This means that, in addition to technical defects (such as injection attacks), they can also find business logic defects when they perform manual penetration tests, which leaves less for an automated DAST scan to add.
All this means there is no real place for traditional DAST in today’s AppSec landscape. However, there are still a few gaps that DAST, used intelligently, can address:
DAST tools can be repurposed to look only for low-hanging fruit (e.g., missing security headers). This is especially helpful when run outside the pipeline, say on all production systems (a minimal header-check sketch follows this list).
DAST tools can be run in a non-blocking way in the pipeline, with the results passed on to the Security team, who can deep-dive if they see red flags; a non-blocking wrapper is sketched after this list (YMMV: this may not work in companies with high dev velocity and a stretched Pentest team).
Most companies now have automation suites for running quality assurance (QA) checks. You could take high-confidence rules from DAST engines (tools like ZAP really help with this) and import them into existing QA checks; a pytest-style sketch follows the list.
You can supercharge the above step with a security champions program. If your champions understand security well enough, they can start writing application-specific security test cases in the QA automation suite. DAST rules can be the first step, but there is no reason to stop there (the same sketch below includes an application-specific example).
Tools such as nuclei can be used for regression (was the “fix” that went in the last release reverted?) or to test a single rule across many applications (handy when a new, dangerous CVE drops); a small wrapper for this is sketched below.
Finally (and I don’t expect this to change anytime soon), DAST tools are still a great companion to Pentesters. Kudos to tools like ZAP and Burpsuite, which have transformed from traffic proxies into full-blown scan engines. This means Pentesters can now leverage DAST to automate many of their test cases. As always, Pentesters have a much higher tolerance for false positives, so you can afford to go crazy and turn on all the rules you want :) (the last sketch below shows this via ZAP’s API).
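To make the low-hanging-fruit idea concrete, here is a minimal sketch of a header-only check you could run outside the pipeline. The host list, the header set, and the plain-text output are all assumptions for illustration; swap in whatever inventory and reporting you already have.

```python
# check_headers.py: look for one class of low-hanging fruit (missing security
# headers) across production systems, outside the CI/CD pipeline.
# The host list and expected headers below are placeholders.
import requests

HOSTS = [
    "https://app.example.com",     # hypothetical production endpoints
    "https://portal.example.com",
]

EXPECTED_HEADERS = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
]

def missing_headers(url: str) -> list[str]:
    """Return the expected security headers that the response does not set."""
    response = requests.get(url, timeout=10, allow_redirects=True)
    return [h for h in EXPECTED_HEADERS if h not in response.headers]

if __name__ == "__main__":
    for host in HOSTS:
        gaps = missing_headers(host)
        print(f"{host}: {'OK' if not gaps else 'missing ' + ', '.join(gaps)}")
```

Something like this can run from a scheduled job against your full production inventory rather than per-commit, which keeps it out of the developers’ critical path.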
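For the non-blocking pipeline option, the sketch below wraps a baseline scan so that findings never fail the build; the report is simply archived for the Security team. It assumes ZAP’s packaged zap-baseline.py script is available on the build agent and that the report location is somewhere the team can read, both of which are assumptions about your setup.

```python
# nonblocking_dast.py: run a DAST baseline scan in CI without gating the build.
# Assumes ZAP's zap-baseline.py is on the agent (e.g. via the ZAP Docker image);
# the target URL and report location are placeholders.
import subprocess
import sys
from pathlib import Path

TARGET = "https://staging.example.com"   # hypothetical staging deployment
REPORT = Path("dast-reports") / "zap-baseline.json"

def run_baseline_scan() -> int:
    REPORT.parent.mkdir(exist_ok=True)
    result = subprocess.run(
        ["zap-baseline.py", "-t", TARGET, "-J", str(REPORT)],
        check=False,  # do not raise on findings; we only want the report
    )
    return result.returncode

if __name__ == "__main__":
    code = run_baseline_scan()
    print(f"Baseline scan exited with {code}; report at {REPORT} for the Security team.")
    sys.exit(0)  # always exit 0 so this stage never blocks a deploy
```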
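For porting high-confidence rules into a QA suite, and for champion-written, application-specific cases, here is a rough pytest sketch. The base URL, the /search endpoint, the order ID, and the token placeholder are all made up for illustration; the point is that a DAST-style check and a business-logic check can live side by side in the same automation suite.

```python
# test_security_checks.py: DAST-style and application-specific security checks
# inside an ordinary pytest QA suite. All endpoints and identifiers are
# hypothetical placeholders.
import requests

BASE_URL = "https://staging.example.com"

def test_reflected_input_is_encoded():
    """A high-confidence rule ported from a DAST engine: a script payload sent
    as input must not be reflected unencoded in the response body."""
    payload = "<script>alert(1)</script>"
    response = requests.get(f"{BASE_URL}/search", params={"q": payload}, timeout=10)
    assert payload not in response.text

def test_user_cannot_read_another_users_order():
    """An application-specific (business logic) check a security champion might
    add: user A must not be able to fetch user B's order by ID."""
    session = requests.Session()
    session.headers["Authorization"] = "Bearer <user-a-token>"   # placeholder
    response = session.get(f"{BASE_URL}/orders/order-of-user-b", timeout=10)
    assert response.status_code in (403, 404)
```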
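For the nuclei use case, this sketch shells out to the nuclei CLI to run a single template across a list of applications, which is handy both for regression checks and for sweeping the estate when a new CVE drops. The template path and target list are placeholders, and the flags should be checked against your nuclei version.

```python
# run_single_template.py: run one nuclei template against many applications,
# e.g. to confirm last release's fix has not been reverted, or to check a new
# CVE across the estate. Assumes the nuclei binary is on PATH.
import subprocess

TEMPLATE = "custom-templates/regression-last-release.yaml"  # hypothetical template
TARGETS_FILE = "targets.txt"                                # one URL per line

subprocess.run(
    [
        "nuclei",
        "-t", TEMPLATE,       # a single rule/template
        "-l", TARGETS_FILE,   # many applications at once
        "-o", "nuclei-results.txt",
    ],
    check=False,  # review the results file rather than failing the script
)
```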
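Finally, for pentesters who want to turn everything on, here is a rough sketch using ZAP’s Python API client to enable every active scan rule and run a spider plus active scan against a target. It assumes a locally running ZAP instance on 127.0.0.1:8080 with an API key, the Python client package installed, and a target you are authorised to test; all of those details are assumptions about your local setup.

```python
# zap_full_scan.py: a pentester-oriented sketch that enables all active scan
# rules in a locally running ZAP instance and scans an in-scope target.
# Assumes the ZAP Python API client (e.g. the python-owasp-zap-v2.4 package)
# and ZAP listening on 127.0.0.1:8080 with the API key below.
import time
from zapv2 import ZAPv2

TARGET = "https://testbed.example.com"   # hypothetical, authorised target
zap = ZAPv2(
    apikey="changeme",                   # placeholder API key
    proxies={"http": "http://127.0.0.1:8080", "https": "http://127.0.0.1:8080"},
)

# Pentesters tolerate false positives, so turn every active scan rule on.
zap.ascan.enable_all_scanners()

# Spider first so ZAP knows the application, then run the active scan.
spider_id = zap.spider.scan(TARGET)
while int(zap.spider.status(spider_id)) < 100:
    time.sleep(2)

scan_id = zap.ascan.scan(TARGET)
while int(zap.ascan.status(scan_id)) < 100:
    time.sleep(5)

# Pull the findings back for manual triage alongside your proxy work.
for alert in zap.core.alerts(baseurl=TARGET):
    print(alert["risk"], alert["alert"], alert["url"])
```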
In summary, depending on who you are, you will have to think differently about DAST today:
Security manager: If you have the charter to run the AppSec program, DAST should not be among the top 3 things you do. There is some value in running DAST, but the opportunity cost of setting up the tool in the pipeline, tuning rules to reduce false positives, and driving adoption is not worth it.
Pentester: Nothing changes. You should continue to use the tools you love (ZAP, Burpsuite & Nuclei seem to be the favorites) to discover more defects. Bonus points if you can customize the tools to meet your program goals.
Developer/QA: Consider how you can port some DAST rules into your QA automation suite. You can leverage the AppSec team or Champions to find out what rules work best. Depending on the purpose of the automation suite (unit, regression, integration, performance), the rules you may want to use will vary.
That’s it for today! Am I being too skeptical of DAST? Are there other use cases for DAST in the pipeline? Do you successfully run a DAST program within your organization? Tell me more! You can drop me a message on Twitter, LinkedIn, or email. If you find this newsletter useful, do share it with a friend, a colleague, or on your social media feed.