
One of the advantages of building a company is that you get to talk to many people. In the last 12-15 months, I’ve spoken to 100+ Security teams (mostly AppSec, but some others too). Over time, you start to see patterns you didn’t know existed. You’ll see similarities between how large, old-school enterprises and cutting-edge tech companies work. You’ll see how much things have changed with AI and how some remain the same.
One of the patterns I’ve noticed is how much of Security revolves around the seemingly boring IT function of “change management.” From approvals to reviews to secure defaults to “move fast and break things,” every company has its approach to change management, and how Security works (or doesn’t work) in these companies depends on their change management culture.
A little bit of backstory
As you probably know, we’ve built a Security Design Review (SDR) product at Seezo. One reason we built this product is that I’ve struggled to scale SDRs in the past, both as a consultant and as an AppSec leader.
Once we built the initial version of the product, we started talking to many Security teams about it, and we found an interesting pattern. Most companies did not think they had a “security design review” problem, but almost all of them felt the product solved a different problem they already had. When we dug deeper, it turned out they called SDR by another name: “Security assessments,” “DesRev,” “Threat Modeling,” “Internal Security Reviews,” “Software risk assessments,” and so on.
Security & IT change management
When we abstracted the learnings from all these conversations, here’s what I found:
A large chunk of security is a subset of “technology change management” (the rest is managing production assets; read Edition 28 for more). Here’s how it goes:
1. Companies decide to introduce an IT change (buy new software, build new features, deploy a new workload, integrate with a partner, etc.).
2. In a sufficiently large organization (say, anything > 50 people in tech), this introduces risk. Depending on the organization, it could be legal risk, privacy risk, availability risk, security risk, or all of them.
3. Before a change is implemented, the security team wants to review it and consider its potential impact on security.
#3 is called different things in different kinds of companies:
Tech-first companies, where most software changes are code written by internal developers, call these “Security design reviews.”
For companies that purchase software with few or no in-house developers, this is a subset of TPRM or “third-party risk management.”
For large, complex companies (think Fortune 500 enterprises), a “security assessment” involves first understanding what kinds of security reviews need to be done on the change (AppSec, CloudSec, SaaS-Sec, and so on), and then performing those reviews.
The outcome of the assessments is also different:
1. Some companies use this as an opportunity to provide “requirements” to engineering teams or purchasing teams. The idea is to enable them to implement the change securely.
2. In other companies, these reviews are gates. The engineering/purchasing team cannot proceed to the next stage without explicit approval.
3. Finally, in a small subset of companies, this is merely a compliance checklist item. This is different from #2 because no one cares about the quality of the assessment itself, and a large majority of requests are approved with minimal or no caveats.
What engineering and IT teams do with the results also differs:
When developers build software to implement the change, these requirements are expected to become part of the SDLC and to be validated during code review (SAST), infra review (CloudSec/CSPM), and PenTesting. This is where almost all of the AppSec industry is focused today (think Snyk, Semgrep, etc.).
Where purchasing decisions need to be made, some of these requirements may become part of hard-to-enforce requirements on software vendors or system integrators.
Irrespective of which combination of activity and outcome a company lands on, a few patterns are common:
Hard to scale: From the Security team’s perspective, this review is manual, inconsistent, and hard to scale. The input is non-deterministic (plans, diagrams, documents), and many potential risks and solutions are considered “implementation details.” This makes it hard to determine what kind of requirements should be part of the output.
Slows things down: From the builder’s perspective (IT, engineering, product), this review is a blocker. Best case: it takes weeks to complete and provides some meaningful feedback on security. But the average case is that it takes too long to complete, offers non-actionable insights, and slows down the entire process. This leads many teams to find ways to subvert the process instead of participating in good faith.
So, this is the status quo: Risk managers perform activities with unclear outcomes to avoid introducing new risks. Builders get the point, but don’t always find the process meaningful and try to subvert it.
There’s got to be a better way!
I know AI agents that automate all of humanity are all the rage right now, but I still believe a core use case for LLMs is to “compress manual workflows.” Automating change management reviews seems to be the perfect use case for LLMs. There is a possibility that all review domains (Security, Legal, Compliance, etc.) can be automated, but my focus is on the security review.
We can essentially compress the process (irrespective of the kind of organization) into four steps:
1. Gather relevant information
2. LLM reviews all the information and generates a list of requirements and a go/no-go decision
3. [Optional] A human reviews the results and augments them
4. Send the results to the consumer in a format they desire
This approach accepts all kinds of input (LLMs are surprisingly good at processing different types of input), dramatically reduces review time (step #2 alone can compress weeks into minutes), and still preserves the “human touch” when needed.
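To make the four steps concrete, here is a minimal sketch of what such a pipeline could look like. This is not how Seezo (or any particular product) implements it; the model name, prompt, and helper functions (`gather_design_inputs`, `send_to_consumer`) are placeholders, and the OpenAI Python SDK is used purely for illustration.

```python
# Minimal sketch of an LLM-assisted security review pipeline (illustrative only).
# Assumptions: the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
# gather_design_inputs() and send_to_consumer() are stand-ins for your own
# design-doc, ticketing, or procurement integrations.
import json
from openai import OpenAI

client = OpenAI()

REVIEW_PROMPT = """You are a security reviewer. Given the proposed change below,
return JSON with two keys:
  "requirements": a list of concrete, actionable security requirements
  "decision": "go" or "no-go", with a one-sentence rationale
Proposed change:
{change_description}
"""

def gather_design_inputs() -> str:
    # Step 1: gather relevant information (design docs, diagrams, vendor
    # questionnaires, etc.). Hard-coded here for illustration.
    return (
        "We are adding a public webhook endpoint that receives payment "
        "events from a third-party provider and writes them to our orders DB."
    )

def llm_review(change_description: str) -> dict:
    # Step 2: the LLM reviews the input and drafts requirements + a decision.
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable model would do
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": REVIEW_PROMPT.format(change_description=change_description),
        }],
    )
    return json.loads(resp.choices[0].message.content)

def human_review(draft: dict) -> dict:
    # Step 3 (optional): a human reviews and augments the draft.
    # In practice this would be a review UI; here it simply passes through.
    return draft

def send_to_consumer(review: dict) -> None:
    # Step 4: deliver results in whatever format the builder wants
    # (Jira ticket, PR comment, email). Printing stands in for that here.
    print(json.dumps(review, indent=2))

if __name__ == "__main__":
    change = gather_design_inputs()
    draft = llm_review(change)
    final = human_review(draft)
    send_to_consumer(final)
```

Most of the real work in practice lives in steps 1 and 4: pulling design context out of wherever it actually lives, and pushing requirements into the tools builders already use.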
What’s in a name?
We still have one open question: What do we call this new thing? Phil Karlton apparently once said, “There are only two hard things in Computer Science: cache invalidation and naming things”. I feel you, Phil.
Here are a few candidates:
Security Design Review - Works for tech-first companies that build their own software, but not for software-assemblers (i.e., folks who mostly purchase and integrate software. More on software builders v/s assemblers in another post :) )
Security Reviews (“drop the Design”) - Too generic. This does not appreciate that the review happens before anything is built. A PenTest is as much a Security Review as a Threat Model.
Threat Model - Too narrow. Threat Model means specific (and different) things to different people, but most of it revolves around diagramming and expensive/hard-to-hire security architects. I am tempted to attempt to “redefine” threat modeling instead of using a new term, but as Gartner can tell you, creating a “new category” is probably simpler than changing the course of an existing one.
Secure Change Management: This is confusing. Are we securing change management or incorporating security aspects into it? It’s obviously the latter, but the name does not clearly indicate that.
Architectural Risk Analysis: It has the same problem as SDR. It works well for a small subset of companies with architectural review boards or software architects who own technical decision-making, but it does not work for others.
Software Change Risk Assessment (SCRA) - The most precise of the names, but FLAs are worse than TLAs. Also, it’s easily confused with SCA (software composition analysis).
We can make this acronym even more convoluted by calling it “AI-SCRA.” It’s even more precise, given that it’s hard to scale SCRA without AI, but I am rolling my eyes at my own suggestion.
Security Impact Assessment - This is good, but it does not focus on what needs to be done. It feels like another way to say, “We will tell you what can go wrong,” rather than, “Here is how you can secure what you are building.”
I don’t think there is a clear winner. At Seezo, we are sticking with “Security Design Reviews” for now, but we’re open to changing it in the future.
What does this mean for Security teams?
LLM-powered security reviews are a rare chance to make change management faster and more consistent. You can prototype an in-house workflow (expect to dedicate engineering bandwidth for several quarters) or adopt an off-the-shelf solution. Either way, the math is compelling: if an assessment that used to take two weeks now closes in two days, you reclaim 12 days of cycle time while keeping or raising the quality bar. Downstream, tighter requirements can translate to less time spent on security in the SDLC (but this is harder to measure for now).
That’s it for today! Have you tried automating security reviews in change management? What has your experience been using LLMs for security tasks? Do you have a better name than the 7 listed? Let me know! You can drop me a message on Twitter (or whatever it is called these days), LinkedIn, or email. If you find this newsletter useful, share it with a friend or colleague or on social media.