I feel like we all have that moment in our security program’s maturation when things start to feel… comfortable. The dashboards look clean, the controls are documented, the audit findings are minimal and, most importantly (in some people’s opinions), the reports are polished. Everything appears to be working as intended. That is exactly the point where I start to get a little nervous (I hate when things look normal).
I think comfort in security programs can sometimes be a signal that something else is happening just beneath the surface. Not failure or negligence…something quieter in the undercurrents…optimization.
Yet it’s not the kind you want (wait, I thought we wanted optimization). Many organizations don’t realize it, but over time their security programs begin to optimize for passing those audits instead of actually managing risk. It doesn’t happen overnight, and it’s not intentional (or I’ve never seen it be); no one wakes up and says, “Let’s build a program that looks good instead of one that works.” (If they do…run)
It happens gradually, through incentives, pressure, and the natural human tendency to focus on what gets measured and ignore what isn’t. Audits measure compliance with the rules, and naturally programs optimize for compliance. I shouldn’t need to say that compliance, while very important, is not the same as security. At some point, controls stop being tools to reduce risk and start becoming artifacts to demonstrate that risk is being managed for audits. Documentation becomes the product, evidence becomes the outcome, and the actual effectiveness of the control becomes secondary.
While I am guilty of this from my early education, it’s the cybersecurity version of studying for the test instead of learning the material. We can all study just to pass the test, but that doesn’t mean we understand the subject. In my experience, one of the most common places this shows up is in policy and control documentation. Policies are written, reviewed annually, approved by leadership, and neatly stored in a repository. Theoretically, they are comprehensive; they cover access control, incident response, acceptable use, vendor management, and more…I dare you to ask any team how often those policies are referenced in day-to-day decisions, and the answer is usually… less impressive…The policy exists…the behavior does not always follow.
I can hear us all thinking: from an audit perspective, the requirement is met…the document exists, approvals are recorded and the review date is current…Check. But is the reality of that document felt?
Another example is access reviews…They are completed on schedule, the appropriate managers certify access, compliance and change reports are generated and the evidence is stored. Everything looks exactly as it should. Yet… if you look closer, very little access is actually removed; the process was followed and the outcome is unchanged…I’ve talked about this before: the review becomes a ritual, and rituals, while comforting, do not actually reduce risk.
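To make that concrete, here’s a minimal sketch (the field names and numbers are made up, not from any real review) of the gap between the metric an audit sees and the metric that would tell you whether the review changed anything:

```python
# Hypothetical access review records: "certified" is what the audit checks,
# "removed" is what actually changed as a result of the review.
access_review = [
    {"user": "a.smith", "entitlement": "prod-db-admin", "certified": True, "removed": False},
    {"user": "b.jones", "entitlement": "finance-share", "certified": True, "removed": False},
    {"user": "c.lee",   "entitlement": "legacy-vpn",    "certified": True, "removed": True},
]

completion_rate = sum(r["certified"] for r in access_review) / len(access_review)
removal_rate = sum(r["removed"] for r in access_review) / len(access_review)

# 100% completion looks great as evidence; a near-zero removal rate quarter
# after quarter is the signal that the review has become a ritual.
print(f"certifications completed: {completion_rate:.0%}")
print(f"access actually removed: {removal_rate:.0%}")
```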
The same pattern appears in vulnerability management. Systems are scanned regularly, reports are generated, metrics show high percentages of vulnerabilities remediated within SLA and the numbers look strong. In reality, not all vulnerabilities carry the same risk: some findings affect systems that are rarely used, others impact critical business functions…if prioritization is driven purely by severity scores without business context, the program may be very efficient at fixing the wrong things…and again, the audit looks good, yet the risk remains.
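As a rough illustration of what “severity plus business context” could look like, here’s a hypothetical sketch; the hosts, scores, and weights are invented for the example, and the only point is how the ordering changes:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    host: str
    cvss: float                # raw severity score from the scanner
    business_criticality: int  # 1 = rarely used, 5 = critical business function
    internet_facing: bool

def contextual_risk(f: Finding) -> float:
    # Blend raw severity with exposure and how much the business depends on
    # the asset. The weights are illustrative, not a recommendation.
    exposure = 1.5 if f.internet_facing else 1.0
    return f.cvss * f.business_criticality * exposure

findings = [
    Finding("legacy-report-server", cvss=9.8, business_criticality=1, internet_facing=False),
    Finding("payments-api", cvss=7.5, business_criticality=5, internet_facing=True),
]

by_severity = sorted(findings, key=lambda f: f.cvss, reverse=True)
by_context = sorted(findings, key=contextual_risk, reverse=True)

# Pure severity puts the rarely-used legacy box first; context puts the exposed,
# business-critical payments API at the top of the queue.
print([f.host for f in by_severity])  # ['legacy-report-server', 'payments-api']
print([f.host for f in by_context])   # ['payments-api', 'legacy-report-server']
```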
This is where the disconnect becomes dangerous. When programs optimize for audit success, they start to measure what is easy to prove instead of what is important to understand. I want to state that completion rates, documentation status and control existence are necessary; I just want to remind people that none of them, on their own, guarantees security.
Think about any heist movie where the target has “state-of-the-art security.” Cameras are everywhere (no inch unwatched), there are guards at every entrance, and the access controls look impressive, with a password, a retinal scanner, and a little ditty you need to sing to open the secure room…but what happens? The attackers succeed.
This is not because the controls didn’t exist, but because they were predictable, poorly aligned, or focused on appearance rather than effectiveness. The system was designed to look secure…but not actually to be secure. That’s the risk when audit becomes the primary lens: you design controls to pass the audit (appearance) rather than to actually be secure.
I am a fan of audits; they are essential to running any secure business. They provide the structure, accountability, and external validation that help test and check your program. They help organizations align with standards and regulatory expectations; they are not the enemy (and never should be). Yet, like I preach a lot, they are not (or should not be) the end goal…they should be the start.
Security programs that mature beyond compliance understand this distinction. They use audits as checkpoints, not destinations. They design controls that function in real-world conditions, not just in documentation. That requires a different mindset; it requires asking questions that don’t always have clean answers.
Not just “Do we have a control?” but “Does the control work under stress?”
Not just “Is this documented?” but “Is this followed when it matters?”
Not just “Did we pass the audit?” but “Would this hold up during an incident?”
Those questions are harder because they introduce ambiguity, and they sometimes reveal gaps that don’t fit neatly into a report…but they are the questions that actually move real security forward. From a leadership perspective, this is where incentives play a critical role. Teams respond to what leadership values, and if success is defined by clean audit results, your teams will prioritize audit readiness. If success is defined by risk reduction and resilience, behaviors start to shift, and that shift is not always comfortable (and that shows growth).
It means acknowledging that a “passed audit” does not mean “problem solved.” It means accepting that some controls may need to be reworked even if they already meet compliance requirements. It means investing time in areas that are harder to measure but more impactful. I think intellectually this is where the conversation often gets interesting, because optimizing for reality can sometimes create tension with optimizing for audit.
Documented security can seem simple, but real-world security is messy. It involves tradeoffs, judgment calls, and evolving threats, while audits prefer clarity, consistency, and documentation. Bridging that gap requires leadership that understands both perspectives and can combine them without sacrificing effectiveness. I think one way we can do this is by re-framing how controls are evaluated: instead of asking whether a control exists, evaluate how it performs; instead of focusing solely on completion, examine outcomes; instead of measuring activity, assess business impact.
Let me give you an example: instead of tracking how many incident response exercises were completed, consider how the team performed during those exercises. Were decisions made quickly? Was communication clear to non-technical partners? Were gaps identified and addressed because of the exercise? That will tell you far more about the readiness of your program than a simple completion metric.
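If you wanted to capture that in something more structured than a completion count, a sketch might look like the following (every field here is a hypothetical example of an outcome-oriented data point, not a prescribed schema):

```python
from dataclasses import dataclass

@dataclass
class ExerciseOutcome:
    scenario: str
    minutes_to_first_decision: int     # how quickly a containment decision was made
    comms_clear_to_nontechnical: bool  # did business partners understand the updates?
    gaps_identified: int
    gaps_closed_within_30_days: int

q3_exercise = ExerciseOutcome(
    scenario="ransomware on file servers",
    minutes_to_first_decision=42,
    comms_clear_to_nontechnical=False,
    gaps_identified=5,
    gaps_closed_within_30_days=2,
)

# "We ran four exercises this year" is a completion metric; the fields above are
# closer to the readiness signal those questions are asking about.
print(q3_exercise)
```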
The same applies to access management, monitoring, and vendor risk. The goal is not just to demonstrate that processes are in place, but to ensure they function effectively when needed.
We can’t have a post without mentioning another important aspect: cultural alignment. Security programs that optimize for audit often create environments where people focus on avoiding findings rather than identifying risk. This causes issues to be minimized, edge cases to be overlooked, and conversations to become more about presentation than substance.
I hope that is not where you want your program to be. A strong security culture encourages transparency (even when it hurts); it should reward identifying gaps early, and it should treat findings as opportunities to improve, not as failures to hide. In security, what you don’t see is usually what hurts you.
Security is not a checklist (but usually has a ton of them)…It is a practice and audits are part of that practice, but they should not define it. The real goal is not to build a program that looks good on paper but to build a program that works when something goes wrong.
Because when an incident happens, no one is going to ask how clean your documentation was….they are going to ask what happened, how you responded, and whether your controls actually made a difference.
And that is not a question you can answer with a checkbox.