Most organisations that commission a penetration test understand, broadly, what they are asking for: a skilled consultant to attempt to break into their systems and tell them what they find. The report arrives. There are findings. And then — for a surprising number of organisations — not much happens.
The value of a penetration test is not the report. It is the risk reduction that follows the report. Understanding what a pen test actually tells you — and equally, what it does not — is the prerequisite to getting that value.
It is a point-in-time snapshot. A penetration test tells you about the vulnerabilities present in your environment at the time of testing. It is not a continuous monitoring capability. Applications change, infrastructure changes, configurations drift. The vulnerability your tester found in March may have been remediated. The one introduced by a code deployment in April is not in the report.
It is scoped. Every penetration test operates within a defined scope. That scope is a constraint on what the test can tell you. If your web application was in scope and your cloud infrastructure was not, the report tells you nothing about your cloud infrastructure. This is not a criticism — scope boundaries are necessary for practical engagements. But the scope shapes what you know and what you do not know.
It reflects the tester's methodology. Two skilled testers assessing the same application will find different things. Not because one is better than the other, but because security testing involves judgement about where to spend time. AI-augmented testing reduces this variance by providing broader, more systematic coverage — but the report still reflects choices made about the engagement.
Severity ratings are contextual, not absolute. A critical-rated finding in one environment may be medium-rated in another, depending on the exposure, the data at risk, and the realistic exploitability in context. Good testers calibrate severity to your environment. Treat severity as a starting point for prioritisation, not a final verdict.
A penetration test is not a statement that everything outside the scope is secure. It is not a guarantee that everything within the scope has been found. It does not establish a compliance posture — passing a penetration test does not mean your security controls are adequate, and finding vulnerabilities in a penetration test does not mean they have been exploited in production.
These are important nuances for communicating findings to boards and executive teams, where the temptation to treat a clean report as a clean bill of health is real.
The most common failure mode after a penetration test is treating all findings as equally urgent. A report with 40 findings does not mean 40 things need to be fixed immediately. It means 40 things need to be understood, contextualised, and prioritised.
A practical framework for working through findings, sketched in code after the list:
Triage by exploitability, not just severity. A critical finding that requires local access to exploit is different from a critical finding that is remotely exploitable from the internet. Exploitability — how realistic is it that an attacker could use this? — should carry weight in your prioritisation alongside severity.
Group by remediation owner. Many remediation efforts stall because the findings are presented as a single list when they actually belong to different teams. Infrastructure findings go to the platform team. Application findings go to development. Configuration findings go to operations. Splitting the report by ownership accelerates action.
Separate quick wins from structural issues. Some findings can be fixed in hours — a misconfigured header, an unnecessary service, a default credential. Others require architectural changes that take months. Do not let the long-tail structural work delay action on the quick wins.
Retest after remediation. A finding marked remediated in a ticketing system is not the same as a finding that has been verified remediated under test conditions. Retesting confirms that fixes work and that remediation has not introduced new issues.
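To make the triage, grouping, and quick-win steps concrete, here is a minimal sketch in Python. The Finding structure, the exploitability scale, and the scoring weights are illustrative assumptions, not a standard; a real programme would calibrate them to its own environment, for the same reasons severity itself is contextual.

```python
from dataclasses import dataclass
from collections import defaultdict

# Illustrative scales. These numbers are assumptions for the sketch,
# not a standard; calibrate to your own environment.
SEVERITY = {"critical": 4, "high": 3, "medium": 2, "low": 1}
EXPLOITABILITY = {
    "remote_unauthenticated": 3,  # reachable from the internet
    "remote_authenticated": 2,    # requires a valid account
    "local": 1,                   # requires existing local access
}

@dataclass
class Finding:
    title: str
    severity: str        # key into SEVERITY
    exploitability: str  # key into EXPLOITABILITY
    owner: str           # team responsible for remediation
    quick_win: bool      # fixable in hours rather than months

def priority(f: Finding) -> int:
    """Weight severity by realistic exploitability, per the triage step."""
    return SEVERITY[f.severity] * EXPLOITABILITY[f.exploitability]

def triage(findings: list[Finding]) -> dict[str, list[Finding]]:
    """Group findings by remediation owner, highest priority first,
    surfacing quick wins ahead of structural work at equal priority."""
    by_owner: dict[str, list[Finding]] = defaultdict(list)
    for f in findings:
        by_owner[f.owner].append(f)
    for items in by_owner.values():
        items.sort(key=lambda f: (-priority(f), not f.quick_win))
    return dict(by_owner)

findings = [
    Finding("SQL injection in search", "critical", "remote_unauthenticated", "development", False),
    Finding("Default credentials on admin console", "critical", "local", "operations", True),
    Finding("Missing security headers", "low", "remote_unauthenticated", "development", True),
]

for owner, items in triage(findings).items():
    print(owner)
    for f in items:
        print(f"  [{priority(f)}] {f.title}")
```

Note how the critical but local-only default-credential finding ranks below the remotely exploitable injection: exploitability weights the ordering, and severity alone does not decide it. The per-owner grouping is what turns a single 40-item report into separate, actionable work queues.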
The most mature organisations treat penetration testing as a recurring input to their security program, not an annual compliance event. This means testing at meaningful points in the development and change cycle — before major releases, after significant architectural changes, when new high-risk functionality is introduced.
The cadence question is one of risk management, not compliance. An annual test on an application that receives weekly feature updates is a different risk proposition from an annual test on a stable, low-change system.
SALTT Technologies structures its technical testing engagements to support both compliance-driven and risk-driven testing programs. Talk to our team about what the right testing cadence looks like for your environment.