Every management system eventually encounters the question that tests whether it is genuine: what happens when something goes wrong?
The answer to that question determines the difference between an ISMS that is a governance instrument and one that is a compliance performance. An organization that responds to nonconformities by fixing the specific instance, closing the record, and moving on has a reactive compliance program. An organization that responds by asking why the nonconformity happened, fixing the underlying cause, verifying that the fix worked, and feeding the learning back into the risk assessment and improvement cycle has a management system.
Clause 10 is short — just two sub-clauses — but it carries the weight of everything that came before it. The planning in Clause 6 is only valuable if the ISMS improves when the plan falls short. The monitoring in Clause 9 is only valuable if what it detects is acted upon. The management review in Clause 9.3 is only valuable if the improvement decisions it produces are implemented and tracked. Clause 10 is the Act phase of the PDCA cycle — the mechanism that closes the loop and ensures the ISMS gets better with every turn.
Clause 10 at a Glance
Clause 10 contains two sub-clauses — and notably, the 2022 revision reversed their order from the 2013 version, placing continual improvement (10.1) before nonconformity and corrective action (10.2). This was a deliberate signal: improvement is the goal; corrective action is one mechanism for achieving it.
| Sub-clause | Focus |
| 10.1 — Continual Improvement | Ongoing enhancement of ISMS suitability and effectiveness |
| 10.2 — Nonconformity & Corrective Action | Structured response when something goes wrong |
The relationship between the two sub-clauses is not one of precedence but of scope. Continual improvement (10.1) is the broad organizational commitment — the ISMS shall continually improve its suitability, adequacy, and effectiveness. Nonconformity and corrective action (10.2) is the specific structured process for responding to failures — a subset of improvement, applied reactively when something goes wrong. Both are necessary; neither is sufficient alone.
Clause 10.1 — Continual Improvement
Clause 10.1 states, simply, that the organization shall continually improve the suitability, adequacy, and effectiveness of the information security management system. Three words in that requirement deserve attention because they represent distinct dimensions of ISMS quality.
Suitability asks whether the ISMS is still the right system for this organization — is its scope aligned with the actual risk landscape, is its risk appetite still calibrated to the business strategy, are its policies still relevant to how the business operates? Adequacy asks whether the ISMS has enough — enough controls, enough resources, enough documentation — to meet its obligations. Effectiveness asks whether the ISMS is actually working — are controls reducing risk, are incidents being detected and responded to, are objectives being met?
Continual improvement means progressing on all three dimensions over time. An ISMS can be adequate and effective but become unsuitable as the business evolves. An ISMS can be suitable and adequate but ineffective if controls are poorly implemented. Monitoring (Clause 9.1) and management review (Clause 9.3) produce the evidence that identifies which dimension needs attention at any given time.
Clause 10.2 — Nonconformity and Corrective Action
Clause 10.2 is the most operationally specific sub-clause in Clause 10. When a nonconformity occurs — from any source — the standard requires a structured seven-step response. The structure is important: skipping steps, particularly root cause analysis and effectiveness verification, will itself surface as a nonconformity at the next audit.
The Seven-Step Corrective Action Process
ISO 27001 does not explicitly number the steps, but Clause 10.2's requirements map to a clear sequence that auditors test against:
| # | Phase | What it requires |
| 01 | React | When a nonconformity occurs, take immediate action to control and correct it where possible — contain the impact, restore normal operations, and prevent further damage. Document what happened, when, and the immediate actions taken. |
| 02 | Evaluate | Assess whether corrective action is needed by considering: could the nonconformity recur? Could similar nonconformities exist elsewhere? What is the significance of the gap revealed? Not every nonconformity requires a root cause analysis — proportionality matters. |
| 03 | Root Cause | For nonconformities that warrant corrective action, conduct a root cause analysis. Why did this happen? ISO 27001 does not prescribe a specific root cause methodology — 5-Whys, fishbone diagrams, fault tree analysis, or structured narrative are all acceptable if the analysis is genuine and documented. |
| 04 | Plan Action | Design corrective actions that address the root cause, not just the symptom. A corrective action that fixes the specific instance without addressing why it happened will not prevent recurrence. Document: what actions will be taken, who is responsible, and by when. |
| 05 | Implement | Execute the corrective action plan. Update relevant documentation if the nonconformity revealed a procedural gap. Communicate changes to relevant staff. Maintain records of implementation activities and completion evidence. |
| 06 | Verify | Review the effectiveness of the corrective actions taken. Did the actions address the root cause? Has the nonconformity been resolved? Is there evidence that recurrence has been prevented? This review is required by the standard and must be documented. |
| 07 | Close & Learn | Close the corrective action record with documented evidence of effectiveness verification. Feed lessons learned back into the ISMS: update the risk register if the nonconformity revealed an unidentified risk, update procedures if process gaps were found, update training if competence gaps were exposed. |
| THE PROPORTIONALITY PRINCIPLE | Not every gap requires a root cause analysis. Clause 10.2 requires the organization to 'evaluate the need for action' — meaning the response must be proportionate to the significance and risk of the nonconformity. A staff member who missed one training session requires a targeted corrective action but probably not a formal root cause analysis. A systematic failure in access review processes affecting multiple systems over eight months warrants both. |
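The seven-step sequence above can be sketched as a simple state machine. This is an illustrative Python sketch, not anything prescribed by the standard: the phase names and the one permitted shortcut (Evaluate may close a minor nonconformity directly, per the proportionality principle) are assumptions about how a register tool might enforce the sequence.

```python
from enum import Enum

class CarPhase(Enum):
    REACT = 1
    EVALUATE = 2
    ROOT_CAUSE = 3
    PLAN_ACTION = 4
    IMPLEMENT = 5
    VERIFY = 6
    CLOSE = 7

# Allowed transitions: the lifecycle is linear, except that EVALUATE may
# close a minor nonconformity directly (the proportionality principle).
TRANSITIONS = {
    CarPhase.REACT: {CarPhase.EVALUATE},
    CarPhase.EVALUATE: {CarPhase.ROOT_CAUSE, CarPhase.CLOSE},
    CarPhase.ROOT_CAUSE: {CarPhase.PLAN_ACTION},
    CarPhase.PLAN_ACTION: {CarPhase.IMPLEMENT},
    CarPhase.IMPLEMENT: {CarPhase.VERIFY},
    CarPhase.VERIFY: {CarPhase.CLOSE},
    CarPhase.CLOSE: set(),
}

def advance(current: CarPhase, target: CarPhase) -> CarPhase:
    """Move a corrective action record to its next phase, rejecting skips."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"Cannot skip from {current.name} to {target.name}")
    return target
```

Encoding the sequence this way makes the audit point concrete: a record that jumps from Implement to Close without passing through Verify is rejected by the tooling rather than discovered later by an auditor.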
The Corrective Action Register in Practice
The corrective action register (CAR register) is the primary operational tool for managing nonconformities. It tracks every nonconformity from identification through to verified closure, and it is one of the most reliable indicators of ISMS health that auditors review. A register that shows rapid identification, structured corrective actions, timely closure, and no recurring findings signals a well-functioning management system. A register that shows overdue actions, recurring findings, and evidence-free closures signals the opposite.
The sample register below illustrates four corrective action entries at different stages of the lifecycle, drawn from sources across the ISMS — external audit, internal audit, incident review, and management review:
| ID | Source | Nonconformity description | Corrective action plan | Status | Due / Closed |
| CAR-001 | External Audit (Stage 2) | No evidence of quarterly access reviews for cloud systems — policy requires quarterly review but no review records exist for the past 8 months. | 1) Conduct overdue access review immediately. 2) Assign IT Security Engineer as process owner with calendar reminder. 3) Integrate access review into ISMS quarterly activity calendar. 4) Produce written procedure for access review process. | CLOSED | 30 days |
| CAR-002 | Internal Audit (Q1 2026) | Two of 47 staff members have no record of completing the 2025 annual security awareness training — confirmed not in LMS completion records. | 1) Enroll missing staff in next available training session immediately. 2) Review LMS enrollment list against HR roster — identify any other gaps. 3) Add automatic LMS enrollment trigger on HR system new hire record creation. | CLOSED | 14 days |
| CAR-003 | Incident (Feb 2026) | Post-incident review found API key committed to public GitHub repository — exposed for 3 days before detection. Controls for secrets management were not operational in CI/CD pipeline. | 1) Rotate all exposed credentials immediately (done). 2) Deploy Gitleaks to all repositories with pre-commit hooks. 3) Conduct developer training on secrets management within 30 days. 4) Add secret scanning to CI/CD pipeline quality gate — blocks merge if secrets detected. | IN PROGRESS | Q2 2026 |
| CAR-004 | Management Review (Mar 2026) | Supplier security review schedule not met — 3 of 8 critical suppliers not reviewed in the past 12 months (payment gateway, CDN provider, SMS API provider). | 1) Conduct overdue reviews within 30 days. 2) Appoint dedicated supplier review owner in IT Security team. 3) Add supplier review schedule to ISMS calendar with automatic reminders. 4) Update supplier management procedure to reflect review obligations. | OPEN | 30 days |
| Corrective action closure quality: The most common CAR register failure is closing actions without adequate evidence of effectiveness. 'Training completed' is implementation evidence, not effectiveness evidence. Effectiveness evidence for a training corrective action is the next phishing simulation showing improved performance, or a post-training assessment showing knowledge improvement. Auditors test this distinction specifically — they look for verification evidence, not just completion records. |
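The closure-quality rule in the callout above lends itself to enforcement in tooling. A minimal Python sketch, with field names invented for illustration: a record cannot be closed without both implementation evidence and separate effectiveness evidence, mirroring the distinction auditors test.

```python
from dataclasses import dataclass, field

@dataclass
class CorrectiveAction:
    car_id: str
    source: str          # e.g. "External Audit (Stage 2)"
    description: str
    implementation_evidence: list = field(default_factory=list)
    effectiveness_evidence: list = field(default_factory=list)
    status: str = "OPEN"

    def close(self) -> None:
        # Implementation evidence alone ("training completed") is not
        # sufficient: Clause 10.2 also requires a documented review of
        # whether the corrective action actually worked.
        if not self.implementation_evidence:
            raise ValueError(f"{self.car_id}: no implementation evidence")
        if not self.effectiveness_evidence:
            raise ValueError(f"{self.car_id}: effectiveness not verified")
        self.status = "CLOSED"
```

In use, the guard forces the register owner to record something like "next two quarterly reviews completed on schedule" before CAR-001 can move to CLOSED, rather than closing on the completion date alone.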
Root Cause Analysis: Choosing the Right Method
Root cause analysis is required for nonconformities that warrant corrective action — but the standard does not prescribe a specific method. The choice of method should be proportionate to the complexity of the nonconformity and the resources available. The table below maps four practical root cause techniques to the ISMS scenarios where each is most appropriate:
| Technique | Complexity | Best for | How to apply | ISMS example |
| 5-Whys | Low | Simple, single-cause nonconformities — process failures, isolated human error, straightforward procedure gaps | Start with the nonconformity. Ask 'Why did this happen?' five times, each time using the previous answer as the new question. Stop when the root cause is within the organization's control to fix. | NC: Access review overdue. Why? → No reminder set. Why? → Process not calendared. Why? → No process owner assigned. Why? → RACI not updated after reorganization. Root cause: RACI not maintained when roles change. |
| Fishbone (Ishikawa) | Medium | Multi-cause nonconformities — incidents with several contributing factors, awareness or culture failures, systemic control gaps | Place the nonconformity at the head of the fish. Draw bones for major cause categories (People, Process, Technology, Environment). Populate each bone with contributing causes identified through evidence review. | NC: Staff clicked phishing simulation at 24%. Causes across bones: People (insufficient training depth), Process (no regular simulations), Technology (email filtering not tuned), Environment (high workload reducing vigilance). |
| Fault Tree Analysis | High | Complex security incidents where multiple failure chains converged — unauthorized access events, data breach scenarios, critical system compromises | Start with the top-level event (the incident or nonconformity). Work backward through contributing events using AND/OR logic gates to map the complete chain of failures. Identify where single-point failures exist. | Data exfiltration incident: top event = data exfiltrated. AND gate: unauthorized access obtained AND data not encrypted. Each branch further decomposed to identify where controls failed at each stage. |
| Structured Narrative | Low–Medium | Process gaps, documentation failures, communication breakdowns — nonconformities where the failure chain is straightforward but benefits from clear articulation | Write a structured narrative: what should have happened (per policy/procedure), what actually happened, and why the gap existed. Explicitly identify which process or control failed and at what point. | NC: Supplier not security-reviewed before onboarding. Policy requires security review before contract signature. Procurement processed contract without triggering security review. Failure point: procurement checklist did not include security review step. |
| The root cause quality test: A root cause analysis is credible when the corrective action it produces would, if fully implemented, prevent recurrence of the nonconformity. If the corrective action addresses only the specific instance — 'we enrolled the missing staff member in training' — without addressing why they were missed in the first place, the root cause has not been found. The test is: if we implement this corrective action and then forget about this incident entirely, would the same nonconformity be likely to occur again? |
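The AND/OR gate logic of fault tree analysis can be made concrete in a few lines. A Python sketch of the data-exfiltration example from the table; the two sub-causes feeding the access branch (phishing success, VPN access without MFA) are invented for illustration, not drawn from a real incident.

```python
# Leaves are control failures (True = the control failed); gates combine them.
def AND(*branches: bool) -> bool:
    return all(branches)

def OR(*branches: bool) -> bool:
    return any(branches)

def data_exfiltrated(phishing_success: bool,
                     vpn_no_mfa: bool,
                     data_unencrypted: bool) -> bool:
    """Top event: exfiltration occurs only if unauthorized access was
    obtained AND the data was not encrypted (the AND gate from the table)."""
    unauthorized_access = OR(phishing_success, vpn_no_mfa)
    return AND(unauthorized_access, data_unencrypted)
```

The value of writing the tree out, even this crudely, is that single-point failures become visible: encryption sits under an AND gate, so it alone would have stopped the top event, while either access path alone satisfies the OR branch.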
Beyond Corrective Action: Building a Proactive Improvement Culture
Clause 10.2 is reactive — it responds to problems that have already occurred. Genuine continual improvement, as required by Clause 10.1, is broader: it actively seeks ways to enhance the ISMS before problems occur. The best ISMS programs draw improvement inputs from multiple sources, not just corrective actions.
| Improvement source | Why it matters | How to capture and act on it |
| Corrective actions (Clause 10.2) | Every corrective action that addresses a root cause has improvement embedded within it — a fixed process, a new control, an updated procedure. These are the minimum baseline of ISMS improvement. | Track CAR closure as ISMS improvement evidence. Report corrective action outcomes at management review. |
| Internal audit findings | Audit observations (not just nonconformities) often reveal improvement opportunities that do not rise to the level of a formal finding but would enhance ISMS effectiveness if addressed. | Create improvement action register for audit observations. Review at management review — decide which to pursue and resource. |
| Management review decisions | Management review outputs (Clause 9.3.3) must include continual improvement decisions. These decisions are the highest-authority ISMS improvement directives — they carry management commitment and resource allocation. | Translate management review improvement decisions into tracked ISMS improvement actions with owners, dates, and success criteria. |
| Monitoring and metric trends | When KPI trends reveal sustained underperformance against targets — phishing click rate stubbornly above target, MTTD consistently above SLA — they signal systemic improvement needs beyond individual corrective actions. | Review metric trends quarterly. Identify persistent underperformers. Commission root cause analysis and improvement initiative for KPIs consistently off-target. |
| Incident lessons learned | Every significant security incident — regardless of whether it resulted in a formal nonconformity — carries learning about what the ISMS missed, what controls were ineffective, and what processes need strengthening. | Mandatory post-incident review for all P1/P2 incidents. Lessons learned entered in improvement register. Assign owner and timeline for each lesson. |
| Threat intelligence and sector benchmarking | Changes in the threat landscape — new attack vectors, new vulnerabilities in used technologies, emerging regulatory requirements — represent improvement opportunities before they become control gaps. | Quarterly threat intelligence review. Benchmark control coverage against sector peers. Identify and address gaps before they appear in risk assessments. |
| Staff and stakeholder feedback | Staff who work with ISMS controls daily often have the most practical insight into what is and is not working. Interested party feedback — from clients, regulators, suppliers — provides external perspective on ISMS effectiveness. | Annual staff security feedback survey. Include interested party feedback as required management review input. Track themes across feedback cycles. |
| Technology and tool improvements | New capabilities in GRC platforms, security tools, and automation present opportunities to improve ISMS effectiveness and efficiency — reducing manual effort, improving control evidence quality, and increasing monitoring coverage. | Annual ISMS tooling review. Identify automation opportunities for manual evidence collection tasks. Track tooling improvements as ISMS enhancements. |
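The metric-trend source in the table above implies a simple test: flag any KPI that has missed its target in every recent period, as a candidate for an improvement initiative rather than another one-off corrective action. A minimal Python sketch, assuming "value above target means worse" semantics (as for phishing click rate or MTTD); the window size and data shape are illustrative choices.

```python
def persistent_underperformers(kpi_history: dict, targets: dict,
                               window: int = 4) -> list:
    """Return KPIs that missed their target in each of the last `window`
    periods. `kpi_history` maps KPI name -> list of period values (oldest
    first); `targets` maps KPI name -> target ceiling."""
    flagged = []
    for kpi, values in kpi_history.items():
        recent = values[-window:]
        if len(recent) == window and all(v > targets[kpi] for v in recent):
            flagged.append(kpi)
    return flagged
```

A KPI that trends down and finally meets target in one period drops off the list; only sustained misses survive the filter, which is exactly the pattern the table says should trigger a root cause analysis.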
The improvement register — a simple tracked list of improvement initiatives with owners, timelines, and status — is not explicitly required by ISO 27001 but is highly practical. It provides evidence of proactive improvement activity, gives management visibility into the improvement pipeline, and creates accountability for initiatives that are discussed but never executed. In a mature ISMS, the improvement register is reviewed at every management review alongside the corrective action register.
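As a sketch of how such a register might feed the management review agenda, the few lines below sort open initiatives with overdue items first. Field names are illustrative, not prescribed by the standard.

```python
from datetime import date

def review_agenda(register: list, today: date) -> list:
    """Order open improvement initiatives for the management review agenda:
    overdue items first, then by due date. Each entry is a dict with at
    least 'status' and 'due' (a datetime.date)."""
    open_items = [r for r in register if r["status"] != "CLOSED"]
    # (due >= today) is False for overdue items, so they sort first.
    return sorted(open_items, key=lambda r: (r["due"] >= today, r["due"]))
```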
| Bitlion GRC improvement tracking: Bitlion's platform maintains an integrated improvement register linked to the ISMS framework — improvement initiatives from any source (audit findings, management review decisions, incident lessons learned, metric trends) are captured, assigned owners, and tracked to closure. The improvement register feeds the management review agenda automatically, ensuring no improvement initiative is lost between review cycles. |
ISMS Maturity Across Certification Cycles
The concept of continual improvement takes on a different character at different stages of the ISMS lifecycle. Understanding what improvement looks like in each cycle helps organizations set realistic expectations and focus improvement energy appropriately:
| Cycle | Focus | Characteristics | Typical audit findings |
| Cycle 1 (Certification) | Foundation — establish, document, certify | Risk register first completed. SoA produced. Controls implemented to pass certification audit. Some controls still partially implemented at certification. Management review cadence established. Internal audit program running. | Several minor nonconformities: incomplete training records, documentation version gaps, access reviews behind schedule. Possibly 1–2 major NCs requiring pre-certification remediation. |
| Cycle 2 (First Renewal) | Operation — embed, improve, mature | Risk register updated with new risks from business growth. Controls from Cycle 1 gaps now fully implemented. Internal audit program producing richer findings. Management engagement with ISMS visibly improved. Metrics program generating trend data. | Fewer minor nonconformities. Findings shift from 'not done' to 'could be done better'. Auditors begin testing operational effectiveness rather than just document existence. |
| Cycle 3+ (Ongoing) | Excellence — optimize, extend, integrate | ISMS integrated into product development, procurement, and HR processes. Threat intelligence actively feeding risk assessments. Controls automated where possible. ISMS scope expanded. Consider integration with ISO 22301 or ISO 42001. | Mainly observations rather than nonconformities. Auditors focus on leading indicators and continuous improvement evidence. Organization may seek recognized best practice acknowledgment. |
The progression from Cycle 1 to Cycle 3 is not automatic — it requires deliberate investment in the feedback mechanisms that make improvement possible. Organizations that treat each surveillance audit as an obstacle to pass rather than an opportunity to evaluate their ISMS objectively tend to find that their second and third cycles resemble their first. Organizations that genuinely use the audit findings, monitoring data, and management review outputs to drive specific improvements consistently find their ISMS getting meaningfully stronger with each cycle.
Required Outputs from Clause 10
Clause 10 produces three categories of documented outputs. Two are explicitly required by the standard's text:
| § | Required document / output | Status | What auditors examine |
| 10.2 | Nonconformity records | EXPLICIT | Documentation of every nonconformity — what it was, what immediate action was taken, the root cause analysis, the corrective action plan, and the outcome of effectiveness verification. Must be retained as documented information. |
| 10.2 | Corrective action records | EXPLICIT | Records of corrective actions taken, their implementation, and the results of effectiveness review. The corrective action record must be traceable to the nonconformity that triggered it. |
| 10.1 | Evidence of continual improvement activities | IMPLIED | Not an explicitly named document, but improvement activities must be evidenced — through CAR closure records, management review improvement decisions, metric trend improvements, and ISMS enhancement records. |
The nonconformity and corrective action records are the most scrutinized Clause 10 outputs in external audits. Auditors look at the complete lifecycle — not just whether nonconformities were recorded, but whether the root cause analysis is credible, whether the corrective actions address the identified root causes, and whether the effectiveness verification actually demonstrates that the corrective action worked. Quality matters more than quantity: a small number of well-documented, genuinely analyzed, effectively resolved corrective actions demonstrates a healthier ISMS than a large volume of superficially documented closures.
Common Clause 10 Nonconformities
Corrective actions that address the symptom rather than the root cause
The single most common Clause 10 finding. An access review was overdue; the recorded corrective action is 'completed the overdue review'. The root cause — that the access review process has no assigned owner and no calendar trigger — is not addressed. The next audit finds the same nonconformity again. Recurring findings are the clearest indicator that corrective actions are treating symptoms rather than causes, and auditors specifically look for recurrence patterns.
No documented evidence of effectiveness verification
Closing a corrective action with 'completed' and a date is implementation evidence. Closing it with implementation evidence plus verification that the nonconformity has not recurred and the root cause has been addressed is effectiveness evidence. Clause 10.2(f) specifically requires reviewing the effectiveness of corrective actions taken — which means there must be a step between implementation and closure that asks 'did this work?' and documents the answer.
Corrective actions from external audit findings not addressed before surveillance
External auditors issue corrective action requests when they find nonconformities during certification audits. These CARs must be closed — with documented corrective actions and effectiveness evidence — before or during the first surveillance audit. Organizations that treat external audit CARs as lower priority than internal work, or that close them on paper without substantive corrective action, face recurring findings at surveillance and risk certificate suspension.
Continual improvement claimed but not evidenced
Clause 10.1 requires continual improvement to be demonstrated, not merely stated. An ISMS that claims continuous improvement in its management review minutes but has no improvement register, no metric trend data showing progress, and no evidence of proactive enhancement activities beyond closing corrective actions has not demonstrated the requirement. The evidence of continual improvement is in the data: metric trends, CAR closure patterns, scope expansion, integration with other management systems, and the visible maturation of ISMS practices over successive cycles.
| The improvement trap: Some organizations focus so heavily on corrective actions that they neglect proactive improvement — treating Clause 10 as purely reactive. The ISMS becomes a system that responds to failures but never gets ahead of them. This approach produces an ISMS that is always fixing yesterday's problems rather than preventing tomorrow's. True continual improvement requires both: a rigorous corrective action process for reactive response AND a proactive improvement program that makes the ISMS more capable before the next failure occurs. |
Closing the Loop: Clause 10 and the Full ISMS Cycle
Clause 10 is the final clause in ISO 27001 — but it is not an ending. It is the point where the ISMS cycle completes and begins again. The improvements made in Clause 10 feed back into the context analysis of Clause 4, update the risk assessment in Clause 6, inform the objectives in Clause 6.2, shape the monitoring program in Clause 9.1, and set the agenda for the next management review in Clause 9.3.
This is what separates an ISMS from a compliance program. A compliance program achieves a state — meeting a set of requirements at a point in time — and then tries to maintain it. An ISMS is a system that continually improves its own effectiveness in response to what it learns about its own performance, the evolving threat environment, and the changing needs of the organization it serves.
The organizations that internalize this distinction — that treat ISO 27001 as an instrument for building organizational resilience rather than a certificate to display — are the ones whose ISMS programs genuinely protect them. The certificate is the evidence that the system was built correctly. Clause 10 is the mechanism that keeps it sharp.