The internal audit is the moment the ISMS tests itself. Not the moment management reviews performance data, and not the moment an external certification auditor examines evidence — but the moment an independent person within the organization asks the same hard questions that the external auditor will ask, in advance, with enough time to fix what is found.
Done well, an internal audit is the most valuable governance activity in the ISMS operational calendar. It finds nonconformities before the certification auditor does. It reveals the gap between documented intent and operational reality. It produces corrective actions that make the ISMS genuinely better. And it generates the audit trail of self-scrutiny that tells external auditors they are dealing with a management system that takes its own effectiveness seriously.
Done poorly — conducted as a document review that finds everything satisfactory, conducted by the same person who built the ISMS, or conducted in the weeks before the Stage 2 audit as a rehearsal exercise — the internal audit becomes a liability. It produces a clean record that the external auditor disproves within hours, which undermines confidence in the entire ISMS governance program.
This article covers the complete internal audit discipline: how to design an audit program, how to schedule audits on a risk basis, which evidence methods to use for which controls, how to write findings that are specific and traceable, and how to use the pre-certification readiness checklist to enter Stage 2 with genuine confidence rather than performative preparation.
The Internal Audit Program: Design Principles
An internal audit program is not a single audit conducted once a year — it is a structured schedule of audit activities that, over the program period, covers the entire ISMS. ISO 27001 requires internal audits at 'planned intervals' — the program defines those intervals, the rationale for the schedule, and the auditor assignments for each audit.
The seven elements below constitute a complete internal audit program. Each element has specific implementation guidance and produces a required output:
| Program element | What it is | How to implement it | Required output |
|---|---|---|---|
| Audit universe | The complete set of auditable areas — every clause requirement (4–10), every applicable Annex A control domain, every in-scope business unit, and every in-scope supplier relationship. | Derive from the ISMS scope, the SoA, and the risk register. Every applicable control is an auditable item. Every clause is auditable. The universe does not need to be covered in a single audit cycle — but must be covered across the program period. | Documented audit universe register listing all auditable areas with associated clause and Annex A references. |
| Risk-based scheduling | Prioritizing which areas get audited when — based on risk level, previous findings, and organizational change. Higher-risk areas should be audited more frequently. Areas with previous nonconformities warrant earlier revisit. | Apply a simple risk rating to each audit universe area: Critical = audit annually; High = audit every 18 months; Medium = audit every 2 years. Areas with open corrective actions from previous audits jump to Critical regardless of inherent risk. | Annual audit schedule showing which areas will be audited in which quarter, rationale for scheduling priority, and auditor assignments. |
| Auditor independence | Auditors must be objective and impartial — they cannot audit their own work. The ISMS Manager cannot audit the risk assessment they developed. An IT engineer cannot audit the technical controls they configured. | Map each team member's areas of responsibility and ensure auditors are assigned to areas where they had no design or implementation role. For small organizations, rotate areas between auditors or engage an external auditor for areas where independence cannot be achieved internally. | Auditor assignment matrix showing independence from each audited area. Auditor competence records. |
| Audit scope and objectives | Each individual audit has a defined scope (which areas, which time period) and objectives (what questions the audit will answer). This prevents audit scope creep and keeps the auditor focused. | Define scope and objectives in the audit notification letter sent to auditees. Objectives should be specific: 'Verify that access reviews have been conducted quarterly per the Access Control Policy and that evidence has been retained.' | Audit plan / notification document specifying scope, objectives, schedule, and methods for each audit. |
| Evidence collection methods | How the auditor will gather evidence: document review, staff interviews, system observation, log sampling, configuration review, or re-performance of a process. | Plan the evidence collection approach before the audit day. For each objective, identify the evidence type most likely to reveal whether the requirement is met. Do not rely solely on document review — operational controls require operational evidence. | Audit working papers recording evidence reviewed, sources consulted, and basis for findings. |
| Finding classification | How audit observations are classified: major nonconformity, minor nonconformity, or opportunity for improvement. Classification drives the response — major NCs require urgent corrective action; minor NCs require corrective action before surveillance; observations are discretionary. | Apply the standard definitions consistently: major NC = systemic failure casting doubt on ISMS effectiveness or total absence of requirement; minor NC = isolated lapse not indicating systemic breakdown; observation = area of improvement that does not rise to NC level. | Classified finding list with clause reference, evidence basis, and classification rationale. |
| Corrective action and follow-up | Auditors do not just identify nonconformities — they track corrective action closure. Open NCs remain on the audit program's watch list until the corrective action is verified as effective. | Assign each NC a target closure date. At next relevant audit or dedicated follow-up review, verify that the corrective action was implemented and that it addressed the root cause. Document verification outcome. | Corrective action register updated with finding status. Closure verification records. |
| What makes an audit program genuine: A genuine internal audit program is risk-based — it allocates more auditor attention to higher-risk areas and previous problem areas. It is independent — auditors do not audit their own work. It finds real problems — not everything is satisfactory in a first-cycle ISMS. And it drives improvement — corrective actions are tracked to verified closure. An audit program that is risk-blind, conducted by the ISMS designer on their own work, produces zero findings, and closes CARs without verification is a compliance exercise masquerading as governance. |
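The scheduling rule from the table above, frequency by risk tier with escalation when corrective actions remain open, can be sketched in a few lines of Python. The tier names and intervals are the example values from the table; the function and field names are illustrative, not prescribed by the standard.

```python
from datetime import date, timedelta

# Frequency tiers from the program table above (illustrative values).
AUDIT_INTERVALS = {
    "Critical": timedelta(days=365),  # audit annually
    "High": timedelta(days=548),      # audit every 18 months
    "Medium": timedelta(days=730),    # audit every 2 years
}

def next_audit_due(area_risk: str, last_audited: date, open_cars: int) -> date:
    """Return the next audit due date for an audit-universe area.

    Per the program rule, areas with open corrective actions from
    previous audits escalate to Critical regardless of inherent risk.
    """
    tier = "Critical" if open_cars > 0 else area_risk
    return last_audited + AUDIT_INTERVALS[tier]

# A Medium-risk area with one open CAR is pulled forward to the annual cycle.
due = next_audit_due("Medium", date(2026, 1, 15), open_cars=1)
print(due)  # 2027-01-15 — one year out, not two, because of the open CAR
```

Running this over the whole audit universe register yields the due-date column of the annual schedule; the quarter assignments then follow from the due dates.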
Annual Audit Schedule: A Pre-Certification Example
The following schedule distributes audit activities across four quarters of the certification year, sequencing them to maximize pre-certification readiness. Governance and documentation foundations are audited first (Q1), followed by operational controls and supplier security (Q2), technical controls and monitoring (Q3), and pre-certification verification (Q4):
| Period | Audit areas | Rationale for scheduling |
|---|---|---|
| Q1 2026 (Jan–Mar) | Governance and management system foundations | First audit — establish baseline. Governance and documentation foundations are the highest priority for pre-certification readiness. |
| Q2 2026 (Apr–Jun) | Operational controls, document control, and supplier security | Areas with the highest frequency of minor NCs in first-cycle audits. Supplier security typically shows the largest gaps. |
| Q3 2026 (Jul–Sep) | Technical controls, monitoring, and incident management | Areas where partial implementation is most common. Incident management must be evidenced with operational records, not just procedures. |
| Q4 2026 (Oct–Dec) | Pre-certification verification (full scope) | Pre-certification quarter. Management review evidence must be complete. All previous NCs must be closed. Full-scope pre-certification review before Stage 2. |
| Audit scheduling and the Stage 2 date: The Q4 2026 pre-certification review in the schedule above should be timed to conclude no less than 6 weeks before the Stage 2 audit date. This allows time to address any findings discovered in the pre-certification review and produce evidence of corrective action before the external auditor arrives. Scheduling the final internal audit in the week before Stage 2 is a high-risk approach — it leaves no time to address what it finds. |
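The timing rule above can be made concrete with simple date arithmetic. The Stage 2 date below is hypothetical, and the 6-week buffer is the figure stated in the callout.

```python
from datetime import date, timedelta

def precert_review_deadline(stage2_date: date, buffer_weeks: int = 6) -> date:
    """Latest date the pre-certification review should conclude,
    leaving the stated buffer before the Stage 2 audit begins."""
    return stage2_date - timedelta(weeks=buffer_weeks)

stage2 = date(2027, 1, 18)  # hypothetical Stage 2 audit date
print(precert_review_deadline(stage2))  # 2026-12-07
```

Working backwards from the confirmed Stage 2 date like this, rather than forwards from the last internal audit, is what prevents the high-risk pattern of a final audit in the week before certification.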
Evidence Collection Methods
Evidence collection is the heart of the audit. Without adequate evidence collection, audit findings are opinions rather than conclusions. The auditor's job is not to assess whether things seem right — it is to gather sufficient evidence to reach a conclusion about whether the requirement is met. Different types of requirements call for different evidence collection methods:
| Method | What it involves | Strengths | Limitations | Best applied to |
|---|---|---|---|---|
| Document Review | Review controlled ISMS documents: policies, procedures, risk register, SoA, training records, management review minutes. | Efficient. Covers large scope quickly. Essential for clause compliance verification. | Shows what is documented, not what is done. Cannot verify operational reality from documents alone. | Clause 4–7 governance requirements. Policy completeness. Document control compliance. Competence and training records. |
| Staff Interviews | Question process owners, ISMS role holders, and operational staff about how activities are performed, what they understand their obligations to be, and how they respond to scenarios. | Reveals operational reality vs. documentation. Tests whether staff know and follow procedures. Identifies informal practices not in documentation. | Time-intensive. Interviewee anxiety can distort responses. Not all interviewees are equally representative. | Clause 7.3 awareness verification. Process understanding. Identifying documentation-practice gaps. Leadership and culture assessment. |
| System Observation | Observe processes in real time — watch an access review being conducted, observe how change requests are processed, watch an incident response procedure executed. | Most direct evidence of operational reality. No interpretation required — either the process runs as described or it does not. | Time-consuming. May not be possible for infrequent processes. Observation effect (Hawthorne effect) may alter behavior. | Change management, access review, incident response. Any process where the procedure describes a specific sequence of steps. |
| Log and Record Sampling | Sample records produced by ISMS operations: access review reports, vulnerability scan results, incident log entries, training completion records, management review minutes. | Objective evidence. Tests whether records are actually being produced and retained as required. Reveals gaps quickly. | Records may have been created for audit rather than as genuine operational output. Auditor must assess record quality, not just existence. | Clause 8 operational controls. Monitoring and measurement (Clause 9.1). Training records (Clause 7.2/7.3). Corrective action register (Clause 10). |
| Configuration Review | Examine actual technical configurations: firewall rules, access control settings, MFA enrollment, encryption configurations, vulnerability scanning schedules, SIEM alert rules. | Tests whether technical controls are actually deployed and configured correctly — not just documented as planned. | Requires technical knowledge. Access to systems must be arranged in advance. Point-in-time snapshot may not reflect ongoing operational state. | Annex A technical controls (Domain 8). MFA deployment completeness. Logging and monitoring configuration. Encryption verification. |
| Re-performance | The auditor executes a process themselves to verify it can be completed as described: run the access review query, execute the backup restoration procedure, trigger the incident escalation process. | Definitive test of whether a procedure is executable. Reveals gaps between written procedure and operational reality immediately. | Requires system access and technical capability. Risk of disrupting operations if not carefully controlled. Most resource-intensive method. | Critical procedures where failure would have regulatory or certification implications: incident notification, DR recovery, backup restoration. |
Most effective audits use a combination of methods. A Clause 7.3 awareness audit might start with LMS record review (document review), then interview two non-IT staff members (interview), then run a live phishing simulation check to see current enrollment and click rate data (system observation). Each method reinforces or challenges the conclusions from the others.
The Internal Audit Checklist
The checklist below provides specific audit questions, evidence collection methods, and evidence descriptions for each major clause. This is not a complete audit checklist — it covers the highest-value questions in each area. A full internal audit program would expand each section with additional questions drawn from the specific risk register, the SoA control list, and the organization's own documented procedures:
Clause 4 — Context

| Audit question | Method | Evidence to collect |
|---|---|---|
| ☐ Is the ISMS scope statement documented and does it define both inclusions and exclusions with rationale? | Document review | ISMS scope statement — check version, approval date, specificity of boundary |
| ☐ Does the context analysis (4.1) identify current external issues including UU PDP, POJK, and active threat vectors relevant to the sector? | Document review + interview | Context analysis document — check regulatory references are current (2025–2026 regulations included) |
| ☐ Does the interested parties register include all relevant regulators (KOMINFO, OJK, BI as applicable) with their specific IS requirements? | Document review | Interested parties register — verify regulators are listed with specific requirements, not generic entries |
Clause 5 — Leadership

| Audit question | Method | Evidence to collect |
|---|---|---|
| ☐ Was the Information Security Policy approved by the CEO/top management (not CISO)? | Document review | IS Policy — check signatory role title. Must be CEO or equivalent, not CISO or IT Manager |
| ☐ Does the IS Policy include a risk appetite statement and specific commitments to UU PDP and applicable regulations? | Document review | IS Policy Section 5 (risk appetite) and Section 3 (regulatory compliance) — check for specific regulatory references |
| ☐ Can top management explain the organization's risk appetite and the current top 3 risks without consulting notes? | Interview (executive sponsor) | Interview notes — if management cannot articulate risk posture, Clause 5.1 leadership is nominal, not genuine |
Clause 6 — Planning

| Audit question | Method | Evidence to collect |
|---|---|---|
| ☐ Does the risk register contain all risks identified in the risk assessment, with consistent scoring methodology applied? | Document review + sampling | Risk register — sample 5 entries and verify L×I calculation, risk owner assignment, and treatment status |
| ☐ Does the SoA account for all 93 Annex A controls with applicability decisions for each? | Document review | SoA — count controls and verify all 93 are listed. Check that excluded controls have specific justifications, not generic 'not applicable' |
| ☐ Does the risk treatment plan have specific actions, owners, and target dates — and is there progress evidence? | Document review + interview | Risk treatment plan — verify entries are specific. Cross-reference with implementation evidence for 3 controls |
Clause 7 — Support

| Audit question | Method | Evidence to collect |
|---|---|---|
| ☐ Do all ISMS role holders have current competence evidence on file? | Record sampling | Competence register — select ISMS Manager and 2 other role holders. Verify certificates/training records are current and role-appropriate |
| ☐ Have all in-scope staff completed security awareness training for the current policy version? | LMS record review + staff interview | LMS completion report — verify 100% (or identify and document exceptions). Interview 2 non-IT staff on policy content. |
| ☐ Does the document register reflect current versions of all controlled documents? | Document control review | Document register vs. active documents — spot-check 5 documents for version alignment between register entry and live document |
Clause 8 — Operation

| Audit question | Method | Evidence to collect |
|---|---|---|
| ☐ Is there evidence that access reviews have been conducted on schedule with documented outcomes? | Record sampling | Access review reports — request the last two quarters. Verify completed, signed off, changes actioned, evidence filed. |
| ☐ Has a vulnerability scan been run within the last 30 days and are critical findings being remediated within SLA? | Record review + system observation | Vulnerability scan reports — check dates, severity of findings, and remediation tickets for critical items |
| ☐ Are security incidents being logged and classified regardless of severity? | Incident log review | Incident register — check for entries covering the last 3 months. Absence of any entries for a 3-month period is suspicious — probe whether minor events are being captured |
Clause 9 — Performance

| Audit question | Method | Evidence to collect |
|---|---|---|
| ☐ Has at least one management review been conducted with all 8 required inputs addressed? | Document review | Management review minutes — verify all 8 Clause 9.3.2 inputs are addressed. Check for documented decisions (outputs), not just status reports. |
| ☐ Are ISMS KPIs being tracked and reported? | Document review + dashboard observation | ISMS KPI dashboard or equivalent — verify metrics are current and show trend data. Static or missing data signals monitoring is not operational. |
Clause 10 — Improvement

| Audit question | Method | Evidence to collect |
|---|---|---|
| ☐ Are corrective actions documented with root cause analysis and effectiveness verification? | CAR register review | CAR register — select 3 closed CARs. Verify: root cause documented, action addresses root cause (not symptom), effectiveness reviewed. |
| ☐ Are all corrective actions from previous audits or findings closed or on track? | CAR register review | Open CAR list — verify no CARs are significantly past their target closure date without documented justification. |
| Sampling strategy: For audit questions involving records (training completions, access reviews, incident logs), auditors should use risk-based sampling rather than selecting the most convenient records. Select a sample that includes: records from the earliest period in the audit window (not just recent months), records from high-risk systems or high-turnover teams, and records for staff who are not core ISMS team members. The least visible corners of the ISMS are where genuine gaps are most likely to hide. |
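A minimal sketch of this sampling strategy, assuming each record carries period, source-risk, and team metadata. The `Record` fields and function name are illustrative assumptions, not part of any standard.

```python
import random
from dataclasses import dataclass

@dataclass
class Record:
    ref: str
    period: str      # e.g. "2026-Q1"
    high_risk: bool  # produced by a high-risk system or high-turnover team
    core_team: bool  # belongs to a core ISMS team member

def risk_based_sample(records: list[Record], n: int, seed: int = 0) -> list[Record]:
    """Select an audit sample per the strategy above: guarantee coverage of
    the earliest period, high-risk sources, and non-core-team staff, then
    fill the remainder of the sample at random."""
    rng = random.Random(seed)
    earliest = min(r.period for r in records)
    must_cover = [
        [r for r in records if r.period == earliest],
        [r for r in records if r.high_risk],
        [r for r in records if not r.core_team],
    ]
    sample: list[Record] = []
    for bucket in must_cover:
        # Only draw from a bucket if nothing already sampled covers it.
        if bucket and not any(r in sample for r in bucket):
            sample.append(rng.choice(bucket))
    remaining = [r for r in records if r not in sample]
    sample += rng.sample(remaining, max(0, min(n, len(records)) - len(sample)))
    return sample[:n]
```

The guaranteed-coverage buckets are what distinguish this from convenience sampling: the earliest records, the high-risk sources, and the non-core staff are exactly the "least visible corners" the callout warns about.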
Writing Audit Findings That Drive Improvement
An audit finding that reads 'training records were found to be incomplete' tells the auditee what was wrong but not how to fix it, why it matters, or what specific standard was not met. A well-written finding provides five elements: a clear description of what was observed, the evidence it is based on, the specific clause or control requirement it violates, the classification and the rationale for it, and a proposed corrective action.
The sample finding below demonstrates what a well-written minor nonconformity looks like — with all five elements present and at sufficient specificity to drive a focused corrective action:
| NC-2026-003 · Minor Nonconformity | |
|---|---|
| Clause reference | Clause 9.2 / ISO 27001:2022 |
| Control area | Internal audit program |
| Finding description | The internal audit program for 2025 included quarterly audit cycles covering Clauses 4–10. Review of audit records shows that the Q3 2025 audit scheduled for July–September 2025 was not completed. No audit report exists for this period. The ISMS Manager confirmed the audit was deferred due to competing project priorities. No documented exception or rescheduling was recorded. |
| Evidence basis | Audit program schedule (Q3 2025 audit marked as planned). Absence of Q3 2025 audit report in the ISMS document register. ISMS Manager interview (25 February 2026). |
| Clause requirement | ISO 27001:2022 Clause 9.2.1 requires the organization to conduct internal audits at planned intervals. The 2025 audit program documented a quarterly audit schedule. Failure to conduct one of four planned audits, without documented rescheduling or exception, is a failure to implement the audit program as planned. |
| Classification rationale | Classified as minor nonconformity because the failure is isolated (one missed audit in four) and does not indicate a systemic failure of the audit program. The other three quarterly audits were conducted and documented. Classification would be escalated to major if the audit program had been systematically neglected. |
| Proposed corrective action | 1. Conduct the overdue Q3 scope audit within 30 days of this finding being issued. 2. Add a formal exception process to the audit procedure — if an audit cannot proceed on schedule, a documented exception must be raised and approved by the CISO with a revised completion date. 3. Add audit milestone reminders to the ISMS calendar. |
| Target closure date | 30 March 2026 |
| Responsible owner | ISMS Manager |
| Finding writing discipline: The most common weakness in internal audit findings is conflating the observation with the conclusion. 'Training records are incomplete' is an observation. 'Training records for three in-scope staff members are missing — this is a failure to demonstrate awareness as required by Clause 7.3, which requires persons doing work under the organization's control to be aware of the information security policy' is a finding. The finding identifies the standard, the evidence, and the gap — in enough specificity that both the auditee and a subsequent auditor can understand exactly what was wrong and verify that it has been corrected. |
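The five-element finding structure can be enforced mechanically before a report is issued. The sketch below uses illustrative field names; the only elements taken from the text are the five required components and the three classification levels.

```python
from dataclasses import dataclass, fields

CLASSIFICATIONS = {"major NC", "minor NC", "observation"}

@dataclass
class AuditFinding:
    finding_id: str                  # e.g. "NC-2026-003"
    description: str                 # what was observed
    evidence_basis: str              # documents, records, interviews relied on
    clause_reference: str            # e.g. "ISO 27001:2022 Clause 9.2.1"
    classification: str              # major NC / minor NC / observation
    classification_rationale: str    # why this classification applies
    proposed_corrective_action: str  # what should be done, by when

    def validate(self) -> list[str]:
        """Return a list of problems; an empty list means the finding
        carries every required element."""
        problems = [f"missing: {f.name}" for f in fields(self)
                    if not getattr(self, f.name).strip()]
        if self.classification not in CLASSIFICATIONS:
            problems.append(f"classification must be one of {sorted(CLASSIFICATIONS)}")
        return problems
```

Running `validate()` over a draft finding list before the report is issued catches exactly the failure mode the callout describes: observations issued without a clause reference, evidence basis, or classification rationale.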
Pre-Certification Readiness Checklist
The pre-certification readiness checklist is the final quality gate before the Stage 2 certification audit. It should be completed no later than 6 weeks before the Stage 2 date — giving time to address any gaps discovered. An honest pre-certification review that finds gaps and closes them before Stage 2 is infinitely more valuable than a superficial review that misses gaps and lets the external auditor find them instead.
Documentation Completeness

☐ ISMS Scope Statement — finalized, approved, version-controlled
☐ Information Security Policy — approved by CEO, current version, all staff acknowledged
☐ Risk Assessment — completed, all risks scored, risk owners assigned and briefed
☐ Statement of Applicability — all 93 controls addressed, exclusions justified, implementation status current
☐ Risk Treatment Plan — specific actions, named owners, target dates, implementation progress tracked
☐ IS Objectives Register — set, documented, monitored
☐ All supporting policies approved — Access Control, Incident Management, Data Classification, Supplier Security, Cryptography, BCP/DR
☐ All procedures written and current — Access Review, Incident Response, Vulnerability Management, User Provisioning/Deprovisioning
Control Implementation Evidence

☐ MFA — deployed for ALL accounts (not just privileged). Evidence: IAM report showing MFA-enrolled users
☐ Access reviews — at least one complete quarterly cycle evidenced. Evidence: completed review form with manager sign-off
☐ Vulnerability scanning — at least two scans completed. Evidence: scan reports with remediation records for critical findings
☐ Security awareness training — 100% staff completion for current policy version. Evidence: LMS completion report
☐ Phishing simulation — at least one simulation completed with results recorded. Evidence: simulation result report
☐ Supplier security addenda — all critical suppliers (top 5 minimum) have signed security addenda
☐ Incident log — active with entries for the last 3+ months (absence of entries is suspicious, not clean)
☐ Change management — security gate added to change process. Evidence: change records showing security review field
Governance Evidence

☐ Management review — at least one formal review conducted with all 8 Clause 9.3.2 inputs addressed. Minutes retained.
☐ Risk owner sign-off — all risk owners have formally accepted residual risks in their domain
☐ Competence records — all ISMS role holders have current competence evidence (certificates, training records)
☐ Internal audit — all scheduled audits for the certification period completed. No open major NCs.
☐ Corrective actions — all NCs from internal audit have documented corrective actions. Major NCs verified as resolved.
☐ ISMS KPI dashboard — active and current. At least one quarter of performance data available.
Certification Body Readiness

☐ Certification body selected and contracted
☐ Stage 1 documentation package assembled and submitted
☐ Stage 1 findings (if any) addressed and responded to in writing
☐ Stage 2 audit date confirmed
☐ Audit logistics arranged — venue (virtual or physical), system access for auditor, staff availability during audit days
☐ Evidence library organized and accessible — not scattered across email and file shares
Every item marked with ☐ that cannot be checked by the review date represents a risk to the Stage 2 outcome. Items in the Documentation Completeness and Governance Evidence categories are the most likely to generate major nonconformities if missing. Items in the Control Implementation Evidence category are the most likely to generate minor nonconformities.
| The evidence assembly risk: One of the most common pre-certification failures is discovering that evidence exists — controls have been implemented, processes have been run — but the evidence has not been collected and retained in an organized, accessible format. The MFA is deployed but there is no export from the identity management system showing enrollment status. The vulnerability scans were run but the reports were not saved. Evidence that cannot be produced within an audit session is evidence that does not exist for audit purposes. Conduct an evidence dry-run at least 4 weeks before Stage 2: for every applicable SoA control, attempt to retrieve the evidence. Address gaps immediately. |
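The evidence dry-run can be partially automated when the evidence library follows a folder-per-control layout. The layout, the control-to-folder mapping, and the function name below are assumptions for illustration, not anything prescribed by the standard.

```python
from pathlib import Path

def evidence_dry_run(soa_controls: dict[str, str], library_root: Path) -> list[str]:
    """For every applicable SoA control, attempt to locate its evidence in
    the organized evidence library. Returns the control IDs with no
    retrievable evidence — the gaps to address before Stage 2.

    soa_controls maps a control ID (e.g. "A.8.5") to the expected
    evidence subfolder name under library_root.
    """
    gaps = []
    for control_id, folder in soa_controls.items():
        evidence_dir = library_root / folder
        # A missing folder or an empty folder both count as a gap:
        # evidence that cannot be produced does not exist for audit purposes.
        if not evidence_dir.is_dir() or not any(evidence_dir.iterdir()):
            gaps.append(control_id)
    return gaps
```

A script like this only checks that something retrievable exists per control; a human still has to confirm the retrieved files are the right evidence, current, and complete.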
Common Internal Audit Mistakes
The table below maps the six most common internal audit failures and their consequences — organized by severity to help prioritize which patterns to avoid most urgently:
| Common mistake | Audit impact | Why it matters | Fix |
|---|---|---|---|
| Auditing only documents — no operational testing | Nonconformity | Generates an audit that passes on paper but misses operational gaps. External auditors then find operational gaps that the internal audit should have caught — undermining confidence in the internal audit program. | Every audit plan must include at least one operational evidence method: log sampling, system observation, record review, or staff interview. Document-only audits are compliance reviews, not ISMS audits. |
| ISMS Manager audits their own work | Finding likely | Independence violation — Clause 9.2 requires auditors to be objective and impartial. An ISMS Manager who audits their own risk assessment or the policies they wrote cannot be impartial. | Map all ISMS team responsibilities. Audit the risk assessment using a team member who did not conduct it. For small organizations, engage an external auditor for areas where internal independence cannot be achieved. |
| Internal audit finds zero nonconformities | Finding likely | Implausible for first-cycle ISMS. External auditors encountering a zero-finding internal audit report in a first-cycle certification will probe the internal audit rigorously — and typically find it was conducted superficially. | Conduct genuine audit testing. First-cycle audits almost always find minor nonconformities in document control, training records, or process execution. If the internal audit finds nothing, the audit methodology is likely too shallow. |
| Corrective actions closed without effectiveness verification | Observation | Clause 10.2 requires reviewing the effectiveness of corrective actions taken. Closing a CAR with 'action completed' without verifying the root cause was addressed is an incomplete corrective action process. | For each closed CAR, document the verification: what was checked, what evidence demonstrated the action was effective, and who verified it. This verification step often reveals that the action fixed the symptom but not the root cause. |
| Audit report issued without clause references | Observation | An audit report that says 'we found issues with training' without citing Clause 7.3 or a specific Annex A control gives the auditee no clear corrective action target and gives the next auditor no comparative reference. | Every finding must cite the specific clause or Annex A control it relates to. Format: 'Finding: [description]. Clause: [reference]. Evidence: [what was reviewed]. Classification: [major/minor NC / observation].' |
| No follow-up audit to verify NC closure | Observation | Corrective actions that are documented but never verified create a false record of ISMS health. External auditors who find open NCs from the internal audit will raise this as a process failure. | Build NC closure verification into the next audit cycle — either as a dedicated follow-up audit or as a standing agenda item in the subsequent quarterly audit. Document the verification outcome. |
These mistakes share a common root: the internal audit was treated as a compliance exercise rather than a genuine governance tool. The certification audit that follows an internal audit conducted as a formality will find what the internal audit should have found — and will find it in a higher-stakes environment where there is no time for corrective action before the certification decision.
Internal Audit as Organizational Learning
The purpose of an internal audit is not to produce a clean audit record. It is to generate organizational learning — to surface what is not working before it becomes an external finding, a regulatory investigation, or a security incident. Every nonconformity an internal audit finds and closes is one fewer nonconformity an external auditor or regulator finds and records.
The organizations with the most effective internal audit programs are those where the ISMS team and business unit managers have shifted their relationship with audit findings from defensive to curious. When a finding is raised, the response is not 'how do we minimize this' but 'what does this tell us about where our ISMS is actually weak?'. That shift in orientation — from compliance management to genuine organizational learning — is what separates ISMS programs that improve continuously from those that manage the gap between audit cycles.
An internal audit that finds real problems and drives genuine improvement is the best preparation for certification. Not because it produces a cleaner audit record, but because it produces a better management system.
SECTION 3 COMPLETE
Section 3 — ISO 27001 Implementation Process covers the full implementation journey from project initiation (Article 3.1) through to internal audit preparation (Article 3.8). Articles 3.1 through 3.8 together form a complete implementation guide that takes an organization from standing start to certification-ready ISMS.