The risk assessment is the intellectual center of ISO 27001. Every control in Annex A that you implement, every policy you write, every objective you set — all of it traces back to the risk assessment. If the risk assessment is sound, the rest of the ISMS has a logical foundation. If it is superficial, the entire implementation becomes a documentation exercise that may satisfy auditors on paper but does not actually reduce risk.
ISO 27001 does not prescribe a specific risk assessment methodology. This flexibility is valuable but creates a decision the implementation team must make deliberately: which methodology, which scoring scales, which acceptance thresholds, and how will the assessment be documented and applied consistently? These decisions, once made and documented, become the risk assessment methodology — one of the required documented inputs to the ISMS and one of the first things a certification auditor examines.
This article walks through every aspect of risk assessment methodology design: the methodological approaches available, how to define scoring scales that are precise enough to produce consistent results, how to set risk acceptance criteria that genuinely reflect the organization's risk tolerance, how to structure the methodology document itself, and what a completed risk register looks like for a typical Indonesian regulated organization.
ISO 27001's Requirements for Risk Assessment
Before choosing a methodology, understanding exactly what ISO 27001 requires is essential. Clause 6.1.2 sets out the requirements: the organization shall define and apply a risk assessment process that produces consistent, valid, and comparable results; identifies risks associated with the loss of confidentiality, integrity, and availability of information; assigns owners to identified risks; analyzes likelihood and impact; evaluates risks against defined acceptance criteria; and prioritizes risks for treatment.
Three words in that requirement carry significant weight: consistent, valid, and comparable. Consistent means the same assessor would reach the same conclusion assessing the same risk at different times. Valid means the assessment reflects the real-world risk landscape, not a sanitized version designed to produce an acceptable result. Comparable means risks assessed by different people can be meaningfully compared and ranked against each other.
| THE CONSISTENCY TEST | If two members of the risk assessment team independently score the same risk scenario and produce scores that differ by more than one point on either dimension, the methodology's definitions are not precise enough. Before the full risk assessment begins, calibrate the scoring scales with a set of worked examples that the whole team discusses until they reach consistent results. This calibration exercise — documented and retained — is itself evidence of methodology application. |
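The divergence check in the callout above can be sketched as a small helper. This is a hypothetical sketch (the one-point tolerance comes from the callout; the function name and assessor names are illustrative, not part of any standard):

```python
from itertools import combinations

def calibration_gaps(scores: dict[str, tuple[int, int]], tolerance: int = 1):
    """scores maps assessor name -> (likelihood, impact) for one worked example.

    Returns the assessor pairs whose scores diverge by more than `tolerance`
    on either dimension, i.e. the pairs that signal the scale definitions
    are not yet precise enough.
    """
    gaps = []
    for (a, (la, ia)), (b, (lb, ib)) in combinations(scores.items(), 2):
        if abs(la - lb) > tolerance or abs(ia - ib) > tolerance:
            gaps.append((a, b))
    return gaps

# Worked example: three assessors independently score the same scenario.
# Citra's likelihood score diverges from both colleagues by more than one
# point, so the likelihood definitions need further calibration discussion.
divergent = calibration_gaps({"Ari": (3, 4), "Budi": (4, 4), "Citra": (1, 5)})
```

Running the check after each calibration round, and retaining the inputs and outputs, doubles as the documented evidence of methodology application the callout describes.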
Choosing the Right Methodology
The table below maps the main risk assessment methodologies used in ISO 27001 implementations, evaluating each against the requirements and organizational contexts typical in Indonesian regulated industries:
| Methodology | How it works | Strengths / Limitations | Suitability for Indonesian orgs | Verdict |
| Asset-Threat-Vulnerability (ATV) | Identify assets → identify threats per asset → identify vulnerabilities per asset → score likelihood and impact for each threat-vulnerability combination. | ✓ Thorough and traceable. Each risk clearly linked to specific asset and threat. Strong audit trail. Naturally produces SoA control selection rationale. ✗ Time-intensive for large asset inventories. Can produce very large risk registers if not scoped carefully. | Recommended for regulated organizations. Best audit traceability. Most common approach in Indonesian financial services implementations. | Best choice |
| Scenario-Based | Define a set of realistic attack and failure scenarios relevant to the organization. Assess likelihood and impact of each scenario occurring. | ✓ Intuitive for business stakeholders. Easier to communicate to non-technical risk owners. Fewer register entries. ✗ May miss risks not represented in predefined scenarios. Harder to link directly to specific Annex A controls. | Useful as a complement to ATV — especially for new threat scenarios. Less common as a primary methodology in Indonesian audits. | Supplementary |
| Control Gap-Based | Start from the Annex A control list. For each applicable control, assess whether it is implemented. Absent controls represent risks. | ✓ Very direct path to SoA. Simple to understand. Good for organizations starting from low maturity. ✗ Works backwards from controls rather than from risks — technically non-conformant with ISO 27001's risk-first requirement. Can miss risks that Annex A controls do not directly address. | Not recommended as primary methodology. May be used alongside ATV for control gap validation. Misses the risk-driven intent of Clause 6.1.2. | Use with caution |
| Qualitative Scoring (5×5 matrix) | Score likelihood (1–5) and impact (1–5) for each risk. Multiply to produce a risk score (1–25). Classify by score bands. | ✓ Simple to apply consistently. Well understood by auditors. Easy to visualize in heat maps. Adaptable to any methodology. ✗ Scores are subjective — different assessors may score the same risk differently. Requires calibration. | Standard scoring approach used in combination with ATV or scenario-based methodology. Define score descriptors precisely to ensure consistency. | Standard approach |
| Quantitative / FAIR-based | Calculate financial impact of risks using statistical models. Express risk in monetary terms (expected loss). | ✓ Directly supports ROI-based security investment decisions. Provides board-level financial framing. ✗ Requires significant data and expertise. Overkill for most first-cycle implementations. Rarely expected by Indonesian certification bodies. | Not recommended for first-cycle ISO 27001 implementation. May be valuable for mature organizations building advanced risk management programs. | Advanced only |
For first-cycle implementations in Indonesian regulated industries, the Asset-Threat-Vulnerability (ATV) methodology with qualitative 5×5 scoring is the recommended combination. It produces the clearest audit trail, aligns most naturally with how regulators think about information security risk, and generates the traceability between risks and controls that makes the SoA defensible.
Defining the Likelihood Scale
The likelihood scale is the first place where generic methodology becomes specific to the organization. A 1–5 scale with labels like 'Rare', 'Unlikely', 'Possible', 'Likely', and 'Almost Certain' is standard — but labels without precise definitions produce inconsistent scoring. Two assessors looking at the same threat will assign very different likelihood scores if their understanding of 'Possible' versus 'Likely' differs.
The definitions below calibrate the likelihood scale specifically for Indonesian financial services and technology organizations in 2026, incorporating the current threat landscape and typical attack frequency observed in the sector:
| Score | Level | Definition | Example in Indonesian financial services context | Typical timeframe |
| 1 | Rare | The threat has never been observed in this sector and there are no known active threat actors targeting this asset type. No indicators of compromise in the threat intelligence landscape. | Physical break-in to a fully remote organization with no office premises. | Less than once in 10 years |
| 2 | Unlikely | The threat has occurred in the sector but infrequently. Threat actors exist but the organization is not a known or likely target based on current intelligence. | Nation-state APT targeting a small fintech company's internal systems. | Once in 5–10 years |
| 3 | Possible | The threat is plausible and has been observed in peer organizations. The organization has characteristics that make it a potential target. | Business email compromise targeting finance staff. Ransomware affecting companies in the same sector. | Once in 2–5 years |
| 4 | Likely | The threat has been observed multiple times in peer organizations and the threat landscape suggests the organization is at elevated risk. Active threat actors are known to target this asset type. | Phishing attacks targeting financial institution staff. SQL injection attempts against customer-facing web applications. | Once per 1–2 years |
| 5 | Almost Certain | The threat is occurring continuously or with very high frequency. Evidence of active targeting exists. The vulnerability is well-known and actively exploited in the wild. | Automated scanning and probing of internet-facing systems. Phishing targeting organizations in current threat reports. | Multiple times per year |
| Calibration note: Likelihood scores should reflect the current threat landscape, not historical averages. Ransomware attacks against Indonesian financial institutions have increased significantly in the 2024–2026 period. Credential stuffing against customer login endpoints is effectively continuous for any organization with an internet-facing application. These realities should be reflected in the likelihood definitions — scoring ransomware as 'Unlikely' (2) for a financial services organization is not credible. |
Defining the Impact Scale
The impact scale measures the consequences of a risk materializing — across CIA properties, regulatory obligations, financial impact, and reputational damage. For Indonesian organizations subject to UU PDP, POJK, and PBI, the regulatory impact dimension is particularly important and should be calibrated to the actual notification and fine thresholds in those frameworks:
| Score | Level | CIA impact | Regulatory | Financial (IDR) | Reputational / Operational |
| 1 | Negligible | Minimal effect on C, I, or A. No customer data affected. No operational disruption. | No regulatory notification required. No reportable incident. | Direct cost < IDR 50M. No revenue impact. | No external awareness. Internal issue only. Normal operations continue uninterrupted. |
| 2 | Minor | Limited compromise of non-sensitive data. Brief operational disruption (<4 hours). | Potential minor regulatory notification. Unlikely to trigger formal investigation. | Direct cost IDR 50–500M. Limited revenue impact. | Internal awareness only or minor external coverage. Minor service degradation. Recoverable within hours. |
| 3 | Moderate | Compromise of limited personal data (<1,000 records). Operational disruption of 4–24 hours. | UU PDP notification to KOMINFO likely required. OJK notification may be required for financial sector. | Direct cost IDR 500M–5B. Measurable revenue impact. | Some external coverage. Client notification required. Significant service degradation. Recovery within 1–3 days. |
| 4 | Significant | Compromise of substantial personal or financial data (1,000–50,000 records). Operational disruption 1–7 days. | UU PDP breach notification mandatory. OJK/BI regulatory investigation likely. Regulatory fine possible. | Direct cost IDR 5–50B. Material revenue impact. | Media coverage likely. Client trust materially affected. Major service disruption. Recovery takes days to weeks. |
| 5 | Critical | Catastrophic data breach or system compromise (>50,000 records). Extended operational outage (>7 days). | Full regulatory investigation. Maximum regulatory fines. Potential license implications for financial institutions. | Direct cost > IDR 50B. Existential financial impact. | Major national media coverage. Severe reputational damage. Client exodus. Critical system failure. Business continuity risk. |
| UU PDP calibration: The impact scale's regulatory dimension should be calibrated to reflect UU PDP's actual enforcement posture as of 2026. UU PDP Article 57 provides for administrative sanctions including warnings, temporary prohibition of data processing activity, deletion or destruction of data, and administrative fines. The implementing regulations on maximum fine amounts were finalized in early 2025. For organizations with large personal data processing volumes, the regulatory impact of a material breach should be scored at 4 or 5 — not 2 or 3 as was common in pre-UU PDP risk assessments. |
The Risk Scoring Matrix
With defined likelihood and impact scales, the risk score is calculated as Likelihood × Impact, producing scores from 1 to 25. The matrix below visualizes the complete scoring space with color-coded risk bands:
| L \ I | 1 — Negligible | 2 — Minor | 3 — Moderate | 4 — Significant | 5 — Critical |
| 5 — Almost Certain | 5 LOW | 10 HIGH | 15 HIGH | 20 CRITICAL | 25 CRITICAL |
| 4 — Likely | 4 LOW | 8 MEDIUM | 12 HIGH | 16 HIGH | 20 CRITICAL |
| 3 — Possible | 3 LOW | 6 MEDIUM | 9 MEDIUM | 12 HIGH | 15 HIGH |
| 2 — Unlikely | 2 MINIMAL | 4 LOW | 6 MEDIUM | 8 MEDIUM | 10 HIGH |
| 1 — Rare | 1 MINIMAL | 2 MINIMAL | 3 LOW | 4 LOW | 5 LOW |
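In tooling terms, classification reduces to a small threshold lookup. A minimal sketch, assuming the band thresholds defined under the risk acceptance criteria (CRITICAL 20–25, HIGH 10–16, MEDIUM 6–9, LOW 3–5, MINIMAL 1–2); the function name is illustrative:

```python
def risk_band(likelihood: int, impact: int) -> tuple[int, str]:
    """Score a risk as Likelihood x Impact and classify it into the
    band defined by the risk acceptance criteria."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be on the 1-5 scale")
    score = likelihood * impact
    # Band floors in descending order; first match wins.
    for floor, band in ((20, "CRITICAL"), (10, "HIGH"), (6, "MEDIUM"), (3, "LOW")):
        if score >= floor:
            return score, band
    return score, "MINIMAL"
```

Deriving the band from the score in one place, rather than hand-labeling matrix cells, is what keeps the heat map, the register, and the acceptance criteria consistent with each other between assessment cycles.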
The risk matrix is a visual communication tool as much as a calculation device. When presented in management reviews, it provides immediate visual context for the risk portfolio — where risks cluster, which risks are in critical territory, and how the distribution shifts between assessment cycles.
Risk Acceptance Criteria
Risk acceptance criteria define the threshold below which residual risks are acceptable without further treatment. These criteria are a management decision — they express the organization's risk appetite — and must be approved by top management before the risk assessment begins. Setting them too low makes the risk register unmanageable (every minor risk requiring treatment). Setting them too high allows material risks to be accepted without adequate justification.
The criteria below represent a calibrated starting point for Indonesian regulated organizations. They may be adjusted upward or downward based on the specific regulatory obligations and risk appetite of the organization, subject to executive sponsor approval:
| Risk band | Acceptance criteria | Treatment obligation | Review cadence |
| CRITICAL (20–25) | Never. No risk in this band may be accepted without treatment. | Immediate treatment required. Risk owner escalation mandatory. Treatment plan completion target: within 90 days unless documented justification approved by executive sponsor. | Monthly progress review until resolved. |
| HIGH (10–16) | Not accepted without documented executive sponsor approval of residual risk and compensating controls. | Treatment required. Risk treatment plan target completion: within 180 days. If treatment delayed, compensating controls must be documented. | Quarterly risk register review. Treatment plan progress reviewed monthly. |
| MEDIUM (6–9) | May be accepted by the risk owner with documented rationale if residual risk after treatment would remain medium and treatment cost is disproportionate. | Treatment recommended but may be deferred if risk owner formally accepts. Acceptance must be documented with rationale and reviewed annually. | Annual risk register review. Re-evaluate if business context changes. |
| LOW (3–5) | Accepted by the ISMS Manager or risk owner with brief documented rationale. | Treatment optional. Monitor for changes in likelihood or impact that would elevate the risk score. | Annual risk register review. No escalation required. |
| MINIMAL (1–2) | Accepted. No treatment required unless risk score increases. | No treatment required. Record in risk register with accepted status. | Annual review only. Confirm risk score remains accurate. |
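Encoding the acceptance criteria as data keeps register tooling and the methodology document in sync. A minimal sketch, assuming the bands, approval roles, and day targets from the table above; the structure and field names are illustrative, not a prescribed schema:

```python
from typing import NamedTuple, Optional

class BandPolicy(NamedTuple):
    acceptable_without_treatment: bool   # may the risk be accepted as-is?
    acceptance_authority: str            # who must approve acceptance
    treatment_target_days: Optional[int] # completion target, if treatment required
    review_cadence: str

# Thresholds and roles transcribed from the acceptance criteria table.
ACCEPTANCE_POLICY = {
    "CRITICAL": BandPolicy(False, "never accepted - treatment mandatory", 90,
                           "monthly until resolved"),
    "HIGH":     BandPolicy(False, "executive sponsor", 180,
                           "quarterly register / monthly treatment plan"),
    "MEDIUM":   BandPolicy(True, "risk owner with documented rationale", None,
                           "annual"),
    "LOW":      BandPolicy(True, "ISMS Manager or risk owner", None, "annual"),
    "MINIMAL":  BandPolicy(True, "accepted by default, recorded in register",
                           None, "annual"),
}
```

Because the thresholds are a management decision, any change to this table should trace back to an approved revision of the methodology document, not to an edit made unilaterally in the tooling.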
| Risk appetite as a board-level decision: The risk acceptance criteria above should be presented to the executive sponsor and, where appropriate, the board for approval before the risk assessment begins. The decision to accept a High risk (score 10–16) without treatment is a governance decision, not a technical one. Organizations where the CISO sets risk acceptance thresholds unilaterally are not meeting the leadership requirements of Clause 5.1 — risk appetite must be set by management. |
The Threat Library: Indonesian Context
A threat library provides a structured starting point for threat identification in the risk assessment — ensuring that common threats are not overlooked and providing a shared vocabulary for the assessment team. The library below is calibrated to the threat landscape relevant to Indonesian financial services, fintech, and technology organizations in 2026:
| Category | Representative threats |
| External Cyber Threats | Phishing and business email compromise targeting finance and operations staff; ransomware; credential stuffing against customer login endpoints; automated scanning and exploitation of internet-facing systems, including SQL injection and API enumeration; nation-state APT activity. |
| Insider Threats | Data theft by departing employees during the notice period; misuse of privileged access; unauthorized copying of customer and transaction data. |
| Physical Threats | Break-in to office or data center premises; theft or destruction of equipment holding organizational information. |
| Technical / System Failures | Extended outage at a critical cloud or third-party provider; SLA breach by a critical supplier; failure of core systems causing prolonged operational disruption. |
| Regulatory & Compliance | Failure to meet UU PDP breach notification timelines; undefined or untested regulatory notification processes; non-compliance with OJK and PBI security reporting obligations. |
The threat library is a starting point, not an exhaustive list. Assessors should supplement it with threats specific to the organization's technology stack, customer base, geographic operations, and sector-specific threat intelligence. Threat intelligence from BSSN advisories, OJK security circulars, and commercial threat intelligence services should be reviewed before each risk assessment cycle to ensure the library reflects the current environment.
The Methodology Document
ISO 27001 requires that the risk assessment process produces consistent, valid, and comparable results — and this consistency can only be achieved if the methodology is documented. The methodology document is the reference that all risk assessors use to ensure they are applying the same approach, the same definitions, and the same acceptance criteria. It is also an explicit audit artifact: certification auditors will request it and test whether the actual risk assessment was conducted consistently with its provisions.
A complete risk assessment methodology document has ten sections. The outline below maps each section to its required content:
| § | Section title | Content requirements |
| 1 | Purpose and Scope | The purpose of this methodology, the ISMS scope it applies to, and how it relates to the information security policy and risk appetite statement. |
| 2 | Risk Assessment Approach | The methodology selected (Asset-Threat-Vulnerability). The rationale for selection. How this methodology ensures results are consistent, valid, and comparable across assessments. |
| 3 | Asset Classification | How information assets are categorized (information, software, hardware, services, people, intangible). Asset grouping approach for risk assessment. Relationship between asset inventory and risk register. |
| 4 | Threat and Vulnerability Identification | Sources used for threat identification (ENISA threat landscape, BSSN advisories, sector intelligence, MITRE ATT&CK). How vulnerabilities are identified (technical testing, assessments, incident analysis). Threat library reference. |
| 5 | Likelihood Scale | The 5-point likelihood scale with precise definitions for each level. Examples calibrated to the organization's sector and threat environment. Calibration guidance for assessors. |
| 6 | Impact Scale | The 5-point impact scale with precise definitions across CIA dimensions, regulatory exposure, financial impact, and reputational impact. Calibrated to Indonesian regulatory context (UU PDP, POJK, PBI thresholds). |
| 7 | Risk Scoring and Classification | Risk score = Likelihood × Impact. Risk band definitions (Minimal/Low/Medium/High/Critical). Risk acceptance criteria by band. Treatment obligation by band. |
| 8 | Risk Treatment Options | The four treatment options (Modify/Retain/Avoid/Share). Decision criteria for each. Documentation requirements for treatment decisions. Risk owner role in treatment approval. |
| 9 | Residual Risk and Acceptance | Definition of residual risk. Risk owner acceptance process. Documentation requirements. Management review of residual risks above acceptance threshold. |
| 10 | Review and Update Schedule | Annual scheduled review cadence. Event triggers for out-of-cycle assessment updates. How updates are documented and traced to the triggering event. Version control for risk register. |
The methodology document should be approved by the CISO and the executive sponsor before risk assessment work begins. It should be version-controlled — if the methodology changes between assessment cycles, the change must be documented and the implications for existing risk register entries must be addressed. A methodology change is an ISMS change under Clause 6.3 and should be treated accordingly.
A Completed Risk Register: Sample
The sample below shows six completed risk register entries typical for an Indonesian fintech organization. Each entry demonstrates the ATV methodology applied with the qualitative 5×5 scoring approach, producing a traceable risk record that supports both SoA control selection and risk treatment planning:
| ID | Risk description | Threat scenario | L | I | Score | Treatment plan summary |
| R-001 | Customer payment database — unauthorized access via phishing / credential compromise targeting privileged accounts | Credential theft by external attacker; lateral movement to database | 4 | 5 | 20 | MFA all accounts (A.8.5); PAM solution (A.8.2); security awareness (A.6.3). RTP Q2 2026. |
| R-002 | Personal data breach — bulk customer PII exfiltrated through unsecured API endpoint | API enumeration attack; lack of rate limiting and authentication | 3 | 5 | 15 | API authentication and rate limiting (A.8.20); API security testing in SDLC (A.8.29). RTP Q2 2026. |
| R-003 | Ransomware encryption of production systems — payment processing unavailable | Ransomware delivered via phishing or RDP exploitation; lateral spread | 3 | 4 | 12 | Immutable backup (A.8.13); network segmentation (A.8.22); EDR deployment (A.8.7). RTP Q3 2026. |
| R-004 | Insider data theft by departing employee — customer and transaction data copied before offboarding | Privileged user exfiltrates data during notice period | 3 | 3 | 9 | Formal offboarding procedure (A.6.5); DLP monitoring (A.8.12); access revocation SLA (A.5.18). RTP Q1 2026. |
| R-005 | Cloud provider extended outage — payment processing unavailable for >4 hours | Third-party dependency failure; SLA breach by critical cloud provider | 2 | 4 | 8 | Multi-region redundancy (A.8.14); DR testing program (A.5.30). RTP Q3 2026. |
| R-006 | UU PDP breach notification failure — incident occurs but 14-day obligation not met | Security incident not escalated appropriately; notification process not defined | 3 | 4 | 12 | Incident notification procedure (A.5.26); regulatory notification checklist; staff training. RTP Q1 2026. |
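Register entries like those above can be modeled so that scores and bands are always derived from the recorded likelihood and impact rather than typed by hand. A minimal sketch, assuming the 5×5 scoring rules defined earlier; the class and field names are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RiskEntry:
    risk_id: str
    description: str
    threat_scenario: str
    likelihood: int                       # 1-5 per the likelihood scale
    impact: int                           # 1-5 per the impact scale
    treatment_plan: Optional[str] = None  # required for HIGH and CRITICAL

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

    @property
    def band(self) -> str:
        # Band thresholds from the risk acceptance criteria.
        s = self.score
        if s >= 20:
            return "CRITICAL"
        if s >= 10:
            return "HIGH"
        if s >= 6:
            return "MEDIUM"
        if s >= 3:
            return "LOW"
        return "MINIMAL"

    def missing_treatment_plan(self) -> bool:
        """True when the band mandates treatment but no plan is recorded."""
        return self.band in ("CRITICAL", "HIGH") and self.treatment_plan is None

# R-001 from the sample register: likelihood 4, impact 5 -> score 20, CRITICAL.
r001 = RiskEntry("R-001",
                 "Customer payment database - unauthorized access",
                 "Credential theft by external attacker; lateral movement",
                 likelihood=4, impact=5,
                 treatment_plan="MFA (A.8.5); PAM (A.8.2); awareness (A.6.3)")
```

A completeness check over such entries (every CRITICAL or HIGH risk has a treatment plan, every accepted risk has a rationale) is cheap to run before each management review and surfaces exactly the gaps an auditor would look for.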
| Bitlion risk assessment module: Bitlion's platform guides the risk assessment process from asset inventory through threat identification, scoring, and treatment selection. The risk register is maintained in the platform with full version history, risk owner assignment, and direct linkage to Annex A control entries in the SoA. Treatment plan milestones are tracked against responsible owners with automated progress notifications. Risk register changes are logged with date, assessor, and rationale — satisfying Clause 8.2's requirement to retain results of risk assessments as documented information. |
Common Risk Assessment Mistakes
Assessing risks at the wrong level of granularity
Risk assessments that list every individual system as a separate asset with its own risk entries produce registers with hundreds of entries — unmanageable for ongoing maintenance and overwhelming for risk owners who are supposed to review and accept them. The right level of granularity groups related assets into logical sets (the customer payment database, the customer-facing web application, the internal communication infrastructure) and assesses risks at that level. Individual system risks can be captured in a technical annex if needed for control selection detail.
Scoring without evidence
Likelihood and impact scores that are assigned by the ISMS team without input from process owners, technical staff, or threat intelligence produce scores that reflect assumptions rather than organizational reality. The risk assessment should involve the people who operate the systems and processes at risk — they have the most accurate view of what vulnerabilities exist, what controls are actually functioning, and what the business impact of disruption would be. Scores without supporting rationale are vulnerable to challenge in audit.
Treating risk acceptance as a formality
Risk registers where the CISO has accepted all risks on behalf of all risk owners — without business managers reviewing and formally accepting risks in their domain — do not satisfy the risk owner requirement of Clause 6.1.2. Risk acceptance is a business decision made by the person accountable for the assets and processes at risk. The CISO facilitates and records the acceptance; the risk owner makes it. This distinction is tested explicitly in external audits.
Never updating the risk register after the initial assessment
The initial risk assessment produces the baseline register. Every subsequent business change — new products, new systems, new regulatory requirements, security incidents — creates the obligation to update that register. Risk registers that look identical at the first and second surveillance audits, despite material changes in the organization's operations or the threat landscape, signal that the risk management process is not genuinely operating. The risk register is a living document, not a completed project deliverable.
| The most expensive risk assessment mistake: Under-scoring risks to make the register more manageable. An organization that scores all risks below the acceptance threshold to avoid the work of producing treatment plans has not conducted a risk assessment — it has conducted a risk avoidance exercise. Auditors identify this pattern quickly: if the risk register shows no HIGH or CRITICAL risks for a regulated financial services organization in 2026, the assessment is not credible. Honest risk assessment is harder, but it is the only kind that produces genuine risk reduction. |
The Risk Assessment as an Ongoing Discipline
The risk assessment methodology is designed for repeated use across the ISMS lifecycle — not just for the initial implementation. The methodology document, the scoring scales, the acceptance criteria, and the threat library should be reviewed and updated annually as part of the ISMS review cycle. The risk register should be updated at every management review and whenever triggered by the events mapped in Article 2.6 (Clause 8.2).
Organizations that internalize risk assessment as a recurring management discipline — rather than a compliance exercise to be endured annually — find that their risk registers become progressively more accurate, their control selection becomes progressively more targeted, and their security investment increasingly focuses on the risks that matter most. The methodology is the enabler. The discipline is the choice.