The gap assessment is the diagnostic exercise that transforms a theoretical compliance goal into a specific, costed, time-bound remediation plan. Without one, organizations implement controls they already have, miss critical gaps, and arrive at the QSA assessment unprepared. A well-executed gap assessment is the single most valuable investment of the early implementation phase.
It is also what separates organizations that pass their first PCI DSS assessment from those that struggle. Organizations that rush through scoping and jump straight to remediation are flying blind: they do not know what work is actually required, in what sequence, or by whom. A structured gap assessment changes that.
What a Gap Assessment Is (and Is Not)
A PCI DSS gap assessment is an internal or consultant-led review of your current security controls against all applicable PCI DSS v4.0 requirements. The output is a structured finding list — not a pass/fail assessment (that is the QSA's role). A gap assessment does not provide a compliance certification and should not be confused with a readiness assessment (which is typically conducted closer to the QSA assessment date).
The key distinction: a gap assessment is a planning tool that tells you what needs to be done. A readiness assessment is a dress rehearsal that tells you whether you are ready for the actual assessment. The gap assessment should be comprehensive and frank about gaps. The readiness assessment should closely mirror the QSA assessment methodology.
| KEY IDEA | A gap assessment is a planning tool, not a certification. Its value is in accurately identifying what needs to be done, in what sequence, by whom, and by when. Organizations that skip the gap assessment and proceed directly to remediation based on a general understanding of PCI DSS requirements almost always miss significant gaps — particularly in documentation and evidence requirements. |
Gap Assessment Methodology
Requirement-by-Requirement Review
Work through each PCI DSS v4.0 requirement systematically. For each requirement: review the current control or lack thereof, interview the personnel responsible for that control, inspect evidence (configuration screenshots, policy documents, access records, scan results), and record the finding with its current compliance status.
The assessment should cover all 12 requirements and their sub-requirements. v4.0 introduced new detailed sub-requirements (particularly in Requirements 6, 7, 8, and 10), so ensure the assessment covers the full v4.0 requirement set, not outdated v3.2.1 requirements.
Evidence Inspection
Gap assessment evidence inspection should include:
- Network and system configuration reviews (firewall rules, security group configurations, hardening baselines)
- Policy and procedure document review (reading the actual policies that are claimed to exist)
- Access control list review (who has access to which systems and data)
- Log sample review (verifying that logs capture the required events and are reviewed as claimed)
- Vulnerability scan results review (examining historical scan reports)
- Staff interviews across IT, security, operations, and management (verifying that controls are understood and operational)
- Physical walk-through of CDE facilities (verifying that physical security controls exist and are maintained)
Evidence inspection is the empirical verification of control claims. Do not accept a manager's statement that "we review access quarterly" — ask to see the access review records, the review methodology, and confirmation that all inappropriate access was removed. Do not accept a statement that "our database is encrypted" — ask to see the encryption algorithm, key management procedures, and evidence that the keys are managed securely.
Finding Classification and Documentation
Each gap finding should be classified by: the specific requirement and sub-requirement it relates to (e.g., 3.1, 8.4.2), the nature of the gap (control absent, control present but deficient, control present but undocumented), the risk level (Critical / High / Medium / Low), the estimated remediation effort (small / medium / large), and the owner (team or individual responsible). Document each finding clearly and specifically, not in vague language.
Poor finding documentation looks like: "Access control is weak." Good finding documentation looks like: "Administrative access to the production payment database is not protected with multi-factor authentication. Currently, DBAs authenticate with username and password only, from workstations without network segmentation. This violates Requirement 8.4.1. Estimated remediation effort: large (requires MFA infrastructure implementation). Owner: IT Security."
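The classification scheme above maps naturally onto a simple record structure. The sketch below is illustrative only: the `GapFinding` class, the enum names, and the field layout are assumptions for this example, not part of any PCI SSC tooling.

```python
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    CRITICAL = "Critical"
    HIGH = "High"
    MEDIUM = "Medium"
    LOW = "Low"

class Effort(Enum):
    SMALL = "small"
    MEDIUM = "medium"
    LARGE = "large"

@dataclass
class GapFinding:
    """One entry in the gap assessment finding list."""
    requirement: str   # PCI DSS v4.0 requirement number, e.g. "8.4.1"
    nature: str        # control absent / deficient / undocumented
    description: str   # specific and factual, never generic
    risk: Risk
    effort: Effort
    owner: str         # named person, not just a team

# The MFA finding from the text, expressed as a record
mfa_gap = GapFinding(
    requirement="8.4.1",
    nature="control absent",
    description=("Administrative access to the production payment database "
                 "is not protected with multi-factor authentication."),
    risk=Risk.CRITICAL,
    effort=Effort.LARGE,
    owner="IT Security",
)
```

Keeping findings in a structured form like this (rather than free text) makes the later prioritization, tracking, and metrics steps mechanical.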
The Remediation Plan — Structure and Contents
| Column | Content | Notes |
|---|---|---|
| Requirement # | PCI DSS requirement and sub-requirement number | e.g., 8.4.2, 6.4.3 |
| Finding | Description of the gap in current state | Specific, factual — not generic |
| Risk Level | Critical / High / Medium / Low | Based on potential impact and exploitability |
| Remediation Action | Specific actions required to close the gap | Detailed enough to be assigned and tracked |
| Owner | Team or individual responsible for remediation | Named person, not just a team |
| Target Date | Date by which remediation must be complete | Based on risk level and overall timeline |
| Status | Not Started / In Progress / Completed / Verified | Updated weekly during implementation |
| Evidence | What evidence will demonstrate completion | Must match QSA testing procedures |
The remediation plan is the master project tracker. A typical plan contains 30–150 findings, depending on organization size and complexity. Each finding is tracked weekly during the implementation phase; status updates, blockers, and evidence collection are all managed through the plan.
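A plan with the columns above is easy to keep as a spreadsheet or CSV and sanity-check programmatically. The sketch below is a hypothetical validator: the column names mirror the table, but the validation rules (non-empty owner, allowed status and risk values) are assumptions for illustration.

```python
import csv
import io

REQUIRED_COLUMNS = ["Requirement #", "Finding", "Risk Level", "Remediation Action",
                    "Owner", "Target Date", "Status", "Evidence"]
VALID_STATUS = {"Not Started", "In Progress", "Completed", "Verified"}
VALID_RISK = {"Critical", "High", "Medium", "Low"}

def validate_plan(csv_text: str) -> list[str]:
    """Return a list of problems in the remediation plan; an empty list means clean."""
    reader = csv.DictReader(io.StringIO(csv_text))
    missing = [c for c in REQUIRED_COLUMNS if c not in (reader.fieldnames or [])]
    if missing:
        return [f"missing columns: {missing}"]
    problems = []
    for i, row in enumerate(reader, start=2):  # row 1 is the header
        if row["Status"] not in VALID_STATUS:
            problems.append(f"row {i}: bad status {row['Status']!r}")
        if row["Risk Level"] not in VALID_RISK:
            problems.append(f"row {i}: bad risk level {row['Risk Level']!r}")
        if not row["Owner"].strip():
            problems.append(f"row {i}: no owner assigned")
    return problems
```

Running a check like this weekly catches the staleness problem early: rows with invalid statuses or missing owners are exactly the rows nobody is actually tracking.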
Prioritizing Remediation — The Right Sequence
Not all gaps are equal. The remediation sequence should prioritize:
1. Architectural changes (network segmentation, encryption at rest). These take the longest and affect the most other controls; starting early ensures completion by the target assessment date.
2. Critical and high-risk technical gaps (missing MFA, unencrypted PANs, flat networks).
3. Vulnerability management infrastructure (ASV scanning, penetration testing, patch management). This must be in place early to establish a baseline.
4. Policy and documentation gaps.
5. Evidence collection infrastructure and evidence gathering itself.
Critical findings should be remediated within 30 days of identification, and high findings within 90 days. Medium and low findings can often be addressed in parallel with other work, but they must be tracked so they are not deprioritized into non-completion.
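The sequencing and deadline rules above can be made explicit in code. This is a sketch under stated assumptions: the 30- and 90-day windows come from the text, but the 180-day window for medium and low findings, the phase labels, and the function names are illustrative choices.

```python
from datetime import date, timedelta

# Deadline windows per risk level. Critical/High mirror the 30/90-day guidance;
# the Medium/Low window of 180 days is an assumption for this sketch.
REMEDIATION_WINDOW_DAYS = {"Critical": 30, "High": 90, "Medium": 180, "Low": 180}

def target_date(risk: str, identified: date) -> date:
    """Target remediation date: identification date plus the window for the risk level."""
    return identified + timedelta(days=REMEDIATION_WINDOW_DAYS[risk])

# Work sequence: architectural work first, then by descending risk severity.
PHASE_ORDER = {"architecture": 0, "technical": 1, "vuln-mgmt": 2, "policy": 3, "evidence": 4}
RISK_ORDER = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}

def remediation_order(findings):
    """Sort findings (dicts with 'phase' and 'risk' keys) into the suggested sequence."""
    return sorted(findings, key=lambda f: (PHASE_ORDER[f["phase"]], RISK_ORDER[f["risk"]]))
```

Encoding the rules this way keeps target dates consistent across the plan instead of being negotiated finding by finding.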
| Network architecture changes — particularly implementing network segmentation where none exists — have the longest lead time of any PCI DSS remediation activity. They require design, testing, change management approval, maintenance window scheduling, and post-change verification. If your gap assessment reveals missing segmentation, start the architecture work immediately, even while other remediations are in progress. |
Common Gap Patterns in Indonesian Organizations
Based on Bitlion's assessment experience, the most common PCI DSS gaps in Indonesian payment organizations include:
- Missing or inadequate network segmentation — flat networks where the entire corporate environment is technically in CDE scope
- No ASV scan history — organizations that have never had an external vulnerability scan by an approved vendor
- Encryption gaps — PANs stored in readable form in application databases or log files
- MFA not deployed — particularly for internal admin access to CDE systems
- Missing documentation — policies exist in practice but are not written down, or written policies were never approved or communicated
- No access review process — access rights accumulate over time with no regular review
- Vendor management gaps — no inventory of third-party service providers with CDE access, no annual compliance verification
Organizations that address these gap patterns early in the remediation phase typically pass their assessments much more smoothly than those that discover them late.
Tracking and Reporting Progress
The remediation plan is a living document. It should be reviewed in weekly project meetings with updates to status and any blockers. A monthly executive summary reporting on the overall compliance posture (percentage of requirements compliant, critical findings remaining, days to target assessment date) keeps management engaged and enables resource allocation decisions.
Effective tracking requires discipline. Assign specific owners to each finding, not generic teams. Owners are responsible for updating status weekly and escalating blockers. If status is not updated, the project manager should follow up. Stale remediation plans (not updated regularly) become useless — management stops trusting the data, and the program loses momentum.
Common tracking metrics include: percentage of findings completed, percentage of critical findings completed (should approach 100% by month 4), average time to remediation by risk level, blockers by category (resource, technology, approval, other), and scheduled vs. actual remediation dates.
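The first two metrics above are straightforward to compute from the remediation-plan rows. A minimal sketch, assuming each finding is a dict with `risk` and `status` keys (field names are illustrative), and counting both "Completed" and "Verified" as done:

```python
def plan_metrics(findings):
    """Compute basic tracking metrics from remediation-plan rows."""
    done = {"Completed", "Verified"}
    total = len(findings)
    completed = sum(1 for f in findings if f["status"] in done)
    criticals = [f for f in findings if f["risk"] == "Critical"]
    critical_done = sum(1 for f in criticals if f["status"] in done)
    return {
        "pct_complete": round(100 * completed / total, 1) if total else 0.0,
        "pct_critical_complete": (round(100 * critical_done / len(criticals), 1)
                                  if criticals else 100.0),
        "open_criticals": len(criticals) - critical_done,
    }
```

Numbers like these, generated from the live plan rather than assembled by hand, are exactly what the monthly executive summary needs.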
| Organizations that succeed in their first PCI DSS assessment treat the remediation plan as the project's source of truth — not a document that gets updated right before QSA kickoff, but a living tracker that drives weekly work. The organizations that struggle are those that do the gap assessment, file away the remediation plan, and then scramble to gather evidence in the weeks before the assessment. Make the remediation plan the center of your program, and update it ruthlessly. |
Evidence Planning and Requirements
For each finding in the remediation plan, document what evidence will prove the remediation is complete. Evidence types include: configuration screenshots (firewall rules, hardening settings), policy documents with approval dates, access review records, scan reports, test results, interview records, and signed acknowledgment forms.
The evidence list should match what a QSA will test. If the remediation action is "implement MFA for administrative access," the evidence should include: MFA system configuration documentation, a sample of MFA authentication logs showing MFA enforcement, policy documentation requiring MFA for admin access, and staff interview confirmation that MFA is required.
Start collecting evidence as soon as remediation begins, not at the end. A centralized evidence repository with clear naming conventions makes collection and organization straightforward. Evidence collected sporadically at the end of remediation is disorganized, incomplete, and harder to present to the QSA.
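A naming convention is easiest to keep if it is generated rather than remembered. The scheme below (a per-requirement folder, then a date-stamped, slugified file name) is one possible convention, not a PCI DSS requirement; the function name and layout are assumptions for this sketch.

```python
from datetime import date
from pathlib import PurePosixPath
import re

def evidence_path(requirement: str, collected: date, description: str,
                  ext: str, root: str = "evidence") -> PurePosixPath:
    """Build a repository path like evidence/req-8.4.2/2025-03-10_mfa-admin-logs.png."""
    # Slugify the description: lowercase, runs of non-alphanumerics become hyphens
    slug = re.sub(r"[^a-z0-9]+", "-", description.lower()).strip("-")
    return PurePosixPath(root) / f"req-{requirement}" / f"{collected.isoformat()}_{slug}.{ext}"
```

With every artifact filed under its requirement number at collection time, assembling the evidence package for the QSA becomes a directory listing rather than a search.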