Protecting stored account data is the core obligation of PCI DSS — and the most technically complex to implement correctly. The choice between encryption, tokenization, truncation, and hashing is not arbitrary: each has different scope implications, different key management requirements, and different operational characteristics. This article covers how to implement each method and build the key management program that underpins them all.
Choosing the Right Protection Method
| Method | How It Works | Scope Impact | Key Management Complexity | Best For |
|---|---|---|---|---|
| Encryption (AES-256) | PAN stored as ciphertext; decrypted only for authorized functions | CDE remains in scope; key management adds scope | High — full key lifecycle management required | Systems that must access full PANs for processing |
| Tokenization | PAN replaced with non-sensitive token; vault maps token to PAN | Systems using only tokens may be out of CDE scope | Moderate — vault security is critical | Internal systems that reference cards but do not need the PAN |
| Truncation | At most the first 6 and last 4 digits are retained; the middle digits are permanently removed | Truncated values are not PANs — out of PCI DSS scope | None — irreversible | Display, receipts, reference purposes only |
| One-way hash (HMAC) | PAN transformed to fixed-length hash; original cannot be recovered | Hashed values out of PCI DSS scope if properly keyed | Moderate — HMAC key management required | Lookup/matching without needing actual PAN |
The choice of protection method affects not just the CDE but the entire application architecture. An organization that chooses tokenization for scope reduction must implement a tokenization vault and ensure all systems that reference cards use tokens instead of PANs. An organization that chooses encryption must implement a complete key management program. These are foundational architectural decisions that shape the entire implementation.
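The irreversible methods from the table are simple enough to sketch directly. The following is a minimal illustration of truncation (first 6 / last 4) and a keyed one-way hash using Python's standard library; the key value and masking format are illustrative placeholders, not a prescribed implementation.

```python
import hmac
import hashlib

def truncate_pan(pan: str) -> str:
    """Keep at most the first 6 and last 4 digits; mask the rest."""
    return pan[:6] + "*" * (len(pan) - 10) + pan[-4:]

def hmac_pan(pan: str, key: bytes) -> str:
    """Keyed one-way hash (HMAC-SHA256) for matching/lookup without the PAN.
    The HMAC key must be managed like any other cryptographic key."""
    return hmac.new(key, pan.encode(), hashlib.sha256).hexdigest()

pan = "4111111111111111"
print(truncate_pan(pan))  # 411111******1111
hmac_key = b"example-key"  # placeholder only; in practice, fetch from a KMS/HSM at runtime
print(hmac_pan(pan, hmac_key))
```

Note that an unkeyed hash (plain SHA-256 of a PAN) is brute-forceable because the PAN space is small; the keyed construction is what keeps hashed values out of scope.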
Implementing AES-256 Encryption for Stored PANs
Encryption at the Application Layer
The recommended approach for most organizations is application-level encryption: the application encrypts the PAN before writing it to the database, and decrypts it only when the business function requires the full PAN. The database stores only ciphertext — a database administrator who can access the raw data cannot read PANs without the encryption key.
Application-level encryption provides strong separation of duties:
- The application developer writes the encrypt/decrypt code using a key that is never embedded in the application code — keys are retrieved from a key management system at runtime
- The database administrator can maintain the database without any ability to read PANs
- The key manager (or HSM) is the only system that holds the keys, and it must enforce strict access controls
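A minimal sketch of this pattern, using AES-256-GCM from the third-party `cryptography` package (a common but not mandated choice): the application encrypts before the database write and decrypts only on demand. The key-generation line is a stand-in for the runtime fetch from a KMS/HSM described above.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_pan(pan: str, dek: bytes) -> bytes:
    """Encrypt the PAN with AES-256-GCM; the database stores only this blob."""
    nonce = os.urandom(12)  # unique per encryption; stored alongside the ciphertext
    return nonce + AESGCM(dek).encrypt(nonce, pan.encode(), None)

def decrypt_pan(blob: bytes, dek: bytes) -> str:
    """Decrypt only when the business function requires the full PAN."""
    return AESGCM(dek).decrypt(blob[:12], blob[12:], None).decode()

# Placeholder: in production the DEK is retrieved from the key manager at runtime,
# never generated or stored inside the application.
dek = AESGCM.generate_key(bit_length=256)
blob = encrypt_pan("4111111111111111", dek)
assert decrypt_pan(blob, dek) == "4111111111111111"
```

GCM is an authenticated mode, so tampering with the stored ciphertext causes decryption to fail rather than return garbage — a useful property when the database itself is untrusted.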
Transparent Data Encryption (TDE)
Database Transparent Data Encryption (offered natively by Oracle and SQL Server, and by commercial PostgreSQL distributions) encrypts data files and backups at the storage layer. TDE protects against theft of physical storage media but does not protect against a database-level attack — a DBA or attacker with database access can still read plaintext data if TDE is the only encryption layer. TDE is a valuable control for physical security, but it alone is generally insufficient for Requirement 3.
The right approach combines application-level encryption (for confidentiality from DBAs and attackers with database access) with TDE (for confidentiality of physical storage media). When both are in place, the database is encrypted at rest (TDE), and the PANs stored in the database are also encrypted (application-level), creating two layers of protection.
| IMPORTANT | Transparent Data Encryption (TDE) is often misunderstood as a complete solution for Requirement 3. It is not. TDE protects stored files from media theft but does not protect against database-level access — which is the primary attack vector in most card data breaches. PCI DSS QSAs will ask whether the encryption protects the data from DBAs and application administrators with database access. TDE alone typically cannot answer yes. |
Key Management Program — Implementation
Hardware Security Modules (HSMs)
An HSM is a dedicated hardware device (or cloud-hosted equivalent) that safeguards and manages cryptographic keys and performs cryptographic operations. HSMs provide:
- Tamper-resistant key storage — keys cannot be extracted in plaintext, even with physical access
- FIPS 140-2 Level 3 validation (or its successor, FIPS 140-3), the expected baseline for PCI DSS
- Hardware-accelerated cryptographic operations, faster than software implementations
- Key generation using certified random number generators, ensuring keys are truly random
For organizations handling cardholder data, an HSM is the appropriate control for managing encryption keys. Cloud providers (AWS CloudHSM, Azure Dedicated HSM, GCP Cloud HSM) provide HSM-as-a-service, or organizations can deploy physical HSMs on-premises. The HSM never releases keys in plaintext — it performs encryption/decryption operations within the HSM, or it wraps keys (encrypts them with a key-encrypting key) before releasing them.
Key Management Lifecycle Implementation
- Key generation: Generate all data encryption keys (DEKs) and key-encrypting keys (KEKs) within the HSM using hardware random number generation
- Key distribution: Distribute keys using key-wrapping (encrypting the DEK under the KEK) — never transmit keys in plaintext
- Key storage: DEKs encrypted under KEKs, stored in the HSM or encrypted key store; KEKs stored in HSM only
- Key rotation: Establish cryptoperiods for each key type — typically 1 year for DEKs; define key rotation procedures and automate where possible
- Key archival: Archived keys required for decrypting historical data; maintain securely with strict access controls
- Key destruction: Zeroize keys when retired; document destruction with a key destruction certificate
Each step of the key lifecycle must be documented and auditable. When a new DEK is generated, there should be a record of the generation (timestamp, who initiated it, the key identifier, the algorithm). When a key is rotated, both the old and new key should be tracked. When a key is destroyed, there should be a destruction certificate confirming zeroization.
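The key-wrapping step in the lifecycle above (distributing and storing a DEK encrypted under a KEK) can be sketched with the AES Key Wrap primitive from the `cryptography` package. This is an illustrative stand-in: in a real deployment the KEK is generated and held inside the HSM, and the wrap/unwrap operations run there.

```python
import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

# Placeholders: in production both keys are generated inside the HSM,
# and the KEK never leaves it.
kek = os.urandom(32)  # key-encrypting key
dek = os.urandom(32)  # data encryption key

# The DEK is stored/transmitted only in wrapped (encrypted) form
wrapped = aes_key_wrap(kek, dek)
assert aes_key_unwrap(kek, wrapped) == dek
print(len(wrapped))  # 40: the 32-byte key plus an 8-byte integrity block
```

The 8-byte integrity block means a corrupted or substituted wrapped key fails to unwrap, rather than silently yielding a wrong DEK.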
Key Access Control
Not all systems should have access to all keys. A fraud detection system that only needs to match PANs (using hashing) should not have access to encryption keys. A reporting system that only needs to show merchant transaction summaries should not have access to card data. Implement a key access control policy: each key is associated with a specific purpose and list of authorized systems, and access to keys is logged and monitored.
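The purpose-and-authorization policy described above can be expressed very simply. The following is a hypothetical sketch (key IDs, system names, and the policy structure are all illustrative) showing the two properties the text requires: each key is bound to a purpose and an allow-list of systems, and every access attempt is logged.

```python
# Hypothetical key access policy: each key maps to its purpose and authorized systems
KEY_POLICY = {
    "dek-cards-2024": {"purpose": "pan-encryption", "systems": {"payment-api"}},
    "hmac-match-2024": {"purpose": "pan-matching", "systems": {"fraud-engine"}},
}

def authorize(system: str, key_id: str, audit_log: list) -> bool:
    """Grant key access only to listed systems; log every attempt either way."""
    allowed = system in KEY_POLICY.get(key_id, {}).get("systems", set())
    audit_log.append((system, key_id, "granted" if allowed else "denied"))
    return allowed

log = []
assert authorize("fraud-engine", "hmac-match-2024", log)   # matching key: allowed
assert not authorize("fraud-engine", "dek-cards-2024", log)  # encryption key: denied
```

In practice this policy lives in the KMS/HSM itself (e.g., as key policies or partitions), not in application code, so it cannot be bypassed by the applications it governs.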
Tokenization Implementation
Third-Party Tokenization Services
For most Indonesian organizations, particularly merchants and smaller fintech companies, using a third-party PCI DSS-compliant PSP that provides tokenization is the most practical approach. Stripe, Braintree, Adyen, and major Indonesian PSPs (Midtrans, Xendit, Doku) all provide tokenization services. The customer enters their card details via the PSP's hosted payment page or its JavaScript SDK, the PSP handles the PAN and performs all required security controls, and the merchant receives a token in response. The merchant stores and uses only the token.
This approach is the simplest from a compliance perspective — the merchant does not need to implement its own HSM, key management program, or data protection controls. The PSP is responsible for those controls and can be verified through annual compliance assessments. For merchants, this is the lowest-friction path to PCI DSS scope reduction.
Internal Tokenization Architecture
For organizations that need internal tokenization (e.g., PSPs that process card data for multiple merchants), the tokenization architecture requires:
- A tokenization vault — a highly protected database mapping tokens to PANs
- An HSM for token generation and vault key management
- Strict access controls to the vault — only authorized systems can query it, and all queries are logged
- Network segmentation isolating the vault from other systems
- Detailed logging of all vault access and transactions
An internal tokenization service is a significant engineering effort. The vault must be highly available (tokenization is in the critical path of transaction processing), highly secure (it is the highest-value target in the system), and auditable (every access is logged). Only larger organizations with sophisticated engineering teams and the resources to maintain a specialized service should attempt internal tokenization.
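To make the architecture concrete, here is a deliberately minimal vault sketch using Python's standard library. The class name, token format, and in-memory store are all illustrative; a production vault encrypts the PAN column under an HSM-managed key, enforces caller authorization, and writes an audit record on every access.

```python
import secrets

class TokenVault:
    """Minimal token vault sketch (illustrative only)."""

    def __init__(self):
        self._store = {}  # token -> PAN; would hold ciphertext in production

    def tokenize(self, pan: str) -> str:
        # A random token has no mathematical relationship to the PAN,
        # so it is non-reversible without the vault mapping
        token = "tok_" + secrets.token_hex(16)
        self._store[token] = pan
        return token

    def detokenize(self, token: str, caller: str) -> str:
        # Production: verify the caller's authorization and log this access
        return self._store[token]

vault = TokenVault()
t = vault.tokenize("4111111111111111")
assert t.startswith("tok_")
assert vault.detokenize(t, "payment-api") == "4111111111111111"
```

The critical design property is that `tokenize` uses a random value rather than any transformation of the PAN — only the vault lookup can reverse a token, which is what lets token-only systems fall out of scope.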
| TIP | For Indonesian PSPs and payment gateways that handle card data for multiple merchants, building an internal tokenization service is often the highest-leverage investment in the compliance program. Once the tokenization service is in place, all downstream systems — fraud engines, reporting platforms, customer service tools — can operate on tokens, dramatically reducing the scope of PCI DSS controls required across the organization. The front-end (accepting cards) is in scope. The vault (mapping tokens to PANs) is in scope. Everything else can be out of scope. |
Evidence Package for Requirement 3
QSAs testing Requirement 3 require:
- Data retention policy and data inventory (documenting what account data is stored, where, and for how long)
- Cryptographic algorithm documentation (confirming AES-256, HMAC-SHA256, or other approved algorithms)
- Key management policies and procedures (documenting the entire key lifecycle)
- Key inventory (all active keys, their purpose, algorithm, expiration, and approved use)
- HSM configuration documentation (HSM model, FIPS validation certificate, configuration settings)
- Evidence that SAD is not stored (queries or data discovery results showing that sensitive authentication data is not retained)
- Results of periodic data discovery exercises (confirming that card data exists only in expected locations)
The evidence package must demonstrate that encryption is actually implemented, keys are actually managed securely, and stored PANs are actually encrypted. Do not prepare evidence based on what should be happening — gather actual configuration screenshots, actual key inventory lists, actual HSM logs. QSAs are skeptical of perfect documentation that does not reflect reality.
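The periodic data discovery exercises mentioned above are usually run with commercial scanning tools, but the core technique is simple: find digit runs of plausible PAN length and filter them with the Luhn check. A bare-bones illustration (real tools also handle separators, known BIN ranges, and file/database connectors to reduce false positives):

```python
import re

def luhn_valid(digits: str) -> bool:
    """Luhn checksum: true for well-formed card numbers."""
    total = 0
    for i, d in enumerate(reversed(digits)):
        n = int(d)
        if i % 2 == 1:       # double every second digit from the right
            n *= 2
            if n > 9:
                n -= 9
        total += n
    return total % 10 == 0

def find_candidate_pans(text: str):
    """Flag 13-19 digit runs that pass the Luhn check as candidate PANs."""
    return [m for m in re.findall(r"\b\d{13,19}\b", text) if luhn_valid(m)]

print(find_candidate_pans("order 123, card 4111111111111111, ref 1234567890123"))
# ['4111111111111111'] — the reference number fails the Luhn check
```

Running a scan like this across databases, logs, and file shares — and showing that hits appear only in expected, protected locations — is the kind of concrete artifact QSAs expect instead of an assertion that no stray card data exists.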
For tokenization, the evidence package includes:
- Tokenization architecture documentation (showing the token vault and authorized systems)
- Token validation procedures (confirming that tokens are non-reversible)
- Access logs for the tokenization vault (showing which systems access it and what queries are made)
- Encryption key management for the vault (held to the same standards as other encrypted systems)
- Confirmation that systems not authorized to access the vault cannot reach it