Network Security Implementation for PCI DSS

Network security implementation is the foundation of PCI DSS technical compliance. Without effective network controls, every other technical requirement is undermined — an attacker with network access to CDE systems can potentially bypass logical access controls, intercept data in transit, and access stored cardholder data. Getting the network architecture right is the most critical early implementation task.

Designing the CDE Network Architecture

The Three-Zone Model

The classic PCI DSS network architecture uses three zones: an untrusted external zone (the internet), a demilitarized zone (DMZ) for internet-facing services, and the trusted CDE zone for systems that store, process, or transmit cardholder data. All external access to CDE systems must pass through both the internet-facing firewall (external firewall) and the CDE boundary firewall (internal firewall).
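The two-firewall traversal requirement can be sketched as a small policy check. The zone names, ports, and the ALLOWED_HOPS table below are illustrative, not a real firewall configuration:

```python
# Illustrative sketch of the three-zone model: traffic from the internet
# may only reach the CDE indirectly, via an explicitly allowed DMZ hop.
# Zone names, ports, and rules here are hypothetical.
ALLOWED_HOPS = {
    ("internet", "dmz"): {"443/tcp"},   # external firewall: HTTPS to DMZ only
    ("dmz", "cde"): {"8443/tcp"},       # internal firewall: app tier only
}

def hop_allowed(src: str, dst: str, service: str) -> bool:
    """Default deny: a hop is permitted only if explicitly listed."""
    return service in ALLOWED_HOPS.get((src, dst), set())

def path_allowed(path: list, services: list) -> bool:
    """Every hop along the path must be explicitly permitted."""
    hops = list(zip(path, path[1:]))
    return all(hop_allowed(s, d, svc) for (s, d), svc in zip(hops, services))

# Direct internet-to-CDE traffic is denied; the DMZ-mediated path is allowed.
assert not hop_allowed("internet", "cde", "443/tcp")
assert path_allowed(["internet", "dmz", "cde"], ["443/tcp", "8443/tcp"])
```

The same default-deny shape recurs at every boundary in the sections that follow: nothing passes unless a rule names it.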

The DMZ is the buffer between the untrusted internet and the trusted CDE. It contains systems that need internet connectivity — web servers, API gateways, payment application front-ends — but do not store or directly process PANs. The DMZ is not part of the CDE, but it is strictly controlled. Traffic from the DMZ to the CDE must be explicitly allowed and documented.

A web server in the DMZ accepts requests from the internet, validates and routes them to an application server in the CDE, receives the response, and sends it back to the internet. The web server is internet-facing; the application server is not. If the web server is compromised, the attacker can only reach the application server through the explicit allowed paths, not through unrestricted network access.

Network Segmentation Implementation

The CDE must be separated from all other internal networks — corporate LANs, office networks, development environments, and management networks. Implement dedicated network security controls (firewalls, security groups) at every boundary. All traffic between the CDE and other zones must be controlled by explicit allow rules, with a default deny-all policy blocking everything else; that implicit deny is the foundation of network segmentation.

In a segmented network, a user on the corporate LAN cannot ping a CDE system. A development database cannot be backed up to the same backup server as the production CDE database. A developer workstation cannot access CDE systems without going through a jump server that enforces logging and access controls. These restrictions are sometimes inconvenient — but they are essential for security.

KEY IDEA: The most common network architecture mistake in PCI DSS implementations is routing CDE traffic through shared infrastructure — a shared load balancer that also handles non-CDE traffic, a shared Kubernetes cluster where PCI and non-PCI workloads coexist, or a shared database server that hosts both PCI and non-PCI databases. Any such shared component becomes in scope for PCI DSS. Architecture decisions made for cost or convenience early in the project create compliance scope that is hard to reduce later.


Firewall Design and Rule Management

The Deny-All Default Rule

Every firewall protecting the CDE must implement a default deny-all policy — all traffic is blocked unless explicitly permitted. Each permit rule must have: a documented business justification (what business function does this rule enable?), a specific source and destination (no "any-to-any" rules), and a specific port and protocol (no "all ports" rules).

A well-designed firewall ruleset for a CDE typically has 20–50 explicit permit rules, each with clear business justification, and a final deny-all rule that catches everything else. Every rule is documented and reviewed periodically. Ad-hoc rule additions, temporary rules that never get removed, and overly broad rules (e.g., "allow 0.0.0.0/0 to 0.0.0.0/0 any service") are the hallmarks of firewall configuration drift.
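A minimal model of such a ruleset, with hypothetical CIDRs and justifications, might look like the following; the evaluation logic mirrors the first-match-permit, final-deny behavior described above:

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class Rule:
    src: str            # source CIDR (never "any")
    dst: str            # destination CIDR (never "any")
    port: int           # specific port (never "all ports")
    proto: str
    justification: str  # required: documented business reason

# Hypothetical ruleset; addresses are from documentation ranges.
RULES = [
    Rule("203.0.113.0/24", "10.10.1.0/24", 443, "tcp",
         "Payment gateway callbacks to DMZ web tier"),
    Rule("10.10.1.0/24", "10.20.1.0/24", 8443, "tcp",
         "DMZ web tier to CDE application tier"),
]

def evaluate(src: str, dst: str, port: int, proto: str) -> str:
    """First matching permit wins; otherwise the implicit final rule is deny."""
    for r in RULES:
        if (ip_address(src) in ip_network(r.src)
                and ip_address(dst) in ip_network(r.dst)
                and port == r.port and proto == r.proto):
            return "permit"
    return "deny"  # default deny-all: everything not explicitly permitted

assert evaluate("203.0.113.7", "10.10.1.5", 443, "tcp") == "permit"
assert evaluate("203.0.113.7", "10.20.1.5", 3306, "tcp") == "deny"
```

Note that the justification travels with the rule itself, which makes the documentation requirement hard to skip.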

Firewall Rule Review Process

All firewall rules must be reviewed at least every six months. The review process must: confirm each rule is still required, confirm the business justification is still valid, identify and remove any rules that are no longer needed, and document the review with the date and reviewer identity. The review documentation is evidence that will be presented to the QSA.

Rule review is a discipline that must be integrated into normal operations. If rules accumulate without review, they become a compliance and security liability. A rule that was needed for a temporary third-party integration three years ago persists, creating a security gap. Regular review prevents this drift.
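The six-month review cycle lends itself to automation. Below is a sketch of a staleness check, assuming review dates are tracked per rule; the record format is invented for illustration:

```python
from datetime import date, timedelta

# Hypothetical review records: rule id -> date of last documented review.
LAST_REVIEWED = {
    "fw-rule-001": date.today() - timedelta(days=30),
    "fw-rule-002": date.today() - timedelta(days=400),  # never re-reviewed
}

REVIEW_INTERVAL = timedelta(days=182)  # "at least every six months"

def overdue_rules(records, today=None):
    """Return rule ids whose last review is older than the review interval."""
    today = today or date.today()
    return sorted(rid for rid, last in records.items()
                  if today - last > REVIEW_INTERVAL)

assert overdue_rules(LAST_REVIEWED) == ["fw-rule-002"]
```

Wiring a check like this into a weekly job turns the review deadline from a calendar reminder into an alert that names the specific rules at risk.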

Firewall Rule Type | PCI DSS Requirement | Common Implementation
Default deny-all | All traffic blocked unless explicitly permitted | Final rule in all rulesets: "deny any any"
Inbound to CDE | Only explicitly required traffic permitted; documented business justification per rule | Payment network IPs, specific port/protocol, source IP restrictions
Outbound from CDE | CDE systems must not initiate unrestricted outbound connections | Explicit allow rules for patch servers, NTP, DNS — all else blocked
DMZ to CDE | Only specific application-layer traffic from DMZ to CDE back-end | Web server to app server port only; no direct internet-to-CDE path
Rule review documentation | Six-month review with documented approval | Change management ticket, signed review spreadsheet, or GRC platform record


Cloud Network Security Implementation

AWS Network Architecture for PCI DSS

In AWS, deploy the CDE in a dedicated VPC (Virtual Private Cloud): place CDE systems in private subnets with no direct internet access, and keep non-CDE workloads in separate VPCs (or, at minimum, separate private subnets under different security controls). Implement Security Groups with deny-all defaults and explicit ingress/egress rules; Network ACLs add subnet-level filtering. Do not use VPC peering between CDE and non-CDE VPCs; if interconnection is necessary, use AWS Transit Gateway or a VPN with explicit routing and firewall inspection. Enable VPC Flow Logs for all CDE subnets to capture network traffic for audit purposes.

In AWS, the equivalent of network segmentation is having a dedicated VPC for the CDE with no routing paths to non-CDE VPCs. Internet access from the CDE goes through a NAT Gateway (one-way outbound only). Inbound access is through a load balancer in a DMZ security group with explicit rules.

Azure Network Architecture

In Azure, use dedicated Virtual Networks (VNets) for the CDE, separate from non-CDE VNets. Implement Network Security Groups (NSGs) at both subnet and NIC (network interface card) levels. Use Azure Firewall for perimeter control with explicit deny-all default policies. Azure Bastion provides secure jump host access for administrative functions. Do not enable peering between CDE VNets and non-CDE VNets — if interconnection is necessary, use Azure Firewall with explicit rules and monitoring.

GCP Network Architecture

In GCP, use dedicated VPC networks for the CDE, separate from non-CDE VPC networks. Configure firewall rules with explicit deny-all defaults and permit rules for required traffic. Use VPC Service Controls to isolate sensitive services at the API level. Cloud VPN provides secure interconnection if on-premises systems need to access the CDE. VPC Network Peering should not be used between CDE and non-CDE projects — authentication and encryption should be enforced at the application layer for any cross-project traffic.

IMPORTANT: Cloud security groups and VPC configurations do not automatically satisfy PCI DSS Requirement 1 documentation requirements. You must export and maintain documentation of your security group rules, their business justifications, and evidence of periodic review — just as you would for a physical firewall. Most cloud platforms provide APIs or export functions for this purpose. Use them.
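As a sketch of such an audit, the function below flags world-open ("0.0.0.0/0") rules in data shaped like the output of AWS's DescribeSecurityGroups API. The sample record is hand-made, and an Azure NSG or GCP firewall version would use different field names:

```python
# Audit sketch over security-group data shaped like the AWS
# DescribeSecurityGroups response (GroupId, IpPermissions, IpRanges).
def broad_rules(security_groups):
    """Return (group id, from-port) pairs for rules open to the whole internet."""
    findings = []
    for sg in security_groups:
        for perm in sg.get("IpPermissions", []):
            for ip_range in perm.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    findings.append((sg["GroupId"], perm.get("FromPort")))
    return findings

# Hand-made sample: one world-open SSH rule, one properly scoped HTTPS rule.
sample = [{
    "GroupId": "sg-0abc",
    "IpPermissions": [
        {"FromPort": 22, "ToPort": 22, "IpProtocol": "tcp",
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"FromPort": 443, "ToPort": 443, "IpProtocol": "tcp",
         "IpRanges": [{"CidrIp": "203.0.113.0/24"}]},
    ],
}]

assert broad_rules(sample) == [("sg-0abc", 22)]
```

Exporting the full rule set on a schedule and committing it to version control also produces the review-evidence trail the QSA will ask for.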


System Hardening Implementation (Requirement 2)

Developing System Hardening Standards

Create a documented hardening standard for each system component type in the CDE: server operating systems (Windows Server, Linux distributions), network devices (routers, switches, load balancers), database systems (MySQL, PostgreSQL, Oracle, SQL Server), containerized environments (Docker, Kubernetes), and cloud services (S3 buckets, RDS instances, Lambda functions).

Hardening standards should include: unnecessary services disabled, unnecessary user accounts removed or disabled, weak authentication protocols disabled (telnet, rsh), unnecessary listening ports closed, access controls configured (filesystem permissions, database user privileges), logging enabled, and security updates applied. The standards should be version-specific — a hardening standard for Windows Server 2022 will differ from Windows Server 2019.
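A hardening check along these lines can be expressed as code. The host record and field names below are illustrative, not the output of any particular scanner:

```python
# Hypothetical hardening checks against a host inventory record.
BANNED_SERVICES = {"telnet", "rsh", "ftp"}  # weak/cleartext protocols

def hardening_findings(host):
    """Return human-readable deviations from the hardening standard."""
    findings = []
    for svc in host["running_services"]:
        if svc in BANNED_SERVICES:
            findings.append(f"insecure service enabled: {svc}")
    for port in host["listening_ports"]:
        if port not in host["approved_ports"]:
            findings.append(f"unapproved listening port: {port}")
    if not host["logging_enabled"]:
        findings.append("logging disabled")
    return findings

# Hand-made example host: telnet running, port 23 not on the approved list.
host = {
    "running_services": ["sshd", "telnet"],
    "listening_ports": [22, 23],
    "approved_ports": [22],
    "logging_enabled": True,
}
assert hardening_findings(host) == [
    "insecure service enabled: telnet",
    "unapproved listening port: 23",
]
```

Because the standard is version-specific, a real implementation would keep one checklist per platform and OS version rather than a single global list.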

CIS Benchmarks as the Foundation

The Center for Internet Security (CIS) Benchmarks are the industry standard for system hardening. Using CIS Benchmarks for your hardening standards gives you a defensible, industry-accepted baseline and makes QSA conversations straightforward. CIS publishes benchmarks for all major operating systems, databases, and infrastructure platforms. Each benchmark has two levels: Level 1 (practical, minimal impact on functionality) and Level 2 (more restrictive, may impact usability). Most organizations target Level 1 compliance with some Level 2 controls for highest-value systems.

CIS Benchmarks include detailed configuration guidance, test procedures, and remediation steps. They are regularly updated as new vulnerabilities are discovered. Using CIS as your baseline ensures you are not relying on outdated guidance.

Infrastructure as Code for Compliance

Organizations using Infrastructure as Code (Terraform, Ansible, AWS CloudFormation, Google Deployment Manager) can encode hardening standards into their deployment templates. Every new CDE system is automatically hardened at launch, ensuring consistency. This approach is both more reliable than manual hardening and more auditable — the code templates serve as evidence that hardening standards are applied.

In Terraform, define a hardening module that applies security group rules, enables logging, configures encryption, and sets OS-level hardening via user data scripts. Developers launching new CDE infrastructure reference the hardening module, and the infrastructure is compliant by default.

For Indonesian fintech organizations deploying infrastructure on AWS or GCP, encoding PCI DSS hardening requirements into Terraform modules or AWS CloudFormation templates is the most sustainable approach. A hardening standard that exists only as a PDF document drifts over time — developers forget to apply it, junior engineers do not understand why certain controls are needed, and configurations diverge. Hardening encoded in deployment automation is applied consistently to every new resource and is validated in code review.
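One way to validate this in CI is a policy check against the JSON form of a Terraform plan (`terraform show -json`). The plan snippet below is hand-made; `storage_encrypted` is a real attribute of the `aws_db_instance` resource, but the check itself is a sketch, not a complete policy:

```python
import json

def unencrypted_databases(plan: dict) -> list:
    """Return addresses of planned aws_db_instance resources without
    storage encryption, read from Terraform plan JSON (resource_changes)."""
    bad = []
    for rc in plan.get("resource_changes", []):
        if rc["type"] == "aws_db_instance":
            after = (rc.get("change") or {}).get("after") or {}
            if not after.get("storage_encrypted", False):
                bad.append(rc["address"])
    return bad

# Hand-made plan fragment: one compliant database, one non-compliant.
plan = json.loads("""{
  "resource_changes": [
    {"address": "aws_db_instance.cde", "type": "aws_db_instance",
     "change": {"after": {"storage_encrypted": true}}},
    {"address": "aws_db_instance.dev", "type": "aws_db_instance",
     "change": {"after": {"storage_encrypted": false}}}
  ]
}""")
assert unencrypted_databases(plan) == ["aws_db_instance.dev"]
```

Running a check like this on every pull request blocks non-compliant infrastructure before it is ever created, rather than catching it at the next assessment.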


Documenting Network Security for QSA Assessment

Network security documentation for QSA assessment includes: network architecture diagrams (showing CDE boundaries, DMZ, external networks, firewall placement), firewall and access control configurations (actual ruleset with business justifications), evidence of firewall rule review (documentation that reviews have occurred and rules have been validated), system hardening standards (for all system types), evidence that hardening standards are applied (configuration management reports, infrastructure-as-code repositories), and segmentation test results (proof that network isolation is effective).

Clear network architecture documentation sets the stage for the entire assessment. If the QSA understands your network design from clear diagrams and documentation, the assessment focuses on evidence that controls are in place and operating. If network documentation is vague or incomplete, the QSA will spend time understanding the environment before assessing controls, delaying the overall assessment.