GDPR and AI — The Emerging Landscape

Artificial intelligence systems that process personal data are subject to GDPR in the same way as any other processing — they require a lawful basis, must comply with data minimisation, must respect data subject rights, and must implement appropriate security. But AI creates specific challenges that GDPR’s architects did not anticipate in full: the opacity of algorithmic decision-making, the use of large-scale personal data for model training, the generation of inferred personal data that the data subject did not provide, and the difficulty of meaningful transparency when the decision logic is complex or proprietary.

The Article 29 Working Party’s guidelines on automated individual decision-making and profiling (WP251, subsequently endorsed by the EDPB) and the GDPR’s Article 22 provisions provide the core framework. Overlaid on these is the EU AI Act, which entered into force in 2024 and creates a risk-based regulatory framework for AI systems that operates alongside GDPR with both complementary and additive obligations. Organisations deploying AI systems that process personal data must now navigate both frameworks simultaneously.

 

Article 22: Automated Decision-Making and Profiling

Article 22(1) provides that data subjects have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal or similarly significant effects concerning them. This is both a right and a prohibition — solely automated significant decisions cannot be made without one of the three Article 22(2) gateways: contract necessity, legal authorisation, or explicit consent.

ARTICLE 22 — SCOPE, GATEWAYS, AND SAFEGUARDS

Element: Solely automated
Definition / Requirement: No meaningful human involvement in the decision; human review that merely rubber-stamps the outcome without genuine assessment does not count as meaningful human involvement.
Examples: AI credit scoring with automatic approval/rejection; automated recruitment screening that eliminates candidates without human review; automated insurance premium setting.

Element: Profiling
Definition / Requirement: Any form of automated processing of personal data to evaluate personal aspects, particularly performance, economic situation, health, preferences, behaviour, location, or movements.
Examples: Customer lifetime value scoring; churn prediction models; fraud risk scoring; behavioural segmentation; creditworthiness assessment.

Element: Legal or similarly significant effect
Definition / Requirement: Effects that significantly affect an individual’s legal rights (credit refusal, job rejection) or produce effects similar in magnitude to legal effects (pricing discrimination, exclusion from services).
Examples: Credit application rejection; job application elimination; insurance refusal; loan pricing based on risk profile; targeted advertising based on sensitive category inference.

Element: Gateway (a): Contract necessity
Definition / Requirement: The solely automated decision is necessary for entering into, or performance of, a contract with the data subject.
Examples: Automated fraud check as part of payment processing; automated credit check required to offer a credit product.

Element: Gateway (b): Legal authorisation
Definition / Requirement: The solely automated decision is authorised by EU or member state law, with appropriate safeguards.
Examples: Automated tax fraud detection; AML automated transaction blocking under regulatory authorisation.

Element: Gateway (c): Explicit consent
Definition / Requirement: The data subject has given explicit consent to the solely automated decision.
Examples: Individual explicitly opts into an automated investment advisory service; data subject consents to AI health risk assessment.

Element: Safeguards required for (a) and (c)
Definition / Requirement: Right to human review of the decision; right to express a point of view; right to contest the decision; right to explanation.
Examples: Human reviewer with genuine authority to overturn the decision; clear explanation of the factors that led to the decision; contesting mechanism with a defined response SLA.
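The scope-and-gateway logic above can be sketched as a simple decision-routing check: a solely automated decision with significant effects may only proceed through an Article 22(2) gateway, and otherwise must go to a human decision-maker. This is an illustrative sketch, not a compliance tool; the `Decision` fields and routing labels are invented names.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    subject_id: str
    outcome: str               # e.g. "approve" / "reject"
    significant_effect: bool   # legal or similarly significant effect
    gateway: Optional[str]     # "contract", "legal", "explicit_consent", or None

def route(decision: Decision) -> str:
    """Route a solely automated decision under an Article 22 policy."""
    if not decision.significant_effect:
        # Outside Art. 22 scope, though GDPR still applies generally.
        return "auto"
    if decision.gateway is None:
        # No Art. 22(2) gateway: the decision cannot be solely automated.
        return "human_decision_required"
    # Gateways (a) and (c) additionally require safeguards: human review
    # on request, right to express a view, right to contest.
    return "auto_with_safeguards"

print(route(Decision("s1", "reject", True, None)))         # human_decision_required
print(route(Decision("s2", "approve", True, "contract")))  # auto_with_safeguards
```

A real implementation would also record which gateway was relied on and expose the contesting mechanism to the data subject.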

 

Transparency and Explainability for AI

Articles 13(2)(f) and 14(2)(g) require controllers that use automated decision-making to provide meaningful information about the logic involved, the significance of the processing, and the envisaged consequences for the data subject. This is the GDPR’s explainability requirement — and it applies to any automated decision-making with significant effects, not only solely automated decisions under Article 22.

AI TRANSPARENCY REQUIREMENTS UNDER GDPR

Obligation: Art. 13/14 — ‘meaningful information about the logic’
What it requires for AI: Explain the general approach of the algorithm, the key factors or inputs that influence the decision, and what the output represents.
Implementation challenge: Cannot be satisfied by publishing a model architecture; must be explained in plain language to a lay person; does not require full algorithmic disclosure but must be substantively informative.

Obligation: Art. 13/14 — ‘significance of the processing’
What it requires for AI: Explain what the automated decision-making is used for, what it determines, and what the consequences are for the data subject.
Implementation challenge: Many organisations describe automated processing in vague terms; the notice must specifically state that AI is used to make or contribute to decisions affecting the individual.

Obligation: Art. 13/14 — ‘envisaged consequences’
What it requires for AI: Explain what outcomes the data subject may experience as a result of the automated processing.
Implementation challenge: Explain the range of possible decisions, what triggers a favourable vs. unfavourable outcome, and what the data subject can do if they receive an unfavourable outcome.

Obligation: Art. 22(3) — right to explanation of the individual decision
What it requires for AI: Upon request for human review, the controller must explain the specific decision taken in the individual case.
Implementation challenge: Requires the ability to generate individual decision explanations; explainable AI (XAI) tooling is recommended for high-volume decision systems; generic explanations are insufficient.
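The requirement to explain a specific individual decision can be illustrated with a toy linear scoring model, where per-feature contributions are ranked so the explanation names the factors that actually drove the outcome. The feature names, weights, and threshold below are invented for illustration; production systems would typically rely on dedicated XAI tooling rather than hand-rolled logic.

```python
from typing import Dict

# Hypothetical linear credit-scoring model (illustrative values only).
WEIGHTS = {"income_band": 2.0, "missed_payments": -3.5, "account_age_years": 0.8}
THRESHOLD = 5.0

def explain_decision(applicant: Dict[str, float]) -> str:
    """Produce a plain-language explanation of one individual decision."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = sum(contributions.values())
    outcome = "approved" if score >= THRESHOLD else "declined"
    # Rank factors by absolute contribution so the explanation names the
    # factors that drove this decision, not a generic model description.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    factors = ", ".join(f"{name} ({value:+.1f})" for name, value in ranked)
    return f"Application {outcome} (score {score:.1f}); main factors: {factors}"

print(explain_decision({"income_band": 3, "missed_payments": 1, "account_age_years": 2}))
```

For non-linear models the same shape of output can be produced with attribution methods, but the obligation is the same: a substantively informative, individual-level explanation.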

 

AI Model Training and Personal Data

Training AI models on personal data is a processing activity subject to GDPR. The lawful basis, data minimisation, and purpose limitation principles all apply to the training dataset. An organisation that trains a model on customer data collected for service provision must have a lawful basis for using that data to train a model — which may or may not be the same basis as the original collection.

AI TRAINING DATA — GDPR COMPLIANCE FRAMEWORK

Issue: Lawful basis for model training
GDPR requirement: A separate lawful basis is needed for using personal data to train AI; the original collection basis does not automatically extend to model training.
Common compliance gap: Using customer data collected for service provision to train an AI model without assessing whether the original basis covers this use; no LIA for the model training purpose.

Issue: Purpose limitation
GDPR requirement: The training purpose must be compatible with the original collection purpose; scientific research or legitimate interest may support compatibility.
Common compliance gap: Broad ‘AI improvement’ purposes added to the privacy notice without assessing compatibility; data collected for one purpose used for unrelated model training.

Issue: Data minimisation for training
GDPR requirement: Training data should be the minimum necessary for the model’s purpose; consider whether synthetic or pseudonymised data can replace real personal data.
Common compliance gap: Using the full production dataset for training when a representative subset or synthetic data would achieve the same model quality; training data not deleted after model training is complete.

Issue: Special category data in training sets
GDPR requirement: Training on special category data requires an Art. 9(2) condition; high-risk if the model infers or predicts special category characteristics.
Common compliance gap: Demographic data used to train models that infer health status, political views, or other special category characteristics without an Art. 9(2) basis or Art. 22 safeguards.

Issue: Data subject rights for training data
GDPR requirement: Data subjects may exercise access, erasure, or objection rights in relation to personal data used for model training.
Common compliance gap: No mechanism to identify which training records relate to a specific data subject; erasure of training data requires model retraining or a machine unlearning procedure.
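One way to close the last gap above — no mechanism to identify which training records relate to a specific data subject — is to maintain a pseudonymous index from subjects to training record IDs at ingestion time. A minimal sketch, assuming salted hashing as the pseudonymisation step; in practice the salt would be a managed secret and the index a persistent store, both simplified here.

```python
import hashlib
from typing import Dict, Set

# Illustrative only: a real salt must be secret and managed separately.
SALT = b"store-and-manage-separately"

def pseudonym(subject_id: str) -> str:
    """Derive a stable pseudonym so the index holds no raw identifiers."""
    return hashlib.sha256(SALT + subject_id.encode()).hexdigest()

class TrainingIndex:
    def __init__(self) -> None:
        self._records: Dict[str, Set[str]] = {}  # pseudonym -> record IDs

    def register(self, subject_id: str, record_id: str) -> None:
        self._records.setdefault(pseudonym(subject_id), set()).add(record_id)

    def records_for(self, subject_id: str) -> Set[str]:
        """Access request: which training records relate to this subject."""
        return self._records.get(pseudonym(subject_id), set())

    def erase(self, subject_id: str) -> Set[str]:
        """Erasure request: returns the record IDs to delete, which may then
        trigger retraining or a machine-unlearning procedure."""
        return self._records.pop(pseudonym(subject_id), set())

idx = TrainingIndex()
idx.register("subject-42", "rec-001")
idx.register("subject-42", "rec-007")
print(sorted(idx.records_for("subject-42")))  # ['rec-001', 'rec-007']
idx.erase("subject-42")
print(idx.records_for("subject-42"))  # set()
```

Deleting index entries does not remove the data's influence on an already-trained model; honouring erasure fully still requires the retraining or unlearning step the table notes.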

 

The EU AI Act and GDPR: Navigating Both Frameworks

The EU AI Act (Regulation 2024/1689) applies a risk-based framework to AI systems placed on the EU market or used in the EU, regardless of where the provider is established. It distinguishes between unacceptable risk AI (prohibited), high-risk AI (subject to stringent obligations before deployment), limited risk AI (transparency obligations), and minimal risk AI (no specific obligations beyond GDPR). For AI systems that process personal data, both GDPR and the AI Act apply simultaneously.

EU AI ACT — RISK CATEGORIES AND GDPR INTERACTION

Risk category: Unacceptable risk (prohibited)
Examples relevant to personal data: Social scoring systems; mass real-time biometric surveillance in public spaces (with narrow exceptions); subliminal manipulation; exploitation of vulnerabilities.
GDPR interaction: GDPR would also prohibit most of these (no lawful basis for social scoring; biometric data without an Art. 9 basis); the AI Act adds an explicit prohibition regardless of GDPR basis.

Risk category: High risk — education and employment
Examples relevant to personal data: AI systems for CV screening, candidate ranking, interview analysis, exam proctoring, student assessment.
GDPR interaction: GDPR Art. 22 applies if solely automated with significant effects; DPIA required; Art. 9 if health or other special category data is inferred; the AI Act adds conformity assessment, documentation, human oversight, and accuracy requirements.

Risk category: High risk — credit and financial
Examples relevant to personal data: Credit scoring AI; insurance risk assessment; loan pricing algorithms.
GDPR interaction: Art. 22 applies; explicit consent or contract gateway; explanation right on request; DPIA required; the AI Act adds requirements for data governance, traceability, and transparency.

Risk category: High risk — law enforcement and border
Examples relevant to personal data: AI crime prediction; biometric identification; border security screening.
GDPR interaction: Art. 9 biometric data basis required; Art. 22 for law enforcement decisions; Art. 35 DPIA required; the AI Act imposes additional restrictions on law enforcement AI.

Risk category: Limited risk — chatbots and synthetic content
Examples relevant to personal data: Customer service AI chatbots; AI-generated content.
GDPR interaction: No Art. 22 unless there is a significant decision effect; a GDPR lawful basis is needed for any personal data processed; the AI Act requires disclosure that the user is interacting with AI.

GDPR + AI ACT DPIA REQUIREMENTS FOR HIGH-RISK AI

Requirement: Assessment required
GDPR (Art. 35): DPIA required for high-risk processing, including systematic automated profiling, large-scale special category data, and systematic monitoring of public spaces.
EU AI Act: Fundamental rights impact assessment required for high-risk AI systems that process personal data.
Unified approach: Combined DPIA and fundamental rights impact assessment; a single document addressing both regulatory requirements.

Requirement: Timing
GDPR (Art. 35): Before the processing begins.
EU AI Act: Before the high-risk AI system is placed on the EU market or put into service.
Unified approach: Before deployment; the assessment feeds the product development process.

Requirement: Content overlap
GDPR (Art. 35): Processing description; necessity and proportionality assessment; risks to individuals; mitigation measures.
EU AI Act: Description of the AI system; purpose; fundamental rights risks; mitigation measures; human oversight mechanisms.
Unified approach: A single assessment document covering GDPR and AI Act requirements; reduces duplication.

Requirement: Human oversight
GDPR (Art. 35): Art. 22(3) human review mechanism for solely automated decisions.
EU AI Act: High-risk AI must include human oversight as a system requirement.
Unified approach: A human oversight mechanism satisfies both Art. 22(3) and the AI Act; it must be meaningful, not nominal.
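The content overlap above lends itself to a single, machine-checkable checklist so one assessment document can be tracked against both frameworks. A minimal sketch; the item wording loosely summarises the table and is not an authoritative statement of either framework's requirements.

```python
from typing import Dict, List, Set

# Hypothetical combined checklist; item texts are illustrative summaries.
CHECKLIST: Dict[str, List[str]] = {
    "gdpr_art_35": [
        "Systematic description of the processing",
        "Necessity and proportionality assessment",
        "Risks to the rights and freedoms of data subjects",
        "Measures to mitigate the identified risks",
    ],
    "ai_act_high_risk": [
        "Description of the AI system and its intended purpose",
        "Fundamental rights risks and mitigation measures",
        "Human oversight mechanism",
    ],
}

def missing_items(completed: Set[str]) -> Dict[str, List[str]]:
    """Return the outstanding items per framework for a combined assessment."""
    return {
        framework: [item for item in items if item not in completed]
        for framework, items in CHECKLIST.items()
    }

done = {"Systematic description of the processing", "Human oversight mechanism"}
outstanding = missing_items(done)
print(sum(len(v) for v in outstanding.values()))  # 5
```

Keeping one list per framework, rather than merging them, preserves traceability back to each regulation when either is audited separately.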
BITLION INSIGHT

The convergence of GDPR and the EU AI Act creates the most complex personal data compliance landscape that technology organisations have ever faced. Organisations deploying AI systems that process personal data should treat GDPR compliance and AI Act compliance as a single design challenge rather than two separate regulatory workstreams. The DPIA is the natural integration point: a well-constructed DPIA for a high-risk AI system addresses the GDPR risk assessment, the AI Act’s fundamental rights impact assessment, the Article 22 safeguards design, and the explainability and transparency obligations simultaneously. Privacy engineers and AI ethics practitioners need to work from a shared framework.