Personal data and AI

What is the AI Act?

The AI Act is a new European Union regulation designed to ensure that artificial intelligence systems used in the EU are safe, transparent, and respect fundamental rights.

It:

  • Classifies AI systems by risk level — unacceptable, high, limited, and minimal.
  • Imposes obligations based on risk, especially for high-risk systems such as biometric identification, credit scoring, or recruitment tools.
  • Sets clear rules for transparency, human oversight, data quality, and technical documentation.
  • Applies to any provider or deployer of AI systems placed on the EU market, even if the provider is established outside the EU.

Read more in our AI Act Guide.


What are the AI Act risk levels?

The AI Act introduces four levels of risk, each with different compliance requirements:

  • Unacceptable risk: AI systems that pose clear threats to safety or fundamental rights (e.g., social scoring, manipulative or exploitative technologies) are prohibited.
  • High risk: AI systems used in sensitive areas such as employment, credit, or law enforcement must meet strict requirements — including risk assessments, documentation, human oversight, and transparency obligations.
  • Limited risk: Systems that interact with people, such as chatbots, must meet transparency obligations, clearly disclosing that the user is interacting with AI.
  • Minimal risk: Low-risk AI systems like spam filters or video game AI have no additional obligations under the Act.


What should we know about the interconnection of the GDPR and the AI Act?

The AI Act complements the GDPR, particularly for AI systems that process personal data or make automated decisions. Both regulations aim to protect individuals’ rights, but the AI Act adds a risk-based approach and sector-specific obligations.

Organisations using AI systems that involve personal data should:

  • Document the legal basis for AI-related data processing in line with Article 6 of the GDPR.
  • Conduct DPIAs (Data Protection Impact Assessments) addressing both GDPR and AI Act risks.
  • Maintain detailed logs of AI system operations, including data sources, risk mitigation measures, and human oversight.
  • Be transparent about automated decision-making and, in line with GDPR Article 22, give individuals the means to understand and challenge such decisions.
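As an illustration of the logging point above, a decision record could be captured in a simple structured format. The field names and values here are illustrative assumptions only; neither the GDPR nor the AI Act prescribes a specific log schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIDecisionRecord:
    """Illustrative log entry for one automated decision.

    Field names are examples, not a format mandated by either regulation.
    """
    system_name: str                 # which AI system produced the decision
    legal_basis: str                 # GDPR Article 6 basis relied upon
    data_sources: list[str]          # where the input data came from
    risk_mitigations: list[str]      # measures applied before deployment
    human_reviewed: bool             # whether a human oversaw the decision
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

# Example: log a (hypothetical) credit-scoring decision
record = AIDecisionRecord(
    system_name="credit-scoring-v2",
    legal_basis="Article 6(1)(b) - contract",
    data_sources=["application form", "credit bureau"],
    risk_mitigations=["bias audit", "input validation"],
    human_reviewed=True,
)
print(record.to_json())
```

A record like this can support both GDPR accountability (legal basis, data sources) and AI Act human-oversight documentation in one place.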

Integrating GDPR principles into AI governance helps organisations meet AI Act requirements and ensures ethical, trustworthy deployment of AI technologies.


How can GDPR documentation help with AI compliance?

Much of the documentation required for AI Act compliance already exists under the GDPR. This overlap allows organisations to build on existing records and assessments, including:

  • Lawful basis assessments under GDPR Article 6.
  • Data Protection Impact Assessments (DPIAs) that cover AI-related risks.
  • Data minimisation and accuracy documentation.
  • Human oversight records demonstrating accountability and responsible AI use.
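The overlap listed above can be sketched as a simple cross-reference. The pairings below are a plain-language summary for illustration, not an authoritative legal mapping.

```python
# Illustrative mapping of existing GDPR artefacts to the AI Act
# obligations they can help satisfy. Pairings are a plain-language
# summary, not a legal cross-reference.
gdpr_to_ai_act = {
    "lawful basis assessment (GDPR Art. 6)": "data governance documentation",
    "DPIA covering AI-related risks": "risk assessment for high-risk systems",
    "data minimisation and accuracy records": "data quality requirements",
    "human oversight records": "human oversight obligations",
}

for gdpr_doc, ai_act_use in gdpr_to_ai_act.items():
    print(f"{gdpr_doc} -> {ai_act_use}")
```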

By aligning GDPR and AI Act processes, organisations can streamline compliance efforts and strengthen overall governance of data-driven technologies.