GDPR Compliance for AI Automation in UK Businesses

If you are sending customer data through an AI model as part of an automated workflow, you have GDPR obligations. Most UK businesses using tools like Make, n8n, or direct API calls to OpenAI and Anthropic do not realise they are acting as data controllers with processors located outside the UK. The good news is that compliance is straightforward once you understand where your data goes and what paperwork needs to be in place.
This guide covers the specific GDPR requirements for AI automation workflows. Not SaaS tool compliance. Not chatbot privacy policies. The actual data protection steps you need when personal data flows through LLM APIs inside your business processes.
What GDPR Means When You Add AI to Business Workflows
UK GDPR still applies in full to AI automation. The core principles of lawfulness, purpose limitation, data minimisation, and accountability do not change because you are using a language model instead of a human to process information.
What changes is the supply chain. When you build an automation workflow that sends a customer email to GPT-4o for summarisation, or routes a CV through Claude for screening, you are transferring personal data to a third-party processor. That processor may be based in the US. The data may pass through servers in multiple jurisdictions before a response comes back.
You need three things in place before any personal data touches an LLM API. First, a lawful basis for processing under Article 6 of the UK GDPR. For most business automation, this will be legitimate interests. Second, a Data Processing Addendum (DPA) signed with every AI provider whose API you call. Third, updated privacy notices telling data subjects that their information is being processed by AI systems.
If you are new to what AI automation involves at a practical level, get familiar with the fundamentals before tackling compliance. The compliance requirements scale with the complexity of your workflows.
Where Your Data Goes When You Use OpenAI, Claude, or Gemini in Automation
Three things to know about LLM provider data handling:
- OpenAI API and ChatGPT business products do not use customer data for model training. OpenAI acts as a data processor under a signed DPA and processes UK data through OpenAI OpCo, LLC in the US, with Standard Contractual Clauses (SCCs) and the UK Addendum in place.
- Anthropic offers a similar DPA for Claude API users. Data submitted through the API is not used for training. Anthropic processes data in the US with contractual safeguards.
- Google’s Gemini API operates under Google Cloud’s data processing terms. Enterprise API usage is covered by Google’s existing DPA framework, with data processing locations dependent on your Cloud configuration.
The distinction that matters is between consumer products and API access. If your team is copying customer data into a free ChatGPT account, you have no DPA, no contractual safeguards, and no compliant data processing arrangement. Consumer-tier products from all three providers lack the contractual protections that business and API tiers include.
We covered how we approach data isolation in legal automation builds in a previous post. The same principles apply to any workflow handling personal data.
| Provider | DPA Available | Training on API Data | Data Location | UK Transfer Mechanism |
|---|---|---|---|---|
| OpenAI (API / Enterprise) | Yes | No | US (primary) | SCCs + UK Addendum |
| Anthropic (Claude API) | Yes | No | US (primary) | SCCs + UK Addendum |
| Google (Gemini API via Cloud) | Yes | No (API tier) | Configurable | SCCs + UK Addendum |
| OpenAI (free ChatGPT) | No | Yes (default) | US | None for business use |
| Any provider (consumer tier) | No | Varies | Varies | Not suitable for personal data |
The Data (Use and Access) Act 2025 Changed the Rules on Automated Decisions
The Data (Use and Access) Act 2025 (DUAA) became law on 19 June 2025. Its provisions are being phased in through June 2026, and the changes to automated decision-making (ADM) rules took effect on 5 February 2026.
Before the DUAA, Article 22 of the UK GDPR restricted solely automated decisions with legal or significant effects to only two lawful bases: explicit consent or substantial public interest. This made it difficult for businesses to deploy AI classification, scoring, or routing workflows that affected individuals without obtaining explicit consent first.
The DUAA broadens this. Businesses can now use any lawful basis under Article 6 for automated decisions that do not involve special category data (health, ethnicity, political opinions, and similar). Legitimate interests is now a valid basis for ADM in many business automation scenarios. This includes lead scoring, application screening, and automated triage workflows.
Three safeguards remain mandatory regardless of which lawful basis you use. You must inform individuals that automated decision-making is happening. You must give them a way to contest the decision. You must provide meaningful human intervention on request.
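As a concrete sketch, the three safeguards might be wired into an automated decision record like this. The class and method names are illustrative, not a prescribed pattern:

```python
from dataclasses import dataclass

@dataclass
class AutomatedDecision:
    subject_id: str
    outcome: str
    explanation: str                      # safeguard 1: inform the individual
    contested: bool = False
    human_review_requested: bool = False

    def contest(self):
        """Safeguard 2: the individual can challenge the decision."""
        self.contested = True

    def request_human_review(self):
        """Safeguard 3: route the decision to a person on request."""
        self.human_review_requested = True

decision = AutomatedDecision(
    subject_id="applicant-77",
    outcome="rejected",
    explanation="Automated screening: required certification not found in CV.",
)
decision.contest()
decision.request_human_review()
print(decision.contested, decision.human_review_requested)
```

The point of the structure is that every decision carries its explanation with it, and the contest and review flags give your automation platform something concrete to route on.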
For AI automation workflows that process special category data, the rules are tighter. You still need explicit consent or a substantial public interest condition, plus the processing must be necessary for entering into a contract or authorised by law.
The practical impact for most UK SMBs: if your AI automation classifies, scores, or routes people based on their data, you can now do this under legitimate interests as long as you build in the three safeguards. That is a meaningful reduction in compliance friction compared to the pre-DUAA position.
How to Run a Data Protection Impact Assessment for AI Workflows
A Data Protection Impact Assessment answers four questions: what data are you processing, what are the risks to individuals, what safeguards are you putting in place, and is the processing proportionate to the purpose?
- Map every point where personal data enters, moves through, and exits your automation workflow. Include the automation platform (Make, n8n, Zapier), every API endpoint, every database or spreadsheet, and every notification channel.
- Identify the categories of personal data. Names and email addresses carry different risk profiles than financial records or health information. If special category data is involved at any stage, your DPIA requirements are stricter.
- Document the lawful basis for each processing activity. A single workflow may involve multiple lawful bases if data is used for different purposes at different stages.
- Record the safeguards: encryption in transit, data retention limits, access controls, DPAs with all processors, and human review mechanisms for high-risk decisions.
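The mapping steps above can be captured in a machine-readable record that doubles as DPIA evidence. This is a minimal sketch; the field names are our own, not an ICO template:

```python
from dataclasses import dataclass

@dataclass
class ProcessingStep:
    """One point where personal data enters, moves through, or exits the workflow."""
    location: str              # e.g. "n8n webhook", "LLM API call", "CRM"
    data_categories: list      # e.g. ["name", "email", "ticket text"]
    lawful_basis: str          # e.g. "legitimate interests"
    safeguards: list           # e.g. ["TLS", "DPA", "30-day retention"]
    special_category: bool = False

def dpia_flags(steps):
    """Return the locations that need the strictest DPIA review:
    any step handling special category data."""
    return [s.location for s in steps if s.special_category]

workflow = [
    ProcessingStep("n8n webhook", ["name", "email", "ticket text"],
                   "legitimate interests", ["TLS", "access controls"]),
    ProcessingStep("LLM API call", ["ticket text"],
                   "legitimate interests", ["DPA", "SCCs + UK Addendum"]),
    ProcessingStep("HR screening step", ["health disclosure"],
                   "explicit consent", ["DPA", "human review"],
                   special_category=True),
]

print(dpia_flags(workflow))
```

Keeping the map in this form means the DPIA stays in sync with the workflow: when you add a step to the automation, you add a `ProcessingStep` entry alongside it.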
A DPIA is mandatory under Article 35 of the UK GDPR when processing is likely to result in a high risk to individuals. AI-powered decision-making about people almost always meets this threshold. Even if your workflow only summarises or categorises data without making decisions, a DPIA is good practice and demonstrates accountability.
We recommend a structured AI readiness audit before building any workflow that handles personal data. It catches data protection gaps before they become compliance problems.
Building GDPR-Compliant Automation Workflows Step by Step
Data minimisation is the principle most automation builders ignore. The temptation is to send an entire customer record to the LLM when you only need three fields. Every extra data point increases your compliance burden and your risk surface.
Start by stripping personal data before it reaches the AI model wherever possible. If your workflow classifies support tickets by urgency, the model does not need the customer’s name, email, or account number. Send only the ticket text. If you need to reconnect the classification to the customer record afterwards, do that in your automation platform, not inside the LLM call.
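A minimal sketch of this pattern for a support-ticket workflow. The `classify_urgency` function stands in for the real LLM call; the point is that only the ticket text ever reaches it:

```python
def minimise(ticket: dict) -> dict:
    """Keep only the fields the model actually needs.
    Name, email, and account number never leave the platform."""
    return {"ticket_id": ticket["ticket_id"], "text": ticket["text"]}

def classify_urgency(text: str) -> str:
    """Placeholder for the real LLM call via your provider's
    business-tier API. Receives only the ticket text."""
    return "high" if "urgent" in text.lower() else "normal"

ticket = {
    "ticket_id": "T-1042",
    "name": "Jane Smith",            # personal data: stays local
    "email": "jane@example.com",     # personal data: stays local
    "text": "Urgent: checkout is failing for all customers",
}

payload = minimise(ticket)           # only id + text
label = classify_urgency(payload["text"])

# Reconnect the result to the full record locally, not inside the LLM call
ticket["urgency"] = label
print(ticket["urgency"])
```

The rejoin by `ticket_id` happens in your own automation platform, so the model never sees who the ticket belongs to.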
Where personal data must reach the model, apply these controls. Set data retention to the minimum period your workflow requires. Most LLM APIs do not store input data beyond the request lifecycle when using business-tier access, but confirm this in your DPA. Use HTTPS for all API calls. Restrict API key access to the specific team members or service accounts that need it. Log which data was sent to which model and when, so you can respond to subject access requests.
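The logging control in particular is worth sketching, because it is what lets you answer a subject access request later. This is an illustrative in-memory version; a production system would use an append-only store with restricted access:

```python
import datetime
import json

AUDIT_LOG = []  # in production: append-only store, not a process-local list

def log_llm_call(subject_id: str, model: str, fields_sent: list):
    """Record which data went to which model and when, so subject
    access requests can be answered from the log."""
    AUDIT_LOG.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "subject_id": subject_id,
        "model": model,
        "fields_sent": fields_sent,
    })

def sar_extract(subject_id: str) -> str:
    """Everything sent to LLM APIs about one person, for a SAR response."""
    entries = [e for e in AUDIT_LOG if e["subject_id"] == subject_id]
    return json.dumps(entries, indent=2)

log_llm_call("cust-881", "gpt-4o", ["ticket_text"])
log_llm_call("cust-342", "claude", ["cv_text"])
print(sar_extract("cust-881"))
```

Note that the log records which fields were sent, not the field values themselves, so the audit trail does not become a second copy of the personal data.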
For document processing workflows that handle personal data at scale, consider running OCR and extraction locally before sending only the extracted fields to an LLM. This reduces the volume of personal data leaving your infrastructure.
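A sketch of local extraction before the LLM call, assuming a simple invoice letter. The regexes are illustrative and would need hardening for real documents:

```python
import re

def extract_fields(ocr_text: str) -> dict:
    """Pull out only the structured fields needed downstream.
    The full document text, with its personal details, stays local."""
    postcode = re.search(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b", ocr_text)
    invoice = re.search(r"INV-\d+", ocr_text)
    return {
        "postcode": postcode.group(0) if postcode else None,
        "invoice_ref": invoice.group(0) if invoice else None,
    }

page = """John Davies, 14 Elm Road, Leeds LS1 4AB
Invoice INV-20991 ... full letter text with personal details ..."""

fields = extract_fields(page)   # only these fields would go to the LLM
print(fields)
```

Only the two extracted values leave your infrastructure; the name, address, and the rest of the letter never reach the API.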
If you are evaluating why custom-built workflows give you more control over data routing than off-the-shelf tools, data minimisation is one of the strongest arguments. Custom builds let you control exactly which fields reach external APIs.
What the ICO Is Watching in 2026 and Why It Matters for AI Automation
In January 2026, the ICO published its Tech Futures report on agentic AI. While the report states it is not formal guidance, it signals where enforcement attention is heading. The ICO flagged five areas of concern: controller and processor role clarity in multi-vendor AI chains, purpose creep when agents have broad access to data, scaled-up automated decision-making, special category data inference, and cybersecurity risks from increased system autonomy.
The ICO is also developing a statutory code of practice on AI and automated decision-making. A public consultation is expected in May 2026, with the final code due in summer 2026. This code will reflect the DUAA changes and set binding expectations for how businesses deploy AI systems that make decisions about people.
For regulated sectors like legal services where enforcement attention is strongest, the time to get compliant is before the code arrives, not after.
The ICO’s AI and biometrics strategy, published in June 2025, confirmed that formal audits of AI systems are on the roadmap. While initial audits are focused on police use of facial recognition, the strategy makes clear that ADM in recruitment and other commercial contexts is a priority.
Businesses building AI automation now should document their compliance decisions thoroughly. If the ICO asks how you determined your lawful basis, how you conducted your DPIA, and what safeguards are in place, you need answers that are already written down.
Common GDPR Mistakes in AI Automation Projects
Using consumer-tier AI accounts for business data is the most common failure. A team member pasting customer information into free ChatGPT has created an uncontrolled data transfer with no DPA, no SCCs, and no lawful basis for the processing. This happens in almost every organisation that has not set clear AI usage policies.
- Failing to sign DPAs with AI providers. OpenAI, Anthropic, and Google all offer DPAs for business and API access. You need to execute them. Having API access does not automatically mean the DPA is in effect. OpenAI requires you to actively complete the DPA process through their platform.
- Sending more data than necessary. If your automation sends full customer records to an LLM when only a name and postcode are needed, you are breaching data minimisation principles regardless of your other safeguards.
- Not updating privacy notices. If you added AI processing to existing workflows, your privacy notice must reflect this. Data subjects have the right to know their data is being processed by AI systems, which providers are involved, and what decisions are being made.
- Ignoring subject access requests that involve AI-processed data. If a customer asks for all data you hold on them, that includes any data sent to LLM APIs and the outputs generated. You need logging in place to fulfil these requests.
Law firms running AI automation under SRA and GDPR obligations face additional professional conduct requirements on top of GDPR. Sector-specific obligations do not replace GDPR. They stack on top of it.
Frequently Asked Questions
Do I need a DPA with my AI provider even if I am only processing employee data?
Yes. Any time personal data is sent to an external processor, a Data Processing Addendum is required under Article 28 of the UK GDPR. This applies regardless of whether the data belongs to customers, employees, or any other individuals. OpenAI’s DPA is available through their business platform and must be actively executed.
Did the DUAA remove the restrictions on automated decision-making?
Not exactly. The DUAA broadened the lawful bases available for automated decision-making that does not involve special category data. You can now rely on legitimate interests for many AI automation scenarios. But if your workflow processes health data, ethnic origin, political opinions, or other special category data, you still need explicit consent or a substantial public interest condition. And regardless of lawful basis, you must still inform individuals, allow them to contest decisions, and provide human intervention on request.
What happens if an automated decision about someone turns out to be wrong?
Under the UK GDPR, individuals have the right to contest automated decisions and request human review. Your workflow must include a mechanism for this. If the wrong decision caused harm, you may also face liability under general data protection principles. Logging inputs, outputs, and decision logic is the best protection.
Can UK businesses legally send personal data to US-based AI providers?
Yes, provided the correct transfer mechanisms are in place. OpenAI, Anthropic, and Google all use Standard Contractual Clauses with the UK Addendum for international data transfers. The DUAA introduced a revised transfer test where the standard of protection must not be “materially lower” than UK levels. With a signed DPA and SCCs in place, using US-based AI providers is compliant. The European Commission also launched the process to adopt new UK adequacy decisions in July 2025, which reinforces the viability of UK-US data flows.
When is a DPIA required for AI automation?
A DPIA is mandatory when processing is likely to result in high risk to individuals. AI-powered workflows that classify, score, screen, or route people based on their personal data will almost always meet this threshold. For workflows that process only business data with no personal data involved, a DPIA is not required but still recommended as good practice.
If your business is building or planning AI automation workflows and you want to get the compliance foundations right from the start, [book a compliance-focused discovery call](https://innovate247.ai/contact/) with our team.