17.10.2025
Data protection impact assessment (DPIA) for AI tools: How companies can use AI in a legally compliant manner
As AI tools such as Microsoft Copilot, ChatGPT and DeepSeek become more prevalent, companies are facing growing regulatory pressure. Data protection impact assessments (DPIAs) are becoming mandatory. But what does that mean in practice? What do companies need to be aware of?
Tina Ehtechami
Legal Consultant
When is a data protection impact assessment required for AI projects?
Article 35 of the General Data Protection Regulation (GDPR) is relevant to this question. Paragraph 1 of this article states:
'Where a type of processing in particular using new technologies, [...] is likely to result in a high risk to the rights and freedoms of natural persons, the controller shall, prior to the processing, carry out an assessment of the impact of the envisaged processing operations on the protection of personal data.'
Therefore, if the use of an AI tool is likely to result in a high risk to the rights and freedoms of natural persons, a DPIA must be carried out.
According to Article 35(3) GDPR, examples of this include:
- profiling;
- extensive processing of special categories of personal data;
- systematic and extensive monitoring of publicly accessible areas.
The German Data Protection Conference (DSK) also maintains a list of processing operations for which a DPIA is mandatory (known as the positive list). Item 11 reads:
'Use of artificial intelligence to process personal data to control interaction with data subjects or to evaluate personal aspects of the data subject.'
In addition, a threshold analysis based on defined criteria can be used to determine whether a DPIA is required. These criteria were developed by the former Article 29 Working Party, the EU's central data protection advisory body before the GDPR took effect; its successor, the European Data Protection Board (EDPB), has endorsed them.
According to this guidance, a DPIA must be carried out if, among other things, the processing involves:
- large-scale data processing;
- innovative use or application of new technological or organisational solutions;
- evaluation or scoring;
- automated decision-making with legal or similarly significant effects.
Further criteria may apply depending on the use case. As a rule of thumb, a DPIA is required when two or more criteria are met, a threshold that AI applications typically reach.
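To illustrate, the threshold analysis can be modelled as a simple screening checklist. The criterion labels below are shorthand of our own for the Article 29 Working Party criteria, and the "two or more criteria" rule of thumb follows that guidance; this is a sketch for internal pre-screening, not legal advice.

```python
# Illustrative threshold analysis for a DPIA obligation.
# The labels are shorthand for the Article 29 Working Party criteria;
# the "two or more criteria" rule of thumb follows its guidance.
# A sketch for internal screening, not legal advice.

WP29_CRITERIA = {
    "evaluation_or_scoring",
    "automated_decision_with_legal_effect",
    "systematic_monitoring",
    "sensitive_or_highly_personal_data",
    "large_scale_processing",
    "matching_or_combining_datasets",
    "vulnerable_data_subjects",
    "innovative_use_of_new_technology",
    "processing_prevents_exercise_of_rights",
}

def dpia_likely_required(criteria_met: set) -> bool:
    """Return True if two or more of the listed criteria apply."""
    unknown = criteria_met - WP29_CRITERIA
    if unknown:
        raise ValueError(f"Unknown criteria: {sorted(unknown)}")
    return len(criteria_met) >= 2

# Example: an AI chatbot that evaluates customers at scale.
project = {"evaluation_or_scoring", "large_scale_processing",
           "innovative_use_of_new_technology"}
print(dpia_likely_required(project))  # True
```

A positive result from such a screening does not replace the DPIA itself; it merely signals that the full assessment described below should follow.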
Summary:
A Data Protection Impact Assessment (DPIA) must be carried out when using AI tools if, among other things:
- there is a high risk to the rights and freedoms of natural persons;
- profiling is carried out;
- special categories of personal data are processed extensively;
- publicly accessible areas are systematically monitored extensively;
- AI is used to process personal data in order to control interaction with data subjects or to evaluate personal aspects of them; or
- data is processed on a large scale.
Are you using AI the right way?
We can make your projects legally compliant, innovation-friendly and scalable.
Let's talk – no strings attached, just straight answers.
Shadow AI and loss of control
A growing phenomenon is the use of tools such as ChatGPT or DeepSeek by employees outside of company policy and without central control.
This poses serious risks to data protection, information security and trade secrets, and is causing an increasing amount of uncertainty among top management.
How should a DPIA for AI tools be conducted?
The DPIA is based on a systematic procedure:
- Review of the necessity of a DPIA
- Description of the processing, its purposes and the categories of data processed
- Examination of the legal basis(es)
- Assessment of necessity and proportionality
- Analysis of risks for data subjects
- Risk mitigation measures (e.g. access controls, policies and deletion periods)
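The procedure above can be sketched as an ordered checklist, for example as a minimal Python data structure for tracking DPIA progress. The step wording follows the list above; the class and function names are our own illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DpiaStep:
    """One step of the DPIA procedure, with room for documentation."""
    name: str
    completed: bool = False
    notes: str = ""

def new_dpia_checklist() -> list:
    """The DPIA steps from the procedure above, in order."""
    return [DpiaStep(name) for name in (
        "Review of the necessity of a DPIA",
        "Description of the processing, purposes and data categories",
        "Examination of the legal basis(es)",
        "Assessment of necessity and proportionality",
        "Analysis of risks for data subjects",
        "Definition of risk mitigation measures",
    )]

def next_open_step(checklist) -> Optional[DpiaStep]:
    """Return the first step not yet completed, or None when all are done."""
    return next((step for step in checklist if not step.completed), None)

checklist = new_dpia_checklist()
checklist[0].completed = True
print(next_open_step(checklist).name)
```

Keeping the steps in a fixed order mirrors the systematic nature of the DPIA: each phase builds on the documentation produced by the previous one.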
Your solution for the best data protection
Trust is the foundation of every good business relationship. Strengthen your relationships with customers by leveraging our expertise in data protection. This will give your company a strong competitive advantage, allowing you to focus fully on your business.
The typical pitfalls of AI projects
Black box and purpose limitation
AI systems are based on probabilistic models, and the decisions they produce often cannot be traced or explained ('black box'). At the same time, AI tools are frequently used for purposes that were never clearly specified, which conflicts with the principle of purpose limitation.
Further challenges:
- limited transparency towards data subjects;
- data minimisation versus big data;
- faulty outputs ('hallucinations');
- transfers to third countries when using SaaS providers.
Example: Microsoft 365 Copilot
The integration of Microsoft Copilot into M365 demonstrates the complexity of AI-supported assistance systems.
Microsoft Copilot in M365 has access to extensive internal company data and processes it within clearly defined service boundaries.
To minimise data protection risks, appropriate technical measures must be taken, such as disabling plug-ins and links to Bing search.
In addition, configuration options via the Admin Centre provide targeted controls to ensure compliance with data protection regulations.
Microsoft 365 is a particular focus of regulatory authorities. Here, a DPIA is essential to give controllers legal protection.
The DPIA is the key to AI compliance
A data protection impact assessment is not just a GDPR requirement; it also ties in with the EU AI Act, whose impact-assessment obligations for high-risk AI systems build on it. A well-conducted DPIA:
- reveals risks;
- promotes transparency;
- documents the protective measures taken;
- creates a robust basis for management decisions.
It also enables risks of discrimination to be identified, for example through bias in training data.
We offer services in the fields of AI and data protection
As a specialised management consultancy, we support you in designing AI projects that comply with the law. Our services at a glance:
- Provision of an external data protection officer
- Risk assessments and threshold analyses to determine DPIA obligations
- Conducting data protection impact assessments for AI applications
- Development of AI usage guidelines tailored to your company
- Training for data protection officers, IT and management on the legally compliant use of AI
- Support with AI Act compliance and the implementation of new regulatory requirements
- Conducting data protection audits to identify data protection gaps
Get in touch with us! We ensure that your AI projects are both data protection compliant and efficient, enabling you to realise the full potential of AI.