25.08.2025
Five steps to systematic AI governance: Using AI in a legally compliant manner
The regulation of artificial intelligence is gaining momentum, which has immediate implications for companies. Those who wish to use AI safely and legally in the future will require more than technical expertise; they will also need to adopt a systematic governance approach.

Dr Philipp Siedenburg
Operating Partner
The new reality: AI and regulation are inextricably linked
The EU AI Act has been applying in stages since February 2025, and providers and operators of AI systems face comprehensive obligations.
The legislator is taking a risk-based approach: the greater the potential risk of an AI system, the more extensive and stringent the risk management, cybersecurity, documentation and transparency requirements will be.
This means that companies must keep track of the AI they use, identify risks early on, and design their processes in a legally compliant manner.
The five steps to legally compliant AI use
These steps will help you establish a structured approach to the responsible and legal use of AI in your company.
1. Establish AI management
AI management encompasses the rules, processes and structures that govern the development, use and monitoring of AI within an organisation. This includes defining responsibilities and processes within the company, as well as creating and implementing guidelines and policies for AI use.
If a data protection management system (DPMS) and/or an information security management system (ISMS) already exist, it is advisable to integrate the AI management system (AIMS) into them to exploit synergies, reduce effort and ensure consistent risk and compliance management.
The new ISO/IEC 42001 standard for AI management systems follows the same basic structure as the well-established ISO/IEC 27001 (information security) and ISO/IEC 27701 (privacy information management) standards.
2. Perform risk classification
First, a risk classification must be performed for each AI application.
The following risk classes must be distinguished:
- AI systems with unacceptable risk
- AI systems with high risk
- AI systems with systemic or medium risk
- AI systems with low risk
In addition, the company's role (e.g. provider or operator) must be determined. Correct classification of risk class and role is crucial for determining specific duties and responsibilities under the AI Act.
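To make classification decisions reproducible and auditable, it can help to record them in a structured form. The following Python sketch illustrates one way to do this; the enum values mirror the risk classes above, but the `RiskClassification` record and its fields are illustrative assumptions, not terminology prescribed by the AI Act.

```python
from dataclasses import dataclass
from enum import Enum


class RiskClass(Enum):
    UNACCEPTABLE = "unacceptable"        # prohibited practices, Art. 5 AI Act
    HIGH = "high"                        # e.g. Annex III use cases
    SYSTEMIC_OR_MEDIUM = "systemic_or_medium"
    LOW = "low"


class Role(Enum):
    PROVIDER = "provider"    # develops or places the system on the market
    OPERATOR = "operator"    # deploys the system under its own authority


@dataclass
class RiskClassification:
    """Illustrative record of a classification decision for one AI system."""
    system_name: str
    risk_class: RiskClass
    role: Role
    rationale: str  # why this class/role was chosen, kept for audit purposes


# Example: an internal chatbot classified as low risk, used in the operator role
chatbot = RiskClassification(
    system_name="internal support chatbot",
    risk_class=RiskClass.LOW,
    role=Role.OPERATOR,
    rationale="No Annex III use case; no prohibited practice involved.",
)
print(f"{chatbot.system_name}: {chatbot.risk_class.value} ({chatbot.role.value})")
```

Recording a short rationale alongside each decision makes it considerably easier to defend the classification towards auditors and authorities later.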
3. Documentation of assets and use cases
Companies should document all relevant information about the AI applications they use. This not only fulfils the technical documentation requirements, but also makes risk assessments, such as data protection impact assessments (DPIAs), considerably easier.
Documentation can be provided in:
- Asset directories, in which AI applications are recorded application by application
- Use case documentation, in which the specific use cases of AI are recorded based on the directory of processing activities familiar from data protection
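A minimal sketch of such an asset directory is shown below, assuming a simple in-memory registry; the field names are illustrative and modelled loosely on the record of processing activities, not a prescribed schema.

```python
from dataclasses import dataclass, field


@dataclass
class AIAsset:
    """Illustrative entry in an asset directory of AI applications."""
    name: str
    vendor: str
    model_type: str               # e.g. "LLM", "classifier"
    processes_personal_data: bool
    dpia_completed: bool
    use_cases: list[str] = field(default_factory=list)  # linked use case docs


registry: list[AIAsset] = [
    AIAsset(
        name="CV screening assistant",
        vendor="ExampleVendor GmbH",  # hypothetical vendor
        model_type="classifier",
        processes_personal_data=True,
        dpia_completed=False,
        use_cases=["pre-selection of applicants in recruiting"],
    ),
]

# Simple completeness check: every asset that processes personal data
# should have a completed (or at least triggered) DPIA
for asset in registry:
    if asset.processes_personal_data and not asset.dpia_completed:
        print(f"Review needed: {asset.name} lacks a DPIA")
```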
4. Establish risk management
A risk management system in accordance with Article 9 of the AI Act is mandatory for high-risk AI systems. However, it is also strongly recommended for all other AI systems, particularly those based on non-transparent ('black box') models.
A key component of risk management is assessing the specific AI system's risks.
This is carried out in three steps:
- Identification of risks
- Assessment of risks
- Implementation of risk-mitigating measures, if necessary
This approach is based on the DPIA methodology and, where the AI application processes personal data, can be combined with the DPIA itself; a minimal sketch of the resulting risk register follows below.
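The three steps can be mirrored in a simple risk register. The sketch below assumes a common likelihood-times-severity scoring convention; Article 9 does not prescribe any particular scale, so the numbers and the threshold are illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class Risk:
    """Illustrative risk entry: identify, assess, mitigate."""
    description: str
    likelihood: int   # 1 (rare) .. 5 (almost certain) - assumed scale
    severity: int     # 1 (negligible) .. 5 (critical) - assumed scale
    mitigation: str | None = None

    @property
    def score(self) -> int:
        # Step 2: assessment as likelihood x severity (a common convention)
        return self.likelihood * self.severity


# Step 1: identified risks for a hypothetical AI application
risks = [
    Risk("Discriminatory output in candidate ranking", likelihood=3, severity=4),
    Risk("Hallucinated facts in customer-facing answers", likelihood=4, severity=3,
         mitigation="Human review before publication"),
]

# Step 3: flag high-scoring risks that still lack a mitigating measure
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    if r.score >= 9 and r.mitigation is None:
        print(f"Mitigation required (score {r.score}): {r.description}")
```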
5. Implement transparency requirements
As a provider or operator of an AI system, you must comply with various transparency obligations under the AI Act and the GDPR.
These range from providing information about the use and processing of personal data to disclosing and labelling AI-generated content (e.g. in texts, images or decisions) and providing instructions for use to enable proper and safe use of the AI system.
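For the labelling obligation in particular, one practical convention is to attach a machine-readable disclosure to every piece of generated content. The function below is a minimal sketch under that assumption; the notice wording and metadata keys are illustrative, not text prescribed by the AI Act.

```python
from datetime import datetime, timezone


def label_ai_generated(text: str, model_name: str) -> dict:
    """Wrap AI-generated text with an illustrative transparency notice."""
    return {
        "content": text,
        "ai_generated": True,
        "model": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "notice": "This content was generated with the help of an AI system.",
    }


labelled = label_ai_generated(
    "Draft reply to the customer enquiry ...",
    model_name="example-llm-v1",  # hypothetical model identifier
)
print(labelled["notice"])
```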
Tip: Appoint an AI officer to oversee AI in your company
While the appointment of an AI officer or representative is not legally mandatory, it is a recommended strategic measure for establishing AI governance within the company.
Suitable candidates combine experience in operationalising legal requirements in corporate practice with sound knowledge of AI.
The central tasks of the AI officer include developing strategies, creating and implementing guidelines and policies, planning and controlling projects, coordinating internal responses to regulatory challenges, conducting AI literacy training, and communicating with authorities and external stakeholders.
Our services: Supporting you in the legally compliant use of AI
As a specialist AI compliance consultancy, we provide comprehensive support for implementing your AI governance:
- Appointment of an external AI Officer
- Consulting on the use of AI applications
- Data and AI governance
- Training on AI and data protection
- ISO 42001 certification
Contact us today to make your AI strategy future-proof and compliant!
Are you using AI the right way?
We can make your projects legally compliant, innovation-friendly and scalable.
Let's talk – no strings attached, just straight answers.