---
title: AI &amp; Data Protection 2025: How to use LLMs GDPR-compliant - and get a grip on shadow AI in the company - isla Studio
url: https://isla-stud.io/allgemein/ki-datenschutz-2025-wie-sie-llms-dsgvo-konform-nutzen-und-schatten-ki-im-unternehmen-in-den-griff-bekommen/
date: 2025-12-12
---

# AI &amp; Data Protection 2025: How to use LLMs GDPR-compliant - and get a grip on shadow AI in the company

There's a good chance that someone in your company has been using AI for a long time - without a policy, approval or data processing agreement.



In marketing, texts are polished with ChatGPT; in sales, customer data ends up in prompts; in development, code is cross-checked with AI tools. Well-intentioned - but from the perspective of the GDPR, the EU AI Regulation and corporate compliance, this is a ticking time bomb: shadow AI.



At the same time, it would be absurd to forgo the productivity gains of modern large language models (LLMs). The trick is to bring AI and data protection together - with a clear framework that enables innovation and limits risk.



As a certified AI expert (MMAI® Business School certificate, Academy4AI) and future member of the German AI Association, I support companies at precisely this intersection of technology, law and governance - and as a WooCommerce specialist & WordPress developer for SMEs and industry, I know the practical side from real projects.



This article covers:




- how the GDPR and the EU AI Regulation (AI Act) interact,
- which risks are really relevant when using ChatGPT, Claude, Gemini & Co.,
- which practical rules you should introduce for data-protection-friendly AI use,
- and why a platform like InnoGPT is an interesting option if you want to provide your teams with a GDPR-compliant AI environment.




In this article, I share my professional perspective as a certified AI expert. However, this article does not replace individual legal advice. If you need a binding data protection assessment, I recommend consulting a qualified lawyer or data protection officer.









## 1. The binding legal framework in 2025: GDPR + EU AI Regulation (AI Act)



### 1.1 The GDPR remains the basis for all personal data



As soon as you feed personal data into AI systems - whether for training, answering queries or analysis - the GDPR applies. Among other things, you must:

- have a legal basis under Art. 6 GDPR,
- ensure transparency towards data subjects,
- observe data minimization,
- implement technical and organizational measures (TOMs),
- and, where necessary, carry out a data protection impact assessment (DPIA). (Handelsblatt Live)




In 2024, the German Data Protection Conference (DSK) published detailed guidance entitled "AI and data protection". It makes clear that anyone who selects and uses AI applications is responsible for ensuring that this selection complies with data protection law - including provider selection, data flows and configuration. (Data Protection Conference)



With its ChatGPT task force, the EDPB (European Data Protection Board) has also addressed specific questions regarding the legality of web scraping, transparency and accuracy requirements for LLMs. (EDPB)



In short: even though AI is new, it does not operate in a legal vacuum under data protection law.



### 1.2 EU AI Regulation (AI Act): risk-based & governance-driven



With the EU AI Regulation (Regulation (EU) 2024/1689), the EU adopted the world's first comprehensive legal framework for AI systems in 2024. The AI Act has been in force since August 1, 2024 and follows a risk-based approach: from minimal risk through limited risk to high-risk AI and prohibited practices. (EUR-Lex)



Key points:

- Some bans on certain AI practices (e.g. certain forms of manipulative systems) and the requirements for AI competence / AI literacy have already applied since February 2, 2025. (Artificial Intelligence Act EU)
- The majority of the obligations - especially for high-risk AI - take effect gradually by August 2, 2026, accompanied by further specifications and guidelines from the EU Commission and the new European AI Office. (AI Act Service Desk)
- Among other things, the AI Act creates requirements for risk management, data quality, logging, technical documentation, transparency, human oversight and governance structures. (EUR-Lex)




The EU is currently discussing postponing certain obligations for high-risk AI until 2027 in order to give companies more time to implement them. As of today (December 11, 2025), this is a political proposal that still has to go through the legislative process. (Reuters)



Important for you: the GDPR remains fully applicable; the AI Act supplements it. If in doubt:




"The AI Act regulates what an AI system is allowed to do - the GDPR regulates how you are allowed to handle personal data." (Handelsblatt Live)




### 1.3 AI literacy & governance obligations: why demonstrable AI literacy has been required since 2025



For the first time, the EU AI Regulation provides a clear framework for the organizational responsibility of companies that use AI systems - regardless of whether they develop their own models or use external tools.



Two points have been particularly important since 2025:



**AI literacy obligation (Art. 4 AI Act) - applicable since February 2025**



Companies that provide or use AI systems ("providers" and especially "deployers") must ensure that their employees have a sufficient level of AI literacy. In practice, this means:




- Employees need to understand how AI works in principle, where the risks lie and how to work with it safely.
- Companies must provide training, awareness measures and internal guidelines.
- These measures must be documented in a way that is verifiable for accountability purposes.




In other words:




"Using AI" has been inextricably linked to "demonstrating AI competence" since February 2025.




**AI governance - relevant for general-purpose AI since August 2025**



Since the rules on general-purpose AI (GPAI) began to apply in August 2025, additional organizational requirements have applied - especially for providers, but indirectly also for companies that use such systems in production:




- structured documentation of the models used,
- monitoring and logging of usage,
- processes for incident management, risks and complaints,
- clear roles and responsibilities in the use of AI.




Even if the full set of obligations for high-risk AI does not take effect until 2026/2027, one thing is clear: without an AI governance concept - i.e. documented responsibilities, guidelines, processes and training - it will become increasingly difficult for companies to credibly demonstrate AI Act- and GDPR-compliant use.



## 2. What supervisory authorities actually say about AI & LLMs



To make it tangible, let's take a quick look at three key sources:




- DSK guidance "AI and data protection" (2024) - gives companies and authorities criteria for selecting and using AI systems: purpose limitation, legal basis, data minimization, transparency, data processing agreements, technical and organizational measures. (Data Protection Conference)
- EDPB ChatGPT Taskforce report (May 2024) - highlights, among other things:
  - how training on web-scraped personal data is to be assessed legally,
  - what transparency and information obligations exist vis-à-vis users,
  - what requirements apply to the accuracy and fairness of LLM responses. (EDPB)
- National fact sheets, e.g. HWR Berlin / data protection officer - show very specifically which data is generated when using generative AI and how such use can be made data-protection-friendly (e.g. no direct identifiers, pseudonymization, no sensitive data in freely available tools). (Data protection HWR Berlin)




The message is similar everywhere:




- Principle: put as little personal data as possible into AI systems.
- Companies need clear rules about which tools may be used and how.
- "Just having a quick go" is not a legal basis.




## 3. Typical risks: how shadow AI arises in companies



Some typical situations that I see again and again:




- Marketing loads customer data (e.g. a CRM export) into some AI web app to "quickly create segments".
- HR has employment contracts or applications assessed by ChatGPT - including complete personal data.
- Sales copies entire email histories with personal information into LLMs to formulate "better answers".
- Departments use free LLM tools without a company account, without a data processing agreement and without knowing where the data is processed.




From a data protection perspective, this raises several problems at once:

- unclear roles & responsibilities (controller / processor),
- possible third-country transfers (e.g. to the USA),
- unclear storage and training use of the data,
- missing or inadequate information for data subjects.




It is precisely these cases that data protection supervisory authorities have increasingly focused on in recent months - including short-term restrictions and audits of individual providers. (StreamLex)



## 4. Ten basic rules: using LLMs in a privacy-friendly way (solo & in a team)



Whether you are a sole trader or a medium-sized IT company - the following rules are very helpful in practice for using LLMs in a GDPR-friendly way:




**1. No sensitive personal data in consumer accounts.** Health data, special categories under Art. 9 GDPR, confidential employee information, internal contracts, etc. have no place in freely accessible AI front-ends. (Data protection HWR Berlin)



**2. Pseudonymize or anonymize wherever possible.** Instead of "Max Mustermann, IBAN, project XY at customer Z", rather: "customer A, budget B, project in mechanical engineering, export country D".
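To make this rule concrete, here is a minimal Python sketch, assuming a small helper that sits in front of whatever LLM client you use. The regex patterns, placeholder format and example data are all illustrative, not a complete PII detector; a real deployment would rely on a vetted detection library plus a human review step.

```python
import re

# Hypothetical helper: replace known identifier patterns with placeholders
# before a prompt leaves the company. The patterns are illustrative only.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "PHONE": re.compile(r"\+?\d[\d /()-]{7,}\d"),
}

def pseudonymize(text: str) -> tuple[str, dict[str, str]]:
    """Return the cleaned text plus a local mapping, so that answers can be
    re-personalized in-house without identifiers ever reaching the provider."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        def replace(match: re.Match, label: str = label) -> str:
            placeholder = f"<{label}_{len(mapping) + 1}>"
            mapping[placeholder] = match.group(0)
            return placeholder
        text = pattern.sub(replace, text)
    return text, mapping

prompt, mapping = pseudonymize(
    "Draft a reply to max.mustermann@example.com about invoice DE89370400440532013000."
)
print(prompt)  # "Draft a reply to <EMAIL_1> about invoice <IBAN_2>."
```

The mapping stays local, so the LLM's answer can be re-personalized on your side without the real identifiers ever being transmitted.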



**3. Clear tool strategy: separate private and professional use.** No "I'll just use my private ChatGPT account". Define approved tools - and, if in doubt, block problematic domains on the company proxy. (NRW state database)
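Domain blocking itself belongs on the proxy or firewall, but internal integrations can enforce the same policy in code. A small sketch under that assumption - the approved host names below are placeholders, not real endpoints:

```python
from urllib.parse import urlparse

# Hypothetical allowlist for internal integrations; the host names are
# placeholders for whatever your AI policy actually approves. Network-level
# enforcement (proxy/firewall) remains the more robust control.
APPROVED_AI_HOSTS = {"llm.example-internal.com", "api.approved-ai-vendor.eu"}

def is_approved_endpoint(url: str) -> bool:
    """Check a target URL against the approved-tools list before any data is sent."""
    host = urlparse(url).hostname or ""
    return host in APPROVED_AI_HOSTS

assert is_approved_endpoint("https://llm.example-internal.com/v1/chat")
assert not is_approved_endpoint("https://chat.random-free-ai.app/prompt")
```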



**4. Create your own company policy ("AI policy").** Short, understandable, practical: Which tools are allowed? Which data is allowed? Who is the contact person for questions? Today, an AI policy is no longer a "nice to have" but a central element of AI governance.



**5. Clarify the legal basis.** In your company you will often rely on legitimate interests (Art. 6 para. 1 lit. f GDPR), performance of a contract or, where applicable, consent. Proper documentation in the record of processing activities is important. (Data Protection Conference)



**6. Check whether a DPIA is required - especially for sensitive scenarios.** Where AI systems intervene deeply in business processes or contain profiling elements, a data protection impact assessment often becomes mandatory. (Data Protection Conference)



**7. Log usage and make it traceable.** Who uses which system for what? Logging is not only an IT security issue but also a governance issue - and fits the documentation-oriented logic of the AI Act. (EUR-Lex)
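What such a usage log can look like in an internal integration, as a hedged sketch - the logger name, field names and the decision to log purpose categories instead of prompt text are assumptions, not a format prescribed by the AI Act:

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical usage log for internal AI calls: who used which system, when,
# for which coarse purpose - deliberately without the prompt content itself.
logger = logging.getLogger("ai-usage")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_ai_call(user_id: str, tool: str, purpose: str, personal_data: bool) -> None:
    logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,        # internal ID rather than a clear name
        "tool": tool,           # one of the approved platforms
        "purpose": purpose,     # coarse use-case category, not the prompt
        "personal_data": personal_data,
    }))

log_ai_call("emp-0042", "approved-llm-platform", "product description draft", False)
```

Logging the purpose category rather than the prompt keeps the log itself data-minimized while still answering the governance question "who uses what, and why".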



**8. Select models & providers consciously.** Check: hosting (EU/EEA?), data processing agreement, storage and training policy, transparency, technical security features. Some providers now explicitly advertise "zero retention" and "no training on customer data". (ASCOMP)



**9. Train employees - also legally expected since February 2025.** Short training sessions, live demos, small use-case workshops - the aim: understanding where the opportunities lie and where the red lines are. Since February 2025, the EU AI Act has expressly required companies to ensure a sufficient level of AI literacy. Training, internal guidelines and documented participation are thus effectively becoming a mandatory part of AI compliance - comparable to data protection or information security training.



Integrate AI &amp; data protection with your existing web and SEO strategyIf you are already working properly with technical SEO, performance and structured data maintenance, you have a good basis for clean AI integrations. Are you familiar with my guide to technical SEO and my article on the most important SEO trends for 2024 - because visibility, trust and legally compliant technologies go hand in hand? (saskialund.de)




## 5. Why consumer accounts (free or basic accounts) from ChatGPT & Co. are tricky for companies



Even with improved data protection settings, the use of classic consumer accounts remains problematic in many corporate contexts:

- data flows to third countries and complex sub-processor chains,
- limited or missing data processing agreements,
- unclear transparency for data subjects,
- partial use of inputs for model training (depending on provider and tariff), even if many providers now offer opt-outs or business options here. (eRecht24)




This does not mean that you are not allowed to use such tools at all, but:

- In a corporate context, they can often only be properly secured with considerable coordination effort, contract review and additional measures.
- It is therefore often worth taking the step towards dedicated AI platforms that are explicitly designed for GDPR-compliant use.




## 6. Data-protection-compliant AI platforms: InnoGPT and Langdock in focus



There are now platforms that bundle various LLMs in an EU-hosted, GDPR-compliant environment. Two of these are InnoGPT and Langdock.



### 6.1 What makes InnoGPT stand out?



The following is an abridged version of publicly available descriptions:




- InnoGPT bundles leading language models (e.g. GPT-4, GPT-5, Gemini, Claude, Mistral) in a platform aimed specifically at German and European companies. (sysbus.eu)
- The platform relies on hosting in Europe and advertises a contractually secured "zero retention policy": customer input is not used to train the AI models and is processed exclusively on European servers - it therefore does not end up with the original third-country provider. (ASCOMP)
- It addresses typical company requirements such as team functionality, workflows and integration into existing processes.




If you would like to take a closer look, use the following link:

Get to know InnoGPT (partner link)

This approach is interesting for companies that want to replace shadow AI and at the same time give their teams modern tools:

Your teams continue to work with strong models - but in a controlled, documentable and GDPR-friendly environment.








## 7. Practical examples: how SMEs and industry could use InnoGPT



Some scenarios that I know from projects and discussions with customers:




Technical sales &amp; quotation preparation

Technical texts, product descriptions and quotations are prepared via InnoGPT.



Internally used documents can be integrated via retrieval techniques without the company losing control of the data. (arXiv)





Knowledge management &amp; documentation

Internal guidelines, manuals and SOPs are made available for Q&amp;A in a secure environment.



Employees ask questions such as „Which test steps apply to product line X?“ - InnoGPT provides answers based on internal documents without handing them over to external training systems. (arXiv)





Marketing &amp; content for B2B websites and stores

Content drafts for WooCommerce stores, product pages and blog articles are created, followed by a professional review.



Due to the storage and processing in Europe, this can be integrated much better into an existing data protection and compliance strategy than the use of scattered consumer tools. (Capterra)
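To make the retrieval idea from the first two scenarios concrete, here is a minimal sketch. It is not InnoGPT's actual mechanism, but a generic illustration of the data flow: documents are indexed locally, the most relevant passages are retrieved per question, and only those passages would accompany the question to the model.

```python
# TF-IDF similarity stands in here for a production embedding model;
# scikit-learn is assumed to be installed, and the documents are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Product line X: test steps are leak test, torque check, final inspection.",
    "Travel expense policy: submit receipts within 30 days.",
    "Product line Y: packaging requires ESD-safe foil.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)  # the index stays in-house

def retrieve(question: str, top_k: int = 1) -> list[str]:
    """Return the top_k most relevant internal passages for a question."""
    scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
    return [documents[i] for i in scores.argsort()[::-1][:top_k]]

context = retrieve("Which test steps apply to product line X?")
# Only `context` plus the question would be sent to the (EU-hosted) model -
# never the full document store.
print(context)
```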






8 Governance &amp; AI strategy: from individual tool to enterprise solution



If you don't want to leave AI in your company to chance, you need more than a single tool:




1. **Take stock.** Who is already using which AI tools, and for what? Which data flows where?

2. **Define the target picture.** Which use cases should be officially supported (e.g. text, code, research, meeting notes)? How does AI fit into your existing digital & SEO strategy?

3. **Consolidate the tool landscape.** Instead of five different AI services in shadow mode: one shared platform, e.g. InnoGPT, supplemented by clearly defined special-purpose tools.

4. **Anchor guidelines & processes.** AI policy, data processing agreements, record of processing activities and training; plus a role model and escalation and approval processes.

5. **Monitor and adapt continuously.** AI Act implementation, new guidance from the supervisory authorities, technical developments - governance is not a one-off project but an ongoing process. (AKEuropa)






Why this is more than "best practice": the AI Act requires - phased in over 2025-2027 - a documented governance system for companies that use AI. Without an internally anchored AI governance structure (guidelines, training, monitoring), it will be very difficult in the medium term to prove to supervisory authorities and business partners that AI is being used in a controlled, responsible and compliant manner.







## 9. Checklist: making AI in the company GDPR- & AI Act-ready



A short checklist to get you started today:




- Take stock - where is AI already being used in the company (tools, data types, processes)?
- Carry out a risk assessment - which applications are uncritical, and which touch sensitive data or core processes?
- Check legal bases & contracts - GDPR, data processing agreements, data flows, third-country transfers.
- Define an approved platform - e.g. InnoGPT as the central, GDPR-compliant AI solution for teams.
- Adopt an AI policy - understandable, practical, with examples and dos & don'ts.
- Training & enablement - empower employees to use AI in a targeted, responsible and efficient way - and document this training (AI literacy).
- Documentation & monitoring - take the logic of the AI Act and the GDPR seriously: document, evaluate, refine. (EUR-Lex)




## Sources

- Data Protection Conference (DSK), guidance "AI and data protection" (as of May 6, 2024) - criteria for the selection and use of AI applications in companies and public authorities. (Data Protection Conference)
- European Data Protection Board (EDPB), "Report of the work undertaken by the ChatGPT Taskforce" (May 24, 2024) - first coordinated European assessment of ChatGPT's data processing practices in the light of the GDPR. (EDPB)
- Fact sheet "Use of generative AI and data protection" (HWR Berlin / data protection officer, as of April 2024) - practical guide to the use of generative AI, in particular ChatGPT, from a data protection perspective. (Data protection HWR Berlin)
- eRecht24, "Is ChatGPT usable in compliance with data protection?" (2025) - assessment of the data-protection-friendly use of ChatGPT, including notes on training use and settings. (eRecht24)
- Regulation (EU) 2024/1689 - Artificial Intelligence Act (AI Act) - official EU legal framework for AI, including the risk-based approach, governance obligations and application timeline. (EUR-Lex)
- EU AI Act Service Desk & FPF timeline - overview of the phased application of the AI Act until 2026/2027. (AI Act Service Desk)
- Reuters & Le Monde (2025), reports on EU Commission proposals to postpone high-risk obligations of the AI Act - indications of a planned delay of certain rules until 2027. (Reuters)
- Handelsblatt Live, "KI und Datenschutz: So nutzen Sie KI-Systeme DSGVO-konform" (2025) - overview of the interplay of GDPR, BDSG and AI Act in the corporate context. (Handelsblatt Live)
- ASCOMP, sysbus.eu, Capterra - information on InnoGPT as a GDPR-oriented AI platform with EU hosting and a zero-retention approach. (ASCOMP, sysbus.eu, Capterra)
- arXiv - papers on retrieval-based AI applications and knowledge-management scenarios in the enterprise context. (arXiv)
- AKEuropa - analyses and background reports on the practical implementation of the AI Act in Europe. (AKEuropa)




Note: This article provides technical guidance on the GDPR- and AI Act-compliant use of AI systems. It does not replace legal advice. For binding assessments, consult a qualified lawyer or data protection officer.