1. Provider Identification
| Legal Entity | TLI S.A. (trading as Easylab AI) |
| Registered Address | 55, allée de la Poudrerie, L-1899 Roeser, Luxembourg |
| Governance Contact | governance@easylab.ai |
| Role under AI Act | AI Deployer (Article 26, Regulation (EU) 2024/1689) and ICT Service Provider (DORA -- Regulation (EU) 2022/2554) |
| Applicable Services | Custom AI application development; AI system integration services; AI consulting and advisory services |
2. Scope of Services
Easylab AI provides bespoke AI development and integration services to enterprise and institutional clients. The scope of covered services includes:
2.1 Custom AI Application Development
- Conversational AI systems (chatbots, virtual assistants)
- Retrieval-Augmented Generation (RAG) systems for knowledge management
- AI-powered content generation tools
- Automated data analysis and reporting systems
- Document processing and extraction pipelines
2.2 AI System Integration
- Integration of AI capabilities into existing client infrastructure
- API orchestration and workflow automation incorporating AI services
- Migration of legacy processes to AI-augmented workflows
2.3 AI Model Selection and Deployment
- Evaluation and selection of appropriate AI models per project requirements
- Deployment using established provider APIs (Anthropic Claude, OpenAI GPT, Google Gemini, and others)
- Model performance monitoring and optimization
2.4 AI Consulting and Strategy
- AI readiness assessment
- AI strategy development and roadmap planning
- AI governance and compliance advisory
Important distinction: Easylab AI does NOT train, fine-tune, or develop foundation AI models. We operate exclusively as a deployer and integrator, utilizing established provider APIs under their respective terms of service and data processing agreements. The providers of the underlying foundation models (Anthropic, OpenAI, Google, etc.) remain the providers of those GPAI models under the EU AI Act. Easylab AI's role is that of an AI deployer (Article 26), not a GPAI model provider.
3. AI Act Compliance -- Risk Classification
Pursuant to the EU Artificial Intelligence Act (Regulation (EU) 2024/1689), Easylab AI performs a systematic risk assessment for every custom project prior to development.
3.1 Prohibited Practices (Article 5)
Easylab AI maintains a zero-tolerance policy toward prohibited AI practices. During project design and scoping, every project undergoes systematic screening against the prohibitions defined in Article 5, including:
- Subliminal, manipulative, or deceptive techniques causing significant harm
- Exploitation of vulnerabilities related to age, disability, or social/economic situation
- Social scoring by public or private actors
- Individual risk assessment for predicting criminal offences based solely on profiling
- Untargeted scraping of facial images for facial recognition databases
- Emotion recognition in the workplace or educational institutions (except for safety/medical reasons)
- Biometric categorization based on sensitive attributes
- Real-time remote biometric identification in publicly accessible spaces for law enforcement (subject to exceptions)
Policy: Any project that falls within or approaches a prohibited practice category will be declined. This assessment is documented and retained as part of the project record.
3.2 High-Risk Assessment (Articles 6, 7 and Annex III)
Each project is evaluated against the Annex III categories of high-risk AI systems:
| Annex III Category | Description | Assessment |
| 1. Biometrics | Remote biometric identification, categorization, emotion recognition | Not within Easylab service scope |
| 2. Critical infrastructure | Safety components of critical infrastructure management | Assessed per project |
| 3. Education and training | Access determination, assessment, proctoring | Assessed per project |
| 4. Employment | Recruitment, hiring decisions, task allocation, performance monitoring | Assessed per project |
| 5. Essential services | Credit scoring, insurance risk, emergency services | Assessed per project |
| 6. Law enforcement | Risk assessment, lie detection, evidence evaluation, profiling | Not within Easylab service scope |
| 7. Migration and border | Risk assessment, document verification, application examination | Not within Easylab service scope |
| 8. Justice and democracy | Legal research assistance, court decision influence | Assessed per project |
3.3 Typical Risk Classification
The majority of Easylab AI custom projects are classified as:
- Limited risk (Article 50): Systems with transparency obligations (e.g., chatbots, content generation). Users must be informed they are interacting with an AI system.
- Minimal risk: Internal automation, data analysis, and productivity tools not falling under Annex III.
3.4 High-Risk Project Compliance
If a project is determined to involve a high-risk AI system, Easylab AI ensures full compliance with Chapter III, Section 2 requirements:
| Article 9 | Risk management system -- continuous, iterative risk identification, analysis, estimation, and evaluation throughout the system lifecycle |
| Article 10 | Data governance -- training, validation, and testing data quality criteria, bias detection and mitigation measures |
| Article 11 | Technical documentation -- comprehensive documentation drawn up before the system is placed on the market or put into service |
| Article 12 | Record-keeping -- automatic logging of events throughout the system's lifetime for traceability |
| Article 13 | Transparency and information provision -- instructions for use provided to deployers, including system capabilities and limitations |
| Article 14 | Human oversight -- design enabling effective oversight by natural persons during system use |
| Article 15 | Accuracy, robustness, and cybersecurity -- appropriate levels maintained throughout the system lifecycle |
3.5 Client Responsibility under Article 25
Notice: If a client substantially modifies the intended purpose of a delivered AI system, or places their own name or trademark on a high-risk AI system, they may assume the obligations of a provider under Article 25 of the AI Act. Easylab AI will advise clients of this risk during project handover.
4. Transparency Obligations (Article 50)
Easylab AI ensures that all custom projects comply with the transparency requirements of the AI Act:
4.1 AI Interaction Disclosure
- All end-user-facing AI systems include clear disclosure that the user is interacting with an AI system, unless this is obvious from the circumstances (Article 50(1)).
- Disclosure is provided before or at the first interaction with the AI system.
4.2 AI-Generated Content Labeling
- All AI-generated content is clearly labeled as such (Article 50(2)).
- AI-generated or AI-manipulated content (text, images, audio, video) is disclosed in an appropriate and timely manner (Article 50(4)).
- Machine-readable metadata is embedded in AI-generated content where technically feasible, in accordance with state-of-the-art standards.
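As an illustration of the labeling obligation above, the following sketch wraps generated text in a machine-readable provenance record. The helper and field names are hypothetical, not Easylab AI's actual implementation; where the content format supports it, a real project would use a recognized provenance standard such as C2PA rather than an ad-hoc schema.

```python
from datetime import datetime, timezone

def label_ai_output(text: str, model: str) -> dict:
    """Attach machine-readable provenance metadata to AI-generated text.

    Illustrative sketch of Article 50(2) labeling; field names are
    assumptions, not a standard.
    """
    return {
        "content": text,
        "metadata": {
            "ai_generated": True,  # explicit machine-readable flag
            "generator": model,    # which model produced the content
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

payload = label_ai_output("Draft summary of Q3 results.", "claude-sonnet")
print(payload["metadata"])
```

The human-readable disclosure required by Article 50(1) remains a separate, user-facing notice; the metadata record only makes the AI origin detectable by downstream software.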
4.3 Client Deliverables
- Every delivered project includes an AI Transparency Notice specific to the deployed system.
- Instructions for use include the system's capabilities, limitations, and known risks.
- Client documentation specifies how transparency obligations should be maintained post-deployment.
5. Human Oversight (Article 14)
Core principle: No AI system developed or deployed by Easylab AI makes autonomous decisions without meaningful human validation. Human oversight is a non-negotiable design requirement in every project.
5.1 Design Principles
- Every custom project incorporates human review mechanisms appropriate to the risk level and domain.
- AI outputs are presented as recommendations, drafts, or suggestions -- never as final decisions.
- The client retains ultimate decision authority over all matters where the AI system provides input.
5.2 Override and Intervention Capabilities
- All deployed systems include a kill switch or override capability enabling immediate disabling of the AI component.
- Human operators can intervene at any stage of the AI workflow.
- Fallback procedures are documented for scenarios where the AI system is disabled.
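The override capability described above can be sketched as a component that gates every AI call behind a kill switch and routes to a documented fallback when disabled. This is a minimal illustration; the class and function names are assumptions, not Easylab AI's actual architecture.

```python
from typing import Callable

class AIComponent:
    """Sketch of a kill switch: when disabled, all traffic bypasses the
    AI and follows the documented fallback procedure."""

    def __init__(self, ai_call: Callable[[str], str],
                 fallback: Callable[[str], str]):
        self.enabled = True       # kill switch state
        self._ai_call = ai_call
        self._fallback = fallback # manual or rule-based fallback path

    def disable(self) -> None:
        """Immediately disable the AI component."""
        self.enabled = False

    def handle(self, request: str) -> str:
        if not self.enabled:
            return self._fallback(request)
        return self._ai_call(request)

component = AIComponent(ai_call=lambda r: f"AI draft for: {r}",
                        fallback=lambda r: f"Queued for human review: {r}")
component.disable()
print(component.handle("claim #123"))  # routed to the fallback, not the AI
```

Keeping the switch at the application layer, rather than inside the AI provider integration, ensures the fallback path works even when the provider API itself is the failing component.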
5.3 Roles and Responsibilities
- For each project, human oversight roles and responsibilities are explicitly documented.
- The client designates competent natural persons responsible for oversight.
- Easylab AI provides training to designated oversight personnel.
6. Data Governance and GDPR Compliance
6.1 Roles
| Client | Data Controller (Article 4(7) GDPR) -- determines the purposes and means of processing personal data |
| Easylab AI | Data Processor (Article 28 GDPR) -- processes personal data on behalf of the client, solely under documented instructions |
6.2 Contractual Safeguards
- A Data Processing Agreement (DPA) compliant with Article 28 GDPR is included in every client contract.
- Processing is strictly limited to the purposes specified in the contract and the client's documented instructions.
6.3 AI Provider Data Handling
Easylab AI configures all AI provider APIs to minimize data retention. No client data is used to train, fine-tune, or improve AI models. The zero-retention status of each provider is detailed below:
| Provider | Service | Zero-Retention Configuration | Contractually Confirmed |
| Anthropic (Claude) | AI text generation | Zero Data Retention (ZDR) addendum available; API business terms include a no-training clause; must be explicitly requested and approved | ZDR addendum to be signed |
| OpenAI (GPT) | AI text generation, embeddings | store:false parameter set per API request; EU data residency available at project level; OpenAI Ireland Ltd entity for EEA clients | Yes -- API terms + DPA |
| Google Gemini | Multimodal AI | Paid API (Vertex AI) does not use data for training; free API tier may retain data; EU region available via Vertex AI | Yes -- via Google Cloud DPA (paid API only) |
Note: Zero-retention configurations are verified at project inception and logged in the AI project register. This table is reviewed quarterly and updated when provider terms change. Last verified: March 2026.
- Data minimization: Only the minimum data necessary for the AI function is transmitted to AI providers. Easylab AI implements pre-processing to strip unnecessary personal data where feasible.
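The per-request OpenAI configuration and the data-minimization step described above can be combined in a request builder along these lines. This is a sketch, assuming the Chat Completions `store` parameter; the redaction pattern and helper names are hypothetical, and production pre-processing covers far more identifier types than e-mail addresses.

```python
import re

# Hypothetical minimal pattern; real pre-processing covers more PII types.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def minimize(text: str) -> str:
    """Strip unnecessary personal data before text leaves the client
    infrastructure for an external AI provider."""
    return EMAIL_RE.sub("[REDACTED]", text)

def build_openai_request(user_text: str, model: str = "gpt-4o-mini") -> dict:
    """Assemble a Chat Completions request body with retention disabled
    via the store parameter, per the configuration described above."""
    return {
        "model": model,
        "store": False,  # do not persist this request/response pair
        "messages": [{"role": "user", "content": minimize(user_text)}],
    }

req = build_openai_request("Summarize the ticket from jane.doe@example.com")
print(req["store"], "|", req["messages"][0]["content"])
```

Building the payload in one place makes the zero-retention flag auditable: the project register can point at a single function rather than at every call site.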
6.4 Encryption and Security
| In Transit | TLS 1.3 for all data transmissions |
| At Rest | AES-256 encryption for all stored data |
| API Authentication | Encrypted API keys, rotated regularly, stored in secure vaults |
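The in-transit requirement in the table above can be enforced at the client side with Python's standard `ssl` module, for example by refusing any protocol version below TLS 1.3. This is a minimal sketch of the policy, not Easylab AI's actual deployment configuration.

```python
import ssl

# Client-side TLS context that refuses anything below TLS 1.3,
# with certificate verification and hostname checking enabled by default.
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
context.minimum_version = ssl.TLSVersion.TLSv1_3

assert context.minimum_version == ssl.TLSVersion.TLSv1_3
assert context.verify_mode == ssl.CERT_REQUIRED
```

Pinning the minimum version in the context means a misconfigured or downgraded server connection fails loudly at the handshake instead of silently negotiating a weaker protocol.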
6.5 Data Residency
- EU data residency is the default for all projects. Preferred regions: Frankfurt (eu-central-1), Dublin (eu-west-1), and Belgium (europe-west1).
- Non-EU data transfers are documented, assessed, and subject to appropriate safeguards (Standard Contractual Clauses, adequacy decisions, or supplementary measures).
6.6 Sub-processor Management
- A current list of sub-processors is maintained and made available to clients.
- Clients are notified at least 30 calendar days before any new sub-processor is engaged.
- Clients have the right to object to new sub-processors.
6.7 Data Retention and Deletion
- Data retention periods are defined per client contract and project requirements.
- Upon project completion or termination, all client data is deleted or returned as specified in the DPA.
- Deletion upon client request is honored within 30 days, with confirmation provided.
7. DORA Compliance (Financial Sector Clients)
For clients subject to the Digital Operational Resilience Act (Regulation (EU) 2022/2554), Easylab AI provides the following additional assurances as an ICT third-party service provider:
7.1 ICT Sub-contractor Register
- A complete register of all ICT sub-contractors involved in the service chain is maintained.
- The register includes identification, services provided, data processed, and location of processing.
- Updated promptly upon any change in the sub-contractor chain.
7.2 Business Continuity and Disaster Recovery
- Business Continuity Plan (BCP) maintained and tested annually.
- Disaster Recovery Plan (DRP) with defined Recovery Time Objective (RTO) and Recovery Point Objective (RPO) per project.
- Redundancy measures for critical infrastructure components.
7.3 Incident Notification
- ICT-related incidents are classified and reported to affected clients within 24 hours of detection (T0). The notification clock starts at the moment of detection, not confirmation.
- For financial sector clients subject to DORA: initial notification without undue delay, intermediate report within 72 hours, final report within 1 month.
- Incident reports include nature, impact, mitigation measures, and timeline for resolution.
7.4 Audit and Access Rights
- Clients and their supervisory authorities (including the CSSF for Luxembourg-regulated entities) are granted audit and access rights as required by DORA.
- Easylab AI cooperates with audits upon reasonable notice.
- Relevant documentation and records are made available for inspection.
7.5 Exit Plan and Data Portability
- An exit plan is documented in every contract with financial sector clients.
- Data portability is ensured: all client data can be exported in standard, machine-readable formats.
- Transition period support is available to facilitate migration to alternative providers.
- Complete data deletion is confirmed in writing upon exit completion.
8. AI Providers and Sub-processors
The following table lists the AI providers and infrastructure sub-processors commonly used by Easylab AI in custom projects. The specific providers selected depend on each project's requirements and are documented in the project-specific technical documentation.
| Provider | Service | Location | Data Processing | Certifications |
| Anthropic LLC | Claude LLM (text generation, analysis, reasoning) | USA (non-EU) | ZDR addendum available; API terms include no-training clause; ZDR addendum to be signed | SOC 2 Type II, ISO 27001, ISO 42001 |
| OpenAI LLC | GPT models (text generation, embeddings) | USA (non-EU) | store:false per request; no data used for training; DPA in place | SOC 2 Type II, ISO 27001, ISO 27701 |
| Google Cloud | Gemini, Vertex AI (LLM, embeddings, ML services) | EU | DPA in place; EU data residency available | SOC 2 Type II, ISO 27001, ISO 27017, C5 |
| Amazon Web Services | Cloud infrastructure, compute, storage | EU (Frankfurt) | DPA in place; EU region selected | SOC 2 Type II, ISO 27001, C5 |
| Google Firebase | Authentication, Firestore database, Cloud Functions | EU (Belgium) | DPA in place; EU data residency | SOC 2 Type II, ISO 27001 |
| n8n GmbH | Workflow orchestration and automation | EU | Self-hosted option available; no data leaves client infrastructure when self-hosted | SOC 2 Type II |
Note on international transfers: For US-based AI providers (Anthropic, OpenAI), the transfer risk is mitigated by zero-retention configurations (ZDR addendum for Anthropic, store:false parameter for OpenAI): personal data is processed in-memory only and is not persisted outside the API call duration. Data Processing Agreements with Standard Contractual Clauses are in place. A Transfer Impact Assessment (TIA) is conducted for each project involving non-EU transfers. See Section 6.3 for the detailed per-provider zero-retention status.
9. Record-Keeping and Logging (Article 12)
Easylab AI implements comprehensive logging and record-keeping for all custom AI systems:
9.1 Operational Logs
- All AI system operations are logged with timestamps, including API calls, responses, and processing events.
- Logs include sufficient detail for traceability and post-incident analysis.
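A per-call operational log entry of the kind described above might look as follows. The schema is illustrative only; each project defines its own fields, and this sketch assumes a JSON-lines format emitted through Python's standard `logging` module.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_audit")

def log_ai_call(request_id: str, model: str, prompt_chars: int,
                completion_chars: int, latency_ms: float) -> dict:
    """Emit one structured, timestamped record per AI API call.
    Field names are assumptions, not a fixed Easylab AI schema."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),  # event timestamp
        "request_id": request_id,                      # traceability key
        "model": model,
        "prompt_chars": prompt_chars,
        "completion_chars": completion_chars,
        "latency_ms": latency_ms,
    }
    logger.info(json.dumps(record))  # one JSON object per log line
    return record

rec = log_ai_call("req-0042", "claude-sonnet", 1200, 350, 810.5)
```

Logging sizes and a correlation ID rather than raw prompt text keeps the audit trail useful for traceability while limiting the personal data persisted in logs.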
9.2 Retention (Tiered Policy)
- Minimal/limited risk deployments: 6 months minimum.
- High-risk deployments or regulated sectors (finance, health, employment, public services): 2 years minimum (EU AI Act Article 12).
- Accounting and tax records: 10 years (Luxembourg commercial law).
- Extended retention available per contractual agreement or regulatory requirement.
- Retention periods are documented in each project's technical specification.
9.3 Data Logged
- Input/output data for AI API calls, as required per project risk level and contractual requirements.
- System access logs for all components.
- Configuration change logs.
- Incident and error logs.
9.4 Access and Audit
- Logs are accessible to the client upon request.
- Log integrity is protected against unauthorized modification.
10. Incident Reporting (Article 73)
10.1 Definition of Serious Incident
A serious incident, as defined under Article 3(49) of the AI Act, is any incident or malfunctioning of an AI system that directly or indirectly leads to:
- Death or serious damage to health of a person
- A serious and irreversible disruption to the management or operation of critical infrastructure
- A breach of fundamental rights obligations
- Serious damage to property or the environment
10.2 Notification Procedure
Important: The notification clock starts at the moment of detection (T0), not confirmation. A suspected serious incident triggers the notification timeline immediately.
| Client notification (suspected serious incident) | Within 24 hours of detection (T0) |
| Authority notification (GDPR Art. 33) | Within 72 hours of detection (T0), to the supervisory authority for personal data breaches |
| Authority notification (AI Act Art. 73) | Within 72 hours of detection (T0), to the relevant market surveillance authority |
| DORA notification (financial sector clients) | Per DORA timelines: initial notification without undue delay, intermediate within 72 hours, final within 1 month |
| Method | Written notification via email to the client's designated contact and to the relevant authority via established reporting channels |
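The fixed-offset deadlines in the table above can be computed mechanically from T0, for example as follows. This is a sketch under the policy stated in this section; DORA's "without undue delay" initial notification has no fixed offset and is deliberately omitted.

```python
from datetime import datetime, timedelta, timezone

def notification_deadlines(t0: datetime) -> dict:
    """Compute the notification deadlines of Section 10.2 from the
    moment of detection (T0), not confirmation."""
    return {
        "client_notification": t0 + timedelta(hours=24),    # suspected serious incident
        "gdpr_art33_authority": t0 + timedelta(hours=72),   # personal data breach
        "ai_act_art73_authority": t0 + timedelta(hours=72), # market surveillance authority
    }

t0 = datetime(2026, 3, 1, 14, 0, tzinfo=timezone.utc)
deadlines = notification_deadlines(t0)
print(deadlines["client_notification"])  # 2026-03-02 14:00:00+00:00
```

Anchoring every deadline to detection rather than confirmation removes any ambiguity about when the clock started if an incident is later confirmed as serious.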
10.3 Post-Incident Process
- Immediate containment and mitigation measures.
- Root cause analysis conducted within 10 business days.
- Post-incident report provided to the client, including root cause, impact assessment, remediation measures, and preventive actions.
- Lessons learned integrated into the risk management system.
10.4 Contact for Incident Reporting
Email: governance@easylab.ai
Reports are acknowledged within 4 hours during business hours (CET/CEST, Monday--Friday, 09:00--18:00).
11. AI Literacy (Article 4)
Pursuant to Article 4 of the AI Act, Easylab AI ensures that all staff and clients have a sufficient understanding of AI to enable informed use and oversight.
11.1 Staff Training
- All Easylab AI staff involved in custom projects receive training on AI capabilities, limitations, and risks.
- Training covers the EU AI Act requirements, GDPR implications, and ethical AI principles.
- Training is updated annually and upon significant regulatory or technological changes.
11.2 Client Onboarding
- Every custom project includes an AI literacy briefing for the client's relevant personnel.
- Briefing covers: how the AI system works, its capabilities and limitations, known risks, human oversight requirements, and regulatory obligations.
11.3 Documentation
- Each delivered project includes documentation of the AI system's capabilities, limitations, and intended use.
- Instructions for use are provided in clear, accessible language.
- Clients are notified of significant changes to AI models or capabilities that may affect their deployed systems.
12. Deployer Obligations Guide (Article 26)
When the client acts as deployer of a high-risk or limited-risk AI system developed by Easylab AI, the client is responsible for ensuring the following obligations are met:
12.1 Use According to Instructions
- Use the AI system in accordance with the instructions for use provided by Easylab AI.
- Do not use the system for purposes outside the documented intended use without prior consultation.
12.2 Human Oversight
- Assign competent natural persons to oversee the operation of the AI system.
- Ensure oversight personnel have the authority, competence, training, and resources to fulfill their oversight role.
12.3 Input Data Quality
- Ensure that input data is relevant and sufficiently representative for the intended purpose of the system.
- Do not feed the system data that is biased, incomplete, or unfit for the intended purpose.
12.4 Monitoring
- Monitor the operation of the AI system based on the instructions for use.
- Report any serious incidents or malfunctions to Easylab AI without undue delay.
12.5 Record-Keeping
- Keep the logs automatically generated by the AI system for the period specified in the contract (minimum 6 months for minimal/limited risk; minimum 2 years for high-risk or regulated sector deployments per EU AI Act Article 12).
12.6 Information to End Users
- Inform natural persons who are subject to or affected by the AI system that they are interacting with an AI system.
- Provide this information in a clear and timely manner.
12.7 Fundamental Rights Impact Assessment
- If the AI system is high-risk and the deployer is a public body or a private entity providing public services, conduct a fundamental rights impact assessment before putting the system into use (Article 27).
- Easylab AI can assist in conducting this assessment upon request.
Easylab AI support: We provide documentation, training, and ongoing advisory to support clients in fulfilling their deployer obligations. This guide is supplemented by project-specific instructions for use delivered with each system.
13. Contact
| AI Governance | governance@easylab.ai |
| Privacy / DPO | privacy@easylab.ai |
| General Inquiries | jdoussot@easylab.ai |
| Postal Address | TLI S.A. / Easylab AI, 55, allée de la Poudrerie, L-1899 Roeser, Luxembourg |