Supporting AI Compliance for the EU Market

Your trusted Authorised Representative under Article 22 & Article 54 of the EU AI Act, supporting seamless regulatory compliance for your AI systems across Europe.

Get Started
AI Compliance and Digital Technology

About Trustora Digital

Trustora Digital serves as your dedicated Authorised Representative under Article 22 & Article 54 of the EU Artificial Intelligence Act, bridging the gap between innovation and compliance.

Our mission is to serve as a reliable point of contact with competent authorities and to support your AI systems' compliance with applicable regulatory obligations throughout their lifecycle.

With deep expertise in AI governance, risk management, and regulatory frameworks, we ease your AI systems' entry into the EU market while you focus on what you do best.

Article 22 & Article 54 of the AI Act

As your Authorised Representative, we act as your designated point of contact with EU authorities, manage documentation requirements, facilitate market surveillance cooperation, and support continuous regulatory compliance for your AI systems whose outputs are used within the European Union.

Our Team

Adam Leon Smith


Chief Executive Officer

Adam Leon Smith is an AI regulatory and technical expert specialising in EU AI Act compliance and risk management for complex AI systems. He is also Chair of the AIQI Consortium, a global initiative to promote the use of the quality infrastructure for responsible AI, and Deputy Chair of the UK’s national AI standards committee. He has led many AI standardisation projects in ISO/IEC SC 42 and CEN-CENELEC JTC 21 supporting the AI Act and the generally acknowledged state of the art, including the Article 17 Quality Management System.

Before his involvement in AI regulation, Adam spent 20 years in senior technology roles, delivering verification and validation solutions for highly complex or high-risk industry challenges. In 2024, the University of Bath awarded Adam an honorary doctorate in recognition of his work and its impact on the profession.

As CEO of Trustora Digital, Adam supports non‑EU providers in meeting their obligations under Articles 22 and 54 of the EU AI Act, with a focus on practical documentation, assurance, and interaction with EU regulators.

Clément Bénesse


Chief Technology Officer

Dr. Clément Bénesse is a researcher specializing in trustworthy artificial intelligence, with a focus on explainability, algorithmic fairness, privacy and the societal impacts of AI systems. His work explores the use of AI as a tool for public interest objectives, including disinformation detection and the alignment of large language models with social values. His technical expertise spans a broad range of methods and tools, from classical statistics and machine learning to state-of-the-art generative AI systems, enabling him to bridge foundational approaches with emerging AI capabilities.

Beyond his technical research, he has led interdisciplinary work at the intersection of computer science and law, contributing to research and policy-oriented initiatives related to the European Union’s artificial intelligence regulatory framework. In this context, he has participated as an expert within AFNOR, supporting the standardization of AI regulation and the development of technical frameworks aligned with legal and regulatory requirements.

In addition, Dr. Clément Bénesse acts as Chief Technology Officer at Trustora, where he defines and oversees the organization’s technical strategy.

Tom Lebrun


Chief Strategy Officer

Dr. Tom Lebrun holds senior leadership roles in artificial intelligence governance and international standardization. At the Standards Council of Canada, he leads the national artificial intelligence and data governance standardization strategy. He also chaired certification and management system committees related to ISO/IEC 42001 on Artificial Intelligence Management Systems. Through this work, he supports the translation of regulatory objectives into operational and interoperable governance frameworks.

In parallel, he provides sustained academic leadership in AI law and governance. He has taught for more than six years at the graduate and doctoral levels at Université Laval and Université Paris Saclay. He has also delivered executive and public policy training for institutions including the Government of Canada, Global Affairs Canada, and the MILA Quebec Artificial Intelligence Institute.

At Trustora Digital, he serves as Chief Strategy Officer. In this role, he leads strategic positioning, regulatory alignment, and the development of governance frameworks.

Our Services

01

Representation & Liaison

Acting as your official point of contact with EU regulatory authorities, managing all communications and ensuring swift response to inquiries and requests.

02

Documentation Management

Maintaining comprehensive technical documentation, declarations of conformity, and compliance records as required under the AI Act regulations.

03

Documentation Review

Identifying potential compliance gaps in your documentation against legal requirements, our expertise, and industry best practice.

04

Market Surveillance

Coordinating with market surveillance authorities, managing incident reporting, and ensuring continuous monitoring of your AI systems' compliance status.

05

Advisory & Consultation

Providing expert guidance on best practices for compliance, and strategic recommendations for your AI deployment in the EU market.

06

Ongoing Compliance

Monitoring regulatory updates, ensuring continued adherence to evolving requirements, and proactively addressing potential compliance challenges.

EU AI Act · Article 3(1)

Is my system an AI system under the AI Act?

Work through the steps in order; each answer leads either to the next step or to an outcome.
Scope & autonomy
Step 1
Is the system machine-based?
Runs as software on hardware (servers, edge, embedded, classical or quantum).
Step 2
Does it operate with some autonomy?
Has some independence of action once inputs are provided (not fully manual).
Outcome
Not an AI system (no machine base).
Outcome
Not an AI system (purely manual behaviour).
Objectives & intelligence
Step 3
Does it have explicit or implicit objectives?
Goals like classification, prediction, recommendation, optimisation, etc.
Step 4
Does it infer how to generate outputs using AI techniques?
Uses ML or logic/knowledge-based reasoning; more than fixed simple rules.
Step 5
Is it only basic data processing or similar excluded functionality?
Basic statistics, descriptive dashboards, trivial baselines, narrow optimisation helpers.
Outcome
Not an AI system (no system objectives).
Outcome
Not an AI system (only basic/excluded behaviour).
Outputs & impact
Step 6
Does it output predictions, content, recommendations or decisions?
Generates outputs that interpret or transform inputs, not just store or show them.
Step 7
Can those outputs influence physical or virtual environments?
Affects devices, processes, user interfaces, digital services, or user behaviour.
Outcome
Not an AI system (no qualifying outputs).
Outcome
Not an AI system (no influence on any environment).
Outcome
Qualifies as an AI system under Article 3(1) AI Act.
Scope only; risk level and obligations are separate questions.
Start at “machine-based system”.
Begin by checking whether the system is implemented as software running on computing hardware. If it is not machine-based, it is outside the AI system definition.
Examples
  • Email spam filter running on servers.
  • Generative text or image service deployed in the cloud.
  • Expert system for medical diagnosis implemented as software.
Answer the question based on how the system behaves in its real use context.
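The screening flow above can be sketched as a simple decision function. This is an illustrative sketch only, not legal advice: the function name, parameter names, and outcome strings are our own shorthand for the seven questions and outcomes in the flowchart, not terms from the AI Act itself.

```python
# Minimal sketch of the Article 3(1) screening flow above.
# Step order and outcome wording mirror the flowchart; all names
# here are illustrative, not drawn from the AI Act text.

def classify_ai_system(
    machine_based: bool,        # Step 1: runs as software on hardware
    some_autonomy: bool,        # Step 2: some independence once inputs are given
    has_objectives: bool,       # Step 3: explicit or implicit objectives
    infers_outputs: bool,       # Step 4: ML or logic/knowledge-based inference
    only_basic_processing: bool,  # Step 5: basic/excluded functionality only
    qualifying_outputs: bool,   # Step 6: predictions, content, recommendations, decisions
    influences_environment: bool,  # Step 7: affects physical or virtual environments
) -> str:
    """Walk the seven screening questions in order and return an outcome."""
    if not machine_based:
        return "Not an AI system (no machine base)"
    if not some_autonomy:
        return "Not an AI system (purely manual behaviour)"
    if not has_objectives:
        return "Not an AI system (no system objectives)"
    if not infers_outputs or only_basic_processing:
        return "Not an AI system (only basic/excluded behaviour)"
    if not qualifying_outputs:
        return "Not an AI system (no qualifying outputs)"
    if not influences_environment:
        return "Not an AI system (no influence on any environment)"
    return "Qualifies as an AI system under Article 3(1)"

# Example: a cloud-deployed spam filter answers yes to every step.
print(classify_ai_system(True, True, True, True, False, True, True))
```

As the flowchart notes, this covers scope only; risk level and the resulting obligations are separate questions.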

FAQ

1. What is an AI Act Authorised Representative?
An AI Act authorised representative is an EU-based natural or legal person appointed by a non-EU provider to act on its behalf for certain compliance tasks under the EU Artificial Intelligence Act, including being the main point of contact for EU authorities.
2. Who is required to appoint an authorised representative?
In general, providers established outside the EU that place high-risk AI systems or general-purpose AI (GPAI) models on the EU market, or put them into service where the outputs are used in the EU, must appoint an authorised representative in the Union, unless they already have a legal presence that can fulfil this role.
3. Do we need an authorised representative for both Article 22 and Article 54?
If you are a non-EU provider of high-risk AI systems, Article 22 applies; if you are a non-EU provider of GPAI models, Article 54 applies. Many organisations fall under one of these categories, and some may need representation for both, depending on their AI portfolio.
4. What does Trustora Digital do as your authorised representative?
Trustora Digital acts as your designated point of contact with EU authorities, manages regulatory communications, and supports you in meeting your obligations under the AI Act.
This includes handling documentation, facilitating market-surveillance cooperation, and supporting continuous monitoring of your AI systems’ compliance status.
5. Which types of AI systems do you cover?
Services cover providers of high-risk AI systems and providers of general-purpose AI models that fall within the scope of the EU AI Act, with a focus on cross-border deployments into the EU market.
6. Are you limited to certain sectors?
No. The service is designed for a wide range of sectors, including SaaS, enterprise software, fintech, health, and other domains where AI systems are placed on the EU market.
7. Where is Trustora Digital established?
Trustora Digital operates from Paris, France, acting as your EU-based authorised representative under Article 22 and Article 54 of the EU AI Act.
8. How does the onboarding process work?
Onboarding typically involves an initial scoping call, collection and review of key technical and compliance documentation, and the signature of a written mandate setting out roles and responsibilities under the AI Act.
9. What documentation do you need from us?
Typical documentation includes technical documentation for your AI system or GPAI model, information on risk-management and governance processes, deployment use cases, and contact details for your internal compliance leads.
10. How do you work with EU authorities on our behalf?
Trustora Digital manages incoming requests from competent authorities and the AI Office, coordinates your responses, and supports you in addressing market-surveillance inquiries and incident reporting obligations in a timely manner.
11. Are you responsible for making our AI system compliant?
You, as the provider, remain primarily responsible for ensuring your AI systems comply with the EU AI Act; the authorised representative supports and enables this by performing specific mandated tasks and acting as your interface with the authorities.
12. Can we terminate the mandate if you are non-compliant?
Yes. Authorised representatives must be able to terminate the mandate if they identify serious or persistent non-compliance that is not remedied, and may be required to inform competent authorities in such cases.
13. Where can I learn more?
You can consult the EU AI Act Harmonised Standards Mapping website for additional information on AI standards.
14. How can we get started?
You can contact Trustora Digital through the website to schedule an initial consultation, after which a tailored proposal and draft mandate can be prepared based on your AI systems and deployment plans.

Get in Touch

Ready to enter the EU market? We're here to help.

Location

50 Avenue des Champs-Élysées, Paris 75008, France