
EU AI Act

Executive briefing for the August 2026 deadline

The EU AI Act is no longer a future issue. Fines can reach EUR 35 million or 7% of global annual turnover for prohibited practices, and up to EUR 15 million or 3% for high-risk violations. This page translates the legal text into practical operational requirements.

Max fine

EUR 35M / 7%

High-risk fine

EUR 15M / 3%

Full enforcement

August 2, 2026

Deadlines your team must manage now

February 2, 2025

Active now: prohibited AI practices are enforceable

Any unacceptable-risk use must be shut down immediately (for example, social scoring, manipulative AI, or emotion recognition in schools and workplaces).

August 2, 2025

GPAI model obligations apply

Providers of general-purpose AI models must meet transparency, documentation, and governance duties.

August 2, 2026

Annex III high-risk obligations are fully enforceable

High-risk systems must run with complete logging, oversight controls, and regulator-ready technical documentation.

Quick risk check: are you exposed?

Classify your use cases in minutes before enforcement timelines set your roadmap for you.

01

Unacceptable risk (prohibited)

Social scoring, covert psychological manipulation, and emotion recognition in schools and workplaces.

Action: Neutralize or shut down immediately.

02

High-risk systems (Annex III)

Employment and HR decisions, credit scoring, essential public services, and critical infrastructure operations.

Action: Implement continuous logging, full technical documentation, and functional human oversight.

03

Limited risk

Chatbots and generative AI experiences.

Action: Deliver transparency notices and machine-readable labeling for AI-generated output.

The real burden: core technical mandates

GRC, audit, and IT operations teams do not need more legal theory; they need systems that satisfy these mandates continuously.

Article 12

Automatic event logging

High-risk systems must automatically record events throughout their lifetime, in logs detailed enough to reconstruct and trace system behavior.
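As an illustration of what Article 12-style logging can look like in practice, here is a minimal append-only sketch. The field names are our own assumptions, not mandated by the Act.

```python
import json
import time

def log_event(log_path: str, system_id: str, event: str, payload: dict) -> None:
    """Append one timestamped event as a JSON line (append-only by convention)."""
    record = {
        "ts": time.time(),        # event timestamp (epoch seconds)
        "system_id": system_id,   # which high-risk system produced it
        "event": event,           # e.g. "inference", "override", "error"
        "payload": payload,       # inputs/outputs needed to reconstruct behavior
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, sort_keys=True) + "\n")
```

The append-only JSON Lines format keeps each event independently parseable, so a partial log is still usable for reconstruction.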

Article 13

Transparency to deployers

Teams must produce deployer-ready instructions on expected accuracy, robustness, and limitations, often resulting in Annex IV dossiers of 40 to 120 pages.

Article 14

Human oversight

Passive review is not enough. Operators need practical interfaces to pause, override, or reverse decisions before harm occurs.

Article 73(6)

Incident reporting and evidence preservation

When serious incidents happen, system state and evidence must be frozen immediately without tampering.
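One common technique for tamper-evident freezing is sealing a deterministic snapshot with a cryptographic digest. This is our illustration of the general idea, not a mechanism mandated by the Act.

```python
import hashlib
import json

def freeze_evidence(system_state: dict) -> dict:
    """Serialize system state deterministically and seal it with a SHA-256 digest."""
    blob = json.dumps(system_state, sort_keys=True).encode("utf-8")
    return {"blob": blob, "sha256": hashlib.sha256(blob).hexdigest()}

def verify_evidence(sealed: dict) -> bool:
    """True only if the frozen blob still matches its recorded digest."""
    return hashlib.sha256(sealed["blob"]).hexdigest() == sealed["sha256"]
```

Any later modification of the blob fails verification, which is what regulators need to trust post-incident evidence.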

Article 86

Right to explanation

Affected individuals can demand meaningful explanations for decisions such as job rejection, credit denial, or service access outcomes.
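What a retrievable explanation record might look like, as a hedged sketch: the field names (`factors`, `model_version`) and the in-memory store are our assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    decision_id: str
    outcome: str            # e.g. "credit_denied"
    factors: list[str]      # main factors behind the outcome
    model_version: str      # which model produced the decision

_store: dict[str, DecisionRecord] = {}

def record_decision(rec: DecisionRecord) -> None:
    """Persist a decision record at decision time, not after the fact."""
    _store[rec.decision_id] = rec

def explain(decision_id: str) -> str:
    """Produce a human-readable explanation for a subject request."""
    rec = _store[decision_id]
    return (f"Decision {rec.decision_id}: {rec.outcome} "
            f"(model {rec.model_version}); main factors: {', '.join(rec.factors)}")
```

The key design choice is capturing the explanation inputs when the decision is made; reconstructing them afterwards is exactly what teams find they cannot do in time.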

Compliance as infrastructure

Leksly is built to operationalize EU AI Act obligations in your delivery path, rather than treating compliance as quarterly paperwork.

Article 13

Manual Annex IV documentation drains legal and engineering teams.

Generate Annex IV technical dossiers from live system telemetry instead of spreadsheets and ad hoc templates.

Article 73(6)

Incident evidence is often incomplete or mutable when regulators ask.

Preserve records in a persistent cryptographic forensic vault with immutable custody and verifiable integrity.
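One standard way to get immutable custody is a hash chain, where each entry's digest covers the previous entry's digest. The sketch below shows the general technique, not Leksly's actual implementation.

```python
import hashlib

GENESIS = "0" * 64  # digest used before the first entry

def chain_append(chain: list[dict], record: bytes) -> dict:
    """Append a record whose digest also covers the previous entry's digest."""
    prev = chain[-1]["digest"] if chain else GENESIS
    digest = hashlib.sha256(prev.encode() + record).hexdigest()
    entry = {"record": record, "prev": prev, "digest": digest}
    chain.append(entry)
    return entry

def chain_verify(chain: list[dict]) -> bool:
    """Recompute every link; tampering with any record breaks the chain."""
    prev = GENESIS
    for e in chain:
        expected = hashlib.sha256(prev.encode() + e["record"]).hexdigest()
        if e["prev"] != prev or e["digest"] != expected:
            return False
        prev = e["digest"]
    return True
```

Because each digest depends on all earlier entries, modifying or deleting a single record invalidates everything after it, which is what makes the custody verifiable.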

Article 86

Teams cannot answer why a model-driven decision was made in time.

Use real-time explanation APIs to retrieve decision context, controls, and reasoning traces for subject requests.

Limited beta slots before August 2026

Leksly is in active development, and we are onboarding a limited set of beta partners before public launch. Request a briefing to reserve a slot and discuss early-adopter terms.

This page is a product and engineering briefing and does not replace legal advice.

Request an early access briefing

Compliance as Infrastructure

Leksly is forensic-grade middleware that provides technical evidence for EU requirements without locking you into a single model provider.


© 2026 Leksly. All rights reserved. Forensic-grade AI governance.

info@leksly.com

Data Integrity · Regulatory Certainty · Forensic Custody