EU AI Act
The EU AI Act is no longer a future issue. Fines can reach EUR 35 million or 7% of global annual turnover for prohibited practices, and up to EUR 15 million or 3% for high-risk violations. This page translates the legal text into practical operational requirements.
Max fine
EUR 35M / 7%
High-risk fine
EUR 15M / 3%
Full enforcement
02 Aug 2026
February 2, 2025
Active now: prohibitions on unacceptable-risk AI practices are enforceable
Any such use (for example social scoring, manipulative AI, or emotion recognition in schools or workplaces) must be shut down immediately.
August 2, 2025
GPAI model obligations apply
Providers of general-purpose AI models must meet transparency, documentation, and governance duties.
August 2, 2026
Annex III high-risk obligations are fully enforceable
High-risk systems must run with complete logging, oversight controls, and regulator-ready technical documentation.
Classify your use cases in minutes before enforcement timelines set your roadmap for you.
01 Prohibited (unacceptable risk)
Social scoring, covert psychological manipulation, and emotion recognition in schools and workplaces.
Action: Decommission or shut down immediately.
02 High risk (Annex III)
Employment and HR decisions, credit scoring, essential public services, and critical infrastructure operations.
Action: Implement continuous logging, full technical documentation, and functional human oversight.
03 Limited risk (transparency obligations)
Chatbots and generative AI experiences.
Action: Deliver transparency notices and machine-readable labeling for AI-generated output.
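As a rough illustration of the labeling duty, generated content can carry a machine-readable provenance record. The schema below is purely illustrative; the Act requires machine-readable marking but does not mandate these exact field names:

```python
from datetime import datetime, timezone

def label_ai_output(text: str, model_name: str, provider: str) -> dict:
    """Wrap generated text in a machine-readable AI-provenance label.

    Field names are illustrative, not mandated by the EU AI Act.
    """
    return {
        "content": text,
        "ai_generated": True,
        "generator": {"model": model_name, "provider": provider},
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
```

In practice the label would travel with the content (as a sidecar file, response header, or embedded metadata) so downstream systems can detect AI-generated output automatically.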
GRC, audit, and IT operations teams do not need more legal theory; they need systems that satisfy these mandates continuously.
Article 12
High-risk systems must automatically record events over their lifetime, with logs detailed enough to reconstruct system behavior.
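A minimal sketch of what automated event logging could look like in practice: structured, timestamped records appended to a write-once log. The event types and field names are illustrative assumptions, not prescribed by Article 12:

```python
import json
from datetime import datetime, timezone

def log_event(log_path: str, system_id: str, event_type: str, payload: dict) -> dict:
    """Append one structured event to an append-only JSON Lines log.

    Illustrative sketch: real deployments would also need retention
    policies and tamper-evident storage.
    """
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "event_type": event_type,  # e.g. "inference", "override", "input_anomaly"
        "payload": payload,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

JSON Lines keeps each event independently parseable, which is what makes later reconstruction of system behavior tractable.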
Article 13
Teams must produce deployer-ready instructions on expected accuracy, robustness, and limitations, often resulting in 40- to 120-page Annex IV dossiers.
Article 14
Passive review is not enough. Operators need practical interfaces to pause, override, or reverse decisions before harm occurs.
Article 73(6)
When serious incidents happen, system state and evidence must be frozen immediately without tampering.
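One common technique for tamper-evident preservation is a digest manifest: record a cryptographic hash of every evidence file at freeze time, so any later modification is detectable. This sketch (function names are our own, not from the Act) shows the idea:

```python
import hashlib
import json
from pathlib import Path

def seal_evidence(files, manifest_path: str) -> dict:
    """Freeze incident evidence by recording a SHA-256 digest per file,
    plus a digest over the manifest itself. Illustrative sketch only."""
    entries = [
        {"file": str(f), "sha256": hashlib.sha256(Path(f).read_bytes()).hexdigest()}
        for f in files
    ]
    manifest = {
        "entries": entries,
        "manifest_sha256": hashlib.sha256(
            json.dumps(entries, sort_keys=True).encode()
        ).hexdigest(),
    }
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))
    return manifest

def verify_evidence(manifest_path: str) -> bool:
    """Return True only if every sealed file still matches its recorded digest."""
    manifest = json.loads(Path(manifest_path).read_text())
    return all(
        hashlib.sha256(Path(e["file"]).read_bytes()).hexdigest() == e["sha256"]
        for e in manifest["entries"]
    )
```

A production setup would additionally store the manifest (or its digest) outside the system under investigation, so the custody record itself cannot be rewritten.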
Article 86
Affected individuals can demand meaningful explanations for decisions such as job rejection, credit denial, or service access outcomes.
Leksly is built to operationalize EU AI Act obligations inside your delivery pipeline, rather than treating compliance as quarterly paperwork.
Article 13
Manual Annex IV documentation drains legal and engineering teams.
Generate Annex IV technical dossiers from live system telemetry instead of spreadsheets and ad hoc templates.
Article 73(6)
Incident evidence is often incomplete or mutable when regulators ask.
Preserve records in a persistent cryptographic forensic vault with immutable custody and verifiable integrity.
Article 86
Teams often cannot explain in time why a model-driven decision was made.
Use real-time explanation APIs to retrieve decision context, controls, and reasoning traces for subject requests.
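To show the retrieval side of such a request, here is a minimal decision-context store keyed by decision ID. The class and field names are illustrative assumptions, not Leksly's actual API:

```python
from dataclasses import dataclass, asdict
from typing import List, Optional, Tuple

@dataclass
class DecisionRecord:
    """Illustrative context captured at decision time for later explanation."""
    decision_id: str
    outcome: str
    model_version: str
    top_factors: List[Tuple[str, float]]  # e.g. [("income", -0.4)]
    human_reviewer: Optional[str] = None

class ExplanationStore:
    """In-memory stand-in for the store behind an explanation API."""
    def __init__(self):
        self._records = {}

    def record(self, rec: DecisionRecord) -> None:
        self._records[rec.decision_id] = rec

    def explain(self, decision_id: str) -> dict:
        rec = self._records.get(decision_id)
        if rec is None:
            raise KeyError(f"unknown decision {decision_id}")
        return asdict(rec)
```

The key design point is that explanation context is captured at decision time, not reconstructed afterwards, which is what makes timely responses to subject requests possible.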
Leksly is in active development, and we are onboarding a limited set of beta partners before public launch. Request a briefing to reserve a slot and discuss early-adopter terms.
This page is a product and engineering briefing and does not replace legal advice.