EU AI Act
The world's first comprehensive legal framework for Artificial Intelligence, adopted by the EU Parliament in 2024, establishing risk-based requirements for AI systems.
Explanation
The EU AI Act classifies AI systems into four risk categories: unacceptable risk (prohibited), high risk (strictly regulated), limited risk (transparency obligations), and minimal risk (largely unregulated). Marketing AI typically falls into limited or minimal risk categories, but systems for biometric identification or manipulation are strictly regulated or banned.
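The four-tier classification above can be sketched as a simple lookup. This is an illustrative sketch only: the Act defines categories in legal text, not code, and the tool-to-tier mapping below is a simplified assumption, not a legal assessment.

```python
# The four risk tiers of the EU AI Act and their (simplified) consequences.
RISK_TIERS = {
    "unacceptable": "prohibited",
    "high": "strictly regulated",
    "limited": "transparency obligations",
    "minimal": "largely unregulated",
}

# Hypothetical, simplified mapping of common marketing AI tools to tiers;
# real categorization depends on the specific use case and legal review.
TOOL_TIERS = {
    "content_generator": "minimal",
    "personalization_engine": "minimal",
    "chatbot": "limited",
    "biometric_ad_targeting": "unacceptable",
}

def obligations(tool: str) -> str:
    """Return the (simplified) regulatory consequence for a tool type."""
    tier = TOOL_TIERS.get(tool, "unknown")
    return f"{tool}: {tier} risk -> {RISK_TIERS.get(tier, 'assess individually')}"

print(obligations("chatbot"))
```

In practice such a mapping can serve as a starting point for an internal AI inventory, with each entry then reviewed case by case.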
Marketing Relevance
Marketing teams must assess which of their AI tools fall into which risk category. Personalization systems, chatbots, and content generators may require transparency notices. Compliance requirements are phased in from February 2025 (prohibited practices) through August 2026 (most other obligations).
Example
A marketing chatbot must clearly indicate that users are interacting with an AI. Emotion recognition for ad targeting is prohibited in certain contexts. For high-risk systems, technical documentation and risk assessments are mandatory.
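The chatbot disclosure requirement can be illustrated with a minimal sketch. The function name and the disclosure wording are assumptions for illustration; the Act requires that users be informed they are interacting with an AI, but prescribes no specific text.

```python
# Illustrative sketch: prepend an AI disclosure to the first message of a
# chatbot session, as one way to meet a limited-risk transparency obligation.
AI_DISCLOSURE = "Note: You are chatting with an AI assistant."

def with_disclosure(reply: str, disclosed: bool) -> tuple[str, bool]:
    """Add the AI disclosure once per session, then pass replies through."""
    if not disclosed:
        return f"{AI_DISCLOSURE}\n{reply}", True
    return reply, True

reply, disclosed = with_disclosure("How can I help you today?", disclosed=False)
print(reply)
```

A design note: tracking the disclosure per session, rather than per message, keeps the notice visible without cluttering every reply.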
Common Pitfalls
Underestimating categorization: many marketing tools could fall under "limited risk" and thus carry transparency obligations. Missing or incomplete documentation during audits. Fines of up to €35 million or 7% of global annual turnover, whichever is higher, for the most serious violations.
Origin & History
The EU AI Act was proposed by the European Commission in April 2021, approved by the EU Parliament in March 2024, and entered into force in August 2024. As the first comprehensive AI regulation worldwide, it has become a reference point for AI legislation in other jurisdictions.