EU Artificial Intelligence Act
New EU Definition of AI
“AI system” means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
A system that uses rules defined solely by natural persons to automatically execute operations should not be considered an AI system.
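The distinction the definition draws, between rules written entirely by people and a system that infers how to generate outputs from its input, can be sketched in code. This is an illustrative toy, not a legal test; all names and thresholds below are hypothetical.

```python
# Illustrative sketch only -- not a legal test under the Act.

def rule_based_filter(message: str) -> bool:
    """Rules defined solely by a natural person: NOT an AI system
    under the definition above."""
    banned = {"free money", "click here"}
    return any(phrase in message.lower() for phrase in banned)

def fit_threshold(samples: list[tuple[int, bool]]) -> int:
    """A system that *infers* how to generate outputs from the input it
    receives: here, learning a link-count threshold from labelled examples."""
    spam_counts = [n for n, is_spam in samples if is_spam]
    ham_counts = [n for n, is_spam in samples if not is_spam]
    # Midpoint between the two classes -- a toy stand-in for statistical learning.
    return (min(spam_counts) + max(ham_counts)) // 2

samples = [(3, False), (5, False), (12, True), (15, True)]
print(rule_based_filter("Click HERE for free money"))  # True
print(fit_threshold(samples))                          # 8
```

The first function's behaviour is fixed entirely by its author; the second derives its decision rule from data, which is the "infers, from the input it receives, how to generate outputs" element of the definition.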
Most provisions of the AI Act will apply two years after it enters into force.
The MEPs' agreement to progress the AI Act is historic. The proposal had been under discussion for several years and was amended to take account of MEPs' views. Following that agreement, the final legal text will be settled, translated into the EU's official languages, and will apply across every country in the EU.
What is in the Act?
The EU AI Act is a comprehensive regulatory framework designed to ensure the safe and ethical development, deployment, and use of artificial intelligence (AI) systems within the EU. Here is a summary of its key provisions:
- Objective and Scope:
- The Act aims to improve the functioning of the internal market by promoting human-centric and trustworthy AI while ensuring high levels of protection for health, safety, and fundamental rights.
- It applies to providers, deployers, importers, and distributors of AI systems within the Union, excluding AI for national security, military, research purposes, and personal non-professional use.
- Prohibited AI Practices:
- The Act prohibits AI systems that manipulate behaviour, exploit vulnerabilities, carry out social scoring, predict criminality based solely on profiling, scrape facial images to build facial-recognition databases, infer emotions in workplaces or schools, or use biometric categorization or real-time remote biometric identification, subject to narrow exceptions.
- High-Risk AI Systems:
- High-risk AI systems are subject to strict requirements, including risk management, data governance, transparency, human oversight, accuracy, robustness, and cybersecurity.
- These systems include those used in critical infrastructure, education, employment, essential services, law enforcement, migration, border control, and justice.
- Transparency and Information Requirements:
- AI systems must provide clear and understandable information to users about their capabilities and limitations. This includes informing individuals when they are interacting with an AI system, especially in cases of emotion recognition or biometric categorization.
- Governance and Enforcement:
- The Act establishes a governance framework involving national competent authorities, the European Artificial Intelligence Board, and the AI Office to oversee compliance and enforcement.
- Providers and deployers of AI systems must ensure compliance with the Act's requirements and may be subject to penalties for non-compliance.
- Support for Innovation:
- The Act includes measures to support innovation, particularly for small and medium-sized enterprises (SMEs) and startups. This includes regulatory sandboxes and other initiatives to foster the development of ethical and trustworthy AI.
- Relation to Existing Laws:
- The Act complements existing EU laws on data protection, consumer protection, and product safety. It ensures that AI systems comply with these laws while also addressing specific AI-related risks.
- Voluntary Codes of Conduct:
- Providers of non-high-risk AI systems are encouraged to adopt voluntary codes of conduct to promote ethical and trustworthy AI practices [r165-1].
Overall, the EU AI Act seeks to balance the promotion of AI innovation with the protection of fundamental rights and public interests, ensuring that AI systems are developed and used responsibly across the Union.
Risk-Based Approach
The rules follow a risk-based approach and establish obligations for providers and those deploying AI systems depending on the level of risk the AI can generate. AI systems with an unacceptable level of risk to people’s safety would therefore be prohibited, such as those used for social scoring (classifying people based on their social behaviour or personal characteristics).
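The tiered logic described above can be sketched as a simple lookup from use case to risk level to obligation. The tier assignments and obligation wordings below are simplified illustrations for exposition, not an authoritative reading of the Act.

```python
# Toy sketch of the risk-based approach. Tier assignments are
# simplified illustrations, not an authoritative reading of the Act.

RISK_TIERS = {
    "social_scoring": "unacceptable",  # prohibited outright
    "cv_screening": "high",            # employment use -> high-risk obligations
    "chatbot": "limited",              # transparency duties (disclose AI use)
    "spam_filter": "minimal",          # voluntary codes of conduct
}

OBLIGATIONS = {
    "unacceptable": "prohibited",
    "high": "risk management, data governance, human oversight",
    "limited": "transparency to users",
    "minimal": "voluntary codes of conduct",
}

def obligations_for(use_case: str) -> str:
    """Map a use case to its (illustrative) obligations under the Act."""
    tier = RISK_TIERS.get(use_case, "unclassified")
    return OBLIGATIONS.get(tier, "assess case by case")

print(obligations_for("social_scoring"))  # prohibited
print(obligations_for("cv_screening"))    # risk management, data governance, human oversight
```

The key design point the sketch captures is that obligations attach to the risk the use case generates, not to the underlying technology: the same model could fall into different tiers depending on where it is deployed.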