Today, 1 August, the European Artificial Intelligence Act (AI Act), the world’s first comprehensive regulation on artificial intelligence, enters into force. The AI Act is designed to ensure that AI developed and used in the EU is trustworthy, with safeguards to protect people’s fundamental rights. The LivAI project welcomes the entry into force of this regulation, which aims to establish a harmonised internal market for AI in the EU, encouraging the adoption of the technology and creating an environment conducive to innovation and investment, in line with the objectives of the project.
The AI Act introduces a forward-looking definition of AI, based on a product-safety, risk-based approach, and classifies AI systems into four levels of risk:
- Minimal risk: most AI systems, such as AI-enabled recommender systems and spam filters, fall into this category. These systems face no obligations under the AI Act because of their minimal risk to citizens’ rights and safety. Companies may voluntarily adopt additional codes of conduct.
- Specific transparency risk: AI systems such as chatbots must clearly disclose to users that they are interacting with a machine. Certain AI-generated content, including deep fakes, must be labelled as such, and users must be informed when biometric categorisation or emotion recognition systems are used. In addition, providers will have to design their systems so that synthetic audio, video, text and image content is marked in a machine-readable format and can be detected as artificially generated or manipulated (see the sketch after this list).
- High risk: AI systems identified as high-risk will have to meet stringent requirements, including risk-mitigation systems, high-quality data sets, activity logging, detailed documentation, clear information for users, human oversight, and a high level of robustness, accuracy and cybersecurity. Regulatory sandboxes will facilitate responsible innovation and the development of compliant AI systems. Such high-risk AI systems include, for example, AI systems used for recruitment, to assess loan eligibility, or to operate autonomous robots.
- Unacceptable risk: AI systems considered a clear threat to the fundamental rights of individuals will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users’ free will, such as voice-assisted toys that encourage dangerous behaviour by minors, systems that enable “social scoring” by governments or companies, and certain predictive policing applications. In addition, some uses of biometric systems will be banned, for example emotion recognition systems used in the workplace, some systems for categorising people, and real-time remote biometric identification for law enforcement purposes in publicly accessible spaces (with narrow exceptions).
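The machine-readable marking obligation mentioned above can be made concrete with a small example. The sketch below, in Python, writes a detached provenance record next to a piece of synthetic content; the AI Act does not prescribe a format, so the `mark_ai_generated` helper and its field names are illustrative assumptions rather than an official schema.

```python
import json
import hashlib
from datetime import datetime, timezone
from pathlib import Path

def mark_ai_generated(content_path: str, generator: str) -> Path:
    """Write a machine-readable sidecar declaring a file AI-generated.

    The field names below are illustrative assumptions, not an official
    AI Act schema: the Act requires machine-readable, detectable marking
    but does not prescribe a format.
    """
    content = Path(content_path).read_bytes()
    record = {
        "ai_generated": True,  # the core disclosure
        "generator": generator,  # e.g. the model or tool that produced the file
        "content_sha256": hashlib.sha256(content).hexdigest(),  # ties the marker to this exact file
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }
    sidecar = Path(content_path + ".provenance.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar

# Usage (hypothetical file): mark_ai_generated("clip.mp4", "example-video-model-v1")
```

In practice, providers would more likely embed such provenance inside the file itself, for example via C2PA-style content credentials or watermarking, but a detached record illustrates the same machine-readable principle.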
To complement this system, the AI Act also introduces rules for so-called general-purpose AI models: high-capacity models designed to perform a wide variety of tasks, such as generating human-like text. General-purpose AI models are increasingly used as components of AI applications. The AI Act will ensure transparency along the value chain and address the potential systemic risks of the most capable models.
Implementation and enforcement of AI rules
Member States have until 2 August 2025 to designate national competent authorities, which will oversee the application of the rules for AI systems and carry out market surveillance activities. The Commission’s AI Office will be the key body implementing the AI Act at EU level, as well as the enforcer of the rules for general-purpose AI models.
Three advisory bodies will support the implementation of the rules. The European Artificial Intelligence Board will ensure uniform application of the AI Act across EU Member States and act as the main body for cooperation between the Commission and the Member States. A scientific panel of independent experts will provide technical advice and input on the implementation of the Act; in particular, this panel may issue alerts to the AI Office about risks associated with general-purpose AI models. The AI Office may also receive guidance from an advisory forum composed of a diverse set of stakeholders.
Companies that do not comply with the rules will be fined. Fines can reach up to 7% of annual global turnover for violations involving prohibited AI applications, up to 3% for violations of other obligations, and up to 1.5% for supplying incorrect information.
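As a rough illustration of how these ceilings scale with company size, here is a minimal sketch computing the turnover-based maximum fine for a hypothetical company; the percentages are those cited above, while the company figure is invented for the example.

```python
# Maximum fine ceilings under the AI Act, as percentages of
# annual global turnover (figures cited in the text above).
FINE_CEILINGS = {
    "prohibited_ai_practice": 0.07,   # up to 7%
    "other_obligation": 0.03,         # up to 3%
    "incorrect_information": 0.015,   # up to 1.5%
}

def max_fine(annual_global_turnover_eur: float, violation: str) -> float:
    """Return the upper bound of the turnover-based fine for a violation type."""
    return annual_global_turnover_eur * FINE_CEILINGS[violation]

# A hypothetical company with EUR 2 billion in annual global turnover:
turnover = 2_000_000_000
print(max_fine(turnover, "prohibited_ai_practice"))  # prints 140000000.0, i.e. up to EUR 140 million
```

Note that the AI Act also sets fixed ceilings in absolute euro amounts alongside these percentages, so the sketch captures only the turnover-based component mentioned above.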
Next steps
Most of the rules of the AI Act will start to apply on 2 August 2026. However, the prohibitions on AI systems deemed to present an unacceptable risk will apply after just six months, while the rules for so-called general-purpose AI models will apply after 12 months.
To bridge the transitional period before full implementation, the Commission has launched the AI Pact. This initiative invites AI developers to adopt the key obligations of the AI Act voluntarily, ahead of the legal deadlines.
The Commission is also developing guidelines to define and detail how the AI Act should be implemented, and to provide co-regulatory instruments such as standards and codes of practice. The Commission launched a call for expressions of interest to participate in drawing up the first General-Purpose AI Code of Practice, as well as a multi-stakeholder consultation that gave all interested parties the opportunity to have their say on this first Code of Practice under the AI Act.
In conclusion, the entry into force of the EU Artificial Intelligence Act marks a significant milestone in the global regulation of AI, ensuring that its development and use in the EU are trustworthy and protect the fundamental rights of individuals. This development reinforces the core values of the LivAI project, which seeks to promote the ethical and safe use of AI in adult education. We welcome this initiative, which encourages responsible innovation and investment in advanced technologies, in line with our mission to empower educators and administrators to integrate AI into their practices effectively and ethically. The LivAI team believes this is a crucial step towards a future in which artificial intelligence empowers rather than excludes, ensuring that all citizens, regardless of age, can benefit from the opportunities this technology offers.
About LivAI
LivAI is an Erasmus+ project with a focus on the ethical approach to AI and data in the field of adult education. Led by the Universitat Jaume I (Spain) in collaboration with the Finnova Foundation (Belgium), Konnektable Technologies Ltd (Ireland), EFCoCert (Switzerland), Project Consult (Italy) and UBITECH (Greece), the project has a total budget of €250,000.