- Key considerations for educational institutions applying AI techniques after the approval of the EU AI Act.
- It is crucial to incorporate AI techniques and tools into education within a regulatory and ethical framework, which is what LivAI focuses on.
January 11th, 2024, Brussels. With the recent approval of the European AI Act, educational institutions are facing a crucial turning point. Integrating AI techniques and tools into curricula is now essential for preparing students for the future, especially adult learners, who need efficient ways to adapt to technological and digital disruption, which is what the LivAI project promotes.
A balance between innovation and ethical considerations is needed. Educational institutions must prioritize safeguarding student privacy, ensuring fairness and equity in AI-powered systems, and avoiding potential biases or discrimination.
The new AI regulation has thus paved the way for society as a whole to make good use of AI, or at least it is a first step. But what does this mean for educational institutions, both public and private, that want to make use of these tools?
Both the AI Act (provisional version) and the Council’s “General Approach” (25/11/2022) set out the lines to be followed. Although these documents are broad in scope and do not seek to regulate a specific sector such as education, we will contextualise them here for educational institutions.
The regulations
Unacceptable-risk systems (banned):
- AI systems used for social scoring by public authorities, and emotion recognition systems in education.
- AI systems that exploit the vulnerabilities of a specific group of persons due to their age, disability, or a specific social or economic situation.
- The use of ‘real-time’ remote biometric identification and categorisation systems in publicly accessible spaces (using sensitive characteristics).
- AI systems that deploy harmful “manipulative subliminal” techniques with the effect of materially distorting a person’s behaviour.
High-risk systems (strict requirements for their use):
- Facial recognition technologies, for example to assess student attentiveness or to verify a student’s presence at exams.
- Biometric identification and categorisation of natural persons.
- AI systems intended to be used to determine access, admission or to assign natural persons to educational and vocational training institutions or programmes at all levels.
- AI systems intended to be used to evaluate learning outcomes, including when those outcomes are used to steer the learning process of natural persons.
These use cases could be permitted, subject to a conformity assessment and compliance with safety requirements before entering the EU market. Educational institutions should therefore ensure that providers (whether third parties or the institutions themselves):
- Carry out a “self-assessment” showing that they comply with the new requirements and can affix the CE marking.
- For biometric identification, undergo a conformity assessment by a “notified body”.
- Have in place risk and quality management, testing, technical robustness, data training and data governance, transparency, instructions for use, human oversight, and cybersecurity (Chapter 2, Articles 8 to 17).
- If established outside the EU, appoint an authorised representative in the EU.
Limited-risk systems (transparency obligations):
Systems that interact with humans (e.g. chatbots) and AI systems that generate or manipulate image, audio or video content (e.g. deepfakes). When using AI systems such as chatbots, users should be made aware that they are interacting with a machine so they can make an informed decision to continue or step back.
Low- or minimal-risk systems (codes of conduct, free use):
The proposal allows the free use of minimal-risk AI. This includes applications such as AI-enabled video games or spam filters; most AI systems currently used in the EU fall into this category. AI-generated content must still be labelled and detectable.
Educational institutions must therefore check that these products cannot easily be used in a risky way. While the authorities are responsible for market surveillance and providers must have a post-market monitoring system in place, educational entities have a responsibility to report serious incidents and malfunctions, and they are accountable for the effects these may have on their students and their education system.
About LivAI
LivAI is an Erasmus+ project focused on the ethical approach to AI and data in the field of adult education. Led by the Universitat Jaume I (Spain) in collaboration with the Finnova Foundation (Belgium), Konnektable Technologies Ltd (Ireland), EFCoCert (Switzerland), Project Consult (Italy) and UBITECH (Greece), the project has a total budget of €250,000.