- AI integration in education offers new learning methods but poses challenges like privacy risks and bias perpetuation.
- Guidelines for educators emphasize thoughtful AI use planning, data analysis, and pilot testing for effective implementation.
- Educational institutions should ensure AI systems align with GDPR, prioritize data security, and collaborate effectively with service providers.
Brussels, 13/05/2024.
The integration of AI systems into the educational process offers new ways of learning, teaching, and operating educational institutions efficiently. However, it also presents new challenges for educators who must be aware of the risks to student privacy and the potential discrimination that could arise from perpetuating biases.
The LivAi project brings you a brief guide for administrators of educational institutions, teachers, and anyone interested, following the criteria outlined in the European Commission report “Ethical Guidelines on the Use of Artificial Intelligence (AI) and Data in Teaching and Learning for Educators” (2022).
It will help administrators and teachers question the new methodologies and technologies they will implement in their classrooms, engaging in a positive, critical, and ethical manner. They should ensure that the AI and data analytics systems they use prioritize security, ease of management, and are designed for the common good.
1. Planning for Effective Use of AI and Data in School
When considering the use of AI and data, conduct a collaborative and thoughtful internal school review process.
- Examine how AI systems can positively support teaching and student learning. Conduct a SWOT analysis to identify strengths, weaknesses, opportunities, and threats.
- Predict the consequences and impact of using data and AI in education.
- Use an incremental approach to make the risks manageable and allow flexibility in view of changes. Plan a gradual introduction of these technologies, considering the possible risks, how to monitor them, and how to reduce or stop their execution in case of unexpected effects.
2. Review current AI systems and data use
In accordance with the European Union’s General Data Protection Regulation (GDPR), initiate an analysis of the data to be collected: its quality, how much is actually needed, at what level of specificity, for how long it will be retained, and how it will be stored securely. This process should culminate in a set of policies and procedures established before AI systems are implemented. These measures aim to anticipate problems, deal with them when they arise, and communicate them to educators, students, or their legal representatives.
These policies should include, at a minimum:
- Ensuring the public procurement of trustworthy and human-centric AI.
- Implementing human oversight.
- Verifying that input data aligns with the intended purpose of the AI system.
- Providing appropriate staff training.
- Monitoring the AI system’s operations and taking corrective actions.
- Adhering to relevant GDPR obligations, such as conducting a data protection impact assessment.
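As an illustration, the data-minimization and retention questions above can be made checkable rather than left as prose. The sketch below is hypothetical: the data categories, retention periods, and function names are assumptions for illustration, not part of the Commission guidelines or the GDPR text.

```python
from datetime import date, timedelta

# Hypothetical data-minimization policy: which student data categories
# may be collected and for how long they may be retained.
POLICY = {
    "attendance":    {"allowed": True,  "retention_days": 365},
    "grades":        {"allowed": True,  "retention_days": 730},
    "biometric":     {"allowed": False, "retention_days": 0},  # sensitive data
    "browsing_logs": {"allowed": False, "retention_days": 0},
}

def audit_record(category: str, collected_on: date, today: date) -> str:
    """Return the action required for a stored record under the policy."""
    rule = POLICY.get(category)
    if rule is None or not rule["allowed"]:
        return "delete: category not permitted"
    if today - collected_on > timedelta(days=rule["retention_days"]):
        return "delete: retention period exceeded"
    return "keep"

# Example: a grade recorded three years ago exceeds the two-year retention rule.
print(audit_record("grades", date(2021, 5, 13), date(2024, 5, 13)))
```

Encoding the policy in machine-readable form makes periodic audits and the GDPR data protection impact assessment easier to document and repeat.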
3. Carry out a pilot of the AI system
The next step is to implement the system through a series of tests, preferably with a pilot group of learners. This requires a plan for evaluating system outcomes, for example improvements in student learning, the level of user security (emotional and mental well-being as well as data security), financial or energy cost, and other relevant metrics.
To protect user privacy, data can be kept on the institution’s own servers rather than the provider’s. Similarly, information can be collected from groups larger than 25 people to avoid associating data with a specific person.
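The minimum-group-size rule above can be enforced mechanically when reporting pilot results. The following is a minimal sketch under assumed names (`aggregate_scores`, the record format, and the suppression message are illustrative, not from the guide); only the threshold of 25 comes from the text.

```python
MIN_GROUP_SIZE = 25  # threshold suggested in the guide

def aggregate_scores(records, min_size=MIN_GROUP_SIZE):
    """Report the average score per group, suppressing any group too
    small to be safely de-identified."""
    groups = {}
    for group, score in records:
        groups.setdefault(group, []).append(score)
    report = {}
    for group, scores in groups.items():
        if len(scores) < min_size:
            report[group] = "suppressed (group too small)"
        else:
            report[group] = sum(scores) / len(scores)
    return report

# Example: 30 learners in class A are reported; 5 in class B are suppressed.
data = [("A", 80)] * 30 + [("B", 95)] * 5
print(aggregate_scores(data))
```

Suppressing small groups at the reporting layer means evaluators never see statistics that could be traced back to an individual learner.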
4. Collaborate with the AI system provider
If using a service provider, carefully examine the documentation and the Service Level Agreement (SLA), ensuring compliance with the GDPR and with the EU Artificial Intelligence Act (AI Act). Similarly, verify the provider’s availability during implementation and the maintenance services they will offer. Users of AI systems also have the right and responsibility to report to the provider any inconsistencies, dangers, or security issues that arise from the service. Additionally, school administrators must establish boundaries to maintain independence from the system vendor.
In conclusion, a good AI governance framework in educational institutions will be characterized by administrators’ and teachers’ awareness that AI systems are subject to national and EU regulation; of the risks of AI use cases in education; of the AI Act and how it affects educational institutions; and of how to recognize work produced or manipulated by AI.
Similarly, a good data governance system in educational institutions will be characterized by administrators’ and teachers’ recognition of: the different ways in which student data is used; how national and EU regulations govern the processing of personal data; who has access to student data, how that access is controlled, and how long the data is retained; the right of all EU citizens not to be subject to fully automated decision-making; examples of sensitive data, including biometric data; and how to weigh the benefits and risks before allowing third parties to process personal data when using AI systems.