- Security in AI is a critical issue, as highlighted in the FLI AI Safety Index published in December 2024.
- Policymakers must regulate AI use, while end-users must gain knowledge to effectively manage associated risks.
- The Erasmus+ LivAI project aims to address knowledge gaps by providing adult users with the skills needed to engage responsibly with AI.
Security in the field of artificial intelligence (AI) remains one of the most urgent challenges to address, as emphasized by the authors of the 2024 FLI AI Safety Index, published in December 2024. The report assessed the safety practices of six leading AI companies.
The companies surveyed (Anthropic, Google DeepMind, Meta, OpenAI, x.AI, and Zhipu AI) are at the forefront of AI technology. Despite their significant technological advances, the report found that their management of intelligent systems still falls short of high standards.
The findings of the 2024 FLI AI Safety Index revealed substantial gaps in accountability and transparency, along with insufficient capacity to manage the risks associated with AI deployment.
The safety protocols implemented by these companies were deemed inadequate relative to the technological potential of their AI systems. The report's experts underscore the importance of establishing more responsible governance structures that prioritize security.
Why is security so crucial?
The rapid pace of AI development makes it nearly impossible to define clear boundaries for its use and predict all the risks it may entail. However, some risks have already become evident. Among the key concerns highlighted by the authors of the 2024 FLI AI Safety Index are the malicious use of AI and the risks posed by advanced AI systems. The central issue they raise is the need for security measures that ensure AI remains under human control.
To address this, it is essential to engage policymakers while simultaneously fostering responsible practices among end-users. Policymakers must regulate the use of AI to mitigate its risks, while end-users must be equipped with the knowledge to recognize and manage these risks effectively.
The Erasmus+ LivAI project plays a key role in addressing knowledge gaps by equipping adult end-users with the necessary skills to engage responsibly with new technology. The project’s main objectives include creating educational resources on AI and developing an e-learning platform for certifying digital skills, with a particular focus on an ethical approach to AI.
As AI technology continues to evolve, security and protection measures will need to adapt alongside it, making this an ongoing ethical concern. In this context, the project aims to equip individuals with the knowledge and tools required to navigate the ever-changing digital landscape consciously and securely.
About the Erasmus+ LivAI project
The Erasmus+ LivAI project is led by Universitat Jaume I (Spain) in collaboration with Finnova Foundation (Belgium), Konnektable Technologies Ltd (Ireland), EFCoCert (Switzerland), Project Consult (Italy), and UBITECH (Greece). The project has a total budget of EUR 250,000.