GuardAI project: Robust and Secure Edge AI Systems for Safety-Critical Applications


The European research project GuardAI (Enhancing Robustness and Security of Edge AI Systems for Safety-Critical Applications) was officially launched on October 10th with a kick-off meeting that brought together all project partners in Cyprus, hosted by the KIOS Center of Excellence at the University of Cyprus.

The new GuardAI project seeks to enhance the security of edge AI systems, such as drones, connected and autonomous vehicles, and 5G network edge infrastructure. The machine learning algorithms used in these systems can be susceptible to errors caused by even small changes, random noise, or glitches in the data they process. Addressing these critical vulnerabilities is essential for maintaining trust in such systems and enabling them to undertake critical operations.

This will be achieved by (1) developing an innovative protection layer for machine learning algorithms to ensure the integrity, security, and resilience of these systems, (2) integrating context indicators and holistic situational understanding into AI algorithms, enabling systems to adapt and make informed decisions in dynamic environments, and (3) collaborating with researchers, industry experts, government agencies, and AI practitioners to lay the groundwork for future certification schemes that promote the adoption of secure AI technology across various domains. Ultimately, GuardAI is committed to developing cutting-edge, secure, and robust solutions tailored to the specific needs of edge AI, safeguarding critical infrastructure and systems.

The GuardAI project is coordinated by the KIOS Research and Innovation Center of Excellence at the University of Cyprus and involves 9 EU partners and 1 associate partner from Cyprus, Greece, Italy, Austria, and the UK. The consortium brings together diverse expertise, uniting technical prowess, research acumen, and industrial collaboration to fortify edge AI systems.

The KIOS CoE will develop algorithms for enhancing the robustness of machine learning algorithms and will also lead the use case on surveillance and monitoring with AI-enabled drones, which will showcase the project's solutions.

According to the Project’s Coordinator, Associate Professor Theocharis Theocharides, “The GuardAI project aligns with the objectives of the EU AI Act, the first-ever legal framework on AI, which supports the development of trustworthy AI. By developing novel AI defence mechanisms, the project will enhance resilience against adversarial attacks and data manipulation of edge AI systems”.

This project has received funding from the European Union’s Horizon Europe research and innovation programme under grant agreement No 101168067.

Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.