Exploring Strategies to Safeguard AI Systems Against Cybersecurity Threats
The complexity, interconnectivity, and data reliance of AI systems make them prime targets for cyber-attacks and susceptible to critical failures with tangible repercussions. SHASAI will implement and validate its methodologies in three practical scenarios that address cybersecurity risks:
- AI-powered cutting machines in the agrifood sector
- Eye-tracking systems utilized in assistive healthcare technologies
- A tele-operated last-mile delivery vehicle within the mobility sector
These varied applications will enable the research team to assess its approach across diverse industries, ensuring transferability to other AI domains. The project’s ultimate goal is to establish a robust, adaptable, and trustworthy security framework that upholds the resilience and traceability of AI systems, as well as their compliance with evolving cybersecurity standards, in high-stakes environments.
Empowering Europe’s Endeavors to Foster Dependable AI
By translating overarching cybersecurity and AI safety principles into tangible technical protocols, SHASAI contributes to Europe’s broader mission of promoting trustworthy AI. In alignment with key EU directives and strategies, including the EU AI Act, the Cyber Resilience Act (CRA), the NIS2 Directive, and the EU Cybersecurity Strategy, the project consortium combines expertise from various sectors to drive innovation and progress. SHASAI begins on November 1, 2025, and will run until the end of April 2029.