This section includes authoritative reports, frameworks, and practical guidelines from international bodies, standardization organizations, and expert communities. These documents offer strategic and technical direction for developing, regulating, and deploying secure and resilient AI systems.

| Title | Publisher | Year | Description | Link |
|---|---|---|---|---|
| Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations | NIST | 2025 | This NIST report offers a comprehensive taxonomy and standardized terminology for adversarial machine learning, detailing attack types such as evasion, poisoning, and privacy breaches, along with corresponding mitigation strategies across predictive and generative AI systems (a minimal illustrative evasion sketch follows the table). | NIST |
| Foresight Cybersecurity Threats for 2030 | ENISA | 2024 | This ENISA report outlines the top 10 emerging cybersecurity threats projected for 2030, including software supply chain compromises, AI misuse, and digital authoritarianism, emphasizing the need for proactive, cross-sectoral resilience strategies. | ENISA |
| Cybersecurity Best Practices for the Safety of Modern Vehicles | U.S. Department of Transportation | 2022 | This document offers updated cybersecurity guidance for modern vehicles, promoting a layered, risk-based approach and industry-wide collaboration. | U.S. Department of Transportation |
| Cybersecurity Challenges in the Uptake of Artificial Intelligence in Autonomous Driving | ENISA | 2021 | This report examines cybersecurity challenges in AI-driven autonomous vehicles, highlighting vulnerabilities like adversarial attacks on perception systems and recommending lifecycle security assessments, supply chain governance, and incident response enhancements. | ENISA |
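The NIST taxonomy entry above names evasion, poisoning, and privacy attacks. As a purely illustrative aid, not drawn from the report itself, the sketch below shows the basic shape of an evasion attack using the Fast Gradient Sign Method (FGSM) against a toy PyTorch classifier; the model architecture, input dimensions, and epsilon budget are assumptions chosen only for demonstration.

```python
# Minimal sketch of an evasion attack (FGSM) on a toy, untrained PyTorch
# classifier. All values here are illustrative assumptions, not from the report.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for any differentiable classifier.
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

x = torch.randn(1, 20, requires_grad=True)  # clean input
y = torch.tensor([0])                       # assumed true label
epsilon = 0.1                               # L-infinity perturbation budget

# Compute the loss gradient with respect to the input.
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

# FGSM: step in the sign of the input gradient, scaled by epsilon,
# to push the input toward higher loss (i.e., misclassification).
x_adv = (x + epsilon * x.grad.sign()).detach()

with torch.no_grad():
    # With this untrained toy model the prediction may or may not flip;
    # against a trained model, larger budgets typically cause misclassification.
    print("clean prediction:      ", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

With a trained model, this one-step perturbation is often enough to change the prediction while remaining nearly imperceptible; defenses such as adversarial training aim to blunt exactly this gradient-following behavior.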