Fundamental Rights Impact Assessment for AI Systems
CBRNE Ltd, in collaboration with KU Leuven, developed one of the pioneering Fundamental Rights Impact Assessment (FRIA) methodologies under the Horizon 2020 ALIGNER project. Published in 2023, this methodology addresses the ethical and legal challenges associated with deploying high-risk AI systems in law enforcement. The FRIA methodology provides a structured framework to evaluate and mitigate the impact of AI tools on fundamental rights, while embedding ethical and trustworthy AI principles into operational practices.
The FRIA methodology comprises two complementary templates designed to support law enforcement agencies (LEAs) in addressing the ethical and legal challenges associated with AI deployment. The Fundamental Rights Impact Assessment Template assists LEAs in identifying and assessing the potential impacts of AI systems on fundamental rights, including equality, non-discrimination, freedom of expression, and privacy. By providing a detailed assessment process across 17 identified challenges, this template enables LEAs to proactively address concerns related to fairness, accountability, and privacy. The AI System Governance Template focuses on embedding ethical and trustworthy AI principles by outlining 43 minimum standards covering transparency, diversity, technical robustness, and societal wellbeing. This template guides LEAs in adopting governance practices that mitigate risks and uphold fundamental rights.
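To make the structure of the two templates concrete, the sketch below models them as simple checklists in Python. This is purely illustrative: all item identifiers, descriptions, and categories are hypothetical, and the actual 17 challenges and 43 minimum standards are defined in the ALIGNER templates themselves.

```python
from dataclasses import dataclass, field

@dataclass
class AssessmentItem:
    """One challenge (FRIA template) or minimum standard (governance template)."""
    identifier: str
    description: str
    affected_rights: list[str] = field(default_factory=list)  # e.g. ["privacy"]
    completed: bool = False
    findings: str = ""

@dataclass
class Template:
    """A FRIA or governance template as an ordered checklist of items."""
    name: str
    items: list[AssessmentItem]

    def open_items(self) -> list[AssessmentItem]:
        """Items the multidisciplinary team still needs to assess."""
        return [item for item in self.items if not item.completed]

# Hypothetical excerpt; the real templates define 17 challenges and 43 standards.
fria = Template(
    name="Fundamental Rights Impact Assessment Template",
    items=[
        AssessmentItem("C01", "Risk of discriminatory outcomes",
                       affected_rights=["equality", "non-discrimination"]),
        AssessmentItem("C02", "Processing of personal data",
                       affected_rights=["privacy"]),
    ],
)

governance = Template(
    name="AI System Governance Template",
    items=[
        AssessmentItem("S01", "Document the system's decision logic",
                       affected_rights=["transparency"]),
    ],
)

for template in (fria, governance):
    print(template.name, "- open items:", len(template.open_items()))
```

A structure like this would let a team track which challenges and standards remain open as an AI system moves through its lifecycle, though the methodology itself is documented as paper templates rather than software.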
The methodology is designed to be practical and operational, enabling multidisciplinary teams of legal, ethical, and technical experts to perform comprehensive assessments throughout the AI lifecycle. It helps ensure that AI systems align with ethical and legal standards, mitigating risks such as algorithmic bias, lack of transparency, and privacy violations. Moreover, it facilitates ongoing monitoring and updates as AI systems evolve, provides tools for embedding ethical standards into governance frameworks, and addresses societal and operational concerns by involving stakeholders in the decision-making process.
The FRIA methodology has been recognised for its contribution to AI ethics and has been published as an original article in the journal AI and Ethics.
The templates and a detailed handbook are available for download from the ALIGNER project website under "Fundamental Rights Impact Assessment".