Ethical Impact Assessment for AI systems

The Ethical Impact Assessment (EIA) methodology, developed as part of the MAGNETO project, is one of the first comprehensive frameworks to address the ethical implications of what the AI Act now classifies as high-risk AI systems. Published in 2019, it was among the earliest methodologies to operationalise the EU High-Level Expert Group’s Ethics Guidelines for Trustworthy AI, providing law enforcement agencies (LEAs) with tools to integrate ethical principles into the design, development, and deployment of AI systems.

This methodology is publicly available through the CBRNE Ltd publication Ethical Assessment Regarding the Use and Misuse of AI Systems for Law Enforcement: A Handbook for Law Enforcement Officials.

Central to the methodology are three key components. The Ethical Risk Assessment (ERA) Form evaluates the alignment of AI tools with ethical principles, providing developers and decision-makers with guidance throughout the AI lifecycle. The Misuse Risk Assessment (MRA) Form identifies and mitigates risks associated with unethical or unintended uses of AI systems, ensuring safeguards are in place during both design and operational stages. The Risk Matrix further supports these efforts by prioritising ethical risks, assessing their likelihood and severity, and enabling focused mitigation strategies.
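To make the Risk Matrix concrete, the sketch below shows one plausible way to score and rank assessed risks by likelihood and severity. The class names, 1–5 ordinal scales, and banding thresholds are illustrative assumptions for this sketch, not values taken from the MAGNETO handbook itself.

```python
# A minimal sketch of risk-matrix prioritisation, assuming 1-5 ordinal
# scales for likelihood and severity; field names, example risks, and
# thresholds are illustrative, not drawn from the handbook.
from dataclasses import dataclass

@dataclass
class RiskItem:
    description: str   # e.g. a finding from an ERA or MRA form
    likelihood: int    # 1 (rare) .. 5 (almost certain)
    severity: int      # 1 (negligible) .. 5 (critical)

    @property
    def score(self) -> int:
        # Classic risk-matrix product: higher scores demand earlier mitigation.
        return self.likelihood * self.severity

    @property
    def band(self) -> str:
        # Illustrative banding thresholds; a real assessment would calibrate these.
        if self.score >= 15:
            return "high"
        if self.score >= 8:
            return "medium"
        return "low"

def prioritise(risks: list[RiskItem]) -> list[RiskItem]:
    """Order risks so the most likely/severe combinations are mitigated first."""
    return sorted(risks, key=lambda r: r.score, reverse=True)

if __name__ == "__main__":
    risks = [
        RiskItem("Biased match suggestions in suspect identification", 4, 5),
        RiskItem("Operator over-reliance on automated outputs", 3, 4),
        RiskItem("Data retained beyond the scope of the investigation", 2, 3),
    ]
    for r in prioritise(risks):
        print(f"[{r.band:>6}] score={r.score:>2}  {r.description}")
```

In this style of matrix, the ranked output is what enables the "focused mitigation strategies" described above: limited review capacity is directed at the highest-scoring risks first.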

The EIA methodology played an important role in the MAGNETO project, advancing law enforcement capabilities while safeguarding fundamental rights. It provided ethical guidance during the research and development phase, supported law enforcement agencies in embedding ethical governance into AI operations, and ensured compliance with key regulations such as GDPR and the Law Enforcement Directive (LED).

Its integration into MAGNETO’s governance framework established the methodology as an important tool for evaluating ethical risks in high-risk AI systems, and it remains a reference point for addressing the ethical, societal, and legal challenges of deploying AI in law enforcement contexts. To explore the handbook and gain further insight into the methodology, visit CBRNE Ltd Publications.