Act.AI

Developing Bias Auditing and Mitigation Tools for Self-Assessment of AI Conformity with the EU AI Act through Statistical Matching

Funded by the European Research Council (ERC)
ERC Proof of Concept (PoC) grant: a lump sum of €150,000

Objectives

The vision behind Act.AI is to use statistical matching to audit and mitigate bias in Artificial Intelligence (AI) models. AI has spread rapidly across industries, from financial services to healthcare, education, and job recruitment. However, as AI algorithms have become increasingly sophisticated and pervasive in decision-making, concerns have arisen about their fairness and compliance with regulation. In particular, the proposed EU AI Act requires providers of AI in high-risk applications – such as employment, credit, or healthcare – to identify, and thereby address, discrimination by their algorithms against certain demographic groups. Ensuring compliance with the Act can be challenging, particularly for AI startups that may lack the resources or expertise to fully understand and implement its requirements.

Addressing the existing disconnect between the capabilities of AI fairness toolkits and practitioners' current needs, the Act.AI tool can be integrated into any AI workflow in a plug-and-play fashion to continuously monitor and improve its fairness. A key feature of Act.AI is its ability to operate on different types of data (tabular, images, and text) across a variety of tasks (binary and multiclass classification, and regression). It can also match datasets from different domains, including out-of-distribution data, even when those datasets have different numbers of variables or features. To ensure usability, Act.AI will integrate feedback from relevant stakeholders in its two immediate target markets: financial services and healthcare.
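The project description does not specify the matching procedure, so the sketch below is only a hypothetical illustration of the general idea: pair individuals from a protected group with their most similar counterparts in a reference group on non-sensitive features, then compare the model's decisions across the matched pairs. The dataset, column layout, and the function name matched_disparity are assumptions, not part of Act.AI.

```python
# Illustrative sketch only (not the Act.AI implementation): audit a binary
# classifier by 1-nearest-neighbour matching on non-sensitive features and
# comparing predicted positive rates across the matched pairs.
import numpy as np
from sklearn.neighbors import NearestNeighbors


def matched_disparity(model, X, sensitive):
    """Mean gap in predicted positive rate between matched pairs.

    X         : (n, d) array of non-sensitive features
    sensitive : (n,) boolean array, True for the protected group
    model     : any object with a predict(X) -> {0, 1} method
    """
    X_prot, X_ref = X[sensitive], X[~sensitive]

    # Match each protected-group individual to the most similar
    # reference-group individual in feature space.
    nn = NearestNeighbors(n_neighbors=1).fit(X_ref)
    _, idx = nn.kneighbors(X_prot)
    X_matched_ref = X_ref[idx[:, 0]]

    # A large gap suggests the model treats otherwise-similar
    # individuals from the two groups differently.
    return model.predict(X_prot).mean() - model.predict(X_matched_ref).mean()
```

Because the audit only needs feature arrays, a group indicator, and a predict method, a routine of this shape can be dropped into an existing training or monitoring pipeline without modifying the model itself, which is the plug-and-play property the project aims for.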

Principal Investigator
