Addressing potential inequities and harms associated with artificial intelligence (AI) and machine learning (ML)

The "Combating Bias in AI/ML Implementations" project aims to address the potential inequities and harms associated with the adoption of artificial intelligence (AI) and machine learning (ML) capabilities within government agencies and practices. The initiative underscores the importance of addressing societal injustices and cultural concerns surrounding AI/ML technologies, aiming to make positive interventions that reinforce democratic values of justice and equity.

Over the course of three phases, 10x funded a project team that collaborated with industry and academic experts to develop open-source "de-biasing" tools that allow civil servants to identify and mitigate biases in datasets used for AI applications. By focusing on the upstream data components of AI/ML implementations, the project offers a more equitable approach for civil servants who are incorporating these emerging technologies into downstream applications, from human resources to benefits administration. A minimal sketch of the kind of upstream dataset check such tools support follows below.
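The Bias Toolkit itself is not reproduced here, but the idea of an upstream data check can be illustrated with a minimal sketch. The example below computes per-group selection rates and a simple disparate-impact ratio for a toy dataset; the column names, the toy data, and the choice of metric are illustrative assumptions, not the project's actual API or methodology.

```python
import pandas as pd


def selection_rate_by_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Fraction of favorable outcomes (1 = favorable) for each group in the dataset."""
    return df.groupby(group_col)[outcome_col].mean()


def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest group selection rate to the highest; values near 1.0 suggest parity."""
    return rates.min() / rates.max()


# Hypothetical example: a toy benefits-eligibility dataset with a binary favorable outcome.
data = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   0,   1,   0,   0,   0],
})

rates = selection_rate_by_group(data, "group", "selected")
print(rates)                          # per-group selection rates (A: 0.67, B: 0.25)
print(disparate_impact_ratio(rates))  # ~0.375, indicating a large disparity in this toy data
```

A check like this flags representation or outcome disparities in the training data before a model is built, which is the point of intervening upstream rather than after a biased model is already in use.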

The project has completed Phase 3, delivering three functional de-biasing tools in the Bias Toolkit.

