Alondra Nelson co-authors Auditing AI on bias and oversight
Updated · POLITICO · May 4
The former acting White House science policy chief says audits should involve lawyers, social scientists and journalists, not only computer scientists, as AI already affects bail, housing, jobs and transplants.
Nelson says auditing can test whether systems do what developers claim without stifling innovation, while warning that ideological bias is harder to measure than harms tied to civil rights, privacy and discrimination.
She points to emerging rules in New York and the EU, says the UK has the strongest auditing infrastructure, and argues funding and institutional collaboration are still needed to scale oversight.
Your AI assistant just lost your legal case. Who is held liable: you, the AI, or its creator?
As AI guides life-or-death decisions in warfare, can any human truly remain in meaningful control?
AI can deny your healthcare and misidentify you. What real power do you have to fight back against the algorithm?
Auditing AI Systems: Frameworks, Case Studies, and Policy for Ensuring Fairness and Safety
Overview
Published in April 2026 by MIT Press, the open-access book Auditing AI offers a practical framework for evaluating AI systems on accuracy, fairness, compliance, and security. Drawing on real-world cases such as biased facial recognition, discriminatory hiring tools, and flawed autonomous weapons, the book shows how audits uncover harms and drive policy and corporate reforms. Rooted in the 2022 Blueprint for an AI Bill of Rights that Nelson helped develop, it guides tech companies, policymakers, journalists, and communities in holding AI accountable. Amid shifting U.S. policies favoring innovation over regulation, Auditing AI emphasizes the need for ongoing, multi-stakeholder auditing to ensure AI systems are fair, transparent, and trustworthy.