Challenging the Machine: Contestability in Government AI Systems

As government agencies move to adopt AI across a range of programs, design choices can preserve individuals’ ability to effectively challenge the decisions those systems make about them.

In an October 2023 executive order, President Biden drew a highly detailed but largely aspirational road map for the safe and responsible development and use of artificial intelligence (AI). The executive order’s premise that AI “holds extraordinary potential for both promise and peril” is perhaps nowhere more clearly manifested than in efforts currently well underway to adopt AI and other advanced technologies in the administration of government programs.

When AI or other automated processes are used to make decisions about individuals—to grant or deny veterans’ benefits, to calculate a disabled person’s medical needs, or to target enforcement efforts—they directly implicate the principles of fairness and accountability called for in the executive order. But contestability—a person’s right to know why a government decision is being made and to challenge that decision, through procedures tailored to the capacities and circumstances of those who are to be heard—is not merely a best practice.

Across a wide range of government decision-making, contestability is required under the Due Process Clause of the Constitution. Especially pertinent, given the complexity of many AI systems and the inscrutability of some, is the Supreme Court’s insistence on understandable notice: “An elementary and fundamental requirement of due process … is notice reasonably calculated, under all the circumstances, to apprise interested parties of the pendency of the action and afford them an opportunity to present their objections.” Additionally, federal laws establishing many programs require specific notice and right-to-be-heard procedures. Contestability can also serve other public interests: Challenging a specific decision can uncover systemic errors, contributing to ongoing improvements and saving money in the long run.