Mind the Gap: The Challenges of Assurance for Artificial Intelligence

INTRODUCTION

Artificial intelligence (AI) capabilities have progressed rapidly since systems first defeated human players at board games. From voice assistants to chatbots, and from image filters to multi-modal content generation, AI now serves individual, commercial, and scientific purposes. The development of increasingly advanced AI systems, each with its own strengths and vulnerabilities, has also made assessing and managing AI risk a daunting prospect.

Controls do exist. A variety of stakeholders have made strides in proposing tools and methods to govern AI systems. A common thread across these efforts is assurance: a collection of approaches and techniques that aim to measure the quality, reliability, and trustworthiness of AI products, services, organizations, or professionals. To some extent, governments are relying on assurance-based approaches to govern AI – from assessments and audits to certification – regardless of whether they plan to develop AI regulations. Yet the AI assurance ecosystem remains nascent. Frameworks for AI assessments and audits are still evolving, as are the supporting standards and accreditation criteria for third-party auditors, reviewers, and evaluators. In some cases, certain assurance mechanisms can now be applied to the governance of narrower applications of AI and related organizational processes. However, assurance-based approaches to more advanced systems built on large language models (LLMs) or other generative models remain in their infancy.
