It’s time for limited, mandatory testing for AI

Like other dangerous products, the largest models should be tested to prevent possible harm

While millions of lives have been saved by medical drugs, many thousands of people died during the 19th century from ingesting unsafe medicines sold by charlatans. Across the US and Europe, this led to the gradual introduction of food and drug safety laws and institutions — including the US Food and Drug Administration — to ensure that the benefits of such products outweigh the harms.

The rise of artificial intelligence large language models such as GPT-4 is turbocharging industries, making everything from scientific innovation to education to film-making easier and more efficient. But alongside these enormous benefits, the technology can also create severe national security risks.

We would not allow a new drug to be sold without thorough testing for safety and efficacy, so why should AI be any different? Creating a “Food and Drug Administration for AI” may be a blunt metaphor, as the AI Now Institute has written, but it is time for governments to mandate AI safety testing.
