US, UK, and allies announce pact for ‘secure by design’ AI
A non-binding accord among 18 nations urges that AI systems be built with public safety in mind
The U.S., the U.K., and more than a dozen other countries unveiled the first detailed international agreement on AI safety. In a 20-page document, the 18 nations laid out principles for making AI systems ‘secure by design,’ urging companies to develop and deploy AI in a way that keeps customers and the wider public safe and guards against misuse.
The accord is not legally binding and offers mostly general recommendations, such as monitoring AI systems for abuse, protecting data, and vetting software suppliers.
Still, Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency, said it was significant that so many countries had signed on to the idea that AI systems must put safety first.
Easterly said, “This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs.” She added that the guidelines represent “an agreement that the most important thing that needs to be done at the design phase is security.”
The accord is the latest in a series of government initiatives, few of them carrying enforcement power, aimed at shaping the development of AI as its weight is increasingly felt in industry and society. The United Kingdom recently convened an AI safety summit.
Beyond the U.S. and the U.K., the 18 countries that signed on to the new guidelines include Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria, and Singapore.
The framework addresses how to keep AI technology from being hijacked by hackers, recommending measures such as releasing models only after appropriate security testing.
It does not address complex issues related to the appropriate uses of AI or the methods for collecting the data that fuels these models.
The rise of AI has stoked a long list of concerns, including fears that it could be used to disrupt democratic processes, turbocharge fraud, or lead to dramatic job losses, among other harms.
Europe is ahead of the United States on AI regulation, with lawmakers there drafting AI rules. France, Germany, and Italy recently reached an agreement on how AI should be regulated, backing “mandatory self-regulation through codes of conduct” for foundation models, the AI systems designed to produce a broad range of outputs.
The Biden administration has been pressing lawmakers to regulate AI, but a polarized U.S. Congress has made little headway toward passing effective legislation.
In October, the White House issued a new executive order aimed at reducing AI risks to consumers, workers, and minority groups while bolstering national security.