Today, Microsoft is announcing its support for new voluntary commitments crafted by the Biden-Harris administration to help ensure that advanced AI systems are safe, secure, and trustworthy. By endorsing all of the voluntary commitments presented by President Biden, and independently committing to several others that support these critical goals, Microsoft is expanding its safe and responsible AI practices, working alongside other industry leaders.
By moving quickly, the White House's commitments create a foundation to help ensure the promise of AI stays ahead of its risks. We welcome the President's leadership in bringing the tech industry together to hammer out concrete steps that will help make AI safer, more secure, and more beneficial for the public.
Guided by the enduring principles of safety, security, and trust, the voluntary commitments address the risks presented by advanced AI models and promote the adoption of specific practices – such as red-team testing and the publication of transparency reports – that will propel the whole ecosystem forward. The commitments build upon strong pre-existing work by the U.S. Government (such as the NIST AI Risk Management Framework and the Blueprint for an AI Bill of Rights) and are a natural complement to the measures that have been developed for high-risk applications in Europe and elsewhere. We look forward to their broad adoption by industry and their inclusion in the ongoing global discussions about what an effective international code of conduct might look like.
Microsoft's additional commitments focus on how we will further strengthen the ecosystem and operationalize the principles of safety, security, and trust. From supporting a pilot of the National AI Research Resource to advocating for the establishment of a national registry of high-risk AI systems, we believe that these measures will help advance transparency and accountability. We have also committed to broad-scale implementation of the NIST AI Risk Management Framework, and to the adoption of cybersecurity practices that are attuned to unique AI risks. We know that this will lead to more trustworthy AI systems that benefit not only our customers, but the whole of society.
You can view the detailed commitments Microsoft has made here.
It takes a village to craft commitments such as these and to put them into practice at Microsoft. I want to take this opportunity to thank Kevin Scott, Microsoft's Chief Technology Officer, with whom I co-sponsor our responsible AI program, as well as Natasha Crampton, Sarah Bird, Eric Horvitz, Hanna Wallach, and Ece Kamar, who have played key leadership roles in our responsible AI ecosystem.
As the White House's voluntary commitments reflect, people must remain at the center of our AI efforts, and I am grateful to have strong leadership in place at Microsoft to help us deliver on our commitments and continue to grow the program we have been building for the last seven years. Establishing codes of conduct early in the development of this emerging technology will not only help ensure safety, security, and trustworthiness; it will also allow us to better unlock AI's positive impact for communities across the U.S. and around the world.