Microsoft seeks US agency for AI governance, lays out strategy


May 26, 2023


In a blog post, Microsoft laid out a five-step blueprint for public governance of AI that includes implementing government-led AI safety frameworks at the inception level and identifying content generated by AI.

Microsoft is calling for the formation of a new US agency to govern AI, expressing concerns over the technology's safety and security and laying out a blueprint for its public governance.

“We would benefit from a new agency. That is how we will ensure that humanity remains in control of technology,” Microsoft President Brad Smith said while delivering an address in Washington, according to Bloomberg.

Smith’s call to build an agency whose sole purpose would be to govern AI and AI-based tools comes days after OpenAI CEO Sam Altman called for an agency that would set ground rules for AI implementation.

OpenAI is the company behind generative AI tools such as ChatGPT and DALL-E 2.

During his address, Smith also raised concerns about AI-generated content, saying that realistic-looking false content, or “deepfakes,” could become a major issue.

“We need to take steps to protect against the alteration of legitimate content with an intent to deceive or defraud people through the use of AI,” Smith said, according to Reuters.

In a separate blog post, Smith laid out a five-step blueprint for public governance of AI, which includes implementing government-led AI safety frameworks and creating a mechanism to identify AI-generated content.

Expanding on AI safety frameworks, Smith said companies need to build their next AI tools on the basis of government-led regulations, noting that the US National Institute of Standards and Technology (NIST), part of the Department of Commerce, has already launched a new AI Risk Management Framework.

This framework can be used in conjunction with other steps to ensure that responsible AI tools are implemented, Smith said.

Listing the other steps, Smith said effective safety brakes are needed for AI systems that control critical infrastructure, such as the electrical grid, water systems, and city traffic flows.

“These fail-safe systems would be part of a comprehensive approach to system safety that would keep effective human oversight, resilience, and robustness top of mind. In spirit, they would be similar to the braking systems engineers have long built into other technologies such as elevators, school buses, and high-speed trains, to safely manage not just everyday scenarios, but emergencies as well,” Smith said in the blog post.

Another important step listed by Smith focuses on creating a broad legal and regulatory framework based on the technology architecture of AI.

“In short, the law will need to place various regulatory responsibilities upon different actors based upon their role in managing different aspects of AI technology,” Smith said, adding that laws for AI models and AI infrastructure operators have to be developed separately.

These laws could effectively ensure that customers know which content has been generated by AI, Smith said.

The other steps listed by Smith include opening up AI for research purposes and forging public-private partnerships to address the societal challenges arising from the new technology.