In April, Microsoft CEO Satya Nadella contended that the software giant is pulling away from competitors in artificial intelligence, with "advanced" AI for detecting security threats and a greater number of AI services "than any other cloud provider."
But during the EmTech Next conference at MIT on Tuesday, Microsoft President and Chief Legal Officer Brad Smith suggested that Microsoft won't be surprised if lawmakers and government regulators take action over AI down the road.
"I don't believe that on something as fundamentally impactful as artificial intelligence that one should leave this to either the software developers alone or to the private sector alone," Smith said during the MIT Technology Review conference in Cambridge, Mass. "I think we need a path that will enable governments, especially in the democratically elected countries of the world, to ensure that we're all on an appropriate path."
That journey starts with identifying ethical issues, developing a consensus around key ethical principles, and building industry support and standards for some of those principles, he said.
"Ultimately, I do believe that in our future there will be AI law and AI regulation," Smith said. "We've never lived in a world where we thought it was appropriate for government to legislate ethics on every question, but we do expect that on certain issues of such ethical importance—can you rob, can you kill—no, we legislate on those issues."
As tech industry companies "endow machines with the power to do things that previously could be done only by human beings, I think it's perfectly appropriate to expect governments to ensure that laws apply to what these machines can do," Smith said.