Cloud Computing Providers May Be Forced To Disclose AI Users: Report

‘This is something that should go through a public forum, and should be done working with tech providers. Right now, it sounds like a knee-jerk reaction. It could divert attention from what we really need to focus on. This needs more thoughtful discussions. I would like to see the Congress get educated on this, and see the government study this issue for public and legislative input,’ says John Woodall, vice president of solutions architecture West at General Datatech.

Cloud vendors like Microsoft, Google and Amazon might soon be forced to reveal more information about their AI customers if a new rule is adopted.

The White House may be readying an executive order that “would direct the Commerce Department to write rules forcing [cloud providers] to disclose when a customer purchases computing resources beyond a certain threshold,” according to a report from the online news site Semafor.

Semafor, citing people familiar with the upcoming order, wrote that the executive order has yet to be finalized and that changes could be made before it is published.

[Related: The ChatGPT-Fueled AI Gold Rush: How Solution Providers Are Cashing In]

Such an executive order would be similar to banking-sector policies like the requirement that companies report cash transactions in excess of $10,000 to help prevent illegal activities such as money laundering, Semafor wrote.

“In this case, the rules are intended to create a system that would allow the U.S. government to identify potential AI threats ahead of time, particularly those coming from entities in foreign countries. If a company in the Middle East began building a powerful large language model using Amazon Web Services, for example, the reporting requirement would theoretically give American authorities an early warning about it,” Semafor wrote.

The White House did not reply to a CRN request for further information by press time.

Some countries and other actors have shown ill intent toward the U.S. government and its citizens, and AI could become one of their tools in the future, said John Woodall, vice president of solutions architecture West at General Datatech, a Dallas-based solution provider.

However, Woodall told CRN, there are some potential issues with an executive order aimed at forcing cloud providers to report the use of computing resources above a certain threshold.

“With AI, does the government understand the technology well enough to determine at what point the resources consumed become a concern?” he said.

Also, Woodall said, like everything, AI has a good side and a bad side.

“Companies can use AI for faster development, or say for health diagnosis,” he said. “If AI is used by someone with bad intent, it’s important to know. But if a cloud provider reports a user goes over some threshold, what does it mean? If a telco or service provider spins up resources, do you report them? They could be using AI to benefit customers, but it could be a tremendous amount of resources. This is something you expect them to do. But then the next user might do the same to design a new weapon or weaponize COVID.”

There is a lot of fear-mongering because the average user doesn’t understand AI, despite the fact that it has been around for decades, Woodall said.

“But now the resources are there to easily pursue AI-driven outcomes,” he said. “However, we still have not had the ethics discussion over AI because it can be used in so many fields. How do you manage the morality of AI? By setting thresholds?”

The other potential issue stems from the fact that this would be an executive order, Woodall said, which means there would be no legislative oversight and it could be changed by the next administration.

“This is something that should go through a public forum, and should be done working with tech providers,” he said. “Right now, it sounds like a knee-jerk reaction. It could divert attention from what we really need to focus on. This needs more thoughtful discussions. I would like to see the Congress get educated on this, and see the government study this issue for public and legislative input.”

Other government actions

The executive order, should it be published, follows several moves by the White House to find ways to manage the potential risks of AI technology.

The White House in July secured voluntary commitments from seven leading AI companies to help develop safe, secure, and trustworthy AI: Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI.

The White House earlier this month followed that up with similar voluntary commitments from an additional eight companies: Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability.

Those commitments include internal and external security testing of their AI systems before release; sharing information across industry, government, civil society, and academia on managing AI risks; investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights; facilitating third-party discovery and reporting of AI system vulnerabilities; developing technologies such as watermarking to let users know when content is AI-generated; and more.

This year also saw the administration launch a two-year “AI Cyber Challenge” to use AI to protect important software and code, such as that which runs the internet and critical infrastructure; hold a meeting to discuss the consumer risks of AI; and hold another meeting to look at the risks and opportunities of AI.