The Biden administration will begin implementing new rules laid out in the president’s executive order aimed at regulating artificial intelligence, though some experts are skeptical about how effective they will be.
“The executive order’s preoccupation with model size and computing power, rather than actual use case, is misguided. This approach risks creating compliance burdens for companies without meaningfully improving accountability or transparency,” Jake Denton, a research associate at the Heritage Foundation’s Tech Policy Center, told Fox News Digital.
Denton’s comments come after The Associated Press reported Monday that the Biden administration would start implementing new rules from the order, including a requirement that developers of AI systems disclose the results of safety tests to the government.
The White House AI Council met Monday to discuss progress on the three-month-old executive order, according to the report, a meeting that coincided with the 90-day deadline set under the Defense Production Act for AI companies to begin sharing information with the Commerce Department.
Ben Buchanan, the White House special adviser on AI, told The Associated Press that the government has an interest in knowing if “AI systems are safe before they’re released to the public – the president has been very clear that companies need to meet that bar.”
But Denton is skeptical that the order will lead to the advertised results.
“The order’s blurred lines and loosely defined reporting requirements will likely yield selective, inconsistent enforcement,” Denton said. “Meanwhile, the substantial information asymmetry between regulators and companies will likely render oversight ineffective.”
Christopher Alexander, chief analytics officer of Pioneer Development Group, also expressed skepticism about the new rules, pointing to the government’s struggles to regulate other tech sectors, such as cryptocurrency, and raising concerns about censorship.
“The Biden administration’s problematic regulation of crypto is a perfect example of government dictating to industry rather than working with industry for proper regulations,” Alexander told Fox News Digital. “I am also concerned that the aggressive censorship efforts with social media by the U.S. government in the past few years is very disconcerting, and I think any government oversight efforts must be carefully monitored by Congress for accountability, and it is crucial that they clearly define ‘who will watch the watchers.’”
Nevertheless, Alexander argued that it is important to establish standards for the industry, noting that “the private sector motivations of AI companies are not always in the best interest of the general public.”
Biden’s executive order seeks to bridge that gap, putting in place a set of common standards for future AI safety.
“I think the government is setting the tone for the future. There really isn’t a standard yet for testing safety with these models yet. Because of that, this order doesn’t have much teeth – yet,” Phil Siegel, founder of the Center for Advanced Preparedness and Threat Response Simulation (CAPTRS), told Fox News Digital.
“But there are some consensus processes emerging. Eventually, there will probably be several prompts generated either randomly or not to test the models. There will be some sophisticated AI models that will be used to converse or test new models. In addition, ‘red teaming’ will become a method that is used where teams of people and technology try to ‘break’ these models.”
Siegel likened the process to the current rules for drug approval, which he argued are now well understood and followed by drug developers.
“We will eventually have that for testing AI models and honestly should have had that in place for social media applications,” Siegel said.
Ziven Havens, policy director at the Bull Moose Project, argued that the administration has reached a critical juncture in the regulation of AI, one that will require it to balance safety standards against the risk of stifling innovation.
“If the Biden administration aims to be successful with AI regulation, they will use the information provided to them to create reasonable standards, ones that will both protect consumers and the ability of companies to innovate,” Havens told Fox News Digital.
“If the administration fails to meet the moment, by creating stifling regulations, America will see its global edge in AI technology wither away. Waving a white flag on American innovation would be a disaster, both for our economy and national security.”
The White House did not immediately respond to a Fox News Digital request for comment.