US regulator targets AI ahead of White House confab

A US regulator put artificial intelligence in the crosshairs ahead of a White House meeting Thursday with tech firms to strategize about its dangers.

"While the technology is moving swiftly, we already can see several risks," Federal Trade Commission chief Lina Khan wrote in a guest essay in the New York Times.

"Enforcers and regulators must be vigilant."

The tough talk comes as US lawmakers lag far behind their European counterparts when it comes to reining in big tech.

US President Joe Biden has urged Congress to pass laws putting stricter limits on the tech sector, but these efforts have little chance of making headway given political divisions.

The lack of rules has given Silicon Valley freedom to put out new products rapidly -- and stoked fears that AI technologies will wreak havoc on society before the government can catch up.

The White House will meet with chief executives from Google, Microsoft, OpenAI and Anthropic on Thursday to discuss the promise and risks of artificial intelligence.

Vice President Kamala Harris and other administration officials will discuss ways to ensure consumers benefit from AI while being protected from its harms, according to a copy of an invitation seen by AFP.

"Don't get your hopes up that this will lead to anything particularly meaningful, but it's a good start," said David Harris, a lecturer at Haas Business School at the University of California, Berkeley.

"The United States Congress can barely keep the government's bank accounts full; I think it would be a lot for us to expect it to put forth any meaningful legislation about AI."

- 'Turbocharged' fraud? -

The United States is home to the biggest innovators in tech and AI -- including Microsoft-backed OpenAI, which created ChatGPT.

Google has invited users in the United States and Britain to test its AI chatbot, known as Bard, as it strives to catch up with ChatGPT.

Billionaire Elon Musk in March founded an AI company called X.AI, based in the US state of Nevada, according to business documents.

Google, Meta and Microsoft have spent years working on AI systems to help with translations, internet searches, security and targeted advertising.

But late last year San Francisco-based OpenAI supercharged interest in generative AI when it launched ChatGPT, a bot that can crank out natural-seeming written responses from short prompts.

"Can we continue to be the home of world-leading technology without accepting race-to-the-bottom business models and monopolistic control?" Khan asked rhetorically.

"Yes -- if we make the right policy choices."

Khan said risks from AI include it being used to "turbocharge" fraud with voice clones, deep-fake videos, and convincing written messages.

- Eclipsing humans? -

A computer scientist often dubbed "the godfather of artificial intelligence" recently quit his job at Google to speak out about the dangers of the technology.

Geoffrey Hinton said at an MIT forum on Wednesday that while it would make sense to halt AI development, the idea is naive given the competition between countries and companies.

"It's quite conceivable that humanity is just a passing phase in the evolution of intelligence," Hinton said.

Digital intelligence "may keep us around for a while to keep the power stations running. After that, maybe not," he added.

Hinton, whose pioneering work on neural networks helped lay the groundwork for today's AI systems, maintained that the existential threat from AI is "serious and close."

Speaking at a World Economic Forum event in Geneva, Microsoft chief economist Michael Schwarz warned that AI will be used to cause damage by "bad actors" such as spammers and those out to manipulate election outcomes, according to a CNBC report.

An arms race over AI is expected to play out for several years.

Microsoft has been swiftly adopting generative AI, showing less caution than rivals despite early problems such as chatbots giving disturbing responses or blatantly inaccurate information.

European countries are scrutinizing the use of personal data by ChatGPT and demanding answers from OpenAI.

The European Data Protection Board, the EU's central data regulator, has formed a task force to help member countries harmonize their policies and address privacy concerns.

ChatGPT can generate essays, poems and conversations from the briefest of prompts, and has proved itself capable of passing some tough exams.

But it has been dogged by concerns that its talents could lead to widespread cheating in schools, fuel disinformation on the web and replace human workers.

And the chatbot can only function if it is trained on vast datasets, raising concerns about where OpenAI gets its data and how that information is handled.

D.Kelly--TNT