Don’t overregulate AI, warns productivity chief

By Peter Gearin

December 11, 2023

Productivity Commissioner Stephen King. (Supplied)

Productivity Commissioner Stephen King believes the problem with artificial intelligence isn’t a lack of regulation — it’s the fear lawmakers will jump to overregulate AI.

Speaking at CEDA’s AI Leadership Summit on Friday, King said it’s important for Australian regulators to concentrate on how AI is used and not on the technology itself.

“There will be lots of uses of AI,” he said. “Some of them will be things that we already do, some of them will be new uses. But the starting point is to say, is there already regulation covering that use — consumer law, copyright law, privacy law, discrimination law? If it’s not clear, can it be clarified?

“If it’s not covered by the existing rules, can we modify them? If the answer is still no, can we have a technologically neutral regulation covering use?”

King said specific AI regulation should only be considered as a last resort. He fears overregulation will only succeed in driving Australia’s innovators overseas.

“The biggest risk is that by starting at the wrong end, by saying things like large language models need to be regulated [because they] are too dangerous, developments will go overseas. You just deal yourself out of the game.”

King said despite the hype generated by consumer-facing apps such as ChatGPT, regulators need to remember AI technology has been with us for a long time.

“It’s great to get excited about [the technology], but not to lose the perspective,” he said. “This will be a long game. There are going to be different forms of uptake of AI, and it’s going to take a long time to move through the economy.”

King said adopting “wrong-headed approaches to regulation” from some jurisdictions “could actually make us worse off”.

“It will be really, really hard to undo technologically specific regulations in 10 years’ time because there will be entire industries built up around those regulations.”

Addressing why the Productivity Commission (PC) is taking such an interest in AI, King said technology presents an opportunity to lift productivity dramatically over the next two or more decades.

“If we’re going to get an improvement in productivity growth in Australia, we’ve got to work out how to get a productivity improvement in services,” King said. “In particular, we’ve got to work out how to get a productivity improvement in government services – human services, education, health, disability, aged care … where AI can have potentially big impact.

“They make up around 22% of our economy. It’s also one of the areas with the worst productivity growth in our economy.

“Unless we fix services — particularly human services — we’re not going to get back to the sort of improvement in the standard of living we’ve been used to.”

All of which means government must be an “exemplar” in this field, King said.

“Government needs to take a lead in the ethical and the relevant diffusion of AI technologies in human services,” he said. “This is going to mean a change in government mindset.

“Government is very bad at doing things like working with the private sector to develop the tools we need. I’ll put at the top of the class the Australian Taxation Office, with Single Touch Payroll — it essentially created a software-as-a-service industry in Australia.

“I’ll put at the bottom of the class something like My Health Record, which was rolled out without even having APIs or talking to the standard software. Then they wonder why nobody is uploading into My Health Record.”

When it comes to the risks posed by AI, King said we need to look at the example of self-driving cars. Cruise’s ‘robotaxis’ were banned recently in California because they presented an “unreasonable risk to public safety”.

“Humans make mistakes all the time,” King said. “We don’t say ‘sorry, banish humans’.

“If you’re looking at AI risk, you have to say ‘compared to what?’ What would be the status quo without AI? That’s where you need to develop rules — not looking at risk compared to perfection.”

