Parliamentary committee reminds parties to verify submissions generated by AI

By Tom Ravlic

November 7, 2023

Parliament House, Canberra. (frédéric/Adobe)

A powerful parliamentary committee has said it expects submitters who use large language models (LLMs) and other artificial intelligence tools to ensure the output is truthful, following complaints from two major accounting firms about a recent academic submission.

The Parliamentary Joint Committee on Corporations and Financial Services issued a statement setting out its expectations for truthfulness last week after two major accounting firms wrote to correct errors made in a submission lodged by academics.

Errors made by the academics included naming the wrong firms as auditors: both Deloitte and KPMG were identified as auditors of entities that are in fact audited by PwC. The artificial intelligence output also contained fabricated scandals.

In a letter to the committee from Emeritus Professor James Guthrie, the academics responsible apologised for the multiple errors that resulted from their reliance on artificial intelligence engines.

Guthrie identified himself in the letter as the person responsible for using Google Bard, an experimental LLM, in its first week of release to assist with research for the submissions.

“I am solely responsible for the part of the submissions pointed to in these letters, in which I used the AI program Google Bard Language model generator to assist with the submission preparation,” Guthrie said.

“There has been much talk about the use of AI, particularly in academia, with much promise of what it holds for the future and its current capabilities.”

He said he now realises that the engine generates authoritative-sounding material that can be “incorrect, incomplete, or biased”.

The committee statement noted that many people submitting to committees are under time pressures and are often unpaid for the work they do.

“Emerging tools within the artificial intelligence space, whilst appearing to reduce workload, may present serious risks to the integrity and accuracy of work if they are not adequately understood or applied in combination with detailed oversight and rigorous fact checking,” the committee said.

The committee also said it had referred the issue of artificial intelligence and its possible impacts on the work of committees to the Clerk of the Senate.


READ MORE:

Tuesday Ethics Club: The case of passing off ChatGPT as your own work
