Cybersecurity’s six ‘shields’ aim to protect Australian infrastructure

By Liam Tung

November 6, 2023

(Photo: Defence)

The massive Optus and Medibank hacks in late 2022 were Australia’s moment of cyber reckoning. Can the government’s new cybersecurity strategy help Australia protect itself in the future?

Citizens, government and the private sector have had a year to contemplate Australia’s worst breaches on record and the forthcoming 2023-2030 cybersecurity strategy, announced by home affairs minister Clare O’Neil in their immediate aftermath.

In September, O’Neil outlined the strategy’s six “shields”, among them broad awareness of cyber threats, “safe technology”, rapid threat-intelligence sharing, improved skills, and greater international coordination.

The cybersecurity strategy will need to account for Australia’s vast critical infrastructure, which spans everything from operating system kernel code to encryption, operational technology (OT) industrial systems, the people using IT and the teams from the public and private sectors tasked with securing them.

“The breaches served as an excellent wake-up call,” says Toby Murray, a director of the Defence Science Institute.

Murray is also a core developer of the ‘seL4’ secure OS kernel, a project backed by CSIRO’s Data61 and the Linux Foundation that won the prestigious 2022 ACM Software System Award, whose few recipients include the inventors of the internet and the web.

“The other side was, had the government been doing enough to create appropriate incentives for industry to be investing in cyber? And had the government recognised it had a role to play in helping these large organisations manage their response to big incidents?”

Europe’s wake-up call came with the 2017 WannaCry and NotPetya wiper-malware attacks on critical infrastructure, a year after the EU adopted its first bloc-wide cybersecurity law and the GDPR.

America’s came with the late-2020 SolarWinds supply chain attack on government agencies and the 2021 ransomware attack on Colonial Pipeline. The Biden administration then ordered federal agencies to strengthen their cyber defence posture by adopting “zero trust” principles and multi-factor authentication.

How the Australian government will act under the new cybersecurity strategy is taking shape. The Commonwealth Bank of Australia urged Home Affairs to appoint one entity to receive reports and coordinate responses. Telstra wants the government to commit to a timeline for agencies adopting the Australian Signals Directorate’s (ASD) Essential Eight defences. Google urged Australia to create a unit akin to the US Cyber Safety Review Board (CSRB) that conducts blameless investigations into major incidents.

In July, Home Affairs appointed Air Marshal Darren Goldie as Australia’s first national cybersecurity coordinator. An early task was to manage the national response to the April extortion attack on law firm HWL Ebsworth that exposed confidential information from 65 Australian government clients.

Improved cybersecurity skills could help Australia thwart such attacks in future, but investments in training should be targeted to building the supply of people with an “adversarial mindset”, says Shaanan Cohney, an Australian security and privacy researcher who has helped devise technology policy at the US Federal Trade Commission.

“We’ve got a lot of people who claim to be experts and CISOs,” says Cohney. “What’s missing are the people who can look at the overall socio-technical system and evaluate it.”

These people can be found on so-called “red teams” – technically sophisticated friends who think like stealthy foes. They’re rare in the big four consultancies but can be found in places like Google-owned Mandiant, says Cohney.

Government can address this. One example is the US CyberCorps Scholarship for Service (SFS) program, run by the US National Science Foundation, the Office of Personnel Management, and the Cybersecurity and Infrastructure Security Agency (CISA).

“You get a scholarship for your degree and in return you have to do national service that might see you being deployed within critical infrastructure rather than with the [National Security Agency], or within a government agency that needs improving,” Cohney says.

Home Affairs has asked whether victims and cyber insurers should be banned from paying ransoms. O’Neil said HWL Ebsworth’s refusal to pay was the “right call by the nation”, though not paying is ethically fraught when the stolen data belongs to both the victim and its clients. Murray argues a ban on paying ransoms would be counterproductive to improving threat information sharing.

“You couldn’t have dreamed up a better example than Medibank refusing to pay the ransom when data is stolen,” says Murray. “Everyone has a precedent now. But if a hospital’s infrastructure is crippled or a small business is choosing between continuing to operate or pay, then a lot of them will say they need to pay.”

Banning payments would also sacrifice a lever for getting insurance firms, which hold the most useful threat information, to share their data. “We need to create mechanisms where more of that information is being shared, especially to the smaller organisations that can’t afford premium security,” says Murray.

Delivering “safe technology” is a complex and very broad task, spanning all internet-connected devices and software. The ASD has backed CISA’s Secure by Design whitepaper, which encourages the use of “memory safe” programming languages like Rust and Java that rule out classes of critical security flaws easily introduced in widely used C. Major projects written in C and C++ include the Linux kernel, Chromium and Android. Google is pushing these projects towards Rust, but C code will remain for decades.
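The difference memory safety makes can be sketched in a few lines of Rust (an illustrative example, not drawn from the ASD or CISA guidance): an out-of-bounds access that would be silent undefined behaviour in C is either handled explicitly or stopped at runtime.

```rust
// A minimal sketch of what "memory safety" buys. In C, reading or writing
// past the end of a buffer is undefined behaviour and a classic source of
// critical flaws; in Rust, the same mistake cannot silently corrupt memory.
fn main() {
    let buf = [0u8; 4];

    // Bounds-checked access via `get`: an out-of-range index returns
    // None instead of reading adjacent memory, forcing explicit handling.
    assert_eq!(buf.get(7), None);

    // In-bounds access works as expected.
    assert_eq!(buf.get(2), Some(&0u8));

    // Direct indexing, e.g. `buf[7]`, would panic at runtime rather than
    // corrupt memory; the C equivalent is undefined behaviour.
    println!("out-of-bounds access is caught, not exploited");
}
```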

Murray warns that efforts to reduce risky code in software infrastructure like an OS kernel will take time. He spent several years developing “security proofs” for the seL4 microkernel. In 2009, 18 years after the first Linux kernel release, seL4 became the first general-purpose operating system kernel with a machine-checked proof that its code meets its specification.

“New technology that’s being built always sits on top of old infrastructure that lasts a very, very long time, especially in software,” says Murray.

“We’re going to be finding vulnerabilities in that code for decades to come. And that’s true of so much of the open source ecosystem, but just as true of proprietary code,” he says.
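The “security proofs” Murray refers to are machine-checked theorems that an implementation satisfies a formal specification. As a toy illustration of the shape of such a proof (written in Lean; seL4’s actual proofs use Isabelle/HOL and run to hundreds of thousands of lines):

```lean
-- Toy illustration of code-level verification: a specification and a
-- machine-checked proof that the implementation meets it. seL4's real
-- proofs follow the same pattern at vastly larger scale.

-- Implementation: clamp a value to a maximum.
def clamp (max x : Nat) : Nat := if x ≤ max then x else max

-- Specification: the result never exceeds the bound.
theorem clamp_le (max x : Nat) : clamp max x ≤ max := by
  unfold clamp
  split
  · assumption            -- case x ≤ max: result is x, bounded by hypothesis
  · exact Nat.le_refl max -- case x > max: result is max itself
```

The proof checker verifies every case; unlike testing, nothing is sampled, so the guarantee covers all inputs.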
