
AI Leaders Call for Safety Commitments Ahead of the Federal Election

Australian AI experts urge action as Australia remains the only Seoul Declaration signatory yet to deliver on commitments for a national AI Safety Institute.

CANBERRA, ACT, AUSTRALIA, March 24, 2025 /EINPresswire.com/ -- Today, Australia’s AI safety leaders united in a call for the next government to act on AI safety. The public statement, which is open for support from other Australian AI experts as well as members of the public, says that action on safety is necessary for Australia to fully seize AI’s opportunities.

The experts are calling for the Australian Government to deliver on its commitments by creating an Australian AI Safety Institute (AISI). In May 2024, Australia and other participants at the Seoul AI Summit committed to “create or expand AI safety institutes”. Australia is the only signatory of the Seoul commitments yet to establish an AISI.

Australian philosopher Dr Toby Ord, Senior Researcher at Oxford University and author of The Precipice: Existential Risks and the Future of Humanity, said, “Australia risks being in a position where it has little say on the AI systems that will increasingly affect its future. An Australian AI Safety Institute would allow Australia to participate on the world stage in guiding this critical technology that affects us all.”

During last year’s Senate Select Committee Inquiry into AI Adoption, Senator David Pocock recommended that such an institute be created, but neither major party took a position.

Greg Sadler, coordinator of Australians for AI Safety, said, “It sets a dangerous precedent for Australia to formally commit to specific actions but fail to follow through. Australia is the only signatory that is yet to meet its obligations.”

The Seoul Declaration isn’t the only commitment an AISI would help fulfil. Australia’s AI Ethics Principles call for “transparency and explainability” as well as “reliability and safety”. Currently, frontier AI systems are not transparent and can be hard to predict. An AISI could lead the technical work necessary to tackle these challenges and deliver the AI Ethics Principles.

The AI experts are also calling for an Australian AI Act. Minister Ed Husic ran a series of consultations on safe and responsible AI, culminating in a paper about imposing mandatory guardrails on high-risk AI systems. The experts argue that the next Parliament needs to turn talk into action.

Australian Professor Paul Salmon, Centre for Human Factors and Sociotechnical Systems, said, “I support the creation of an Australian AI safety institute and the implementation of an AI Act. Both are urgently required to ensure that the risks associated with AI are effectively managed. We are fast losing the opportunity to ensure that all AI technologies are safe, ethical, and beneficial to humanity.”

This open letter compares AI safety to aviation safety, arguing that Australians are hesitant to adopt AI because Australia has yet to build the frameworks to give confidence that AI is safe and secure.

Yanni Kyriacos, Director of AI Safety Australia & New Zealand, said, “Robust assurance justifies trust. We’re all excited about the potential opportunities of AI, but not enough work is currently happening to address genuine safety concerns. It’s easy to understand why Australians are hesitant to adopt AI while these big issues are outstanding.”

A 2024 survey shows that Australians overwhelmingly want strong AI oversight: 8 in 10 believe Australia should lead international AI governance, 9 in 10 support the creation of a new government regulatory body for AI, and respondents identified preventing dangerous and catastrophic AI outcomes as the top priority. A 2023 Ipsos survey found Australians to be the most nervous about AI globally.

In conjunction with this letter, Australians for AI Safety is also launching a national scorecard where voters can compare and stay up to date on the positions of parties and candidates on AI safety policies as they are released in advance of the election.

The letter will remain open for support by Australian experts and members of the public until election day.

Greg Sadler
Australians for AI Safety
+61 401 534 879
greg@goodancestors.org.au
