
Securing AI innovation without stalling it

Evolving from a risk-averse approach

In today’s rapidly evolving technological landscape, organisations are challenged to embrace artificial intelligence (AI) while maintaining robust risk management.

Traditionally, cyber security teams have operated with a risk-averse mindset, focused on stopping threats and minimising exposure. However, this approach can unintentionally slow innovation, especially as AI becomes more integral to government operations.

VMIA Chief Information Security Officer, Ian Pham, shared his insights on this subject at the recent annual PSN Victorian Government Cyber Security Showcase. Following the event, we invited Ian to expand on his key themes in a dedicated interview.

Shifting the risk mindset

Ian emphasised the importance of shifting cyber security teams’ mindset from simply avoiding risk to actively managing it. By understanding risk in the context of organisational objectives, teams can move forward safely and confidently.

Ian’s analogy, comparing his approach to risk management with parenting, illustrates the shift from reacting to every perceived threat to focusing on what truly matters. This evolution is crucial for cyber teams: moving from gatekeeping to guiding.

When adopting AI, locking everything down from the outset can stall innovation or encourage teams to bypass governance. Instead, secure sandboxes should be created for experimentation, with defined guardrails such as starting with synthetic or non-sensitive data, clear monitoring, defined usage conditions, and strong identity and access controls. This environment allows teams to test, learn, and fail fast, without exposing the organisation to unacceptable risk. As capability grows, exposure is gradually increased, supported by strong data governance and continuous risk reassessment.
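The guardrails described above can be illustrated with a small sketch. The names here (SandboxPolicy, ExperimentRequest) are hypothetical, not part of any real framework or VMIA system: before an AI experiment runs, the policy checks that the data is synthetic or otherwise non-sensitive and that the requester holds a sandbox role, and it records every decision for monitoring.

```python
# Hypothetical sketch of sandbox guardrails: non-sensitive data only,
# role-based access, and an audit trail for monitoring.
from dataclasses import dataclass, field

# Start with synthetic or public data only; widen as capability grows.
ALLOWED_CLASSIFICATIONS = {"synthetic", "public"}

@dataclass
class ExperimentRequest:
    user: str
    user_roles: set
    data_classification: str  # e.g. "synthetic", "public", "confidential"
    purpose: str

@dataclass
class SandboxPolicy:
    required_role: str = "ai-sandbox-user"
    audit_log: list = field(default_factory=list)

    def check(self, req: ExperimentRequest) -> tuple:
        """Return (allowed, reason) and log the decision for monitoring."""
        if self.required_role not in req.user_roles:
            decision = (False, "user lacks sandbox role")
        elif req.data_classification not in ALLOWED_CLASSIFICATIONS:
            decision = (False, "data classification not allowed in sandbox")
        else:
            decision = (True, "approved within guardrails")
        self.audit_log.append((req.user, req.purpose, decision))
        return decision
```

Because every request, approved or refused, lands in the audit log, risk is made visible rather than silently blocked, which is the point of the guardrail approach.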

Embracing AI safely

AI introduces new risks, including data leakage and privacy concerns, potential misuse of AI tools, unapproved use of AI applications (shadow AI), data quality issues, and challenges in managing authorised access to these systems. The goal is not to eliminate these risks entirely, but to make them visible, understood and manageable. Educating and engaging the organisation, especially decision makers, is essential to provide assurance and build confidence in new ways of working.
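Making shadow AI visible can be as simple as comparing observed traffic against a list of approved tools. The sketch below assumes a curated allow-list and a few naive keyword hints; real detection would rely on a maintained catalogue of AI service domains rather than string matching.

```python
# Hypothetical sketch: surface unapproved ("shadow") AI use by flagging
# hosts that look like AI services but are not on the approved list.
APPROVED_AI_HOSTS = {"copilot.internal.example", "approved-llm.example"}

# Naive hints for illustration only; production tooling would use a
# curated domain catalogue, not substring matching.
AI_HINTS = ("gpt", "llm", "copilot", "chat")

def flag_shadow_ai(request_hosts):
    """Return hosts that appear AI-related but are not approved."""
    return sorted(
        host for host in request_hosts
        if host not in APPROVED_AI_HOSTS
        and any(hint in host.lower() for hint in AI_HINTS)
    )
```

The output is a report for engagement and education, not an automatic block, consistent with making risk visible and manageable rather than simply prohibiting it.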

“A common misconception is that security inevitably slows innovation. In practice, embedding security early and focusing on real risks actually accelerates progress because there is greater confidence.”

Security should evolve from being a control function to an enabling function, supporting the organisation’s purpose and fostering innovation. Success in the age of AI will come from managing risk intelligently while continuing to move forward.
