What better place to discuss the rise of generative AI in cybersecurity than SXSW Sydney, where thought leaders, industry experts and innovators converge to peer into the future. In a room packed with attendees keen to learn how AI is set to protect us against an increasingly sophisticated threat landscape – where hackers have access to the same technology but without the guardrails – cyber-safety and digital-wellbeing expert Yasmin London quizzed a panel of experts on the transformative impact AI is having on our digital defences.

“AI is on every major news cycle in the country, across the globe. All industries are affected,” she said. “Educators are wondering what it means for learning. Authors are seeking injunctions to stop their content being used to train ChatGPT. And, of course, we’re all wondering what the rise of AI means for our lives, industries and workplaces.”

Here’s what we learned at SXSW Sydney when she put those questions to the people in the know.

Helping us find the needle in the haystack

“Three years ago, my team was looking at threats across about 80 million signals, which is anything from an email to a file being shared. Last week, we were scanning 240 billion for threats,” says Andrew Pade, CommBank General Manager of Cyber Defence Operations and Security Integration, whose team is tasked with constantly scanning for potential threats or disruptions to the network. “And we’re using AI in our process to help identify a behavioural change before it becomes a security event.” As Pade explains it, if you click on a link in an email and a threat actor takes control of your account, your behaviour is going to change. “So we’re using AI to help detect that shift and respond in real time. It’s helping us find the needle in the haystack.”
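For readers curious what that looks like under the hood, here is a toy sketch. It is not CommBank’s system – the features, the suspicious event and the choice of scikit-learn’s IsolationForest are purely illustrative assumptions – but it shows the general idea of modelling an account’s normal behaviour and flagging a sudden shift.

```python
# Minimal behavioural-anomaly sketch: flag account activity that deviates
# from a user's usual pattern. Features and values are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-event features: [hour of day, logins in past hour,
# files shared in past hour, new-device flag]
normal_activity = np.array([
    [9, 1, 2, 0],
    [10, 1, 3, 0],
    [14, 2, 1, 0],
    [16, 1, 4, 0],
    [11, 1, 2, 0],
])

# Fit a model of "normal" behaviour for this account.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_activity)

# A burst of off-hours file sharing from a new device is the kind of
# behavioural shift described above after an account takeover.
suspicious_event = np.array([[3, 12, 40, 1]])
if model.predict(suspicious_event)[0] == -1:
    print("Behavioural change detected - escalate for real-time response")
```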

Deception works both ways

We hear a lot about social engineering being used to deceive people, but the same principles can be turned against cyber criminals. The CSIRO is working to plan and scale Australia’s AI adoption, and its resident AI expert Rita Arrigo identified honeypots as a key example of AI enabling deception to defend against threats, and even learn more about them. “This solution uses generative AI to recreate your whole email platform, social media posts, blog posts and images, and it looks so real that hackers go in there and spend days and days looking at all these fantastic documents – and whoever's got this deceptive environment in their security platform is then able to observe and understand what hackers are looking for and then patch holes in their security systems.”
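Stripped of the generative-AI dressing, the defensive core of a honeypot is a decoy service that exists to be watched. The sketch below is a simplified assumption covering only that logging side; in a real deployment the decoy content, addresses and infrastructure would be far more elaborate and, as Arrigo describes, generated to mirror the organisation being protected.

```python
# Minimal honeypot sketch: serve a decoy page and record what visitors
# look for. Every request tells defenders what an intruder is hunting for.
import logging
from http.server import BaseHTTPRequestHandler, HTTPServer

logging.basicConfig(filename="honeypot.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

DECOY_PAGE = b"<html><body><h1>Internal document archive</h1></body></html>"

class DecoyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Log who asked for what, then serve the bait.
        logging.info("visitor=%s path=%s agent=%s",
                     self.client_address[0], self.path,
                     self.headers.get("User-Agent", "unknown"))
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(DECOY_PAGE)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), DecoyHandler).serve_forever()
```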

AI has joined the cybersecurity arms race

Reformed hacker Bastien Treptel sees first-hand how AI has increased the scale and sophistication of cyber criminals. Now engaged by companies to test their cyber defences, he stays up to date on the tactics and technologies criminals use. “I was invited into a hacking organisation in Southeast Asia – they had 180 staff in a four-storey building with HR managers. And they were using AI to make their job far more efficient,” he says. Hackers are using AI to launch convincing social-engineering attacks, generating deepfake audio that imitates real people’s voices. “They also use it to analyse vast datasets and then craft messages that mimic the writing style of a known colleague or friend, making it more challenging for individuals to discern scams from genuine communications.”

The best defence is learning from the best offence

CommBank’s defences are routinely tested by a team whose role is to attack, often using technologies like generative AI to replicate the actions of a potential attacker. “We’re very fortunate to have a team of hackers who are there to identify vulnerabilities before anyone else can,” says Andrew Pade. “We’re constantly testing our own defences.”

With great power comes great responsibility

The ethical dimensions of AI in cybersecurity also took centre stage, with panellists sharing concerns about potential biases in AI algorithms and the need for transparency. “At the National AI Centre, we’ve done a lot of work around exposing the challenges of generative AI and how to implement it responsibly in business,” says Arrigo. These challenges include safeguarding individual privacy, ensuring AI systems handle sensitive information responsibly, implementing security measures that preserve confidentiality, and encouraging inclusivity in AI development so it draws on diverse perspectives and expertise. “We need to consider how we build that responsible AI muscle.”

Machine unlearning is a thing, too

We all have so much personal data online, and generative AI takes these data points and predicts what we’ll do next. A challenge we’re facing, says Arrigo, is what happens when we want to change and evolve – when we don’t want the algorithm to guess our next move. It’s about cyber safety, but digital freedom, too. “Machine unlearning is a really important concept. And what many people in AI are trying to figure out is, how can we wipe the slate clean?” Treptel believes AI will be able to help us take ownership of our own identity. “Every single person in this room probably has literally in the order of thousands of digital copies of yourself all over the place and there will come a time when we can hit delete.”
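There is no settled recipe for machine unlearning yet, but the simplest (and most expensive) version is exact unlearning: drop the records someone wants forgotten and retrain from scratch. The sketch below uses synthetic data and a scikit-learn model purely as stand-ins to illustrate the goal Treptel and Arrigo describe; real systems pursue cheaper approximations.

```python
# Exact "unlearning" in its simplest form: remove one person's records and
# retrain, so the model can no longer reflect their data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))             # synthetic behavioural features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # what the model tries to predict
user_ids = rng.integers(0, 20, size=200)  # which person each row belongs to

model = LogisticRegression().fit(X, y)

def forget_user(user_id, X, y, user_ids):
    """Retrain without any rows belonging to user_id."""
    keep = user_ids != user_id
    return LogisticRegression().fit(X[keep], y[keep])

# "Hit delete" for user 7: their data no longer shapes predictions.
model = forget_user(7, X, y, user_ids)
```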

Basic principles are still effective, especially for small and medium businesses 

Technology providers have significantly improved the security features included with their products. Small and medium businesses don’t necessarily need the resources and expertise of the bigger players; they can secure their assets by using built-in features and following the free guidance of organisations like the Australian Cyber Security Centre. “The big game-changer for small businesses has been that technology platforms now have built-in cybersecurity so they get all the advantages of a big corporation,” says Pade. Understanding the threat matters, too: “At the National AI Centre, we want every single Australian to learn an AI micro-skill and then a generative micro-skill, because understanding what’s happening can help you not only protect yourself but also innovate,” says Arrigo.

Gen AI is helping to tackle burnout

“After decades on the job, I’ve seen so many of my peers burn out because the threats never stop,” says Pade. “One of the great benefits for us is that AI is built with the knowledge of our most senior people. These models support all our analysts with senior-level guidance as our teams around the globe manage the growing number of threats.”

To learn more from leading industry experts, head to CommBank Foresight™ – insights for future-facing businesses.

Things you should know

  • This article is intended to provide general information of an educational nature only. It does not have regard to the financial situation or needs of any reader and must not be relied upon as financial product advice. You should consider seeking independent financial advice before making any decision based on this information. The information in this article and any opinions, conclusions or recommendations are reasonably held or made, based on the information available at the time of its publication but no representation or warranty, either expressed or implied, is made or provided as to the accuracy, reliability or completeness of any statement made in this article. Commonwealth Bank of Australia ABN 48 123 123 124. AFSL and Australian Credit Licence 234945.