Group AI policy

  • Artificial Intelligence (AI) is developing rapidly. Appropriate use of AI will help us to achieve our purpose of building a brighter future for all. Failure to govern and manage AI systems effectively may lead to incorrect decision-making, compliance failures and poor customer outcomes, with consequent reputational damage, financial loss, fines and/or penalties for the Group.

    Our approach to artificial intelligence establishes the principles and compliance requirements that apply to the design, development, deployment and use of AI systems. Our AI Policy may intersect with the Group Model Risk Policy where the AI system is also a model, and with the Group Supplier Lifecycle Policy where the AI system is sourced from a supplier.

    Our six AI principles are:

    • Human, social and environmental well-being
    • Fairness
    • Transparency
    • Privacy and Security
    • Reliability and Safety
    • Accountability

    Read our Group AI policy

Our definition of AI

  • We view AI as a machine-based system that independently learns from data. For a given set of human-defined objectives, AI can generate outputs such as content, predictions, recommendations, or decisions that influence the environments it interacts with. AI includes technologies such as machine learning (which identifies patterns and relationships in data, including supervised, unsupervised and reinforcement learning), dynamic or adaptive models, speech recognition, natural language processing and computer image recognition.

Our AI Principles and how we apply them 

The following six AI Principles must be applied in the design, development, deployment and use of our AI systems: 

Human, social and environmental well-being

AI systems should advance human, social and environmental well-being and facilitate respect for human rights, diversity, and the autonomy of individuals. Justifying the balance of potential harms and benefits that an AI system delivers means that other solutions, including not deploying any system, have been considered and ruled out because they do not realise the same overall benefits as the AI system.


Fairness

When designing, developing, deploying and using AI systems, the Group should aim to treat all people fairly and must not unfairly discriminate. Where AI systems contribute to decisions and outcomes, those decisions and outcomes are judged against the same standards that would apply to decisions and outcomes made entirely by humans.


Transparency

AI systems should be transparent so that their manner of operating and their outputs can be readily understood, reproduced and, where appropriate, contested. Reference resources must be written in plain language and at an appropriate level of detail.

Privacy and Security 

AI systems must ensure the security of data and comply with privacy and data protection laws, as well as with the Group Privacy Policy and Group Information Security Policy.

Reliability and Safety 

AI systems should operate reliably and perform consistently, in accordance with their intended purpose. AI systems should not pose unreasonable safety risks, and should incorporate safety measures proportionate to the potential risks.


Accountability

Human oversight of AI systems is necessary. The Group Employee(s) accountable for an AI system should be identified, and accountabilities for the AI system across its entire lifecycle documented. There must be sufficient oversight by individuals with relevant expertise in the technology and in the intended use, benefits and risks of the AI system. Application of this principle will vary according to the degree of autonomy and criticality of the AI system.

Learn more about how we are using AI at CBA 

Using Artificial Intelligence to deliver personalised customer experiences

Learn how CBA uses AI to deliver personalised experiences

Making AI fit for purpose

Learn about the ethical AI framework