There is a particular kind of risk that keeps experienced insurance people up at night. Not the risks they've priced or the risks they've accounted for in their risk registers. The risks they haven't noticed yet.
AI is one of those risks right now. Not because the technology is inherently dangerous, but because the regulatory response to AI is finally gathering speed. For many insurance businesses, the gap between what they do and what they're supposed to do is quietly widening.
Over the past year, I've been deep inside an AI bubble: in my own businesses, in my personal development, and in conversations with other early AI adopters in the Australian and Lloyd's markets. Here is a real-world take on the regulatory landscape, covering the obligations in place today, the approaching deadlines, and the decisions insurance businesses need to make now rather than later.
The regulatory landscape nobody has mapped clearly
In Australia, insurers do not yet face a single, AI-specific insurance rulebook. Instead, AI use is pulled into existing privacy, cyber, outsourcing and governance obligations.
The Australian Privacy Act and the Australian Privacy Principles
The Privacy Act 1988 and its 13 Australian Privacy Principles (APPs) apply to APP entities, which include most insurers, intermediaries and service providers that handle personal information. For insurers and coverholders, this means virtually every piece of customer data you touch.
The most immediately relevant principle is APP 6, which governs secondary use and disclosure of personal information. The core obligation is clear: personal information collected for one purpose cannot be used or disclosed for another purpose without consent or a specific exemption.
When a broker or underwriter drops a client's proposal form into a consumer AI tool, they are disclosing that personal information to a third party (the AI vendor) for a purpose the customer never consented to.
The OAIC has been explicit about this. Their published guidance states that, as a matter of best practice, organisations should not enter personal information, and particularly sensitive information, into publicly available generative AI tools, due to the significant and complex privacy risks involved. The OAIC even provides a worked example of an insurance company whose employees enter a customer's claims data, including sensitive health information, into a publicly available chatbot, framing it as a clear compliance failure under APP 6.
The December 2026 ADM deadline
The Automated Decision Making (ADM) transparency requirement will take effect on 10 December 2026, under new APPs 1.7, 1.8 and 1.9 introduced by the Privacy and Other Legislation Amendment Act 2024. These obligations apply to any decision made on or after that date, regardless of when the underlying system was deployed.
APP entities that use automated systems, including AI, to make or significantly influence decisions that could reasonably be expected to materially affect an individual's rights or interests must disclose this in their privacy policies. The required disclosures are specific: the kinds of personal information used, the types of decisions involved, and the level of human involvement. Vague statements about "using technology to improve our services" will not be sufficient.
Nine months is not long when you factor in the time required to identify all AI touchpoints, draft compliant policy language, obtain legal review, and publish updates.
Lloyd's MS11 and the CDM requirements
For Lloyd's businesses, MS11 is an important part of the control environment. It is addressed to managing agents, but it can shape expectations around delegated operations, suppliers and data governance across the wider chain.
MS11 does not contain AI-specific requirements; Lloyd's has yet to publish dedicated AI guidance, but its general data governance framework applies directly to AI usage. The relevant provisions are:
- CDM 2.2 requires a documented data governance framework with director-level accountability
- CDM 2.3 requires controls to maintain data integrity and prevent unauthorised access or disclosure
- CDM 1.5 requires formal risk assessment and security arrangements for third-party vendors
An insurance business that has staff using consumer AI tools to process policyholder data, without a documented governance framework and vendor assessment, is likely non-compliant with CDM 2.3, regardless of what the Australian Privacy Act says. The absence of AI-specific Lloyd's guidance creates regulatory flexibility, but it does not create a safe harbour.
APRA CPS 234
For APRA-regulated entities, which include many insurers operating in Australia, CPS 234 on Information Security adds a further layer. While CPS 234 predates the current AI wave, its requirements around information asset classification, third-party management, and incident response apply to AI systems and the data they process.
For coverholders and MGAs that are not themselves APRA-regulated, CPS 234 may still be relevant through contractual flow-down from carrier partners. It is worth checking your binder and delegated authority agreements carefully.
The platform problem nobody wants to talk about
Understanding the regulatory framework is one thing. The operational reality is another.
The consumer AI tools most insurance professionals use were not designed for regulated industries. They offer no Australian data residency, no enterprise data processing agreements, and in several cases default to using submitted content to train future model versions.
This is a documented risk, not a hypothetical one. ChatGPT Free and Plus, to take one prominent example: data is used for training by default, no data processing agreement is available, there are no data residency controls, and there is no administrative oversight for organisations.
Enterprise tiers address most of these issues, but at a higher cost and with ongoing management overhead. The more interesting question for many organisations is whether there is a compliant path that does not require expensive enterprise contracts.
For organisations already running Google Workspace, the answer is often yes. Google Workspace Gemini, included in Enterprise Standard subscriptions at no additional cost, offers contractual guarantees of no training on customer data, ephemeral processing, Australian data residency, native DLP integration, and ISO 42001 certification. This is not a product recommendation. It is an observation about where the compliance bar currently sits and which platforms clear it without additional investment.
What good governance actually looks like
Classify data before choosing tools. Not all insurance data carries the same risk. Internal communications, market research, and publicly available information can be processed by a wider range of tools than sensitive PII or claims data. A simple data classification framework (public, internal, confidential, sensitive) that maps each tier to approved tool usage is the foundation of everything else.
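To make the mapping concrete, here is a minimal sketch in Python. The tier names follow the framework above; the tool names are hypothetical placeholders, not product recommendations.

```python
# Minimal sketch: map each classification tier to the tools approved for it.
# Tool names below are hypothetical placeholders, not recommendations.
APPROVED_TOOLS = {
    "public":       {"consumer_chatbot", "enterprise_assistant"},
    "internal":     {"enterprise_assistant"},
    "confidential": {"enterprise_assistant"},
    "sensitive":    set(),  # no AI processing without a documented exception
}

def is_permitted(classification: str, tool: str) -> bool:
    """Return True if the tool is approved for data at this tier."""
    return tool in APPROVED_TOOLS.get(classification, set())

# A claims file containing health information sits in 'sensitive',
# so no tool clears the bar by default.
assert not is_permitted("sensitive", "consumer_chatbot")
assert is_permitted("public", "consumer_chatbot")
```

The point is not the code; it is that the approval decision becomes a lookup anyone can apply, rather than a judgment call each staff member makes alone.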
Make the compliant path the easy path. The OAIC's own case study illustrates the governance failure mode precisely: staff are not deliberately circumventing controls; the compliant tool simply requires more steps than the non-compliant one already open in their browser. Governance that creates friction will be worked around.
Document the decisions. When an auditor or OAIC investigator asks what controls you have in place, the answer is not a verbal description of what you think your staff are doing. It is a policy document, a training record, a vendor assessment, and a review log. The documentation is the evidence.
Assign director-level accountability. Under Lloyd's CDM 2.2, this is a requirement. In practice, it also changes behaviour: when a named director is responsible for AI governance outcomes, the governance work gets done.
What to do now
If you are reading this and recognising your own organisation, the practical starting point is not a technology project. It is an audit.
Map where AI is being used across your business: not just the approved tools, but the actual behaviour. Survey your staff. You will likely find things that surprise you.
Then work backwards: which of those use cases involve personal information? Which platforms are those use cases running on? Do those platforms have appropriate data processing agreements, residency controls, and training opt-outs?
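One way to capture those answers is a simple use-case register that flags unresolved gaps. A minimal sketch, with field names and the example platform assumed for illustration:

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """One row in an AI usage register, built from the questions above."""
    description: str
    platform: str                 # hypothetical platform identifier
    handles_personal_info: bool
    has_dpa: bool                 # data processing agreement in place?
    has_residency_controls: bool
    training_opt_out: bool

def exposure_flags(uc: AIUseCase) -> list[str]:
    """Return the unresolved compliance gaps for a single use case."""
    flags = []
    if uc.handles_personal_info:
        if not uc.has_dpa:
            flags.append("no data processing agreement")
        if not uc.has_residency_controls:
            flags.append("no data residency controls")
        if not uc.training_opt_out:
            flags.append("submitted content may train future models")
    return flags

# The OAIC's claims-chatbot scenario would surface all three gaps.
claims_triage = AIUseCase(
    description="Pasting claims notes into a consumer chatbot",
    platform="consumer_chatbot",
    handles_personal_info=True,
    has_dpa=False,
    has_residency_controls=False,
    training_opt_out=False,
)
print(exposure_flags(claims_triage))
```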
The answers will tell you where your exposure is. From there, the path forward is straightforward, if not quick.
The December 2026 deadline is confirmed in law, and the OAIC is already enforcing: its January 2026 compliance sweep covered roughly 60 organisations. The businesses that treat this as a compliance checkbox will do the minimum required. The businesses that treat it as a genuine governance challenge will build something durable.
The gap between those two groups will become more visible over the next nine months.
Sources
| Claim | Source |
|---|---|
| Three-tier civil penalty structure | Landers & Rogers Privacy Reform Update |
| ADM transparency obligations, APP 1.7–1.9, December 2026 | OAIC APP 1 Guidance |
| OAIC compliance sweep, January 2026, ~60 organisations | Landers & Rogers 2026 Privacy Update |
| OAIC guidance on AI tools and APP 6, insurance scenario | OAIC Guidance on Commercially Available AI |