There is a conversation happening in almost every business right now, and it is the wrong conversation.

It goes something like this: Will AI take our jobs? Should we be worried? How many people will be in this team in five years? The conversation is laced with legitimate fear and conducted in hushed tones between people who have spent twenty years building expertise.

And the fear is founded. Block cited AI-driven productivity as a key reason behind shedding more than 4,000 jobs, close to 40 per cent of its workforce (Reuters). WiseTech, the Sydney-based logistics software company, announced plans to cut about 2,000 positions, nearly a third of its global headcount, as part of a two-year restructure centred on an AI-led overhaul (Reuters). And last week, Atlassian, the Sydney-born company behind Jira and Confluence, announced it would cut around 1,600 people, roughly 10 per cent of its global workforce, to redirect capital toward AI and enterprise sales.

The headlines are real. The scale is real. The anxiety is understandable.

But there is something important hiding in those three announcements, and most of the coverage is missing it entirely.

Not all cuts are the same

Block and WiseTech framed their reductions primarily as efficiency plays: AI doing work that people previously did. Atlassian is telling a different story, and it deserves more attention.

Atlassian is not cutting from weakness. Cloud revenue grew 26 per cent last quarter. Its Rovo AI assistant already has five million monthly active users. The company is, by most measures, executing well. Co-CEO Mike Cannon-Brookes said directly that the decision was to "self-fund further investment in AI and enterprise sales" — and, significantly, that the company was reshaping its skill mix, not simply reducing its headcount. "It would be disingenuous," he wrote, "to pretend AI doesn't change the mix of skills we need or the number of roles required in certain areas."

That is a materially different statement from "AI is doing the work, so we need fewer people." It is an acknowledgement that the nature of the work itself is changing — and that the people best positioned to do the new work are not necessarily the people already there.

For any business leader reading this: that is the more confronting implication. Not that AI will do what your team currently does. But that what your team currently does may not be what your business needs in three years.

The cautionary tale hiding in plain sight

While Block and WiseTech were making global news, a quieter story was unfolding at Australia’s biggest bank. Commonwealth Bank moved to replace 45 call centre workers with an AI-powered voice bot. The rationale was straightforward: the bot would reduce inbound call volumes, the workload would shrink, and the roles would become redundant (ABC News).

It did not go to plan.

Call volumes rose rather than fell. Staff were pulled into overtime. Team leaders were drawn back onto the phones. The Finance Sector Union took the matter to the Fair Work Commission. Within weeks, the bank publicly reversed course, acknowledged an error, apologised to affected workers, and reinstated the roles (ABC News).

Read that again carefully. Australia’s most technologically ambitious bank, with a dedicated Seattle tech hub and strategic partnerships with Anthropic and OpenAI, still got it wrong. Not because the AI was inherently useless, but because the bank appears to have pointed it at the wrong operational problem (Commonwealth Bank).

They pointed it at the wrong problem.

The real question your leaders should be asking

The conversation most businesses are having is: how many of our people can AI replace?

The conversation they should be having is: what are our best people doing right now that AI should be doing instead?

These are not the same question. They produce entirely different answers, strategies, and cultures.

The first question is a cost-reduction exercise dressed up as a transformation. The second is an honest audit of where human intelligence is actually being wasted.

In insurance, that misallocation is often hiding in plain sight. McKinsey has estimated that, even in large commercial lines, 30 to 40 per cent of an underwriter’s time is still spent on administrative work such as rekeying data or manually executing analyses, rather than on judgment-heavy underwriting itself (McKinsey).

Chasing documents. Triaging submissions. Formatting renewal packs. Running sanctions checks. Drafting standard correspondence. Compiling bordereaux spreadsheets... all those spreadsheets.

This is not a small inefficiency. It is a structural misallocation of one of the most valuable resources in the business.

What AI is actually good at

There is a useful mental model here, borrowed from Daniel Kahneman’s Thinking, Fast and Slow: the distinction between fast, automatic processing and slow, deliberate reasoning. Kahneman describes “System 1” as quick and intuitive, and “System 2” as slower, more effortful, and more analytical (Scientific American).

AI is already strong at System 1-style tasks: pattern recognition, classification, summarisation, drafting and routing. It is far less reliable in System 2 territory, where context, accountability, ambiguity and judgment dominate. In insurance terms, the opportunity is not wholesale replacement. It is reallocation.

Consider what that looks like in practice across an insurance operation:

Submission triage. A skilled underwriter can receive dozens of submissions a week, many of them outside appetite on first read. AI can help screen, sort and route those submissions before a human reviews them, leaving the underwriter to spend more time on the risks that genuinely require underwriting judgment. McKinsey argues that a large share of underwriting time is still consumed by low-value administrative work, precisely the sort of activity ripe for automation (McKinsey).
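To make the triage idea concrete, here is a minimal sketch of what a rules-based first pass could look like. The appetite rules, thresholds and field names are illustrative assumptions, not any insurer's actual guidelines; a real system would draw these from the underwriting manual and would still route borderline cases to a person.

```python
from dataclasses import dataclass

@dataclass
class Submission:
    line_of_business: str
    sum_insured: float

# Illustrative appetite rules only -- in practice these come from
# the underwriting guidelines, not hard-coded constants.
IN_APPETITE_LINES = {"property", "liability"}
MAX_SUM_INSURED = 50_000_000

def triage(sub: Submission) -> str:
    """Route a submission: decline, refer to an underwriter, or fast-track."""
    if sub.line_of_business not in IN_APPETITE_LINES:
        return "decline"      # outside appetite on first read
    if sub.sum_insured > MAX_SUM_INSURED:
        return "refer"        # large risk: needs underwriting judgment
    return "fast-track"       # routine risk: automated workflow

print(triage(Submission("property", 2_000_000)))  # fast-track
```

The point of the sketch is the split, not the rules: the machine clears the obvious declines and routine risks, and everything that needs judgment still lands on an underwriter's desk.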

Sanctions screening. Australian sanctions compliance guidance expects businesses subject to sanctions laws to take active steps to avoid dealing with designated persons and entities, including, where relevant, checking the DFAT Consolidated List (DFAT). In practice, screening is a rules-based, repeatable process that is well suited to automation, while borderline matches and escalation decisions still require human review.
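The same division of labour applies to screening. A toy sketch, using simple string similarity in place of a production matching engine, and a hard-coded sample where a real system would screen against the full DFAT Consolidated List; the thresholds are illustrative assumptions:

```python
from difflib import SequenceMatcher

# Sample entries only -- a real system screens against the full
# DFAT Consolidated List, refreshed as it is updated.
CONSOLIDATED_LIST = ["Ivan Petrov", "Acme Trading Co"]

AUTO_CLEAR = 0.70   # below this: no plausible match
AUTO_BLOCK = 0.95   # at or above this: near-exact match

def screen(name: str) -> str:
    """Return 'clear', 'escalate' (human review) or 'block'."""
    best = max(
        SequenceMatcher(None, name.lower(), listed.lower()).ratio()
        for listed in CONSOLIDATED_LIST
    )
    if best >= AUTO_BLOCK:
        return "block"
    if best >= AUTO_CLEAR:
        return "escalate"   # borderline match: a human decides
    return "clear"
```

The automation handles the repeatable bulk of the screening; the escalate band is where the human review the guidance expects still happens.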

Claims correspondence. Administrative work in claims teams is a recognised source of delays and a productivity drag, and AI tools are increasingly used to summarise documents, draft routine correspondence, and support handlers with first pass workflows (Risk & Insurance). The human role does not disappear; it shifts upwards towards review, exception handling and difficult conversations.

None of these examples requires replacing a person. They involve giving a person back hours of their week.

The question for leaders

If you are running an insurance business, a brokerage, or a coverholder, the right exercise is not to open a spreadsheet and ask how many FTEs you can remove. The better exercise is to ask one of your best people to track their week, task by task, hour by hour, and then identify which of those tasks truly required the expertise you hired them for.

In many businesses, the answer is confronting. Not because AI has made those skills redundant, but because the business has been applying fifteen years of expertise to work that does not justify it.
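The arithmetic behind that audit is simple enough to sketch. Assuming each tracked task is tagged as judgment work or admin work, with purely illustrative hours:

```python
# One tracked week, hours by task -- all figures are illustrative.
week = [
    ("negotiating a complex renewal", "judgment", 6.0),
    ("rekeying submission data",      "admin",    9.0),
    ("pricing a new risk",            "judgment", 5.0),
    ("chasing documents",             "admin",    7.0),
    ("formatting renewal packs",      "admin",    5.0),
]

total = sum(hours for _, _, hours in week)
admin = sum(hours for _, kind, hours in week if kind == "admin")
print(f"{admin / total:.0%} of the week on admin")  # prints "66% of the week on admin"
```

A number like that, produced from a real week rather than invented figures, tends to end the "how many people can we cut" conversation and start the "what are we wasting their time on" one.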

That is where AI becomes useful: not as a substitute for judgment, but as a tool for clearing the administrative fog.

What good governance looks like

The CBA case is instructive not just as a failure, but as a map. The mistake was not simply introducing AI. It was using AI as a workforce-reduction mechanism before proving, in live operating conditions, that it could safely absorb the work it was supposed to handle (ABC News).

Good AI governance in a regulated financial services business looks like this:

Prove it in live conditions first. Pilot the system on real work, with the existing team still in place, before any role is declared redundant.

Measure what you claimed it would deliver. CBA expected call volumes to fall; they rose. If the promised workload reduction does not show up in the data, the redundancy case collapses with it.

Keep human escalation paths open. The cases an AI system cannot handle do not disappear; they land on whoever is left, as CBA's team leaders discovered when they were drawn back onto the phones.

Preserve the ability to reverse. CBA's saving grace was that it could acknowledge the error, apologise and reinstate the roles within weeks. Build that option in before you need it.

The businesses that will build a durable advantage from AI over the next five years are not the ones that move fastest to cut headcount. They are the ones that most clearly understand which parts of their operation require human judgment, and then automate everything else with discipline.

The longer view

Jack Dorsey said he believed a majority of companies would reach the same conclusion as Block and make similar structural changes within a year (Reuters). He may be right about the scale of change. He is much less obviously right about the mechanism.

The businesses that will be in the best shape in 2030 are not the ones that replaced the most people. They are the ones that freed their best people from the least valuable work, and gave them the space to do the things that AI still cannot do well: judge nuance, build trust, negotiate ambiguity, and make accountable decisions.

The long lunch is not coming back because AI is taking over.

It is coming back because, for the first time, your people might actually have time to take one.

Sources: Fortune (Feb 2026), Reuters/Yahoo Finance (Feb 2026), Retail Banker International (Jul 2025), For Every Scale (Aug 2025)