
Daniel D'Souza
Head of Information Security Solutions, Canon Business Services ANZ

Daniel D'Souza is a highly accomplished Information Security professional with more than a decade of experience. His career has spanned multiple market sectors including finance, insurance, technology, education, and consulting, the latest of which led him to join the dynamic team at Satalyst, a Canon Business Services Australia company, as an Information Security Manager. In this role, Daniel was instrumental in helping customers safeguard their digital assets, protect their data, and mature their Information Security control environment.

In recognition of his expertise, Daniel then moved into a pivotal secondment as Manager of IT Governance, Risk & Compliance within Canon Business Services. His scrupulous oversight in ensuring key security audits and assessments were delivered has not only strengthened the implementation of CBS’ governance framework, but also underpinned a robust security posture for both CBS and its customers.

Currently serving as the Head of Information Security Solutions at CBS, Daniel’s insightful approach to cybersecurity leadership plays a key role in ensuring CBS customers leverage the latest in Information Security technology and services. In this role he brings together strategic vision and a team of highly skilled cyber security professionals with vast real-world experience in reducing business risk through cyber resilience. 

Last updated Tuesday 20 January 2026

Summary: In 2026, enterprise technology is entering a new phase where AI moves beyond experimentation to become secure, governed, and integral to business operations. Organisations are increasingly adopting private AI models connected to corporate data, deploying small language models at the edge, and leveraging advanced automation to modernise legacy systems. Emerging technologies like world models and AI-enhanced cybersecurity measures are reshaping industries, while quantum computing and Zero Trust security demand strategic attention. Leadership is shifting toward collaborative governance to manage the complexity and rapid pace of change. To succeed, businesses must clarify their AI strategies, focus on high-impact use cases, and invest in governance and education, partnering with trusted experts to turn AI potential into measurable business outcomes without sacrificing sustainability, performance, or control across enterprise operations.


Tech trends 2026: From AI hype to guardrails

If 2024–25 was the year everyone “played” with AI, 2026 is when the experts start supervising playtime.

The proof-of-concept phase is coming to an end. Boards are asking, “What are we actually getting for this spend?” CISOs are exhausted. CIOs are trying to modernise without blowing up risk and still deliver competitive advantage, reduce costs, and prove value across enterprise operations.

And across the top end of town, there’s a quiet shift from “let’s try a bot,” to “how do we govern and safeguard an army of AI agents that can spend money, deploy code, or potentially delete a production database?” — a strategic priority that now sits inside the enterprise strategy, not beside it.

According to IDC, AI and data platforms are seeing the fastest investment growth in Australia, jumping 24.4% year-on-year, while AI-related spend is expected to reach 20–25% of IT budgets by 2026. That shift captures the mood perfectly: AI is moving from experimentation to enterprise-grade capability.

For Canon Business Services ANZ (CBS) experts Raji Haththotuwegama (National Solutions Adviser - Data & AI) and Daniel D'Souza (Head of Information Security Solutions), one theme comes through again and again:

Shiny AI features won’t define 2026. Guardrails, governance, and the hard work of making AI useful, safe, and sustainable will define it.

Here’s how that plays out.

1. Enterprise AI grows up: From PoCs to production solutions.

Most organisations that wanted to experiment with AI have already done it. They’ve run pilots, built a chatbot or three, and watched staff quietly paste sensitive content into public tools.

Gartner’s outlook backs this, noting that organisations are accelerating investment in AI governance and custom-trained models, with demand for secure enterprise copilots rising across every major industry.

In New Zealand, AI and data analytics are among the top service priorities through 2026, as more organisations move from pilots to production.

Now the focus is shifting to enterprise-wide, secure AI and a more disciplined enterprise strategy for integrating data, data storage, and access controls:

Private LLMs as standard

The direction of travel is clear: if you’re large enough, you’ll have your own enterprise LLM instance or corporate chat environment, hosted privately, wired into your identity systems, and sitting behind your governance controls, so you can maintain control of sensitive information, tools, and workflows.


Stage 1: Secure the model.

First step: get a private model you control. You could use a locally hosted open-source model, a cloud-hosted model in your own tenancy, or something like Copilot in your Microsoft tenant, but the key is to keep data inside your boundary and align the solution to enterprise operations and risk posture.
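As a rough illustration of what Stage 1 looks like in practice, here is a minimal sketch of calling a privately hosted, OpenAI-compatible endpoint. The URL, model alias, and token handling are placeholders rather than any specific product’s configuration; the point is simply that prompts and responses stay inside your own boundary.

```python
# Minimal sketch: calling a privately hosted, OpenAI-compatible model endpoint.
# Assumptions: an open-source model is served inside your own network boundary
# (for example behind vLLM or a similar gateway) at PRIVATE_ENDPOINT, and access
# tokens are issued by your identity provider. Both values below are hypothetical.
from openai import OpenAI

PRIVATE_ENDPOINT = "https://llm.internal.example.com/v1"  # hypothetical internal URL
MODEL_NAME = "corporate-llm"                              # hypothetical model alias

client = OpenAI(base_url=PRIVATE_ENDPOINT, api_key="token-issued-by-your-idp")

response = client.chat.completions.create(
    model=MODEL_NAME,
    messages=[
        {"role": "system", "content": "You are the corporate assistant. Data must not leave this tenancy."},
        {"role": "user", "content": "Summarise our leave policy in three bullet points."},
    ],
)
print(response.choices[0].message.content)
```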


Stage 2: Connect it to your data.

The second step is where the value lands: safely connecting that model to your corporate data so staff can ask, “What does this contract say?” or “Summarise all incidents involving this client in the last 12 months” without breaching policy.

This is exactly where Raji is seeing demand: “We’re already getting proposals across the board for ‘secure corporate chat’ — secure copilots that understand your data but don’t leak it.”
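Under the hood, most of these “secure corporate chat” builds follow a retrieval-augmented pattern: find the relevant, permission-checked passages first, then let the model answer only from them. The sketch below is a deliberately toy version (keyword overlap instead of an embedding index, hard-coded chunks instead of a document store) to show the shape of the flow, not a production design.

```python
# Toy retrieval-augmented sketch: ground the private model in corporate data
# before it answers. Assumptions: documents are already chunked and access-checked
# for the requesting user; the scoring here is naive keyword overlap, where a real
# deployment would use an embedding index with per-document permissions.

def retrieve(question: str, chunks: list[dict], top_k: int = 2) -> list[dict]:
    """Rank chunks by naive keyword overlap with the question."""
    q_terms = set(question.lower().split())
    scored = sorted(chunks, key=lambda c: len(q_terms & set(c["text"].lower().split())), reverse=True)
    return scored[:top_k]

corporate_chunks = [
    {"source": "MSA-AcmeCorp.pdf", "text": "Termination requires 90 days written notice by either party."},
    {"source": "incident-register.csv", "text": "Two priority-1 incidents were logged for AcmeCorp in the last 12 months."},
]

question = "What notice period does the AcmeCorp contract require?"
context = "\n".join(f"[{c['source']}] {c['text']}" for c in retrieve(question, corporate_chunks))

prompt = (
    "Answer using only the context below and cite the source in brackets.\n"
    f"Context:\n{context}\n\nQuestion: {question}"
)
print(prompt)  # in practice, this prompt is sent to the private model from the previous sketch
```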

The message for 2026: if you’re still relying solely on public ChatGPT for staff productivity, you’re behind the curve and outside your own risk appetite, especially in regulated sectors like finance and healthcare.

2. Small language models and AI at the edge.

While large models soak up headlines, small language models are quietly becoming the real disruptor.

Because they have a much smaller footprint and can still deliver highly accurate outputs for specific tasks, they can:
  • Run on devices: laptops, iPads, even industrial controllers
  • Operate in disconnected environments: mining sites, remote operations, deep inside plants
  • Power edge AI on drones, in mineshafts, or on factory floors

Instead of streaming everything back to an LLM in the cloud, organisations can run focused models close to where work happens. That’s a big shift for industries like resources, logistics, and advanced manufacturing, where connectivity is patchy or latency is critical, and where scalability and performance matter as much as innovation.
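To make the idea concrete, here is a minimal sketch of a small language model running entirely on a device using llama-cpp-python. The model path and the maintenance-log example are hypothetical; the point is that inference happens locally, with no round trip to a cloud LLM.

```python
# Minimal on-device sketch: a small language model answering locally with no
# network call. Assumptions: llama-cpp-python is installed and a quantised GGUF
# model file has already been copied to the device (the path below is hypothetical).
from llama_cpp import Llama

slm = Llama(model_path="/opt/models/small-model-q4.gguf", n_ctx=2048)

result = slm(
    "Classify this maintenance log entry as ROUTINE or URGENT: "
    "'Conveyor 4 bearing temperature trending 15% above baseline.'",
    max_tokens=8,
)
print(result["choices"][0]["text"].strip())
```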

In other words, 2026 is likely the year AI stops being just a “cloud thing” and becomes an on-site capability.


3. Computer use models: A quiet revolution for legacy systems.

One of the most interesting (and least flashy) trends Raji calls out is “computer use models”.

Think of them as the spiritual successor to Robotic Process Automation (RPA), but with an actual brain:
  • Old-school RPA had to know exactly where every button was. Change the UI and the automation broke.
  • Computer use models understand intent. If the “Submit” button has changed, they can still find it.

Why does that matter? Because it finally offers a pragmatic way to:
  • Automate legacy systems with no APIs
  • Interact with old UIs by mimicking human behaviour
  • Unblock automation projects that stalled because “the system is too old” or “we don’t have connectors.”

This lines up with Gartner’s observation that many Australian and New Zealand organisations, particularly in government, still depend on decades-old systems. Modernisation remains a top priority, but economic pressure means technology leaders are looking for solutions that don’t require full system replacements.

For many organisations, computer use models could be the unlock for decades-old platforms that are too expensive to replace but too critical to ignore. 2026 might be the year those “we’ll fix it later” projects become solvable, without a full core replacement, including parts of software development and software deployment workflows that currently rely on manual workarounds.

Done well, this approach can reduce costs, improve quality control, and remove friction across entire processes, without forcing a risky “big bang” rewrite.
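The sketch below illustrates the difference in spirit rather than any vendor’s actual API: instead of clicking a hard-coded coordinate, the automation asks “which element fulfils my intent?” and keeps working when the label changes. The locate_by_intent and click helpers are hypothetical stand-ins for the vision model and desktop driver a real computer use product would provide.

```python
# Conceptual sketch of intent-driven UI automation versus coordinate-based RPA.
# locate_by_intent() and click() are hypothetical stand-ins, not a real product's API.

SYNONYMS = {"submit": {"submit", "lodge", "send", "save"}}  # toy intent vocabulary

def locate_by_intent(screen: dict, intent_verb: str):
    """Toy stand-in: a real system asks a vision model which element fulfils the intent."""
    acceptable = SYNONYMS.get(intent_verb, {intent_verb})
    for element in screen["elements"]:
        if any(word in element["label"].lower().split() for word in acceptable):
            return element["x"], element["y"]
    return None

def click(x: int, y: int) -> None:
    print(f"clicking at ({x}, {y})")

# The legacy UI changed: the button is now labelled "Lodge claim", not "Submit".
screen = {"elements": [{"label": "Lodge claim", "x": 640, "y": 480}]}

target = locate_by_intent(screen, "submit")
if target:
    click(*target)  # coordinate-based RPA would have broken at this point
else:
    print("escalate to a human: intent could not be satisfied")
```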

4. World models: Virtual environments that obey physics.

Generative AI isn’t just about text and images anymore. World models take it further by letting you create virtual environments that behave like the real world, including gravity, momentum, and other physics rules.

This new technology category is starting to get serious attention in the world of industrial simulation. Use cases are emerging fast:
  • Factory and warehouse design. Test layouts, simulate workflows, and explore “what if” scenarios without touching physical infrastructure.
  • Training in hazardous environments. Give workers safe, interactive practice on virtual mining sites, vessels, or plant rooms.
  • Design and construction. Walk through virtual buildings or factory floors generated from prompts, not handcrafted polygons.

Unlike early VR platforms or traditional virtual worlds, these environments can be prompt-driven and generative. Move a crane incorrectly, and the load swings realistically. That makes world models powerful for industries where mistakes in the real world are dangerous or expensive.

World models are already popping up in places like the Wall Street Journal as a “what comes next” storyline for enterprise innovation. Expect 2026 to be more experimental than mainstream here, but the groundwork is being laid now.

5. AI security: Better defence, scarier attacks.

When new technology emerges, security is generally an afterthought. This time, it’s trying very hard not to be.

Cybersecurity remains the fastest-growing technology category, with Australian spend forecast to grow at a 13.5% CAGR through 2030, reaching A$16.68 billion. Managed security services are in high demand as organisations try to keep pace with AI-driven threats and skills shortages.

AI-assisted SOCs.

Security Operations Centres are already leaning into AI:
  • Using AI companions to query logs and alerts
  • Automating parts of triage and response
  • Moving towards self-healing infrastructure, where certain incidents can be automatically contained or remediated

Instead of analysts stitching together threat intel from multiple sources manually, AI can help correlate signals faster and suggest actions, while humans stay firmly in the decision loop.
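A deliberately simplified sketch of that “AI suggests, human decides” pattern is below. The alerts, scoring, and containment threshold are invented for illustration; real platforms work with far richer signals, but the division of labour is the point.

```python
# Illustrative triage sketch of the "AI suggests, human decides" pattern in an
# AI-assisted SOC. The scoring and suggested actions are placeholders for what a
# real platform's models would produce; nothing here is a specific SOC product API.

alerts = [
    {"host": "fin-db-01", "signal": "impossible travel login", "severity": 8},
    {"host": "fin-db-01", "signal": "mass file read", "severity": 7},
    {"host": "hr-laptop-22", "signal": "blocked phishing URL", "severity": 3},
]

def correlate(alerts: list[dict]) -> list[dict]:
    """Group alerts by host and propose an action for the analyst to approve."""
    by_host: dict[str, list[dict]] = {}
    for alert in alerts:
        by_host.setdefault(alert["host"], []).append(alert)
    proposals = []
    for host, items in by_host.items():
        score = sum(a["severity"] for a in items)
        action = "isolate host" if score >= 10 else "monitor"
        proposals.append({"host": host, "score": score, "suggested_action": action})
    return proposals

for proposal in correlate(alerts):
    # The human stays in the decision loop: nothing is executed without approval.
    print(f"{proposal['host']}: score {proposal['score']} -> "
          f"suggest '{proposal['suggested_action']}' (awaiting analyst approval)")
```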

AI-boosted threat intelligence.

Threat intelligence is also being supercharged: global feeds, local CERTs, dark web monitoring, and brand risk services are all feeding into security platforms, so organisations can see not only what’s happening inside their environment, but also:
  • Where their sensitive data has appeared externally
  • Which credentials or records may already be for sale
  • How emerging threats map to their own assets.

CBS, for example, has already layered brand risk monitoring into its SOC services to give customers that external lens — a practical difference when attackers move faster than internal teams can.
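Conceptually, that external lens boils down to continuously matching outside findings against what you actually own. The sketch below fakes both sides with hard-coded examples; in practice the external findings arrive from commercial feeds and monitoring services via their own APIs.

```python
# Minimal sketch of the "external lens": mapping externally observed indicators
# (leaked credentials, dark web mentions) onto your own assets. The feeds and
# identifiers below are made up for illustration only.

external_findings = [
    {"type": "leaked_credential", "value": "j.smith@example.com"},
    {"type": "domain_mention", "value": "portal.example.com"},
]
internal_assets = {"portal.example.com", "vpn.example.com"}
internal_accounts = {"j.smith@example.com", "a.jones@example.com"}

for finding in external_findings:
    if finding["value"] in internal_assets | internal_accounts:
        # A match means an external threat maps directly to something you own.
        print(f"ACTION: {finding['type']} matches an internal asset -> {finding['value']}")
```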

AI-enhanced social engineering.

Unfortunately, attackers have access to the same tools. Daniel sums it up neatly with phishing:
  • Old phishing: Typos, broken grammar, weird phrasing — the usual red flags.
  • New phishing: Perfect English, personalised content, and emails that read better than your own internal comms.

Layer in deepfake audio and video, and suddenly it’s plausible your CFO “called” someone asking them to urgently transfer funds, or your CEO “recorded” a quick approval message.

The core problem in 2026?
“What’s real?” is now a non-trivial security question.

6. Quantum is no longer a sci-fi footnote.

For years, “post-quantum cryptography” felt like a conference panel topic you could safely ignore.

Not anymore.

As Raji points out, the real tipping point is qubit accuracy. With new approaches improving qubit stability:
  • The timeline for practical quantum attacks is shrinking from “10+ years” to “maybe as little as two”.
  • Public-key algorithms such as RSA and ECC, which would take today’s computers thousands if not millions of years to break, could theoretically be cracked by a powerful quantum computer in a fraction of the time, while symmetric algorithms like AES-256 are weakened rather than broken outright.

That doesn’t mean you need to rip-and-replace every encryption mechanism tomorrow. But 2026 is the year boards will start asking:
  • What’s our plan for post-quantum cryptography?
  • Which systems and data would be most exposed if today’s “secure forever” assumptions disappear?
  • Are we prepared for the idea that encrypted data stolen years ago could suddenly become readable?

Forrester expects quantum security spending to jump significantly by 2026, as boards start preparing for attacks that could break today’s cryptography.

It’s early days, but this is a strategic risk question that can’t stay theoretical for much longer.
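A sensible, low-drama first step is a cryptographic inventory: list where your public-key algorithms live and how long the protected data has to stay secret. The sketch below uses a hypothetical system list and a crude prioritisation rule purely to show the shape of that exercise.

```python
# Minimal sketch of a cryptographic inventory, a common first step towards a
# post-quantum plan: flag systems whose public-key algorithms (RSA, ECC, DH) are
# the ones most exposed to quantum attacks. The system list is hypothetical.

systems = [
    {"name": "customer-portal TLS", "algorithm": "RSA-2048", "data_lifetime_years": 7},
    {"name": "backup encryption", "algorithm": "AES-256", "data_lifetime_years": 10},
    {"name": "partner VPN", "algorithm": "ECDHE-P256", "data_lifetime_years": 3},
]

QUANTUM_EXPOSED = ("RSA", "ECDHE", "ECDSA", "DH")

for system in systems:
    exposed = system["algorithm"].startswith(QUANTUM_EXPOSED)
    long_lived = system["data_lifetime_years"] >= 5   # "harvest now, decrypt later" risk
    if exposed and long_lived:
        priority = "migrate first"
    elif exposed:
        priority = "plan migration"
    else:
        priority = "monitor (symmetric algorithms are less exposed)"
    print(f"{system['name']}: {system['algorithm']} -> {priority}")
```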

7. Zero Trust and identity-first security go mainstream.

Zero Trust has been around as a concept for a while. 2026 is when it stops being optional.

As Daniel points out, the model is deceptively simple:
  • Historically: “Log in once, then you’re trusted.”
  • Zero Trust: “Prove who you are every time it matters.”

That shift sounds small, but the operational impact is huge. Over the past two years, cybersecurity has remained the number one investment priority for Australian organisations, driven by Zero Trust adoption, threat detection automation, and regulatory pressure.

Organisations are moving away from castle-and-moat security and towards a world where identity, not the network boundary, determines access. If you’re using SaaS platforms like Microsoft 365, Salesforce, or ServiceNow, this isn’t philosophical; it’s reality. The “network perimeter” barely exists anymore.
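As a thought experiment, “prove who you are every time it matters” looks something like the sketch below: a deny-by-default check evaluated on every request, using identity, device, and context signals. The signals and data structures are illustrative only; real deployments express this through their identity platform’s conditional access policies rather than hand-rolled code.

```python
# Conceptual sketch of per-request, identity-first access decisions. The signal
# names and thresholds are illustrative, not any vendor's policy engine.

def allow_request(user: dict, device: dict, resource: str) -> bool:
    """Deny by default; grant only when identity and device signals all check out."""
    checks = [
        user["mfa_verified"],
        device["compliant"],                      # e.g. disk encryption, patch level
        resource in user["entitled_resources"],   # least privilege
        not user["risk_flags"],                   # e.g. impossible travel, leaked credential
    ]
    return all(checks)

user = {"mfa_verified": True, "entitled_resources": {"payroll-app"}, "risk_flags": []}
device = {"compliant": True}

print(allow_request(user, device, "payroll-app"))    # True: every signal checks out
print(allow_request(user, device, "finance-admin"))  # False: not entitled, so re-prove or deny
```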
It’s also why authentication can feel more intrusive. Re-auth prompts, MFA challenges, and device trust checks aren’t “IT being difficult.” They’re protections designed for an environment where attackers often look like legitimate users.

“Zero Trust isn’t a tool you buy. It’s a mindset you operate in,” Daniel says. And in 2026, that mindset becomes the baseline expectation rather than the forward-leaning option.

8. Leadership, burnout, and the rise of councils over “lone heroes”.

One of the most human parts of the conversation is about what this is doing to people in charge of security and AI.

Even full-time security leaders can feel constantly behind. The volume of change isn’t just high — it’s relentless.

That’s showing up structurally:
  • The traditional CISO role is starting to fracture in large organisations.
  • Instead of one person carrying everything, responsibilities are being split into multiple chief roles (e.g. incident, encryption, AI security, physical security).
  • Governance is moving towards forums and councils (an information security forum, an AI council, and the like) that collectively own strategy and risk.

Raji is seeing the same in AI: “No single person can keep up. More customers are standing up AI councils, not appointing a lone ‘Head of AI’ and hoping they have all the answers.”

There’s also a mental health angle: CISO burnout rates are high, and tenure is short. It’s simply not feasible for one person to track every threat, every tool, every architectural change.

With Australia and New Zealand reporting acute shortages in cloud, cybersecurity, AI, and data talent, governance is shifting away from single-role accountability toward shared council-based leadership to support scalability of decision-making, not just technology.

For boards, the implication is clear: if your governance model still relies on a single “hero” CISO or AI lead, it’s time to rethink.

9. The real cost of AI: POC graveyards and TCO shocks.

Everyone loves a good AI demo. But moving from demo to production is where the bodies are buried, in what Raji calls the “POC graveyard.”

Common pitfalls:

  • Underestimating total cost of ownership. Licensing + compute + storage + observability + human review adds up. Raji has seen cases where a new AI solution would technically solve the problem, but the full TCO was 5–10x higher than the existing system.
  • Token-based pricing confusion. It’s difficult to predict costs when usage is tied to tokens, document sizes, and interaction patterns. One team’s “quick tests” can become another team’s budget blowout (see the sketch after this list).
  • Forgetting the human-in-the-loop cost. If you need people validating AI outputs for accuracy, compliance, or ethics (and you almost always do), that’s a recurring operational cost, not a free safety net.
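
To see how quickly token pricing compounds, here is a back-of-envelope sketch. The per-token rates and usage profiles are invented for illustration (they are not any provider’s actual prices), but swapping in your own contract numbers makes the gap between “quick tests” and “full contracts pasted as context” very visible.

```python
# Back-of-envelope sketch of token-based pricing, the line item that most often
# surprises teams. The rates and usage figures are placeholders, not any
# provider's actual price list: substitute your own contract numbers.

PRICE_PER_1K_INPUT_TOKENS = 0.005    # hypothetical
PRICE_PER_1K_OUTPUT_TOKENS = 0.015   # hypothetical

def monthly_cost(users, queries_per_user_per_day, input_tokens, output_tokens, working_days=22):
    queries = users * queries_per_user_per_day * working_days
    cost = queries * (
        input_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS
        + output_tokens / 1000 * PRICE_PER_1K_OUTPUT_TOKENS
    )
    return queries, cost

# A "quick test" usage profile versus long documents pasted in as context.
for label, in_tok, out_tok in [("short prompts", 500, 300), ("full contracts as context", 12000, 1500)]:
    queries, cost = monthly_cost(users=200, queries_per_user_per_day=10,
                                 input_tokens=in_tok, output_tokens=out_tok)
    print(f"{label}: {queries:,} queries/month, roughly ${cost:,.2f}")
```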

Add leadership uncertainty on top (“Where do we even invest first?”), and it’s no wonder many organisations are cautious.


So, what should technology leaders actually do in 2026?

If there’s a meta-theme for 2026, it’s this: cautious, governed adoption beats either analysis paralysis or reckless experimentation.


Clarify your AI strategy and guardrails.

  • Define what staff can and can’t do with public AI tools.
  • Align policies with standards like ISO 42001 and OAIC guidance.
  • Stand up an AI or information security council rather than relying on a single executive, and connect that council’s remit to enterprise and sustainability goals.

Prioritise one or two high-value AI use cases.

  • Look for problems where AI can clearly improve productivity or insight.
  • Avoid “AI for AI’s sake” and insist on TCO modelling before committing, including how the AI application will scale and how it will be governed over time.

Invest in leadership and user literacy.

  • Run executive sessions on AI risk, Zero Trust, and modern threat models.
  • Provide clear, practical training that demystifies tools like Copilot and AI Agents.

Modernise security in step with AI.

  • Evolve towards Zero Trust and identity-first security.
  • Explore AI-assisted SOC capabilities and brand risk monitoring.
  • Start the conversation about post-quantum cryptography, even if you’re not implementing it yet.

Accept that no one can “know it all.”

  • Optimise for good decision-making and strong governance, not encyclopaedic knowledge.
  • Partner where it makes sense, especially in security operations and AI enablement, to build internal expertise while still moving fast.

How Canon Business Services ANZ can help.

For most organisations, the challenge in 2026 isn’t just what to adopt; it’s how to adopt it safely, pragmatically, and without overwhelming internal teams.

With the managed services market continuing to grow, albeit steadily at around 2.1% year-on-year, organisations are becoming more selective. They’re seeking partners who can demonstrate capability, provide proof points, and deliver measurable ROI.

CBS works with enterprises across Australia and New Zealand to:
  • Design and implement secure enterprise AI, from private copilots to data-aware assistants
  • Build AI and security governance frameworks aligned to ISO and local regulatory guidance
  • Modernise security operations, including AI-assisted SOC, threat intelligence, and brand risk monitoring
  • Evolve architectures towards Zero Trust and identity-first models
  • Support leadership and teams with education and advisory services, so they can make confident decisions

If you’re planning your 2026 roadmap and want to move beyond hype to governed, defensible outcomes, talk to Canon Business Services ANZ about where to start — and how to turn these trends into real, measurable value.

Frequently asked questions

What are the key technology trends driving change in Australia in 2026?

The major technology trends include enterprise AI adoption with secure private AI, AI at the edge with small language models, advanced automation through computer use models, immersive world models for virtual environments, and heightened AI-driven cybersecurity measures. Quantum computing and Zero Trust security are also gaining strategic importance.

How is AI transforming businesses in Australia?

AI is moving beyond pilot projects to enterprise-grade solutions, enabling organisations to integrate AI securely with their data, automate legacy systems, and improve decision-making with AI-powered analytics. AI agents and digital transformation are enhancing operational efficiency, risk management, and customer engagement across multiple industries.

What challenges do Australian companies face in adopting new technologies?

Common challenges include managing AI governance and security risks, dealing with the total cost of ownership for AI solutions, ensuring compliance with evolving regulations, addressing talent shortages, and modernising legacy infrastructure without disrupting business operations.

How important is cybersecurity in emerging technology trends?

Cybersecurity remains a top priority, with AI-enhanced threat detection, AI-assisted Security Operations Centres (SOCs), and brand risk monitoring becoming essential. The rise of sophisticated AI-powered social engineering attacks and the impending impact of quantum computing demand proactive security strategies like Zero Trust and post-quantum cryptography.

What should Australian organisations focus on to succeed with technology adoption in 2026?

Leaders should clarify their AI and technology strategies, establish governance frameworks, prioritise high-value AI use cases, invest in leadership and user education, adopt modern security practices, and partner with experienced technology service providers to ensure safe, sustainable, and effective technology integration.
