Agentic AI is coming - it's time to choose wisely

Just because you’ve got an AI hammer doesn’t mean every problem is a nail.

This article was written by guest contributor Nick Ferguson, Principal Consultant at NFAIQ (https://nfaiq.com/). Nick specialises in Change Management, Organisational Design, and the practical application of AI across HR and enterprise functions. With an MBA in Human Resources Management and a background leading large-scale transformation programs, he focuses on designing people-centred, AI-ready operating models. His current work explores how agentic AI can reshape roles, processes, and performance – without losing sight of human judgement. Connect with Nick on LinkedIn (https://www.linkedin.com/in/nickfergusonaus/).

The AI conversation in 2025 has moved beyond large language models. The frontier is agentic AI - systems of intelligent agents that plan, act, and learn together to achieve goals. These agents can browse the web, run code, query databases, or coordinate with other agents, often with minimal human direction. They might even be your next team-mate at work.
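For readers who haven't seen one up close, the pattern underneath most of these systems is a simple loop: plan, act, observe, repeat. The sketch below is illustrative only – the `llm` callable and the decision format are my own assumptions, not any vendor's API.

```python
# A minimal agent loop: plan -> act -> observe -> repeat.
# Illustrative only: real frameworks add memory, guardrails, and
# multi-agent orchestration around this core.

def run_agent(goal: str, tools: dict, llm, max_steps: int = 5) -> str:
    """Drive a model towards a goal by letting it choose and use tools."""
    history = []
    for _ in range(max_steps):
        # Plan: ask the model for its next action, given goal and history.
        decision = llm(goal=goal, history=history, tools=list(tools))
        if decision["action"] == "finish":
            return decision["answer"]
        # Act: run the chosen tool (web search, code, database query...).
        observation = tools[decision["action"]](decision["input"])
        # Learn: feed the result back so the next plan improves.
        history.append((decision["action"], observation))
    return "Step budget exhausted - escalate to a human."
```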

As statistician George Box said, “All models are wrong, but some are useful.” The challenge now is knowing which AI models are wrong, which are useful - and when to use them.

Traction vs truth: When “good enough” isn’t

Generative AI doesn’t always get the facts right - but it often moves faster than any team. It surfaces patterns, unlocks drafts, and accelerates action. For some business cases, that’s good enough. In 2025, organisations aren’t always chasing precision; they’re chasing progress, with people in the loop who know when to trust the glide and when to grab the controls.

For these organisations, the value of a model isn’t in its truth - it’s in its traction.

But not all problems can afford approximation. Sometimes, the value of a model is in its truth. A law firm needs the exact case reference, a regulator needs full auditability, and your CFO won’t sign off on "about right." In these cases, you don’t want GenAI. You want structure: a well-governed dataset, clear logic, maybe some clever machine learning. No hallucinations, no guesswork, just structured, reliable truth.
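To make that concrete, here is a minimal sketch of what “structured, reliable truth” looks like in code: a governed lookup that either returns an exact, auditable record or refuses to answer. The dataset, case reference, and field names are hypothetical, invented for illustration.

```python
# Deterministic retrieval: the answer is either in the governed dataset
# or the request fails loudly. No generation, no guessing, full audit trail.
# The dataset, reference, and fields below are hypothetical.

GOVERNED_CASES = {
    "2023-FCA-114": {"court": "Federal Court of Australia", "status": "decided"},
}

def cite_case(case_ref: str) -> dict:
    """Return an exact, auditable case record - or refuse."""
    record = GOVERNED_CASES.get(case_ref)
    if record is None:
        # Never improvise a citation; surface the gap to a human instead.
        raise LookupError(f"Unknown case reference: {case_ref}")
    return record
```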

And this isn't a hypothetical choice - it's already on the table. Agentic AI isn't science fiction - it’s straight up happening.

Key vendors have already launched agentic capabilities:

[Table: recent examples of enterprise platforms introducing agentic AI capabilities.]

These aren’t experiments either - these capabilities are embedded in products your teams may already be using (Workday, Salesforce, Copilot, ServiceNow, SAP, Oracle, CrewAI). And that brings us back to a familiar temptation.

Altman’s Hammer still looms large. In a previous post, I wrote about the pull to apply AI to every problem simply because it’s there - powerful, popular, and pushed by every platform. That same hammer is now repackaged as agentic AI and wired directly into your tech stack. Which makes it even more critical to ask:

Are we applying agentic AI to the right problems?

Just because a system can autonomously research, reason, and generate output doesn’t mean it should - especially when precision, traceability, or regulatory compliance is required. In high-stakes domains like legal advice, audit outputs, and compliance triggers, the cost of being “mostly right” is often too high. These aren’t use cases for creativity or speed. They demand structured truth.

When, if, and how to use AI depends on your use case.

Creativity vs Control: A Strategic AI Heuristic

To guide AI use-case selection, try this simple lens. Ask yourself what outcome you need your process to deliver:

  • What is your risk tolerance if the output is wrong?

  • How much control vs creativity do you need?

This distinction helps leaders decide where GenAI or multi-agent systems can accelerate work, and where the organisation must protect against hallucination or overreach.
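If it helps to see the lens as logic, here is a deliberately crude routing function. The inputs and categories are illustrative assumptions, not a formal framework.

```python
# A toy routing function for the creativity-vs-control lens.
# The inputs and categories are illustrative, not a formal framework.

def choose_approach(risk_tolerance: str, needs_creativity: bool) -> str:
    """Map a use case's risk profile to a class of tooling."""
    if risk_tolerance == "low":
        # Legal references, audits, compliance: structured truth only.
        return "deterministic system (rules, validations, structured ML)"
    if needs_creativity:
        # Drafting, ideation, research: traction beats precision.
        return "GenAI / agentic AI, with humans in the loop"
    return "conventional automation - AI may not be needed at all"

print(choose_approach(risk_tolerance="low", needs_creativity=True))
# -> deterministic system (rules, validations, structured ML)
```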

Choose with context: Systems thinking for agentic AI

For business leaders facing this challenge, apply systems thinking when considering AI. Rather than layering agents over fragile processes or adding generative tools where structured logic is needed, take a step back and align on the problem first.

✅ 1) Confirm the opportunity

  • Look for transactional bottlenecks, repetitive workflows, or buried insights.

  • Prioritise augmentation - identify where a process can be improved first, then work out the best way to improve it (don’t jump straight to AI when bureaucracy or bad design is the real issue).

✅ 2) Protect the precise

  • Identify workflows that rely on legal defensibility, public trust, or audit trails.

  • Use deterministic systems: rules, validations, structured ML.

✅ 3) Close the circuit

  • Build clear interfaces, handoffs, and human fallback plans - document the end-to-end process (see the fallback sketch after this list).

  • Align people, agents, and systems with strong governance.
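Here is one way that fallback might look in code, assuming the agent exposes a simple confidence score. The 0.9 threshold and the function names are mine, not from any product.

```python
# Human fallback: agent output flows through automatically only when
# confidence clears a threshold; otherwise a person makes the call.
# The 0.9 cut-off and the notify_reviewer hook are illustrative.

def handle_output(output: str, confidence: float, notify_reviewer) -> str:
    """Route an agent's output: auto-approve or escalate to a human."""
    if confidence >= 0.9:
        return output                 # clean handoff: agent result ships
    notify_reviewer(output)           # documented escalation path
    return "pending human review"     # nothing ships without sign-off

# Example: low confidence triggers the human fallback.
print(handle_output("Draft policy update...", 0.62, print))
```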

✅ 4) Draw the line

  • Flag decisions too sensitive to automate - ethics, risk, legal or human impact.

  • Set clear red lines. Choose which things should remain human by design (one way to make this explicit is sketched below).
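Red lines stick better as explicit artefacts than as tribal knowledge. A hypothetical sketch - the category names are invented - of a deny-list that gates agent autonomy:

```python
# Decisions that remain human by design, checked before any agent acts.
# The category names are invented for illustration.

RED_LINES = {
    "termination_decisions",
    "legal_advice",
    "regulatory_filings",
}

def agent_may_handle(task_category: str) -> bool:
    """Gate agent autonomy: red-lined work always routes to a human."""
    return task_category not in RED_LINES

assert not agent_may_handle("legal_advice")   # humans only, by design
assert agent_may_handle("meeting_summaries")  # safe to automate
```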

Confirm use case > Clarify outcome > Choose tech.

In the rush to adopt agentic AI, the real risk isn’t just in the tech - it’s in how and where we apply it. Not every process needs speed. Not every outcome needs AI. But when you bring together the right expertise and a system design that balances creativity, control, and context, agentic AI can move from hype to high-value impact.

What have I missed? Do you have a different perspective?
