Most organizations racing to deploy AI agents have something critical missing from their project teams. It is not technical talent. It is not budget. It is a Business Analyst who actually knows how to do analysis for an agentic system.
That gap is going to cost organizations more than they realize.
I have spent a lot of time over the past few years working with BAs on AI projects, and the pattern I keep seeing is the same: a team brings a BA in to do what BAs have always done, and they run into a wall. The techniques that work beautifully for defining traditional software requirements, even for simpler AI automation, simply do not hold up when the system you are analyzing can reason, adapt, and act on its own toward a goal.
This is not a criticism of those BAs. Nobody handed them a playbook for this. The discipline of business analysis has not yet fully caught up to what agentic AI actually demands. That is exactly why this conversation matters.
What We Actually Mean by Agentic AI
Before we can talk about what BAs need to do differently, we need to be precise about what agentic AI is, because it gets conflated with AI in general all the time.
AI automation, the kind most organizations have been working with for several years now, performs defined tasks efficiently. It follows rules. It operates within a workflow you design. A document classification system, a chatbot that routes customer service requests, an AI model that flags anomalies in financial data: these are all examples of AI that does what you tell it to do, within the parameters you set. A skilled BA can analyze these systems using a modified version of familiar techniques.
Agentic AI is different in kind. An AI agent reasons toward goals. It decides what steps to take. It uses tools, accesses data, calls other systems, and adapts its approach based on what it encounters along the way. You give it an objective, and it figures out how to pursue that objective. An agentic system might browse the web, write and execute code, send emails, query databases, and coordinate with other agents, all without a human directing each individual action.
That shift from “AI that executes instructions” to “AI that pursues goals” changes everything about how a BA needs to approach the analysis work.
Agentic systems still demand substantial analysis: what the agent's objectives, boundaries, and tools are, and when agents should communicate with one another.
Why Your Current BA Techniques Fall Short
Here is the core tension. Most BA techniques were built for deterministic systems. Deterministic means the system behaves predictably: given input A, the system produces output B. Every time. That predictability is what makes it possible to write requirements, define acceptance criteria, and test for compliance.
Agentic AI operates probabilistically. The same input can produce different outputs depending on context, the state of connected systems, what the agent has “learned” during the current session, and countless other variables. The agent is not following a script. It is reasoning in real time.
This does not mean agentic systems are unanalyzable. It means you have to analyze them differently. You cannot write a traditional functional specification for an agent’s behavior. You can, however, define the boundaries of what the agent is allowed to do, the goals it is meant to pursue, the guardrails that constrain it, the conditions under which a human must intervene, and the criteria by which success is measured. That is the BA’s job in an agentic context. It requires a different frame entirely.
This analysis also entails defining where in a process agents make sense, defining the agent's task along with its goals, metrics, and task attributes, analyzing the governance risks, and designing the human-in-the-loop processes.
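One way to make this analysis concrete is to capture it as a structured artifact. Here is a minimal, illustrative sketch of what a BA's "agent charter" might look like if expressed in code; the field names, tool names, and thresholds are assumptions for illustration, not a standard schema.

```python
# Illustrative "agent charter": the analytical artifact a BA might produce
# before any agent is built. All names and values below are hypothetical.

from dataclasses import dataclass

@dataclass
class AgentCharter:
    objective: str              # the goal the agent pursues
    allowed_tools: list         # boundaries: what it may touch
    escalation_triggers: list   # conditions under which a human must intervene
    success_metrics: dict       # how the outcome is measured

support_agent = AgentCharter(
    objective="Resolve tier-1 billing inquiries end to end",
    allowed_tools=["billing_db_read", "email_send", "kb_search"],
    escalation_triggers=["refund over $100", "legal language detected"],
    success_metrics={"resolution_rate": 0.85, "escalation_rate_max": 0.15},
)

print(support_agent.objective)
```

The value is not the code itself; it is that every field forces a business conversation the team would otherwise skip.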
The BAs who recognize this shift early are the ones who will become genuinely irreplaceable on AI project teams. The ones who try to apply the old techniques without adaptation will find themselves increasingly sidelined.
Tip 1: Master the Deterministic vs. Probabilistic Design Decision
The single most important analytical judgment a BA makes on an AI project is not what the system should do. It is deciding, step by step, whether each part of the system should operate deterministically or probabilistically.
The consequences of getting this wrong are real. If you apply probabilistic AI to a step that requires deterministic behavior, you introduce legal risk, compliance failures, or outputs that cannot be audited. If you force deterministic logic onto a step that would benefit from AI reasoning, you undercut the value the technology could deliver.
A BA who can walk into a process design conversation and confidently lead this analysis is providing strategic value that cannot be replaced by a developer or a data scientist. It is a distinctly analytical, business-facing judgment call.
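To show what that step-by-step judgment looks like in practice, here is a hedged sketch of the decision logic a BA might apply when walking a process. The step attributes and the default rule are assumptions for illustration; the actual criteria belong to your organization's risk posture.

```python
# Hypothetical sketch: classifying each process step as deterministic or
# probabilistic. Attribute names and the examples are invented for illustration.

def classify_step(step: dict) -> str:
    """Return 'deterministic' when a step must be repeatable and auditable,
    'probabilistic' when AI reasoning can add value."""
    # Anything with legal, compliance, or audit exposure stays deterministic.
    if step.get("requires_audit_trail") or step.get("regulated"):
        return "deterministic"
    # Judgment-heavy work with variable inputs is a candidate for agent reasoning.
    if step.get("judgment_required"):
        return "probabilistic"
    # Default to deterministic: the safer assumption until analysis says otherwise.
    return "deterministic"

process = [
    {"name": "calculate refund amount", "requires_audit_trail": True},
    {"name": "draft customer apology email", "judgment_required": True},
    {"name": "log transaction", "regulated": True},
]

for step in process:
    print(step["name"], "->", classify_step(step))
```

Notice the asymmetry in the defaults: the cost of wrongly making a step probabilistic (unauditable outputs) is usually higher than the cost of wrongly keeping it deterministic (foregone value), which is why the sketch falls back to deterministic.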
Tip 2: Define the Agent’s Action Space Before Anyone Writes a Line of Code
Often, teams jump to building before anyone has clearly defined what the agent is actually allowed to do.
An agent’s action space is the full set of actions it can take, the tools it can access, the systems it can interact with, the data it can read or write, and the decisions it can make without human involvement. Defining this is foundational BA work, and it needs to happen early.
When you map out the action space thoroughly before development begins, you also surface misaligned assumptions early. Business stakeholders often have a very different mental model of what the agent will do than the technical team building it. The BA’s job is to close that gap before it becomes an expensive rebuild.
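An action-space definition can be as simple as an explicit allowlist with autonomy levels. The sketch below is one hypothetical shape for that artifact; the action names, autonomy levels, and dollar limit are invented for illustration.

```python
# Hypothetical action-space allowlist for an agent. Anything not listed,
# or listed as forbidden, is denied; some actions require human sign-off.

ACTION_SPACE = {
    "read_customer_record": {"autonomy": "autonomous"},
    "issue_refund":         {"autonomy": "autonomous", "max_amount": 100},
    "close_account":        {"autonomy": "human_approval"},
}

def authorize(action: str, **params) -> str:
    """Return 'allow', 'escalate', or 'deny' for a requested agent action."""
    spec = ACTION_SPACE.get(action)
    if spec is None or spec.get("autonomy") == "forbidden":
        return "deny"          # outside the defined action space entirely
    if spec["autonomy"] == "human_approval":
        return "escalate"      # defined, but a human must sign off
    limit = spec.get("max_amount")
    if limit is not None and params.get("amount", 0) > limit:
        return "escalate"      # within scope, but over the autonomous threshold
    return "allow"

print(authorize("issue_refund", amount=250))
```

Every entry in a table like this is a business decision, not a technical one, which is exactly why it is BA work and why it needs to exist before development starts.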
Tip 3: Build Outcomes and Guardrails Into the Definition of Done
Agentic AI systems need a different definition of done.
For traditional software, done usually means the system does what the requirements said it should do. For agentic systems, that bar is not sufficient. An agent can technically do exactly what it was designed to do and still produce outcomes that are wrong, harmful, or misaligned with business intent. The gap between “the agent followed its instructions” and “the agent produced the right outcome” can be significant.
The BA’s role is to ensure that measurement and monitoring systems are in place from day one. Governance does not happen after delivery. It is built in.
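What "built in" can mean in practice is an outcome check that runs after every agent action, separate from any check that the agent followed its instructions. The sketch below is illustrative; the specific guardrails (refund entitlement, sentiment, audit trail) are assumed examples, not a complete governance design.

```python
# Hypothetical outcome check, run after the agent acts. It measures the
# result against business intent, not against the agent's instructions.

def check_outcome(result: dict) -> list:
    """Return a list of guardrail violations; an empty list means the
    outcome passes and no human review is triggered."""
    violations = []
    if result.get("refund_issued", 0) > result.get("refund_entitled", 0):
        violations.append("over-refund: exceeds customer entitlement")
    if result.get("customer_sentiment") == "worse":
        violations.append("interaction degraded customer sentiment")
    if not result.get("audit_log_written"):
        violations.append("no audit trail for agent action")
    return violations

outcome = {"refund_issued": 50, "refund_entitled": 100, "audit_log_written": True}
print(check_outcome(outcome))  # empty list: outcome passes
```

A non-empty result routes the case to human review and feeds the monitoring dashboard, which is the day-one measurement the section above calls for.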
The BA Who Leads Here Will Not Be Replaced
There is a lot of anxiety in the BA community about AI replacing the role. I understand where that anxiety comes from. AI tools can already generate user stories, write requirements, and produce process models. If that is how you have been defining your value, the anxiety is warranted.
But agentic AI does not make the BA role obsolete. It makes a certain kind of BA obsolete, and creates enormous demand for a different kind: a BA who is focused on outcomes and the process change needed to achieve them, who can navigate probabilistic system design, define the boundaries of autonomous action, and build governance into the architecture of an AI system before it ships. That person is not threatened by agentic AI. That person is essential to it.
The question is whether you are building those capabilities now, while there is still time to get ahead of the curve.
Organizations are going to deploy agentic systems with or without skilled BA involvement. The ones that do it without skilled BA involvement are going to learn some expensive lessons. The ones that figure out how to bring experienced, AI-fluent BAs into the process early are going to move faster and with less risk.
You can be the person who makes that case inside your organization. You can be the BA who walks into an agentic AI project and knows exactly what questions to ask, what design decisions need to be made before development starts, and what governance structures need to be in place before the system goes live.
That starts with deciding to close the knowledge gap. It starts with treating agentic AI not as something that is happening to the BA role, but as the most significant strategic opportunity the profession has seen in years.
Ready to Build Your AI-Fluent BA Practice?
If you want to go deeper on business analysis for AI, including practical frameworks for analyzing agentic systems, designing human oversight, and leading AI projects with confidence, I teach this in my Maven course series.
You will learn from real-world scenarios, work through frameworks you can apply immediately, and connect with a community of BAs who are navigating the same challenges.
Visit www.maven.com/angela-wick to see current courses and upcoming cohorts.
The BA practice that serves you well on agentic AI projects is not built overnight. But every expert started by deciding to take it seriously. That decision is available to you right now.
