AI agents are so hot right now.
More and more articles are coming out on why autonomous AI agents are relevant for businesses, what they are (which I covered here), and how things are going to play out.
I mean this in a very demure, very mindful way: most of them are wrong.
I’ll explain.
Automation is like a Rube Goldberg machine
A Rube Goldberg machine is built on a chain of reactions.
In an organization, a chain of reactions might look like this:
Someone submits a purchase order
The purchase order is uploaded to a shared drive
The purchase order is also scanned with OCR to extract data
The data is then used to update various spreadsheets
BI dashboards are triggered to update snapshots of the new total spend
The data is also fed into an LLM to generate a summary
The BI snapshots + the LLM summary are sent to an executive
Useful? Yes.
An agent? No.
Being positioned as an agent in marketing and sales? Believe it or not, yes.
Whether it’s an agent or not doesn’t matter if it’s useful for your organization. If the process doesn’t deviate, it would be far less expensive to invest in automation than autonomy.
But for leaders to harness the potential of these technologies, the distinction is important to understand.
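If you like to see things in code, here is a minimal sketch of the purchase-order chain above as plain automation. Every function name is a hypothetical stand-in for a real system; the point is simply that the chain always runs top to bottom, with no judgment anywhere:

```python
# A Rube Goldberg-style automation: a fixed chain of steps, no decisions.
# Each stub is a hypothetical stand-in for a real system call.

def upload_to_shared_drive(pdf):     print(f"archived {pdf}")
def extract_fields_with_ocr(pdf):    return {"vendor": "Acme", "amount": 12_500}
def update_spend_spreadsheets(d):    print(f"spreadsheets updated: {d}")
def refresh_bi_snapshots(d):         print("BI snapshots refreshed")
def summarize_with_llm(d):           return f"{d['vendor']} PO for ${d['amount']:,}"
def email_executive(summary):        print(f"sent to exec: {summary}")

def process_purchase_order(pdf):
    """The whole 'machine': first this, then this, then that."""
    upload_to_shared_drive(pdf)
    data = extract_fields_with_ocr(pdf)
    update_spend_spreadsheets(data)
    refresh_bi_snapshots(data)
    email_executive(summarize_with_llm(data))

process_purchase_order("PO-1042.pdf")
```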
An agent is like autopilot
A flight can’t be reduced to a simple automation because it’s too complex.
There are too many variables: weather, bird strikes, other air traffic, system failures, and so on.
So, there are three key differences between automation and agents:
How it’s trained
Agents and autopilots are not trained like Rube Goldberg machines, with simple “first this, then this” and “if this, then that” statements. Some of the sub-processes undoubtedly follow that pattern, but the key distinction is that the training is instead based on principles.
Examples:
“The plane needs to be stable—run x simulations of different conditions and practice maintaining stability regardless of the wind, rain, altitude, etc.”
In a business context: “We need to notify the development team and reprioritize their workload when a customer has reported an issue that has financial implications for the customer—here’s a large sample of all customer reports and whether they ended up having financial implications. Analyze and find the key patterns between them and use those patterns to predict how likely each new customer issue is to have financial implications.”
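A rough sketch of what that business example could look like in code, assuming you have a labeled history of customer reports (the data, field names, and use of scikit-learn here are purely illustrative):

```python
# Sketch: learn the patterns between past customer reports and whether they
# ended up having financial implications, instead of hand-writing rules.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical historical reports and their known outcomes (1 = financial impact).
past_reports = [
    "Checkout fails for enterprise customers on the invoice step",
    "Typo on the marketing landing page",
    "Billing totals off by 3% after currency conversion",
    "Dark mode toggle resets on page refresh",
]
had_financial_impact = [1, 0, 1, 0]

# The "training": find the patterns, rather than memorizing if-then rules.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(past_reports, had_financial_impact)
```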
How it makes decisions
Instead of basing decisions on binary inputs (if this, then that), agents make decisions autonomously based on confidence levels.
Examples:
“Based on the hundreds of systems and conditions the autopilot is monitoring, there’s an 80% chance we will face adverse conditions if we don’t change course by x degrees.”
In a business context: “Based on hundreds of variables and patterns (sometimes imperceptible to humans), the agent is 92% confident this new customer-reported issue will have significant financial implications and is therefore reprioritizing all of dev team x’s work and setting a hotfix deadline of EOD.”
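Continuing the sketch above, the decision step isn’t an “if this, then that” rule; it’s a predicted probability compared against a confidence threshold set by the business (the 92% in the example). The threshold and the action here are illustrative assumptions:

```python
# Sketch: act autonomously when confidence clears a threshold.
new_issue = "Invoices are double-charging customers in the EU region"

confidence = model.predict_proba([new_issue])[0][1]   # P(financial implications)
print(f"Confidence of financial implications: {confidence:.0%}")

REPRIORITIZE_THRESHOLD = 0.90   # set by the business, not hard-coded logic
if confidence >= REPRIORITIZE_THRESHOLD:
    # e.g. reprioritize dev team x's backlog and set a hotfix deadline of EOD
    print("Reprioritizing the backlog and scheduling a hotfix by EOD")
```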
How it interacts with humans
Automation workflows often have human-in-the-loop steps depending on different conditions. Agents, however, have an overarching threshold across their entire flow of work and monitoring systems, at which point they notify the human that it’s time to intervene or make a decision. This means a human can be pulled in whenever needed, whereas in automation, once the “check-if-we-need-a-human” step has cleared, a human may not be notified at the next step even if they should be. Importantly, agents are also often set up to learn from the human’s decision.
Examples:
“The weather conditions are too unpredictable for our systems to maintain visibility and stability, so the pilot needs to take over.”
In a business context: “This customer-reported issue does not fit into any category or pattern of previously reported issues, so a Customer Support team member needs to take over and categorize this issue or create a new category.”
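And a final piece of the same sketch: the overarching escalation threshold, including learning from the human’s decision. The band, the helper function, and the workflow are assumptions for illustration, not a prescribed design:

```python
# Sketch: escalate to a human when the agent is too uncertain to act on its own,
# then learn from the human's decision. Helper and numbers are hypothetical.
ESCALATION_BAND = (0.30, 0.70)   # too uncertain either way: pull in a human

def ask_support_team_to_categorize(issue_text):
    # Placeholder for a real escalation (ticket, Slack ping, review queue, ...)
    return 1   # the human's label: 1 = financial implications

def handle_issue(issue_text):
    p = model.predict_proba([issue_text])[0][1]
    if ESCALATION_BAND[0] < p < ESCALATION_BAND[1]:
        label = ask_support_team_to_categorize(issue_text)   # human takes over
        past_reports.append(issue_text)                       # remember the case
        had_financial_impact.append(label)
        model.fit(past_reports, had_financial_impact)         # learn from the human
        return label
    return int(p >= 0.5)                                      # confident: act alone

handle_issue("Customer reports a strange error code with no other details")
```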
An example from teaching chess to children
I used to train national chess champions, and I had some serious beef with how many “teachers” taught children to play chess.
They were teaching them like this: Move this pawn here on the first move. Then if they move this pawn here, bring the knight out. If they move that, bring the bishop out, and so on.
They had children memorizing hundreds of “if this, then that” scenarios (let’s call it “chess automation”).
When I knew my students were playing against someone taught this way, I told them to play something surprising that would break away from the memorized pattern, at which point the other player wouldn’t know what to do and would often start making random moves.
What I taught my students was to play based on principles: control the center of the board, develop your pieces, avoid isolating your pawns, etc.
They analyzed every game they played and every grandmaster game they reviewed through the lens of those principles, experimenting with and learning different ways of controlling the center, developing their pieces, etc.
Chess engines were long built the first way (hand-coded rules plus brute-force search), and while that was eventually enough to beat a world champion, it was engines trained on principles learned through self-play that pushed far beyond both the best humans and the old rule-based programs.
So how can you tell the difference? The easiest way is to look for memorized steps versus principles.
If someone tells you they have AI agents that do xyz, ask them to walk you through how: how they’re trained, how they make decisions, and how they interact with humans.
Much, much more to come on what in the world AI agents are, how to think about and understand them, and how you and your organization can use them to create value.
Thanks for reading,
Brian
Whenever you're ready, here are 3 ways I can help you:
My LinkedIn Learning Course launched on October 3rd: Organizational Leadership in the Era of AI. This 48-minute course is packed with frameworks and insights to help you lead in the era of AI using the new system of leadership I introduced in Autonomous Transformation and is free to anyone with a LinkedIn Learning subscription.
If you want to go deeper as an individual, you can sign up for Future Solving Fundamentals, my flagship live course on how to position yourself for the future in the era of AI. I share over a decade of AI strategy expertise, proven methods, and actionable strategies. This course sets the stage for a new era of value creation with artificial intelligence. Join leaders from Microsoft, Accenture, Amazon, Disney, Mastercard, IKEA, Oracle, Intel, and more.
If you want to go deeper as a team or organization, I’ve partnered with LinkedIn Learning, Emeritus, and Duarte to create tailored Future Solving™️ programs that leaders at Fortune 500 companies are implementing to help re-embrace science in their decision-making and meet the complexities of the 21st century with a system of leadership, strategy, and decision-making that is built for purpose for the era of artificial intelligence. You can reach out to connect and learn more here.