Let’s talk about reasoning
On December 29th, 2023, I published “5 Predictions for 2024.”
One of them was:
3. Sparks of Autonomous Transformation
There have been rumors that the next big release from OpenAI will be "agents." Having agents that can autonomously handle tasks is a critical step in the direction of Autonomous Transformation.
The first wave will likely be low-complexity, low-value tasks, but worthy of investment, as the cumulative value empowers people to focus on higher-order, top-line revenue creation.
As organizations become accustomed to agents as a toolset, and as leaders reestablish trust with their people by showing that they value them more than they value tools, the groundwork will be laid for an "iPhone launch-level" introduction of a product or service that could truly be called the first major lighthouse example and enabler of "Autonomous Transformation."
Yesterday, OpenAI announced a new model, o1 (codename: Strawberry), that has reasoning capabilities.
This is important because reasoning is a critical component of AI agents, which I explained here.
The good news about OpenAI’s o1
OpenAI’s o1 demonstrates a step forward in reasoning capability. While hallucinations remain a problem (and will remain one without an architectural overhaul), the model is better able to work through information and show the steps it takes along the way.
Example (screenshot omitted): the model walks through its steps and arrives at the correct answer.
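If you want to try this kind of exchange yourself, here is a minimal sketch of calling o1 through OpenAI’s Python SDK. It assumes the `openai` package is installed, an OPENAI_API_KEY in your environment, and the "o1-preview" model name from the launch; the prompt is purely illustrative, not the exchange from the screenshot above.

```python
# A minimal sketch of querying o1 through OpenAI's Python SDK.
# Assumptions: the `openai` package is installed, OPENAI_API_KEY is set,
# and "o1-preview" is the model name (as at launch). The prompt below is
# an illustration, not the example from this article.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-preview",
    messages=[
        {
            "role": "user",
            "content": "How many times does the letter 'r' appear in 'strawberry'?",
        }
    ],
)

# The visible answer is returned; the model's internal reasoning tokens are not.
print(response.choices[0].message.content)
```

Note that what comes back is the final answer plus a summary of steps; the underlying reasoning tokens stay hidden, which is the heart of the problem below.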
So what’s the problem?
As of this writing, there is no way to edit the reasoning processes of o1.
This is a problem because of the characteristics of reasoning:
All reasoning has a purpose
All reasoning is an attempt to figure something out, to settle some question, to solve some problem
All reasoning is based on assumptions
All reasoning is done from some point of view
All reasoning is based on data, information, and evidence
All reasoning is expressed through, and shaped by, concepts and ideas
All reasoning contains inferences by which we draw conclusions and give meaning to data
All reasoning leads somewhere, has implications and consequences
This means that all reasoning done through OpenAI’s o1 will rest on the same assumptions, proceed from the same point of view, and contain the same inferences by which it draws conclusions.
This may already be on OpenAI’s roadmap, but for organizations to realize value from AI agents, they will need to customize how agents reason to fit their specific domain and expertise.
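Until the reasoning itself is editable, the closest available lever is framing: stating your domain’s assumptions, point of view, and evidence standards up front in the prompt. Here is a hedged sketch of what that might look like; the domain framing text is hypothetical, and since o1-preview did not accept system messages at launch, the context is prepended to the user message.

```python
# A sketch of injecting domain-specific assumptions into an o1 prompt.
# The framing below is hypothetical. o1-preview did not accept system
# messages at launch, so the domain context is prepended to the user turn.
from openai import OpenAI

client = OpenAI()

DOMAIN_FRAMING = (
    "You are reasoning on behalf of a hospital supply-chain team. "
    "Assume: regulatory compliance outweighs cost savings; "
    "evidence must cite internal inventory data, not general estimates; "
    "flag any inference that rests on an unstated assumption."
)

question = "Should we single-source surgical gloves to cut unit cost by 12%?"

response = client.chat.completions.create(
    model="o1-preview",
    messages=[{"role": "user", "content": f"{DOMAIN_FRAMING}\n\n{question}"}],
)

print(response.choices[0].message.content)
```

Note the limitation this illustrates: framing shapes the inputs to the model’s reasoning, but it does not change the reasoning process itself, which is exactly the gap described above.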
Oh and one more thing
Next Thursday, I’m speaking live with Bloomberg Technology columnist Parmy Olson about her new book Supremacy: AI, ChatGPT, and the Race That Will Change the World, which is already an Amazon bestseller and has been longlisted for the Financial Times Business Book of the Year 2024.
You can join us live on LinkedIn next Thursday for free. Sign up and mark it on your calendar here.
Thanks for reading,
Brian
Whenever you're ready, here are 3 ways I can help you:
My LinkedIn Learning course, Organizational Leadership in the Era of AI, launched on October 3rd. This 48-minute course is packed with frameworks and insights to help you lead in the era of AI using the new system of leadership I introduced in Autonomous Transformation, and it is free to anyone with a LinkedIn Learning subscription.
If you want to go deeper as an individual, you can sign up for Future Solving Fundamentals, my flagship live course on how to position yourself for the era of AI. I share over a decade of AI strategy expertise, proven methods, and actionable strategies. This course sets the stage for a new era of value creation with artificial intelligence. Join leaders from Microsoft, Accenture, Amazon, Disney, Mastercard, IKEA, Oracle, Intel, and more.
If you want to go deeper as a team or organization, I’ve partnered with LinkedIn Learning, Emeritus, and Duarte to create tailored Future Solving™️ programs that leaders at Fortune 500 companies are implementing to re-embrace science in their decision-making and meet the complexities of the 21st century with a system of leadership, strategy, and decision-making that is purpose-built for the era of artificial intelligence. You can reach out to connect and learn more here.
From another article that discusses the evolving science trying to understand and influence LLM-based chatbots: “Other researchers found that GPT3's responses to reasoning questions can be improved by adding certain seemingly magical incantations to the prompt, the most prominent of these being "Let's think step by step." It is almost as if the large learned models like GPT3 and DALL-E are alien organisms whose behavior we are trying to decipher.” https://cacm.acm.org/blogcacm/ai-as-an-ersatz-natural-science/
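For anyone curious how small that “incantation” really is, here is a minimal sketch of the zero-shot step-by-step trick the quote describes. The model name is an illustrative stand-in (the quoted article refers to GPT-3, which is no longer the current API generation), and the question is an arbitrary example.

```python
# A sketch of the zero-shot "Let's think step by step" trick described
# in the quoted article. Model name is an illustrative stand-in; the
# original research applied this to GPT-3.
from openai import OpenAI

client = OpenAI()

question = (
    "A bat and a ball cost $1.10 together. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat model stands in for GPT-3 here
    messages=[
        {"role": "user", "content": f"{question}\n\nLet's think step by step."}
    ],
)

print(response.choices[0].message.content)
```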
Sounds like the next step you anticipate should be called “Reason Engineering,” as an homage to the term already in use: “Prompt Engineering.”