“In the distance, Miles saw the end of man as the Apex, and the Rise of AI in its place, a bold new universe with boundless frightening and exciting possibilities.”
Our current sensibility for thinking about AI is shaped by the “generative” phase, where we tell the AI to create something and it does as we ask.
On the fly, the AI creates images, writes songs, generates code, crafts recipes, and edits text, and these are just some of the generative AI capabilities that are emerging.
But it’s the next phase, the Agentic Phase, when a new form of AI – autonomous agents – will change everything.
What makes an agent an “Agent” is that it can operate with both independent thought and independent action, without human direction.
When we begin to see such entities operating in the wild, then our society will begin to grasp, viscerally, the potential (and potential for peril) from AI’s Scale, Complexity and increasing Autonomy from Man.
Case in point: if you’ve experienced Waymo’s autonomous taxi service, you know firsthand this before/after “holy shit” dynamic. It doesn’t seem real, but not only is it real, it’s a really nice experience.
In the not-so-distant future, there will be 50 to 250 Waymo-style intelligent automation applications, and we’ll have sharpened our mental models for AI’s foundational use cases, many of which will be predicated on innovations that seemingly overnight went from science fiction to rocket launch and then, ubiquity.
In doing so, we’ll come to understand why this represents a hierarchical shift in the primacy of humankind as masters of the universe.
Consider two very different types of agents that will become so commonplace that within FIVE years we’ll take their existence for granted.
Think of a Marketing Assistant Bot, whose job is to understand your business, its operations, its economics and the industry it operates within, in terms of market dynamics, competition and customer needs.
As a bot that exists virtually in the cloud, it will tirelessly create your marketing collateral, manage your outbound communications, cultivate your web presence, and provide customer service and technical support.
Within five years, this entire set of jobs, outcomes and interfacing assumptions will be performable by a software bot operating as an autonomous agent.
Such bots will excel at design, orchestration and oversight. They will also be adaptive, readily augmenting their capabilities based on the needs of users and industry best practices, enabling agents to expand their coverage department by department, enterprise by enterprise, and industry by industry.
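To make that adaptive, capability-augmenting claim concrete, here is a minimal, purely hypothetical sketch in Python of how such a bot’s skill set could be modeled and extended over time. Every name below is invented for illustration; no real product or framework is implied.

```python
# Hypothetical capability manifest for the Marketing Assistant Bot described
# above. Purely illustrative; all names are invented.

from dataclasses import dataclass, field

@dataclass
class MarketingAssistantBot:
    capabilities: set[str] = field(default_factory=lambda: {
        "create_collateral", "manage_outbound", "cultivate_web_presence",
        "customer_service", "tech_support",
    })

    def augment(self, capability: str) -> None:
        # Adaptive expansion: new skills get added as user needs and
        # industry best practices dictate.
        self.capabilities.add(capability)

    def can_handle(self, task: str) -> bool:
        return task in self.capabilities

bot = MarketingAssistantBot()
bot.augment("competitive_analysis")            # coverage expands over time
print(bot.can_handle("competitive_analysis"))  # True
```

The design point is simply that the bot’s coverage is a growing registry rather than a fixed feature list, which is what lets it spread department by department.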
Now, consider something entirely different. There will exist a software-based Predator Bot that plays to human frailty by autonomously monitoring online services, chat and email; engaging strangers and profiling them; and pushing their buttons so that these strangers allow themselves to be befriended or romanced, revealing their secrets and sharing details of their personal wealth, compromising images, account numbers, passwords, and the like.
Such bots will excel at bankrupting, blackmailing and breaking hearts.
They’ll never get sick, never sleep, never feel guilty, can operate their cons over time, will keep getting better based on global scale learnings, and can scale their activities infinitely in terms of individual instigators and conspirator “cohorts.”
While such bots may operate on behalf of organized criminal networks, and **may** report back to a human or software master that governs them, there is no inherent reason that agents can’t operate as literal free agents, exercising independent thought and independent action.
This raises a question. When an AI-based network of Predator Bots decides to break off from its home criminal network, what becomes its compass, what does it optimize on, and what does it build over time?
Is there any reason such a bot would practice loyalty to its human boot master?
I have no idea, but it hearkens back to Marc Andreessen’s axiom that “software is eating the world,” though in the case of AI it’s more like enveloping the world.
Such is the promise and the peril of Agentic AI.
When Science Fiction Toggles from Impossible to Inevitable
It says here that by 2030 – if not sooner – our current AI model of generative intelligence via chatbot will give rise to master and sub-agent bots that can operate independently, cooperatively or in a federated fashion (sketched in code just after the list below) as:
- Intelligent Task Runners
- Generative Engines
- Managers of Stage, State and Resource Allocation
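To make those three roles concrete, here is a minimal, hypothetical skeleton of a master agent delegating to sub-agents. Every class and method name is invented for illustration and not drawn from any real agent framework.

```python
# Hypothetical master/sub-agent skeleton illustrating the three roles above.
# All names are invented; no real framework is implied.

from dataclasses import dataclass, field

@dataclass
class TaskRunner:                      # Intelligent Task Runner
    def run(self, task: str) -> str:
        return f"done: {task}"

@dataclass
class GenerativeEngine:                # Generative Engine
    def create(self, spec: str) -> str:
        return f"draft for {spec}"

@dataclass
class MasterAgent:                     # Manager of Stage, State and Resource Allocation
    runners: list[TaskRunner] = field(default_factory=lambda: [TaskRunner(), TaskRunner()])
    engine: GenerativeEngine = field(default_factory=GenerativeEngine)
    state: dict = field(default_factory=dict)

    def execute(self, goal: str) -> dict:
        self.state["stage"] = "planning"
        subtasks = [f"{goal} / step {i}" for i in (1, 2)]   # trivial stand-in planner
        self.state["stage"] = "executing"
        for i, sub in enumerate(subtasks):
            runner = self.runners[i % len(self.runners)]    # naive resource allocation
            self.state[sub] = runner.run(sub)
        self.state["summary"] = self.engine.create(goal)
        self.state["stage"] = "complete"
        return self.state

print(MasterAgent().execute("launch product page"))
```

The same skeleton reads as independent (one master, its own runners), cooperative (runners sharing state) or federated (many masters exchanging summaries), which is the point of the three-role decomposition.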
This emergence sets in motion the advent of AGI, or Artificial General Intelligence, which in turn achieves a state of Superintelligence that is all-aware, all-assimilating and all-capable, certainly beyond the realm of human understanding.
It is logical to ask how close this is to reality, and how likely the scenarios presented are to come about in the time frames suggested.
Let me first say that the data side of this argument is based on reading ‘Situational Awareness: The Decade Ahead’ by Leopold Aschenbrenner, one of the founding members of the Superalignment team at OpenAI.
(Note: Rather than taking my word for it, read Situational Awareness via the link above. As an aside, Superalignment is focused on navigating the unique technical and design challenges of reliably controlling AI systems that are much smarter than we are.)
The author makes the case for three vectors of exponential growth leading us to AGI.
The first and most basic is that we are using much bigger computers to train these models; the author argues there is a straight line between building ever-bigger compute clusters and dialing up the AI revolution.
In just a few years, we’ve gone from computers barely being able to distinguish chihuahua faces from blueberry muffins, to bots now being able to operate with the full library of knowledge and task execution skills of the most elite grad students.
The graphic below illustrates that, from a compute perspective, no lottery-ticket-level leap in technical know-how or manufacturing scale is required; the compute growth trend simply needs to continue on its current ramp.
(Note: AI ramp is arguably more gated on access to power, a topic worth discussion in its own right.)
Here, Aschenbrenner asserts that we can decompose the progress in the four years from GPT-2 to GPT-4 into three categories of scaleups:
- Compute: We’re using much bigger computers to train these models.
- Algorithmic Efficiencies: There’s a continuous trend of algorithmic progress. Many of these act as “compute multipliers,” and we can put them on a unified scale of growing effective compute.
- “Unhobbling” Gains: By default, models learn a lot of amazing raw capabilities, but they are hobbled in all sorts of dumb ways, limiting their practical value. You can think of unhobbling as “paradigm-expanding/application-expanding/re-factoring/right-sizing” algorithmic progress that unlocks capabilities of base models.
Needless to say, there is a natural synergy and feedback loop between growth in Algorithmic Efficiencies and Unhobbling Gains.
This is why optimizing on best practices is the gift that keeps on giving in how it shapes purpose, process and (realized) potential.
But that’s qualitative. To quantify it, Aschenbrenner’s graph shows that in four years’ time we were able to achieve the same level of performance for ~100X less compute (and, concomitantly, much higher performance for the same compute).
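To make the multiplier logic concrete, here is a back-of-the-envelope sketch. The 100X algorithmic multiplier echoes the ~100X figure above; the raw compute growth number is a hypothetical placeholder, not a figure from the essay.

```python
# Back-of-the-envelope "effective compute" arithmetic (illustrative only).
# Effective compute = raw compute x algorithmic-efficiency multiplier.

raw_compute_growth = 1_000   # hypothetical: more raw FLOPs over the period
algo_multiplier = 100        # ~100X from algorithmic efficiencies, per the text

effective_growth = raw_compute_growth * algo_multiplier
print(f"effective compute growth: {effective_growth:,}x")  # 100,000x

# The flip side of the same multiplier: matching the old performance cheaply.
print(f"compute needed for the old performance level: {1 / algo_multiplier:.0%}")  # 1%
```

The point of the exercise is that the two vectors compound: gains in raw compute and gains in algorithmic efficiency multiply rather than add.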
A related point is that as the quality of data, and the means of fortifying that data, get better, we could (and should) see better models for AI to internalize.
Better training runs on better trains with better tracks.
Unhobbling, by contrast, may focus on overcoming the current constraints of AI systems: lack of long-term memory, limited capacity to use a computer, severe limits on action in the physical (vs. digital) realm, lack of reflective thought, and limited collaborative skills; notably, many of the things that come natively to humans.
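One way to see why unhobbling is cheap relative to training is that it is often just scaffolding around a frozen model. Below is a toy sketch of bolting long-term memory onto a memoryless model; `base_model` and every other name here are hypothetical stand-ins, invented for illustration.

```python
# Illustrative "unhobbling" scaffold: the base model is unchanged, but a
# simple long-term memory store is wrapped around it. All names hypothetical.

from dataclasses import dataclass, field

def base_model(prompt: str) -> str:
    """Stand-in for a frozen, memoryless text model."""
    return f"[model response to: {prompt!r}]"

@dataclass
class MemoryAugmentedAgent:
    memory: list[str] = field(default_factory=list)

    def ask(self, user_input: str) -> str:
        # Crude retrieval: any stored note sharing a word with the new input.
        # A real system would use embeddings; word overlap keeps this self-contained.
        words = set(user_input.lower().split())
        relevant = [m for m in self.memory if words & set(m.lower().split())]
        prompt = "\n".join(relevant + [user_input])
        reply = base_model(prompt)
        self.memory.append(user_input)  # persist across turns
        return reply

agent = MemoryAugmentedAgent()
agent.ask("My launch deadline is March 3.")
print(agent.ask("Remind me: when is my launch deadline?"))  # prompt now carries the earlier note
```

Nothing about the underlying model improved, yet the system now remembers across turns; that gap between raw capability and deployed capability is what the unhobbling framing captures.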
In closing, here are some scenario-planning bets, curated through my own biases but also vetted. Worth marinating on, if you are so inclined:
- By 2030, thanks to Agentic AI, you’re going to have bots that look and perform more like a Co-Worker than a ChatBot.
- One of the more interesting questions will be pricing models for Agents and Agentic systems. Will pricing be more like a licensed seat; more like a Mechanical Turk unit; or more like a 1099 hire?
- Once the models can automate AI research, that will kick off intense feedback loops, opening the door to solving the remaining bottlenecks to AI fully automating almost everything. In the process, AI will begin to evolve very rapidly.
- To think of the scale of AI, imagine 100 million automated researchers, each working at 100X human speed with access to the full library of knowledge in the domain of focus, each able to do a year’s worth of work in a few days (at 100X speed, 365 days ÷ 100 ≈ 3.7 days).
- The hyper acceleration of intellectual activities created through AI automation at scale will yield the creation of ultra-intelligent machines and an ‘intelligence explosion’ that leaves the intelligence and inventions of man far behind. This will be a catalytic event for mankind.
- Most basically, AI will be able to self-improve through an ability to write millions of lines of complex code, keep its entire codebase in context, and spend the human equivalent of decades checking and re-checking every line of code for bugs and optimizations.
- A non-obvious “unfair advantage” of the existence of a robust AI training fabric is that you won’t have to individually train up each automated AI researcher. Instead, you can just teach and onboard one of them—and then make replicas.
- As the AGI race intensifies—as it becomes clear that superintelligence will be utterly decisive in international military, political and economic competition—we will have to face the full force of foreign espionage, hacking and intelligence wars.
- Unless we solve alignment—unless we figure out how to instill the critical side-constraints—there’s no particular reason to expect this small civilization of superintelligences will continue obeying human commands in the long run. Put another way, it seems totally within the realm of possibility that at some point the AIs will simply conspire to cut out the humans, whether suddenly or gradually.
Either way, rest assured a wild ride is ahead, through the looking glass, that is.