Aaron Levie Just Described Every Layer 1 Problem in the Enterprise. He Just Did Not Call It That.
Aaron Levie spent last week on the road meeting with dozens of IT and AI leaders at large enterprises across banking, media, retail, healthcare, consulting, tech, and sports. He came back with eight observations about the state of AI adoption. They are worth reading carefully. Not because they are surprising, but because of what they reveal about where the real work actually lives. Every problem he describes is a sequencing problem. Enterprises tried to start at Layer 2 and Layer 3. They are now paying retroactively for Layer 1.
"Moving from chat to agents that execute real work"
This is the Layer 1 to Layer 2 transition happening in real time across the largest organizations in the world.
Chat is Layer 2 without a foundation underneath it. It answers questions but cannot act on anything because it has no unified access to systems, data, or workflows. Agents require Layer 1 to actually function. The enterprises that built clean data foundations first are the ones now running agents that do real work. Everyone else is discovering that the agent era requires a foundation they never built.
The companies making this transition smoothly are not smarter. They built in the right order.
"Change management still the biggest topic"
Aaron describes enterprises needing a ton of help to drive agent adoption, including one company that has a head of AI in every business unit reporting up to a central team just to keep all the functions coordinated.
This is a Layer 3 failure. Enterprises deployed Layer 2 tools into environments that were never redesigned to receive them. The workflows were not set up for agents because the human layer, including the roles, processes, decision rights, and accountability structures, was never updated alongside the technology. Adding a head of AI to every business unit is an organization trying to retrofit Layer 3 after the fact. It is expensive and slow because it should have come before the tools, not after.
The sequence matters. You cannot buy your way out of a change management failure with more technology. You address it before you deploy.
"Tokenmaxxing — OpEx budgets and shark tank pitches for compute"
This one is fascinating and underreported. Companies are running out of budget for AI tokens mid-year and holding internal competitions to allocate compute to the best use cases.
This is a Layer 1 governance problem in a finance costume. Companies do not know which use cases actually have the data foundation to support agents, so they cannot intelligently prioritize where compute goes. The shark tank model is actually correct. It forces teams to prove a use case is viable before getting resources. That is an audit conversation in a different form.
The companies that will win this allocation problem are the ones who built the diagnostic layer first: the ones who know what they have, what it costs to run, and which workflows are actually ready for automation.
"Fixing fragmented and legacy systems is a huge priority"
Aaron's most direct observation: decades of on-prem systems and partially migrated cloud infrastructure mean agents cannot tap into data sources in a unified way. Companies are spending heavily on modernization.
This is Layer 1 failure at enterprise scale. It is not a new problem. What is new is that the cost of not having a clean data foundation is now visible in a way it was not before agents existed. The enterprises spending on this modernization right now are not doing AI work. They are doing foundation work. They just had to wait until their agents broke before they understood why it mattered.
This is the same pattern at every scale. The enterprise version has more zeros attached to it.
"Most companies are not talking about replacing jobs"
The major use cases Aaron describes are software upgrades, back office automation, and document processing for client insights. This is all Layer 2 work that was consuming Layer 3 human time.
Agents are not replacing judgment. They are eliminating the manual work that was crowding out judgment. Every hour an analyst spent manually pulling data and assembling reports is an hour that analyst was not doing the strategic interpretation their title implied. Agents handle the assembly. The human handles the analysis.
This is the Technology 3.0 principle in practice. The goal is not a more efficient version of how you work today. The goal is getting people out of the work that does not require them so they can do more of the work that does.
"Headless software — enterprises will kick out vendors who do not make interoperability easy"
This is an infrastructure principle. The companies that built proprietary, closed technology stacks are now discovering that locked architectures are a liability in a world where agents need to move across systems freely.
The enterprises that win are the ones with clean, accessible, well-governed data layers that any agent can connect to. This is the governance conversation applied at scale. Build the infrastructure to work with anything, not just what you have today. Vendor lock-in was always a risk. In the agent era it becomes a ceiling.
"Hard to standardize — multi-agent world, interoperability paramount"
This is what happens when an entire market skips Layer 1 and builds Layer 2 in competing directions simultaneously.
Nobody wants to commit to an architecture because nobody built the foundation that would make architectural choices durable. The companies that will navigate the multi-agent world are the ones whose data layer is clean enough that the agent layer on top is interchangeable. The foundation does not change with the model. The foundation does not change with the vendor. The foundation is what makes every choice above it reversible.
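The shape of that argument can be sketched in code. The example below is illustrative only: the class names, the toy `query` method, and the two vendor agents are all invented for this sketch, not taken from any real product or SDK. The point it demonstrates is structural: the agents depend on the foundation's interface, so swapping the agent layer never touches the data layer.

```python
from typing import Protocol


class DataLayer(Protocol):
    """The stable foundation: a clean, governed interface to enterprise data."""
    def query(self, question: str) -> list[dict]: ...


class WarehouseDataLayer:
    """One concrete foundation (hypothetical). Vendor swaps above it never touch this."""
    def __init__(self, records: list[dict]):
        self._records = records

    def query(self, question: str) -> list[dict]:
        # Illustrative lookup: return records whose fields mention any query term.
        terms = question.lower().split()
        return [r for r in self._records
                if any(t in str(v).lower() for t in terms for v in r.values())]


class VendorAAgent:
    """A hypothetical agent from one vendor. It only knows the DataLayer interface."""
    def __init__(self, data: DataLayer):
        self.data = data

    def run(self, task: str) -> str:
        rows = self.data.query(task)
        return f"VendorA summarized {len(rows)} record(s)"


class VendorBAgent:
    """A competing vendor's agent. Interchangeable because the foundation is stable."""
    def __init__(self, data: DataLayer):
        self.data = data

    def run(self, task: str) -> str:
        rows = self.data.query(task)
        return f"VendorB summarized {len(rows)} record(s)"


foundation = WarehouseDataLayer([
    {"client": "Acme", "status": "renewal due"},
    {"client": "Globex", "status": "active"},
])

# The agent layer is reversible; the foundation does not change with the vendor.
for agent_cls in (VendorAAgent, VendorBAgent):
    print(agent_cls(foundation).run("renewal"))
```

Locked architectures invert this picture: when each vendor's agent talks to its own proprietary data silo, every vendor change means rebuilding the foundation too.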
"Everyone is working more than ever before"
The most honest observation in the post.
This is the cost of deploying Layer 2 without Layer 1. When you add agents on top of fragmented infrastructure you create more work. More coordination overhead, more troubleshooting, more edge cases to handle manually. AI in a messy environment generates more work. AI in a clean environment generates more capacity.
The enterprises Aaron is visiting are experiencing the former. The ones working the hardest right now are the ones who deployed first and built the foundation second. They will get to the other side. But it is costing them.
Aaron's Meta Observation — and What It Actually Means
Aaron closes with the most important point in the post. Skills, MCP, CLIs, and agent architecture may be simple concepts inside tech, but in the enterprise they will require technical people to bring them to life.
Engineers are not being replaced. They are becoming the architects of the infrastructure that everything else runs on. The person who used to write the software is now designing the systems that the agents operate within. That is a more consequential role, not a diminished one.
This applies to advisory work too. The value in this era is not knowing which AI tools to buy. The value is knowing how to build the foundation that makes the tools work, and having seen enough failures to know what the right sequence looks like before the client learns it the expensive way.
What This Means for Companies Not in the Fortune 500
Everything Aaron observed in the enterprise applies one level down.
A 50-person professional services firm with fragmented tools and no data governance is experiencing the same Layer 1 problem as a Fortune 500 bank with legacy infrastructure. The numbers are different. The sequence failure is identical.
The enterprises Aaron visited have the resources to fix the foundation after the fact, even if it is painful. Smaller companies often do not. Which means the window to get the sequence right is shorter and the cost of getting it wrong is proportionally higher.
The lesson from the largest organizations in the world is not that AI is hard. It is that foundation work cannot be skipped. The enterprises that tried to skip it are now paying twice. Once to deploy the agents, and once to build the infrastructure the agents needed all along.
Build the foundation first. Everything else follows.
Sanjay Bhutiani is the founder of Dreaming Tree AI and a Technology 3.0 advisor to founders and growing businesses. He spent 22 years building the technology infrastructure inside the world's leading M&A and strategic communications firm.
Read Aaron Levie's original post on X →