Macroeconomic impacts of AI adoption (feat. Dr. Kelly Monahan)
Kelly Monahan tells Krish that AI is really a leadership crisis: democratized expertise, exhausted middle managers, BS-talking executives, and plumbers winning.
If you’ve been losing sleep wondering whether your job will survive the AI revolution, congratulations: you’re already doing more strategic thinking than most C-suites. That, in essence, is the bracing message Dr. Kelly Monahan brought to a recent Snowpal podcast conversation with founder Krish Palaniappan. Kelly, who studies the future of work and has done time in the research trenches at Deloitte, Accenture, and Meta, has the rare distinction of having started her HR career by laying people off because of robotic process automation. It is, as career origin stories go, the equivalent of a firefighter whose first day on the job involves lighting a match. Twenty years later, the technology is more polite about it (chatbots are nothing if not cheerful), but the underlying question is the same: what is a human worker actually for?
Podcast: The People Who Spent 20 Years Becoming Experts Are About to Find Out That Experience Has Been Democratized, available on Apple and Spotify.
The People Who Spent 20 Years Becoming Experts Are About to Find Out That Experience Has Been Democratized
Generative AI isn’t just automating tasks — it’s distributing the very thing that made experienced leaders valuable. For decades, seniority meant accumulated intelligence. You knew things others didn’t. You’d seen cycles, patterns, failure modes. That institutional knowledge was the moat.
Kelly’s argument is that the moat is filling in. “Most leaders are where they are today because of their expertise,” she says, “but what happens when that becomes democratized?” When a junior employee with the right prompt can surface the same analysis a 20-year veteran could, the value equation changes completely. What leaders offer can no longer be just what they know. It has to be something harder to replicate — judgment, trust, the willingness to be accountable for decisions made in ambiguity.
That shift is why Kelly insists we’re not in a technology moment. We’re in a leadership moment.
Middle Managers Aren’t Obsolete — They’re Just Being Asked to Do Something They Were Never Trained For
The tech industry has a running fantasy: flatten the org, cut the middle, let AI coordinate what managers used to. Kelly thinks this is a mistake, and she’s blunt about why.
Middle managers are already the most burned-out segment of the workforce. They’re sandwiched between a C-suite selling an AI vision that isn’t fully real yet, and a workforce eager to use tools their companies haven’t figured out how to deploy. “The tools are not quite where some of the C-suite and board thinks they are,” she says. Meanwhile, managers are expected to execute a transformation that hasn’t been designed.
The role isn’t disappearing, but it is changing in a specific direction. The old job — relay information up, execute instructions down — is shrinking. The new job is orchestration: figuring out which work gets done by humans, which by AI agents, and how to hold that hybrid accountable to outcomes. It’s messier, more political, and more human than ever. The managers who survive won’t be the ones who master the tools fastest. They’ll be the ones who can navigate the parts of organizations that AI genuinely cannot touch.
That said, Kelly draws an important distinction between product companies and everyone else. In a product company, middle managers typically contribute directly — they’re in the codebase, the architecture, the design decisions. Their expertise justifies their seniority. In government agencies or traditional consulting hierarchies, where tenure drives promotion more than output, the calculus is different. If your job is coordination without contribution, AI makes it very difficult to justify that role.
Nobody Actually Knows How Many AI Agents Their Company Needs, and That’s the Problem
Krish put a question to Kelly that she called “a billion-dollar question for consulting companies”: who decides how many AI agents a company deploys, and how do you separate the ones every team shares from the ones that are specific to a single function?
The honest answer right now is: nobody has figured this out cleanly. What Kelly is seeing in practice is experimentation without governance — teams spinning up agents independently, in parallel, without coordination. The result isn’t efficiency. It’s complexity. “I have more complexity, not efficiency, because of all these AI agents I’m trying to manage,” is the phrase she keeps hearing from inside organizations.
Her prescription is a shared leadership agenda anchored at the C-suite level. The CHRO needs to be in the room, not just the CTO and CIO. HR, which has historically been left out of technology decisions, has the exact expertise this moment demands: how do you design work, structure spans of control, and build organizations around outcomes? Those questions don’t have technical answers. They have human answers. And HR is where that knowledge lives.
The principle Kelly keeps returning to is simplification. Before adding more agents, define what you need at the enterprise level and at the functional level, and be ruthless about eliminating overlap. The companies winning with AI aren’t the ones running the most experiments. They’re the ones that have decided what they’re actually trying to accomplish.
CEOs Are Saying “AI” 17 Times Per Earnings Call While Their Dev Teams Are Still Figuring Out the Tools
There’s a gap between the AI story being told and the AI reality being lived, and Kelly names it directly. Leaders know that positioning their company as AI-enabled can mean a two-to-three times valuation lift. The incentive to overclaim is enormous. And so they do.
Meanwhile, the teams actually building things are still working through which tools are ready for production, which workflows have genuinely changed, and which “AI transformation” initiatives are really just rebranded pilots that haven’t shipped. The board gets the aspirational version. The engineers get the uncertainty.
This isn’t always cynical — some of the gap is genuinely a lag between where the technology is heading and where it is right now. But Kelly doesn’t let leaders entirely off the hook. The fundamental problem is that most companies have invested heavily in AI tools without doing the hard downstream work: redesigning the job, rebuilding the workflow, doing the change management that actually makes transformation stick. She’s seen what that takes in consulting. It’s an 18-to-24-month roadmap, minimum. Most executives are measuring progress by next quarter.
The SaaS Apocalypse Is Probably Overblown — But the Market Doesn’t Seem to Have Decided Yet
Krish raised the SaaS conversation with concrete numbers: Atlassian went up 40% on earnings, then added another 5% the next day. Workday, Salesforce, Asana, Monday — companies that had been hammered for a year — are bouncing back. The market keeps changing its mind.
Kelly’s read is that this whiplash is structural. Most of the broader economy is in a low-to-no-growth environment. That’s not purely an AI story — macroeconomic complexity is doing a lot of work here. But it means that AI stocks are essentially holding up the equity markets, which creates an outsized sensitivity to any signal about AI’s actual progress. Jensen Huang’s position — that SaaS companies need to evolve but aren’t going away — is closer to Kelly’s view than the doom narrative. These companies have distribution, customer relationships, and institutional trust that takes years to build. AI doesn’t make those irrelevant overnight. It does, however, require them to rethink what they’re selling and how they’re delivering it.
The Consulting Industry Built Its Junior Pipeline on Tasks That AI Now Does Better, Cheaper, and Faster
Kelly grew up in consulting. She knows the model: junior staff spend two years learning the craft through PowerPoint decks and memos, billing at a premium while absorbing industry knowledge from senior partners. That pipeline produces the partners of the future.
The problem is that AI has made the first half of that equation untenable. “You don’t need that junior consultant anymore to do that deliverable,” she says. AI can produce a polished analytical deck faster and cheaper than a first-year analyst, without the overhead. If the business case for hiring junior consultants was always partly about developing future partners, that calculus just got a lot harder.
The second challenge is more fundamental. What you hire McKinsey or Deloitte for is intelligence — the framework, the insight, the perspective accumulated across hundreds of engagements. That is precisely what generative AI is democratizing. The consulting model has to move toward problems that are genuinely hard: change management, human dynamics, the ethics of automation, the decisions that require judgment that can’t be offloaded. Firms that keep selling software implementation and document production are going to feel the pressure first.
The managed-services business faces an even steeper reckoning. The large-scale outsourcing model — teams in the Philippines and India handling operations at volume — maps almost directly onto what AI automates. Kelly is candid that she worries about what this means for countries where those jobs represent significant economic opportunity. The question of responsibility — who thinks through these consequences before making the switch — isn’t a business question. It’s an ethical one.
The Economy Looks Fine Until You Realize It’s Being Held Up by One Sector
The K-shaped economy isn’t a metaphor. It’s a description of what’s actually happening: returns to capital and highly skilled knowledge work are accelerating, while pressure mounts on everyone else. The upper tier keeps spending. Luxury travel, airlines, fine dining — demand stays strong. Spirit Airlines goes bankrupt. Both things are true at the same time.
Kelly isn’t panicking, but she’s watching the lagging indicators that don’t show up immediately: credit card debt rising, spending rotating toward necessities, the compounding effect of price pressure on anyone living without a significant financial cushion. Q3 and Q4 of this year, she thinks, will be telling. The part of consumer spending that looks healthy right now may be masking a delayed adjustment.
The deeper point she makes is about interconnection. The U.S. economy is not an island. Supply chains, outsourcing relationships, oil markets, demographic shifts in Asia — all of it connects back. When companies automate away managed services jobs in India, that has consequences that eventually ripple through trade, through goods, through prices here. “The bagel you go get for breakfast has tremendous world economic consequences,” Kelly says — and most of us haven’t thought about the chain that produced it.
The New Skill Isn’t Learning to Code — It’s Learning to Unlearn
The most surprising advice Kelly offers doesn’t come from a workforce development framework. It comes from a long look at what AI actually can’t do. Empathy, wisdom, ethical judgment, creativity, the ability to hold complexity and act in ambiguity — these aren’t soft skills. They’re the hard ones. They’re the ones nobody has systematically developed, because the STEM premium made everything else feel optional.
Her read: the professions most immune to AI automation aren’t the ones that sound impressive on a LinkedIn profile. They’re healthcare, education, skilled trades. There are already labor shortages in all three. The culture hasn’t caught up — it’s still glamorizing the path of the YouTube influencer, the vibe coder, the growth hacker. But the plumber and the electrician may end up significantly better positioned in the economy that’s actually forming.
Krish offered his own version of the same idea from the builder’s perspective. The new skill, in his words, is not any particular language or algorithm. It’s “how do I solve this problem better using the current suite of people, agents, technologies, and the changing dynamics of the larger world?” The muscle memory that made experienced engineers valuable — the deeply ingrained patterns of how software gets built — is now partly a liability. The engineers who thrive will be the ones who can unlearn it.
Kelly loved that framing: “That might be your snippet for social sharing.”
The Decisions We Make About AI Today Will Shape Things for Generations
Kelly’s closing wasn’t hedged. She believes this moment is genuinely consequential — not in the hype-cycle sense of transformative technology, but in the sense that the choices leaders make right now about how to use AI, and how to treat the people displaced by it, will compound.
Her book, Reclaim the Plot, is written as fiction, drawing on real patterns she’s observed across tech and consulting without naming anyone. The central argument is that leaders keep chasing new technologies at the expense of people, and that this moment requires something different: an active rewriting of the story, not just an optimization of the current one.
The session ended the way all good conversations do — a little open, a little unresolved, with more questions raised than answered. Kelly’s dinner order was sushi, steak, and New York cheesecake. Krish’s assessment: she’s not saving that thousand dollars a month.
Listen to the full conversation on the Snowpal Podcast. Check out Dr. Kelly Monahan’s new book, Reclaim the Plot: How Leaders Rewrite the Story When AI Rewrites Work.