When Everyone Can Build Software Using AI, What Still Matters (feat. AJ Bubb)
AI democratizes building, shifting advantage from execution to insight, problem clarity, and trust—while raising risks of shallow thinking and over-reliance.
The conversation between Krish Palaniappan and AJ Bubb offers a sharp lens into how AI is reshaping not just software development, but the very nature of work, differentiation, and expertise. At its core is a grounded but often misunderstood idea: AI is not replacing human capability, it is amplifying it. AJ frames this as “human plus AI,” where machines accelerate execution while humans remain responsible for direction, intent, and judgment. This distinction becomes critical in the context of “vibe coding,” where AI takes over much of the mechanical effort of building software.
Podcast: AI Didn’t Kill Engineering: It Changed It — on Apple and Spotify.
Vibe coding compresses the distance between idea and execution. What once required coordinated engineering effort over weeks can now be prototyped in days, sometimes hours. This shift has dramatically lowered the barrier to entry, enabling both developers and non-developers to bring products to life. But in doing so, it has also commoditized the very act of building. If anyone can create software, then creation itself is no longer a differentiator. The competitive edge moves upstream—toward problem definition, clarity of thought, and the ability to shape solutions that reflect real-world nuance rather than generic outputs.
Krish raises a subtle but important concern: when people rely on AI too early in the process, they risk outsourcing not just execution, but thinking. Without a clear mental model of the problem, the tool begins to influence direction, introducing bias and often converging outcomes across users. AJ acknowledges this tension directly when he notes that AI will only do what it is asked to do—if the user lacks clarity, the output reflects that gap. This creates a paradox: the more powerful the tool, the more important it becomes to know what you’re doing before you use it.
Experience, therefore, still matters—but in a more nuanced way. It is less about knowing how to code and more about understanding the domain you are operating in. A seasoned practitioner brings context, pattern recognition, and an instinct for what questions to ask. AI can accelerate answers, but it cannot compensate for poorly framed problems. As highlighted in the discussion, someone with decades of experience in a field will always guide the tool more effectively than someone encountering the domain for the first time, even if both have access to the same technology.
A deeper risk emerges as AI-generated output becomes abundant: the erosion of human thinking. Instead of creating, experts increasingly find themselves reviewing and validating machine-generated content. AJ points out that a significant portion of senior expertise is already shifting toward proofreading “AI slop,” a trend that, if unchecked, could lead to the atrophy of critical thinking skills. When individuals stop exercising judgment and rely too heavily on automation, they risk losing the very capabilities that make AI valuable in the first place.
At the same time, there is a powerful upside for those who remain curious. AI rewards individuals who ask better questions, probe deeper, and iterate thoughtfully. Rather than using AI as an answer engine, AJ emphasizes using it as a discovery tool—something that helps identify blind spots and uncover the “corners” of a problem space. This reframing shifts the value from knowing answers to knowing how to explore, a skill that becomes increasingly important in an AI-driven environment.
These changes extend into hiring and team design. The rise of AI-enabled workflows is pushing organizations toward hybrid roles, where individuals are expected to operate across disciplines. Engineers must think in terms of product and user experience, while product managers must engage more deeply with technical possibilities. The modern contributor begins to resemble a one-person cross-functional team. However, this shift introduces tension between breadth and depth. While generalists can move quickly and adapt, they may lack the deep expertise required to navigate complex or high-stakes challenges.
As building becomes easier, differentiation shifts toward trust and proximity to the customer. In a world where multiple teams can produce similar solutions, the deciding factor is no longer just what is built, but who is building it and how well they understand the user. AJ highlights that success increasingly depends on being close to the customer—engaging directly, iterating with feedback, and building credibility through interaction. Founder-led storytelling and community presence begin to matter as much as, if not more than, the product itself.
Ultimately, the conversation reinforces a simple but powerful idea: tools do not determine outcomes—people do. AI expands what is possible, but it does not replace the need for clarity, judgment, or responsibility. The starting point is still the problem—what you are trying to solve and for whom. Everything else, including the tools you use, follows from that. The future that emerges is not one where humans are sidelined, but one where their role becomes more intentional. The real challenge is not keeping up with AI, but maintaining the discipline to think clearly, ask the right questions, and stay grounded in purpose as the tools around us continue to evolve.