While many worry that AI is making us dumber by doing our thinking for us, I've experienced the opposite: AI has made me a better critical thinker.
I used to think the biggest AI productivity gain was speed—fewer dev doc tabs, less syntax debugging, faster feature delivery. But after building closely alongside AI for some time now, I've discovered something deeper about my process as a maker: I'm not just building faster, I'm learning exponentially faster.
Every interaction has become a mini-masterclass. Not because AI is teaching me like a traditional tutorial, but because it's creating a feedback loop I've never experienced before. I approach a problem with my usual mental model, AI meets me there, then consistently shows me angles I hadn't considered. The result isn't just working code—it's an upgraded way of thinking about the problem itself.
Programming has been part of my craft throughout my design career, but since starting Be01, I've been doing significantly more of it. What surprised me wasn't just how much AI accelerated my coding—it was how the learning patterns applied everywhere. The same compound effect happens when I'm iterating on design concepts, working through product strategy, or even structuring user research. AI doesn't just help me execute faster; it often shows me alternative approaches that upgrade my thinking process itself. Whether I'm debugging code logic or refining a user journey, the meta-skill of learning through AI-assisted iteration has become universally valuable.
From "Solve then Build" to "Build to Solve"
The most fundamental shift has been in my relationship with uncertainty. I used to spend significant time upfront planning and architecting, wrestling with complexity until I had it figured out before writing the first line of code. Getting stuck was painful because it felt like failing at the planning stage.
Now? I approach problems with what feels like "iterative confidence"—the knowledge that my initial logic can evolve, and that's not just okay, it's the point. When I'm not 100% sure my approach will be optimal, I jump into trying it out faster, knowing AI might suggest paths I haven't considered yet.
To me, it hasn't felt like recklessness; it feels more like a new kind of strategic approach to building. I've become comfortable with the idea that my architecture can change mid-build, and rather than being disruptive, this flexibility has become an advantage. I've learned to treat my first solution as a hypothesis, not a destination.
The quality paradox: When simple beats fast
Here's something unexpected: I've stopped optimizing primarily for performance and started evaluating solutions by their cognitive load. When AI suggests a "better way," what makes it better isn't always speed; more often it's simpler logic that's easier to track mentally, or cleaner syntax that reduces the mental overhead of maintaining the code.
Last week, I was building a feature that needed to sync user data across multiple components whenever changes happened. My instinct was to set up a series of event listeners and update functions, each component watching for specific changes. I had it working, then asked AI: "Is there a better way?" It suggested using a single state reducer with clear action types. Not only was it more performant—it was dramatically easier to reason about. Instead of tracking five different update flows, I could follow one predictable pattern. Now when debugging, I look at the reducer and immediately understand the entire data flow.
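To make the contrast concrete, here's a rough sketch of the shape AI pointed me toward, assuming a React-style reducer. The state fields and action names (UserState, UPDATE_NAME, and so on) are illustrative stand-ins, not the actual feature code:

```typescript
// Illustrative only: one reducer with explicit action types,
// replacing several per-component listeners and update functions.
type UserState = {
  name: string;
  email: string;
  preferences: Record<string, boolean>;
};

type UserAction =
  | { type: "UPDATE_NAME"; name: string }
  | { type: "UPDATE_EMAIL"; email: string }
  | { type: "TOGGLE_PREFERENCE"; key: string };

// Every change to user data flows through this single pure function,
// so debugging means reading one switch statement instead of tracing
// five separate listeners.
function userReducer(state: UserState, action: UserAction): UserState {
  switch (action.type) {
    case "UPDATE_NAME":
      return { ...state, name: action.name };
    case "UPDATE_EMAIL":
      return { ...state, email: action.email };
    case "TOGGLE_PREFERENCE":
      return {
        ...state,
        preferences: {
          ...state.preferences,
          [action.key]: !state.preferences[action.key],
        },
      };
    default:
      return state;
  }
}
```

Instead of components listening for each other's changes, they dispatch actions into this one function, which is what makes the entire data flow readable in a single place.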
This has trained me to think differently about what "good code" means. The best solutions aren't necessarily the most clever—they're the ones that future-me (and future-teammates) can understand and modify without cognitive strain. AI has become my simplicity coach, consistently showing me that there's usually a cleaner path than my first instinct.
The learning paradox: Wanting to be wrong less, but also more
When I ask AI "is there a better way to do this?" for structuring implementation logic (which I do almost religiously now), it suggests something I hadn't considered as much as 50% of the time. This creates a strange emotional paradox: part of me wants that percentage to drop (indicating my skills are improving), but part of me would be disappointed if it did (because I'd miss out on learning opportunities).
This tension has taught me something valuable about growth: the sweet spot isn't eliminating gaps in knowledge, it's maintaining enough curiosity to keep discovering them. AI hasn't made me complacent—it's made me a better critical thinker and logic designer precisely because I know there's always another perspective available.
The meta-skills: What's really developing
Beyond specific programming patterns, I'm hopeful that I am developing meta-skills that compound across every project:
Tolerance for productive uncertainty: I'm more willing to tackle problems outside my comfort zone because I know I can learn through the process rather than before it.
Logic design intuition: Constantly seeing alternative approaches has sharpened my ability to architect systems that are functional, maintainable, and extensible.
Pattern recognition: Each "better way" AI shows me creates a reference point for future problems. I feel as if I am building a mental library of approaches faster than I ever could through traditional learning.
What this means for how we build
The more I experience this compound learning effect, the more I believe we're seeing expertise development change in a fundamental way. The traditional model—learn extensively, then apply—is being replaced by learn-through-application at unprecedented speed. This isn't just changing individual careers; it's changing what competitive advantage looks like for teams and companies.
My experience suggests that the builders who thrive won't be those who know everything upfront—they'll be those who've developed the meta-skill of learning through iteration, who can hold their initial approaches lightly while remaining committed to their outcomes. They'll be those who've developed the fastest learning feedback loops, because the way human expertise develops is fundamentally changing.
One more point of view: This compound learning effect isn't just changing how I build—it's revealing what human-AI collaboration could become when we stop treating AI as a tool and start treating it as a thinking accelerator. The question isn't whether AI makes you more productive. It's whether you've structured a way to learn from every interaction and compound that knowledge into better judgment over time.
Are you treating AI as a tool that gives you answers, or as a thinking partner that's upgrading how you think about problems? I’ve found that the compound effect only works if you're paying attention to the patterns, not just the solutions.
What's your experience been with AI as a learning accelerator? I'd love to hear about the moments when it showed you something that changed how you approach similar problems.