Problem Solving with AI
A reflection on why I keep building — the puzzles, the failures, the learning loop, and why AI makes this era especially exciting for curious creators.
I've always been drawn to technology not because of what it is, but because of what it lets you do with a half-formed idea and enough stubborn curiosity to see it through. Building software is really just problem solving. You start with a question, a frustration, or a random "what if…", and then you try to turn that into something real. Sometimes it works. Sometimes you spend three hours debugging something that turns out to be a missing semicolon (humbling, every time). Both outcomes are genuinely useful, which is either a healthy perspective or a coping mechanism. Probably both.
The puzzle is the point
What keeps pulling me back is the puzzle aspect of it. Every project is basically a question waiting to be answered, and the process of working through it is weirdly satisfying even when it's frustrating: breaking it down, trying an approach, realising it was wrong, Googling something suspiciously specific, and eventually landing on something that works. The moments where things break are often where the real learning happens. When a dashboard I built for rostering at work stopped pulling the right data, it forced me to understand the underlying structure in a way that just reading the docs never would have. I built the thing, it fell over, and I came out the other side actually understanding it. That's the pattern.
Reflection as a deliberate practice
The reflection side of this is something I had to learn deliberately. It's easy to fix one thing and immediately jump to the next problem without ever stopping to ask what actually worked, what didn't, and what you'd do differently. I used to skip that step constantly. Now I think it's where most of the real improvement happens: build, test, reflect, adjust. When I started using Claude to help write formal correspondence at work, the first version was fine but generic. The reflection was: what's missing here? The answer was context specificity. The next version was actually good. Small loop, meaningful result.
A strange and exciting time to be building
Right now feels like a genuinely strange and exciting time to be building things. The barrier to entry has dropped so dramatically that ideas that would've required a full team a few years ago can now be explored by one person with a laptop and a willingness to experiment. I've rebuilt my portfolio three or four times in recent weeks, built out MVPs for a few different product concepts, and expanded into areas I would've written off as too technical not long ago. The pace of what's possible is kind of absurd. At work, I've built dashboards that help my team analyse data and spot trends in a fraction of the time it used to take. Tasks that used to eat up most of a morning now take minutes. That's not a marginal improvement; it's a different way of working.
The harder problem: helping others see what's possible
The thing I keep noticing is that the hard part isn't the technical side anymore. It's helping other people see what becomes possible. Some colleagues trust it immediately and want to push further. Others are sceptical, or can't yet see how it maps onto their specific workflow. Both reactions make complete sense to me. The real shift isn't just "do the same thing faster." It's "what could you build that you didn't even think was on the table?" And that question is much harder to answer than any debugging problem I've run into. The people who figure out how to answer it, both technically and for the teams around them, are going to be genuinely valuable in the next few years. That's the space I want to be in.