Six months ago, I didn’t have a GitHub account. I didn’t know what a commit was. I had never opened a terminal window on purpose.
I’m a technology executive. I’ve spent 30 years building and governing systems at every scale — most recently leading AI strategy for over 100,000 employees. I can run a program, design an operating model, and navigate an enterprise architecture diagram in my sleep. But writing code? That was always someone else’s job.
Then I got curious.
Not curious in the way that makes you read an article and move on. Curious in the way that makes you wake up at 5am, open a laptop, and start talking to an AI about what you’re trying to build — before your actual job starts at 8.
What I Actually Did
I carved out two to four hours every day. Before dawn. After dinner. Weekends. Not because someone told me to, but because I couldn’t stop.
I started with Claude — Anthropic’s AI. Not as a chatbot. As a building partner. I gave it context about what I was trying to do, and we started working together. I’d describe what I wanted. It would write the code, explain what it did, and help me understand why. When it broke — and it broke often — we’d debug it together.
Within weeks, I had something I didn’t expect: a working system. Not a toy. A production operation.
Here’s what that system looks like today:
- A 67,000-word book manuscript — written collaboratively with AI, revised through five full drafts, and currently in peer review
- Four live websites, all auto-deploying from GitHub when I push changes (yes, I push changes now)
- A coordination system that tracks 13 projects across multiple repositories
- A portfolio operation that compresses what would be 890 human hours of work into 33 hours of AI-assisted execution
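That push-to-deploy loop is only a few commands. Here is a minimal sketch in a throwaway local repository — the repo name, file, and commit message are all illustrative, and the real remote and hosting configuration aren't shown:

```shell
# Illustrate the daily edit -> commit -> push loop in a disposable repo.
# Names here are hypothetical; in the real setup, the hosting service
# watches the GitHub repository and rebuilds the site on every push.
tmpdir=$(mktemp -d)
git init -q "$tmpdir/my-site"
cd "$tmpdir/my-site"
git config user.email "author@example.com"
git config user.name "Author"

echo "<h1>Hello</h1>" > index.html   # make a change
git add index.html
git commit -q -m "Update homepage"   # record it locally
git log --oneline                    # shows the new commit

# In the real workflow, one more command triggers the deploy:
# git push origin main
```

The whole "auto-deploying" part lives on the hosting side; from the keyboard, every update is just edit, commit, push.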
I didn’t hire anyone. I don’t have a budget for this. I have a personal AI lab and the willingness to show up every day and learn.
What Went Wrong
I should be honest about this part, because the wins are only half the story.
The first month was chaos. I tried to build a website before I understood how deployment works, and spent an entire evening staring at a blank page wondering why nothing showed up. I asked my AI to do things that were technically impossible and got confident-sounding answers that were completely wrong. And I believed those answers — because they sounded authoritative and I didn’t know enough yet to push back.
That was the real danger. Not the technical mistakes — those you can fix. The moments when AI sounds exactly right and you don’t have the experience to know it’s wrong. I’d get a beautifully structured answer, build on it for hours, and then discover the foundation was hallucinated. My AI didn’t remember what we’d done in previous sessions, so every conversation started from zero. Context evaporated overnight. I’d spend the first 30 minutes of every session re-explaining what we were building.
But I want to be clear: this wasn’t a smooth ascent. It was messy, frustrating, and humbling in ways that my 30-year career didn’t prepare me for. Being a beginner at 66 is a specific kind of uncomfortable.
What I Learned
Three things, and they surprised me.
First: AI doesn’t replace your judgment. It amplifies it. Every good decision my AI lab made started with me knowing what I wanted and why. The AI executed faster than I ever could, but it couldn’t tell me what mattered. That part was always mine. Thirty years of building systems taught me what questions to ask. AI taught me I could act on the answers immediately.
Second: the barrier isn’t technical. It’s emotional. The hardest part of building my AI lab wasn’t learning Git or figuring out markdown. It was admitting that I didn’t know things. Asking questions that felt basic. Making mistakes that a 25-year-old developer would never make. The vulnerability of being a beginner — at anything — when you’ve been an expert for decades. That’s the real barrier, and nobody talks about it.
Third: this changes what’s possible for one person. I’m one person with no engineering team, no design budget, and no technical co-founder. But the output of my AI lab looks like the work of a small department. Not because AI is magic, but because I learned how to direct it — like a conductor who can’t play every instrument but knows exactly what the orchestra should sound like.
Why I’m Telling You This
I’m not telling you this to impress you. I’m telling you because I think most people believe they can’t do what I did. They think you need to be technical. Or young. Or have some special access.
You don’t. You need curiosity, patience with yourself, and the willingness to be bad at something for a while.
Here’s the thing nobody warns you about: everyone’s focused on the bears — the big, scary AI risks that make headlines. Rogue agents. Deepfakes. Job replacement. But the real damage? The mosquitoes. The shadow AI your team is already using without telling you. The hallucinated data that slipped into a quarterly report. The integration that’s quietly leaking context to a tool nobody vetted. Those are the risks that bite you while you’re busy building bear fences.
My AI lab taught me to spot the mosquitoes. Not through theory — through building, breaking things, and paying attention to what went wrong in ways I didn’t expect.
I’m building this in public because I think keeping a personal AI lab — a place where you work with AI on real problems, every day, and learn what trust means by doing it — is the most important skill anyone can develop right now. Not prompt engineering. Not model selection. The practice of working with AI in a way that makes you more capable.
I’m 66. I’ve never been more alive.
Follow along. I’ll tell you when I’m wrong.
This is the first in an occasional series about building with AI — honestly, from someone who started with zero technical skills and a lot of questions. More at bearsandmosquitoes.ai.