
Six Roles, One Founder: What Solo AI Development Actually Looks Like

When building Kvalty.cz, I'm not just a developer. I'm a tech lead, art director, PM, QA tester, brand manager, and copywriter. Here's how the split actually works.

“Solo founder” is a slightly misleading term in 2025. When I’m building Kvalty.cz, I’m not alone. It’s more like a constant meeting between me and AI.

But who actually does what?

Most people think AI writes code and that’s it. The reality of development is much more varied. I had to learn to switch between these roles — sometimes three of them in the same hour. And that context-switching? It’s the part nobody warns you about.

Here’s the rough time allocation across a typical week: Tech Lead ~35%, Product Manager ~20%, Art Director ~15%, QA ~15%, Brand Manager ~10%, Copywriter ~5%. Those percentages shift constantly — some weeks QA eats 40% because something broke in production. Some weeks copywriting takes over because I’m launching a new landing page. But the average holds.

Tech Lead & Code Reviewer (~35% of time)

This is where AI does the most work. Churning out code, proposing features. My role? Gatekeeper.

I watch code quality, architecture, and security. AI is a powerful engine, but I have to make sure the machine doesn’t fall apart under the pressure of new features. I often spend time refactoring what AI “quickly” produced.

Here’s a real example. When I asked Claude to implement geographic search for driving schools, it proposed a solution that pre-computed distance matrices between all schools and cached them in Redis. Technically elegant. Completely impractical. The matrix for 1,700 schools would be ~2.9 million entries. Updates would require full recomputation. The whole thing was roughly 3x slower than just using PostGIS with a Haversine distance query at read time — which is what I ended up implementing in about 40 lines.
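
The production version lives in PostGIS, but the formula at the core of the read-time approach is tiny. Here is a plain TypeScript sketch of the Haversine distance (hypothetical helper, not the actual Kvalty code):

```typescript
// Haversine great-circle distance between two coordinates, in kilometers.
// Sketch of the read-time approach; in production this runs inside PostGIS.
function haversineKm(
  lat1: number, lon1: number,
  lat2: number, lon2: number,
): number {
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const R = 6371; // mean Earth radius in km
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Prague centre to Brno centre: roughly 184 km as the crow flies
const d = haversineKm(50.0755, 14.4378, 49.1951, 16.6068);
```

Computing this per row at read time, backed by a spatial index, beats maintaining a 1,700 × 1,700 matrix: no cache invalidation, no full recomputation when a school moves or closes.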

This happens more than you’d think. AI gravitates toward over-engineered solutions because it’s pattern-matching against complex architectures it’s seen in training data. My job is to say “no, we’re using a simple index here” and move on. The 237K lines of TypeScript in Kvalty? Probably 30% of that started as AI proposals that I stripped back to something maintainable.

The gatekeeper role also means reviewing every database migration before it runs. Every tRPC procedure before it ships. Every type definition before it propagates through the monorepo. AI writes the first draft. I decide if the draft lives or dies.

Art Director (UI/UX) (~15% of time)

Visuals are a dance. I come with the vision and wireframes. I try to guide AI toward the right style.

But sometimes I give it free rein: “Design me a component for the driving school detail page.” And sometimes it surprises me with a solution I wouldn’t have thought of. It’s a dialogue, not a dictate.

Where it falls apart: brand consistency. I asked AI to design the Kvalty Academy landing page. What I got back was a generic SaaS template. Rounded corners everywhere, gradient hero section, the same “Start your journey” CTA that every B2B product uses. It looked like it could sell project management software, CRM, or driving school prep equally well — which means it sold none of them.

The problem is that AI doesn’t understand your brand. It understands brands in general. It’ll give you the statistical average of every landing page it’s ever seen. My job is to push it past that average into something that actually feels like Kvalty — practical, Czech, slightly irreverent. That usually means 4-5 rounds of back-and-forth where I reject the first two attempts entirely.

I’ve found the sweet spot: I design the layout and visual hierarchy myself, then use AI to implement the components in Tailwind. The creative direction stays human. The CSS output gets delegated.

Product Manager (~20% of time)

Here, the human rules. AI can’t see inside users’ heads.

I watch analytics, read feedback, and decide WHAT gets built. AI is then my “Solution Architect” — helping me figure out HOW to technically implement it in a way that makes sense.

The failure case is instructive. Early on, I asked AI to analyze my analytics data and suggest the next three features to build. It proposed: a social feed where students share their driving test experiences, gamification badges for driving school reviews, and a community forum. Classic engagement-metric-chasing features that every product AI defaults to.

My actual users? They wanted one thing: to compare prices accurately and pick a school fast. They didn’t want a social network. They wanted a tool. Every feature suggestion AI made was optimizing for time-on-site. My users were optimizing for time-to-decision. Opposite goals.

Now I use AI for solution architecture, never for product strategy. “I’ve decided to build X, what’s the best technical approach?” works. “What should I build next?” doesn’t.

QA Tester (~15% of time)

AI is an incurable optimist. It usually claims “everything will work.”

Reality? Not so much. What if a user enters nonsense? What if the API goes down? Testing and finding holes is purely on me. AI fixes bugs, but finding and replicating them — that’s my job.

My testing methodology is unglamorous but effective. I maintain a mental checklist of the flows that break most often: the school registration flow (5 steps, tons of conditional fields), the pricing import pipeline (currency formatting, VAT handling, installment plan edge cases), and geographic search near region borders. I run through these manually after every significant deploy.

The things that break most? Internationalization edge cases. Czech diacritics in search queries. Price formatting differences between “5 000 Kč” and “5000 CZK” and “5,000.-”. Dates in DD.MM.YYYY vs. YYYY-MM-DD. These are the unglamorous bugs that AI introduces confidently and never catches.
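
Diacritics are the canonical case: a search for "autoskola" has to match "autoškola", so both queries and indexed text get folded to ASCII first. A minimal sketch of that folding (hypothetical helper, not the actual Kvalty code):

```typescript
// Fold Czech diacritics to plain ASCII for accent-insensitive search.
// NFD decomposition splits "š" into "s" plus a combining caron (U+030C);
// stripping the combining-mark range leaves only the base letters.
function foldDiacritics(text: string): string {
  return text.normalize("NFD").replace(/[\u0300-\u036f]/g, "");
}

foldDiacritics("autoškola"); // "autoskola"
foldDiacritics("Plzeň");     // "Plzen"
```

The same folding has to be applied on both sides, at index time and at query time, or the comparison silently fails in one direction.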

The worst AI QA failure: I shipped a tRPC procedure for updating school pricing. AI generated it and assured me the validation was solid. First user interaction: someone submitted a price with a comma as the decimal separator (standard in Czech: “12 500,50 Kč”). The parser treated the comma as a thousands separator and saved the price as 1 250 050 CZK. One driving lesson for the price of a new car. I caught it within hours because I have error alerts on pricing anomalies, but AI’s “it works” certification was worthless on anything locale-specific.

Now I have a rule: every piece of AI-generated code that touches user input gets manually tested with Czech-formatted data before it ships. No exceptions.
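
A defensive parser for those formats isn’t complicated; the hard part was knowing the formats exist. A sketch covering the variants mentioned above (hypothetical helper, not the shipped code):

```typescript
// Parse Czech-formatted price strings: "12 500,50 Kč", "5 000 Kč",
// "5000 CZK", "5,000.-". Sketch only; real input needs stricter validation.
function parseCzechPrice(input: string): number {
  // Drop currency markers and the ",-" / ".-" "no decimals" shorthand
  let s = input.replace(/Kč|CZK/gi, "").trim().replace(/[.,]-$/, "");
  // Spaces (including non-breaking U+00A0) are thousands separators in Czech
  s = s.replace(/[\s\u00A0]/g, "");
  // A comma followed by exactly two trailing digits is a decimal separator
  // ("12500,50"); any other comma is a foreign thousands separator ("5,000")
  s = /,\d{2}$/.test(s) ? s.replace(",", ".") : s.replace(/,/g, "");
  const value = Number(s);
  if (!Number.isFinite(value)) {
    throw new Error(`Cannot parse price: "${input}"`);
  }
  return value;
}
```

The comma heuristic is the part AI never generates on its own, because it only matters once Czech and Anglo-American formats meet in the same input field.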

Brand Manager (~10% of time)

Here, AI is (still) blind.

Sensing tonality, emotion, and brand trustworthiness? That’s a purely human discipline. Knowing what the product is about at its core and how it should affect people — no algorithm will figure that out for me.

This got bad enough that I had to create a formal Brand Voice document — a 3-page guide that goes into every AI context window when generating Kvalty content. It includes:

  • Tone: Practical, direct, slightly informal. Like a friend who happens to know everything about driving schools.
  • Forbidden words: “Seamlessly,” “leverage,” “cutting-edge,” “world-class,” “holistic,” “synergy,” anything ending in “-ize” that doesn’t need to.
  • Forbidden patterns: “Whether you’re a… or a…” constructions. “In today’s fast-paced…” openers. Bullet points that start with “Unlock” or “Discover.”
  • Example phrases: “Porovnej autoškoly ve svém městě” (Compare driving schools in your city) — not “Objevte nejlepší autoškoly” (Discover the best driving schools). The difference is subtle but it’s the difference between a tool and a brochure.
  • Czech-specific rules: Use “ty” (informal you), not “Vy” (formal). Reference real Czech cities, not generic placeholders. Prices always in CZK with spaces as thousands separators.

Even with the Brand Voice doc loaded, AI slips back into corporate mode roughly every third interaction. It’s a constant correction loop. But without the document? Every output reads like it was written by a mid-level marketing agency that’s never been to Czechia.

Copywriter (~5% of time)

I’ll be completely honest: even my LinkedIn posts are written with AI help.

But it’s not like I say “write a post.” I provide the thoughts, context, and direction. AI polishes it; I iterate and finalize. The Brand Voice document from the previous section pulls double duty here. Without it, AI keeps sliding into boring corporate phrases.

The LinkedIn example is worth detailing. I wrote a post about Kvalty reaching 1,000 validated schools. First AI draft: “Excited to share that Kvalty.cz has reached a major milestone! After months of hard work, we’ve validated over 1,000 driving schools across the Czech Republic. This journey has been incredible and I’m grateful for…” I stopped reading. It sounded like every other AI-generated LinkedIn post — the same cadence, the same hollow excitement, the same “grateful for the journey” energy.

My rewrite: “1,003 driving schools. Validated one by one. 20 AI agents crawling websites simultaneously. A few that were just a phone number taped to a car. Here’s what I learned about Czech driving school pricing: it’s chaos.” Same information. Completely different voice. The rewrite got 4x the engagement.

That’s the copywriting role in a nutshell. AI produces the average. I push it toward something with an actual pulse.

The Mental Tax

Here’s what nobody talks about: the context-switching cost.

At 9 AM I’m reviewing a database migration, thinking about index performance and query plans. At 10 AM I’m designing a comparison card component, thinking about visual hierarchy and whitespace. At 11 AM I’m writing meta descriptions for 14 region pages, thinking about keyword density. At noon I’m debugging a geocoding edge case where a school in Prague 5 was showing up in Plzeň.

Each switch costs 10-15 minutes of mental ramp-up. Multiply that by 6-8 switches per day and you’ve lost almost two hours just getting back into context. That’s the hidden overhead of being a solo founder — not the work itself, but the constant gear-shifting between architect and copywriter, between pixel-pusher and data engineer.

Some days I batch roles. Mondays are heavy on code review and architecture. Wednesdays lean toward content and brand work. But deadlines don’t respect role-batching, and a production bug at 3 PM doesn’t care that I had “copy day” planned.

How It Adds Up

Working with AI doesn’t mean you stop working. AI is the engine, but I’m still the one steering. From the first line of code to the last word on the website.

The six roles aren’t a burden — they’re the job description of building something real without a team. AI compresses the time each role takes. It doesn’t eliminate any of them. And the ones it’s weakest at — brand, product strategy, QA — are the ones that matter most for whether the product actually succeeds.

Would I hire humans for some of these roles if I could? Absolutely. A dedicated QA engineer would catch locale bugs I miss. A real copywriter would nail the Czech voice faster. But at the current stage, the economics don’t support it. So instead, I wear six hats, switch between them 8 times a day, and try not to forget which one I’m wearing.

Martin Svoboda

Android developer at Fortuna, founder of Kvalty.cz and Ferda App. Building products with Kotlin, React, and AI-assisted engineering from Prague.