
Your MVP Isn’t Minimal Enough
Sarah Williams
You’re not building an MVP. You’re building a small product.
I’ve had this conversation dozens of times over the past six months. A founder shows me their “MVP”—and it has user authentication, a settings page, email notifications, a dashboard with charts, onboarding flows, and a feedback widget.
That’s not an MVP. That’s a product. A small one, sure. But a product nonetheless.
The distinction matters more now than ever. When AI can generate entire applications in hours, the bottleneck isn’t building—it’s learning. And learning happens fastest when you build the absolute minimum required to answer one question.
The Original MVP Was Radical
Eric Ries didn’t popularize the MVP to help founders ship small products. He framed it as a tool to help founders learn faster.
The original Dropbox MVP was a video. Not a working product—a video showing what the product would do. Drew Houston wanted to know if people cared about easy file syncing. A video answered that question in days, not months.
Zappos started by taking photos of shoes from local stores and posting them online. When someone ordered, Nick Swinmurn went to the store, bought the shoes, and shipped them himself. He wanted to know if people would buy shoes online. Manual fulfillment answered that question without building inventory systems.
These founders weren’t building small products. They were running experiments.
Why We Forgot
Somewhere along the way, MVP became synonymous with “version 1.0.” The minimum viable product became the focus, rather than the minimum viable test.
I think this happened because building was hard. If you’re going to spend three months writing code anyway, you might as well make it somewhat functional. The fixed cost of building anything pushed founders toward building “enough” rather than building “just.”
AI broke that calculation.
When you can build a functional app in a day, the calculus changes completely. Now the constraint isn’t building time—it’s your own clarity about what you’re testing. The fixed cost dropped to near zero, which means the marginal cost of adding features you don’t need is pure waste.
And yet, founders keep over-building. Old habits die hard.
What Minimal Actually Means
I worked with a founder last month who wanted to test whether small businesses would pay for AI-generated social media content. Her instinct was to build a content generation tool with scheduling, analytics, multi-platform posting, and team collaboration.
We talked her down to this: a simple form where users describe their business, a button that generates a week’s worth of posts, and a Stripe checkout for $29.
She launched it in 48 hours. Within a week, she had her answer: small businesses clicked generate eagerly but abandoned at checkout. The problem wasn’t generating content—it was that $29/week felt expensive for something they could do themselves “eventually.”
If she’d built the full tool, she would have spent months before learning the same thing. And she probably would have misinterpreted the learning—blaming the UI or the onboarding instead of questioning the core value proposition.
That’s what minimal means. Not “small product.” Smallest possible experiment to validate your riskiest assumption.
Finding Your Riskiest Assumption
Every startup has a stack of assumptions. Most founders instinctively test the safest ones first—the ones they’re secretly confident about. It feels productive. It avoids the scary questions.
Real MVPs flip that order.
Here’s a framework I’ve seen work well:
List every assumption your business depends on. Be honest—there are probably more than you want to admit.
Rank them by how catastrophic it would be if they’re wrong. An assumption like “users will sign up with email” is probably safe. An assumption like “enterprises will pay $500/month for this” might not be.
Rank them by how confident you actually are—not how confident you want to be. The gap between “catastrophic if wrong” and “actually confident” reveals your riskiest assumptions.
Build only what you need to test the riskiest one.
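The ranking in steps 2 and 3 can be sketched in a few lines of Python. The 1–10 scales and the risk formula here are my own illustration of the framework, not a canonical method:

```python
# Hypothetical scoring sketch: rank assumptions by the gap between
# how catastrophic they'd be if wrong and how confident you really are.
# The 1-10 scales and the risk formula are illustrative, not canonical.

def riskiest_assumptions(assumptions):
    """Sort assumptions so high-impact, low-confidence ones come first."""
    return sorted(
        assumptions,
        key=lambda a: a["catastrophe"] * (10 - a["confidence"]),
        reverse=True,
    )

backlog = [
    {"name": "users will sign up with email", "catastrophe": 2, "confidence": 9},
    {"name": "enterprises will pay $500/month", "catastrophe": 10, "confidence": 3},
    {"name": "the AI output is good enough to post as-is", "catastrophe": 8, "confidence": 5},
]

for a in riskiest_assumptions(backlog):
    print(a["name"])
```

Run against this toy backlog, the $500/month assumption sorts to the top: maximal damage if wrong, minimal evidence that it's right. That's the one your MVP should test.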
For most B2B products, the riskiest assumption is usually about willingness to pay. Which means your MVP might just be a landing page with pricing and a “buy now” button that leads to a waitlist or Calendly link.
For most consumer products, the riskiest assumption is usually about whether the core action is compelling enough to repeat. Which means your MVP might be a single-feature prototype that you watch real users interact with.
The AI Advantage (If You Use It Right)
AI doesn’t just make building faster. It makes rebuilding faster.
This changes everything about how MVPs should work.
In the old world, you wanted to build something extensible because rebuilding was expensive. You’d add infrastructure “just in case” because starting over would cost months.
In the AI world, rebuilding costs hours. Which means you can afford to build something ugly and throwaway. You can afford to hard-code things that “should” be configurable. You can afford to skip the architecture entirely.
Because you’re not building a foundation. You’re running an experiment. And when the experiment concludes—whether it succeeds or fails—you’ll probably rebuild anyway with everything you learned.
The teams I see moving fastest have internalized this. They treat every MVP as disposable. They build knowing they’ll throw it away. That mindset is liberating. It lets you focus entirely on learning, without the distraction of building something “good.”
A Practical Test
Before you start building your MVP, ask yourself: “If I learned tomorrow that my core assumption is wrong, how much of this code would I actually reuse?”
If the answer is “most of it,” you’re probably building too much. You’re investing in infrastructure before you’ve validated the foundation.
If the answer is “almost none”—if learning you’re wrong would send you in a completely different direction—then you’re appropriately minimal. The code is a means to an end, not an end in itself.
I find this mental exercise clarifying. It separates the learning-focused from the product-focused, even when both teams call what they’re building an “MVP.”
When to Graduate
To be clear: I’m not arguing you should only ever build scrappy experiments. At some point, you do need to build a real product. You need architecture, scalability, polish.
But that point comes later than most founders think. It comes after you’ve validated the core assumptions. After you know users want what you’re building and will pay for it. After the riskiest questions have answers.
Most startups that fail don’t fail because they built the wrong architecture. They fail because they built the wrong thing—and discovered it too late to recover. Every week spent on infrastructure is a week not spent validating assumptions.
Graduate to product-building when your assumptions are validated. Not before.
A Different Speed
There’s a paradox here that I keep coming back to.
The founders who build the least upfront often end up building the most in the long run. They validate faster, iterate faster, find product-market fit faster. Then they have runway and conviction to build something great.
The founders who build too much upfront often run out of time or money before they learn what they need to learn. They build products, not experiments. And products require conviction that they haven’t earned yet.
AI amplifies this dynamic. It’s never been easier to build a lot. Which makes it more important than ever to build only what you need.
Your MVP probably isn’t minimal enough. Ruthlessly cut until you’re uncomfortable. Then cut a little more.
The experiment is the point.
ProductOS helps founders go from assumption to validated experiment in hours. Five AI agents handle the building, so you can focus on the learning. Try it free at productos.dev.