Fourteen days. That's how long it took to go from kickoff to a live, functional product for NXTGEN Media -- a youth soccer talent network connecting players, coaches, and scouts.
That timeline sounds aggressive because it is. Two weeks is barely enough time for most agencies to finish their discovery phase. We shipped a working platform. Not a mockup. Not a prototype. A real product with real users.
Here's how we did it, what we cut, and the framework we now use on every project to separate what matters from what doesn't.
Why two weeks
The constraint wasn't arbitrary. NXTGEN Media had a specific window: a tournament season was approaching, and the platform needed to be live in time to capture content from early events. Missing that window meant waiting months for the next cycle of tournaments.
When the timeline is fixed and non-negotiable, scope becomes the only variable. You can't add people to make software faster (Brooks's Law has been true since 1975). You can't skip testing without paying for it later. The only lever is: build less.
The question shifts from "what do we want?" to "what's the absolute minimum that delivers value?"
The scope ruthlessness
We started with the founder's full vision. It was ambitious -- a comprehensive talent network with player profiles, video highlights, coach evaluations, scout dashboards, messaging, event calendars, and analytics. Easily a three to four month build.
Then we started cutting.
The decision framework
Every feature went through three questions:
- Does this need to exist for the first user to get value? If the platform doesn't work without it, it stays. If the platform works but is less convenient without it, it gets cut.
- Can we ship a manual workaround for now? Automated email notifications are nice. A founder manually sending emails for the first 50 users is fine. We optimized for features that couldn't be faked.
- Does this create data we'll need later? Some features don't deliver immediate user value but capture information that's critical for the next phase. Player profiles with structured data fell into this category -- even the MVP version needed clean data schemas so that future features could build on real information.
Every feature landed in one of three buckets:
Must-have (shipped in week two):
- Player profiles with structured data (name, position, age group, club, highlight reel links)
- Content feed showing player highlights and tournament coverage
- Basic search and filtering by position, age group, and region
- Mobile-responsive design (90% of the target audience is on phones)
- Admin dashboard for content management
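To make the search-and-filter item concrete, here's a minimal sketch of multi-field filtering over structured player profiles. The `PlayerProfile` shape and field names are illustrative assumptions, not NXTGEN's actual schema:

```typescript
// Hypothetical profile shape -- field names are illustrative, not the real schema.
interface PlayerProfile {
  name: string;
  position: "GK" | "DF" | "MF" | "FW";
  ageGroup: string; // e.g. "U15"
  region: string;
  club: string;
}

// An undefined field means "don't filter on this dimension".
interface ProfileFilter {
  position?: PlayerProfile["position"];
  ageGroup?: string;
  region?: string;
}

function filterProfiles(
  profiles: PlayerProfile[],
  f: ProfileFilter
): PlayerProfile[] {
  return profiles.filter(
    (p) =>
      (f.position === undefined || p.position === f.position) &&
      (f.ageGroup === undefined || p.ageGroup === f.ageGroup) &&
      (f.region === undefined || p.region === f.region)
  );
}
```

The point of the sketch: with clean structured data, "fast and intuitive" filtering is a few predicates, not a search engine. That's MVP-sized.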
Nice-to-have (scheduled for month two):
- Coach evaluation forms
- Direct messaging between users
- Advanced analytics and player comparison tools
- Email notification system
- Social sharing with Open Graph optimization
Not-now (backlog for future consideration):
- Scout dashboards with saved searches
- Event calendar integration
- Video upload and processing pipeline
- Automated recruiting workflow
- Payment processing for premium features
That's roughly a 70% cut from the original vision. It felt aggressive at the time. In hindsight, we could have cut more.
What the two weeks looked like
Days 1-2: Architecture and data model
We set up the Next.js project with our standard stack -- TypeScript, Tailwind CSS, Prisma for the database layer. The data model was the most important decision of the entire build. Get the schema wrong and every future feature fights against it. Get it right and new features snap into place.
We spent more time on the data model than any other single decision. Player profiles, content entries, user roles, relationships between entities -- all designed with the full vision in mind, even though we were only building a fraction of it.
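As a sketch of what "designed with the full vision in mind" means in practice, here are illustrative entity types: the relations for roles, profiles, and content exist from day one, even though the MVP only exercises a fraction of them. All names are assumptions for the sketch, not NXTGEN's actual Prisma schema:

```typescript
// Illustrative entity types only -- the real schema lives in Prisma;
// these names are assumptions for the sketch.
type Role = "player" | "coach" | "scout" | "admin";

interface User {
  id: string;
  email: string;
  role: Role; // roles exist up front so V2 features (evaluations, dashboards) slot in
}

interface Player {
  id: string;
  userId: string; // relation back to User
  position: string;
  ageGroup: string; // e.g. "U15"
  club: string;
  highlightUrls: string[]; // links for now; upload pipeline is "not-now"
}

interface ContentEntry {
  id: string;
  authorId: string; // User who published it
  playerIds: string[]; // players featured in the content
  title: string;
  publishedAt: Date | null; // null = draft, which the admin workflow needs
}

// Tiny helper showing a schema decision paying off: drafts are a field, not a feature.
function isPublished(c: ContentEntry): boolean {
  return c.publishedAt !== null;
}
```

Notice that the cut features leave fingerprints: `highlightUrls` is an array of links because video upload was deferred, and `publishedAt` is nullable because the admin dashboard needs drafts.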
Days 3-7: Core features
Player profiles and the content feed consumed the first full week of development. This wasn't just CRUD operations -- the content feed needed to feel engaging on mobile, search needed to be fast and intuitive, and player profiles needed to present structured data in a way that was immediately useful to coaches and scouts.
We built the admin dashboard in parallel. Every platform needs ops tooling from day one (we've written a whole separate post about this). The NXTGEN admin panel let the founder publish content, manage player profiles, and monitor platform activity without touching code.
Days 8-12: Polish and edge cases
This is where most two-week builds fall apart. Teams spend all their time on features and ship something that technically works but feels broken. We allocated three full days to responsive design, loading states, error handling, empty states, and the dozens of small interactions that make a platform feel real.
The difference between a prototype and a product is in the details: what happens when a search returns no results? What does the page look like before content loads? What error message does a user see when something goes wrong? These aren't features -- they're the baseline quality that determines whether users trust your platform.
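One way to keep those questions from slipping through the cracks is to model every view state explicitly, so empty, loading, and error states can't be forgotten by accident. A minimal sketch with a discriminated union (names are illustrative, not from the actual codebase):

```typescript
// Every state the page can be in -- the compiler forces each one to be handled.
type SearchView<T> =
  | { kind: "loading" }
  | { kind: "error"; message: string }
  | { kind: "empty" } // search succeeded, zero results
  | { kind: "results"; items: T[] };

// Map a raw fetch result into an explicit view state.
function toSearchView<T>(result: {
  ok: boolean;
  items?: T[];
  error?: string;
}): SearchView<T> {
  if (!result.ok) {
    return { kind: "error", message: result.error ?? "Something went wrong" };
  }
  const items = result.items ?? [];
  return items.length === 0 ? { kind: "empty" } : { kind: "results", items };
}
```

With this shape, the rendering layer has to say what the "no results" screen looks like -- forgetting it becomes a type error instead of a broken-feeling page.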
Days 13-14: Testing and launch
Final testing across devices and browsers. Content seeding so the platform didn't launch empty. DNS configuration, SSL, analytics setup, and the Vercel deployment pipeline. We had the founder do a full walkthrough of the admin dashboard to make sure the publishing workflow actually made sense to someone who isn't a developer.
The site went live on day 14. Not in beta. Not behind a password. Live, public, indexed by Google.
What we learned
1. The data model is the most important decision
Everything else can be refactored. A bad data model poisons every feature built on top of it. We now spend a disproportionate amount of time on schema design during every MVP build, regardless of timeline.
2. Admin tooling is not optional
The founder needed to operate the platform independently from day one. If we'd shipped the public-facing product without admin tools, every content update would have required a developer. That's not an MVP -- that's a demo.
3. Cut features, not quality
The temptation on a tight timeline is to ship everything half-finished. We've learned that it's dramatically better to ship five features at full quality than ten features that kind of work. Users forgive missing features. They don't forgive broken ones.
4. Manual processes are a valid V1
NXTGEN's founder handled user onboarding manually for the first month. No automated welcome emails, no self-service account setup. This sounds primitive, but it meant every early user got personal attention -- which is actually better for a platform in its first weeks. The automation came later, informed by actual usage patterns instead of assumptions.
5. The MVP is a learning tool, not a product launch
The purpose of a two-week MVP isn't to build a finished product. It's to get something in front of real users as fast as possible so you can learn what actually matters. Half of NXTGEN's V2 roadmap was informed by user behavior that we couldn't have predicted during planning.
The framework we use now
Every project at Aetheris -- regardless of timeline -- goes through the same scope exercise. We call it the "three buckets" approach, and it's become our most valuable planning tool.
- Must-have: The platform doesn't function without this. Users can't get core value. This is your two-week build.
- Nice-to-have: The platform works without this, but it's less convenient, less polished, or less competitive. This is your month-two build.
- Not-now: This is valuable but not urgent. It goes on the backlog, revisited quarterly based on actual user feedback.
The hard part isn't sorting features into buckets. It's being honest about which bucket each feature actually belongs in. Every founder thinks their pet feature is a must-have. Our job is to pressure-test that assumption.
The results
NXTGEN Media launched on time, within budget, and immediately started capturing tournament content. Within the first month, the platform had active player profiles, regular content updates, and engagement from coaches in the target market.
More importantly, the data from that first month shaped the entire V2 roadmap. Features we thought were critical turned out to be irrelevant. Features we'd deprioritized turned out to be the most requested. That's the whole point of shipping fast -- you replace assumptions with evidence.
Two weeks isn't the right timeline for every project. But the discipline of scoping like you only have two weeks? That's useful on every project, no matter how long it takes.