Most founders don’t fail because the tech is hard. They fail because they build something nobody really wants, then realise it after burning their savings.
You don’t need a full product, a big team or a massive ad budget to know if your startup idea has legs. You need real customer data, fast, and the discipline to kill what doesn’t work.
This article walks you through a practical, step-by-step way to validate your startup idea before you spend serious money.
Mindset: you’re not building, you’re testing
In the early days, your job is not to “launch a startup”. Your job is to run experiments.
Change your vocabulary:
- Not “I’m building a product” → “I’m testing a problem and a solution”
- Not “I need funding” → “I need evidence”
- Not “I hope customers will come” → “I will measure if customers show up”
Good validation is:
- Cheap: time and a few hundred euros, not tens of thousands
- Fast: days and weeks, not months
- Quantified: numbers, not opinions
- Binary: clear pass/fail criteria you define upfront
Let’s put that into a concrete process.
Step 1: Turn your idea into falsifiable hypotheses
An “idea” like “an app for remote team culture” is useless for testing. You need precise, falsifiable statements.
Break your idea into hypotheses in three categories:
- Customer: Who exactly is this for?
- Problem: What painful problem do they have now?
- Solution & value: Why is your way better than what they do today?
Example for a B2B SaaS idea:
- Customer hypothesis: HR managers in tech startups with 20–200 employees, remote or hybrid.
- Problem hypothesis: They struggle to maintain team engagement and lose high performers due to poor culture.
- Solution hypothesis: A lightweight tool that identifies culture issues early and suggests concrete actions will be valuable enough that they’ll pay €150/month.
Write these down. They will drive the rest of your experiments.
Step 2: Define what “validation” means in numbers
“People liked the idea” is not validation.
Before launching any test, define your success criteria. Examples:
- “If we drive 300 qualified visitors to a landing page and at least 30 (10%) leave their email, we consider the problem/solution promising.”
- “If 10 out of 15 target customers say they have this problem and 5 are ready to test a prototype, we continue.”
- “If 5 customers pay a deposit of €50 for early access, we move to MVP build.”
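To keep yourself honest, you can write the criteria down as data before the test starts and evaluate them mechanically once it ends. A minimal TypeScript sketch, using the hypothetical thresholds from the examples above (not benchmarks):

```typescript
// Minimal sketch: encode pass/fail criteria as data before the test runs.
// Thresholds below are the hypothetical examples from this article.

interface Criterion {
  name: string;
  target: number;        // threshold, defined upfront
  actual: number | null; // filled in only after the test ends
}

const criteria: Criterion[] = [
  { name: "Landing page opt-in rate (%)", target: 10, actual: null },
  { name: "Interviewees who confirm the problem", target: 10, actual: null },
  { name: "Paid deposits of €50 collected", target: 5, actual: null },
];

function evaluate(c: Criterion): string {
  if (c.actual === null) return `${c.name}: test still running`;
  return `${c.name}: ${c.actual} vs target ${c.target} → ${c.actual >= c.target ? "PASS" : "FAIL"}`;
}

criteria.forEach((c) => console.log(evaluate(c)));
```

The point is not the code; it's that the targets are frozen before any data comes in, which makes moving the goalposts visible.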
Good criteria are:
- Specific (numbers, time frame, channel)
- Defined upfront (before you see any data)
- Hard to rationalise away (no “yeah but…”)
If you’re not willing to kill the idea when the data is bad, you’re not validating; you’re looking for confirmation.
Step 3: Talk to customers before touching code
The cheapest data you can get comes from conversations.
Your goal is not to pitch. Your goal is to understand:
- How they describe their problem
- What they do today to solve it
- How painful it really is (time, money, emotion)
- What they’ve already tried and abandoned
Structure your interviews around three parts:
1. Context
- “Tell me about your role / your day.”
- “How do you currently handle X?”
2. Problem depth
- “When was the last time this was a serious problem?”
- “What did you do then?”
- “What happens if you do nothing?”
3. Willingness to change / pay
- “If a solution existed that [benefit], what would it need to do for you to seriously consider it?”
- “What budget do you have today for tools in this area?”
A few rules:
- Interview at least 10–15 people in your target profile
- Avoid leading questions like “Do you think this is a good idea?”
- Listen more than you talk; take notes verbatim
- Look for patterns and repeated pain, not isolated comments
Red flag: people say “Nice idea, good luck!” but don’t express urgency, don’t ask “When can I get this?”, and don’t want to commit to anything (time, pilot, email, deposit).
Step 4: Validate problem/solution fit with simple landing pages
Once you’re reasonably confident the problem is real, move to a landing page test. You still don’t build the full product.
Your landing page has one job: make a clear promise and measure if the right people raise their hands.
Key elements:
- Headline: Clear outcome, not features. “Stop losing top talent in remote teams” beats “An AI-powered culture analytics platform”.
- Subheadline: Who it’s for + how it works in one sentence.
- 3–5 benefits: In customer language, ideally quantified (“Reduce voluntary turnover by up to 20% in 6 months”).
- Social proof (if you have it): logos, testimonials, or even “Built by former [role] at [company]”.
- Single call to action: “Join the beta”, “Get early access”, “Pre-order at 40% off”.
Tools like Carrd, Webflow or WordPress let you build this in a few hours.
Now you need traffic.
Common low-cost channels:
- Relevant Facebook/Reddit/Slack/Discord groups (with value-first posts, not spam)
- Your LinkedIn network + direct outreach
- Small paid tests (Google Ads, Meta, LinkedIn) with very tight targeting and budget
- Communities and newsletters in your niche
Track:
- Total visitors
- Visitors from your target segment (if you can identify them)
- Conversion rate on your main CTA (sign-ups, demo requests, etc.)
As a rough benchmark:
- <3% opt-in from relevant traffic → your message, audience or offer is likely off
- 3–10% → worth iterating, especially if traffic is well targeted and the offer is high-ticket
- >10% → strong signal, dig deeper
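One caveat when reading these numbers: a few hundred visitors is a small sample, so treat any conversion rate as a range rather than a point. A rough sketch (illustrative figures; the interval uses the standard normal approximation):

```typescript
// Rough sketch: opt-in rate with an approximate 95% confidence interval.
// With small samples the interval is wide, so a 4% vs 6% difference
// between two message variants often means nothing.

function optInRange(visitors: number, signups: number): [number, number, number] {
  const p = signups / visitors;
  const margin = 1.96 * Math.sqrt((p * (1 - p)) / visitors); // 95% half-width
  return [p, Math.max(0, p - margin), Math.min(1, p + margin)];
}

const [rate, low, high] = optInRange(300, 24);
console.log(
  `Opt-in: ${(rate * 100).toFixed(1)}% ` +
  `(95% CI roughly ${(low * 100).toFixed(1)}%–${(high * 100).toFixed(1)}%)`
);
// → Opt-in: 8.0% (95% CI roughly 4.9%–11.1%)
```

At 300 visitors, an 8% opt-in rate is statistically compatible with anything from roughly 5% to 11%, so use the benchmarks as coarse bands, not precise cut-offs.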
Step 5: Use “fake door” and concierge tests before full automation
A common mistake is to move straight from “landing page interest” to “let’s build the full SaaS”. You can get much stronger evidence while still building almost nothing.
Two useful methods:
1. The “fake door” test
You advertise a feature or product that doesn’t exist yet, and measure clicks/interest before you build it.
Example:
- Add a button on your site: “Get instant culture risk report”.
- When users click, they see: “We’re not live yet, but we’re inviting 20 beta users. Leave your email to be first.”
- Measure click-through and sign-up rate.
Do not lie. Be transparent that you’re in beta. The point is not to trick people; it’s to test demand before investing in building the feature.
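In practice, the “door” can be a few lines of script on your landing page. A hypothetical sketch: the button id and the /api/fake-door-click endpoint below are placeholders you would wire up yourself (any form service or serverless function that logs an event will do):

```typescript
// Hypothetical fake-door handler: count the click as a demand signal,
// then show the honest "not live yet" message.
const doorButton = document.querySelector<HTMLButtonElement>("#culture-report-cta");

doorButton?.addEventListener("click", async (event) => {
  const button = event.currentTarget as HTMLButtonElement;
  button.disabled = true; // avoid counting the same visitor twice

  // Log the click even if the visitor never leaves an email.
  await fetch("/api/fake-door-click", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ feature: "culture-risk-report", ts: Date.now() }),
  });

  // Be transparent immediately: no live product, but a real beta list.
  button.insertAdjacentHTML(
    "afterend",
    "<p>We're not live yet, but we're inviting 20 beta users. Leave your email to be first.</p>"
  );
});
```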
2. The concierge test
Instead of building software, you manually deliver the service behind the scenes for a few early customers.
For our HR/culture example:
- You ask 3–5 HR managers to send you their existing engagement data, survey results, churn data.
- You manually analyse it (Excel, Notion, whatever) and send them a detailed report with recommendations.
- You charge something, even if it’s low (e.g. €200), to test willingness to pay.
This gives you:
- Real proof people care enough to pay now
- Clarity on what to automate later (what steps repeat often)
- Words and outcomes you can reuse in marketing
Yes, it doesn’t scale. That’s the point. You’re buying learning with time instead of money.
Step 6: Test pricing and willingness to pay early
Validation without money on the line is weak validation. The ultimate question: will they pay, and how much?
Ways to test this before a full launch:
1. Pre-orders or deposits
- Offer an early bird deal (“Lifetime 50% discount for the first 20 customers who pre-order”).
- Use Stripe, Gumroad or similar to collect payment, even if the product ships in 2–3 months.
- If you’re not comfortable charging the full price, ask for a smaller, non-refundable deposit.
Even 5–10 paying pre-orders from cold or semi-cold traffic are a strong signal in B2B or higher-ticket B2C.
2. Paid pilot
- For B2B, propose a 3-month paid pilot instead of a free beta.
- Price it realistically, maybe with a slight discount, but not free.
- Define clear success metrics with the client (e.g. reduce time spent on manual reporting by 30%).
Everyone loves “free”. Very few will put budget, political capital and time into something they don’t truly need.
Step 7: Decide with discipline whether to pivot, persevere or stop
After a few cycles of interviews, landing pages, fake doors and concierge work, you’ll have actual numbers. Now you need the courage to act on them.
Ask yourself:
- Did we meet or exceed our predefined success criteria?
- Are customers pushing us forward (asking for more, following up, introducing us to others)?
- Is there a consistent segment where interest and willingness to pay are clearly stronger?
Typical scenarios:
- Strong signal: Good conversion rates, some pre-orders or paid pilots, repeated problem language. Move to build a minimal, focused MVP around what your early users actually used in the concierge phase.
- Mixed signal: Interest but no money, or good feedback from a very narrow niche. Tighten the segment, adjust the offer, run another test. Don’t expand; narrow.
- Weak signal: Low conversion despite multiple message iterations, interviews reveal a nice-to-have problem, and nobody commits time or money. Stop or radically change the problem/segment.
Killing an idea is not failure. It’s buying back years of your life for a few hundred euros and a few weeks of effort. That’s a win.
Common validation mistakes that cost founders dearly
After working with many startups and SMEs, I see the same patterns again and again. Avoid these if you want your data to mean something.
- Asking friends and family for feedback: They’ll be supportive, not honest. At best you get compliments, at worst you get misleading enthusiasm.
- Confusing interest with intent: Likes, comments and verbal “That’s cool” don’t pay your AWS bill. Prioritise actions: sign-ups, time investments, payments.
- Over-engineering the MVP: If your “MVP” needs 6 months, it’s not an MVP. Aim for something you can get in front of real users in 2–4 weeks.
- Talking to the wrong people: If your target is HR managers and you spend your time with employees who don’t control budget, you’re optimising for the wrong buyer.
- Ignoring distribution: A validated problem and product is useless if you have no realistic way to reach customers at scale. Test channels while you test the product.
- Moving the goalposts: You said 10% sign-up rate was your bar. You got 3%. If your reaction is “Maybe 3% is okay actually…”, you’re lying to yourself.
A simple playbook you can reuse for any new idea
To make this concrete, here’s a repeatable checklist you can follow each time you want to test a new startup idea.
- Write 1–2 sentences each for:
  - Who it’s for (segment, role, company size, situation)
  - The main problem (in their words)
  - Your proposed solution and outcome
- Define numeric success criteria for:
  - Interviews (e.g. 10–15 calls, at least 70% recognise the problem as “serious”)
  - Landing page (e.g. 8%+ opt-in from targeted traffic)
  - Payments (e.g. 5 pre-orders of €100+)
- Schedule and run 10–15 customer interviews in 1–2 weeks
- Build a focused landing page with a clear promise and single CTA
- Drive at least a few hundred qualified visitors through 1–2 channels
- Analyse the data honestly against your criteria
- If promising, run:
  - A fake door test for a key feature, or
  - A concierge test with 3–5 early adopters
- Attempt to charge:
  - Pre-orders, deposits, or
  - A small paid pilot
- Decide: narrow, pivot or stop based on what people actually do, not what they say
It’s not glamorous. It doesn’t look like the startup stories you see on LinkedIn. But it’s how serious entrepreneurs reduce risk and avoid building products that only look good on pitch decks.
Before you pour months of work and money into your next big idea, ask yourself a simple question: “What is the smallest, cheapest experiment I can run this week to get real customer data?”
Then run it. Measure it. Decide. And only then, if the data backs you, start spending serious money.

