March 4, 2026

At some point in this process, you will receive a quote.
It will arrive in a PDF, or a slide deck, or a follow-up email after a call that felt promising. It will look considered. It may come with a timeline, a team structure, and a list of deliverables. And you will have almost no way to know whether it's right.
That is not an accident. It is how software procurement is structured, and it systematically disadvantages the person who has not done this before.
Ask anyone what custom software costs and you will get a range so wide it tells you nothing. $10,000 to $500,000, sometimes higher. That is not an estimate. It is a way of being technically correct while leaving you no better informed than before you asked.
What you actually need is a way to walk into that conversation with an independent read on what your project should cost. Not a number from the vendor. Not a guess. A structured reference point you built before anyone was trying to sell you anything.
You write a brief, find three agencies with good reviews, and send it out.
The proposals arrive.
One comes in at $40,000. Another at $180,000. A third at $95,000. All three reference the same brief. None of them explain why the numbers are so different.
At this point, most buyers do one of two things. They pick the middle number because it feels safe. Or they pick the lowest because the budget is tight and the vendor seemed trustworthy on the call.
What they do not yet know is that each vendor scoped the project differently. One included a custom admin dashboard. One assumed you would use an off-the-shelf tool. One priced in three rounds of revisions. One priced in none. Those decisions are buried in assumptions nobody showed you, invisible in the final number.
The cheapest quote is usually cheapest because it contains the most unstated assumptions, not because the vendor is more efficient. You find out which assumptions were wrong when the change orders arrive, typically after kick-off, when walking away is no longer a real option.
Six months later, the $40,000 project cost $110,000 and still is not live.
The cause is rarely technical. It is almost always scope that was assumed rather than defined, and a budget that reflected optimism more than analysis. Across first-time builds, we see the same pattern: the money does not disappear in the build. It disappears in the gap between what the buyer thought they were buying and what the vendor thought they were building.
The question worth asking is why. If this failure pattern is so consistent, why has the industry not fixed it?
The standard defence is that software is complex and every project is different. Both are true. Neither explains why buyers are routinely left with numbers they cannot interrogate.
The structural problem is this: pricing depends on scope, scope depends on requirements, and requirements are almost never fully defined when an estimate is produced. A vendor quoting you in week one is quoting against assumptions they have not shown you. When those assumptions turn out to be wrong, the contract does not protect you. It becomes the starting point for a conversation about what has changed.
We repeatedly see buyers discover this at the worst possible moment: after kick-off, after the first invoice, after the relationship has enough momentum that stopping feels more expensive than continuing. By then, the information asymmetry that existed before the contract was signed has become a budget problem that is entirely the buyer's to absorb.
The vendor is not always acting in bad faith. They are often quoting from a one-hour call and a rough brief, filling gaps with optimism because that is what closes deals. You are reading the number as a commitment because that is what it looks like. Nobody corrects the mismatch until the money is already moving.
The incentive structure makes this worse. In a time-and-materials model, where you pay for developer hours regardless of what gets built, a project that runs long is not a failure for the vendor. It is more revenue. There is no financial pressure on their side to finish faster or scope more accurately. That pressure sits entirely on you.
So before you evaluate any quote, you need to understand what actually moves the number.
Two projects can look identical in a brief and cost completely different amounts to build. That gap is not random. It follows from a small number of variables that vendors understand and most buyers do not, at least not until after they have signed something.
The first is complexity, and it is almost always underestimated. Buyers think in outcomes. A healthcare operator describes a "patient intake form." A developer hears role-based access, audit logging, HIPAA-compliant data storage, and a document management layer. Both descriptions are accurate. They are describing the same thing from opposite ends of the build. The gap between those two perspectives is where scope quietly expands before anyone calls it a change order.
Integrations compound this. The moment your software needs to communicate with something else (a payment processor, an EHR system, an accounting tool, a logistics API), you have introduced a dependency on a third-party system you do not control. Each one adds coordination, testing, and debugging overhead that is genuinely hard to predict in advance. A project with three integrations is not three times harder than a project with one. It is often considerably more, because the failure modes multiply.
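A back-of-envelope sketch makes the shape of that growth visible. The pairwise model below is invented for illustration, not how any vendor actually prices integration work: it simply counts each integration plus each pair of integrations that can interact.

```typescript
// Why integration effort grows faster than linearly: each integration
// must work on its own, but pairs of systems can also interact
// (webhook ordering, shared records, conflicting retry logic).
// Pairwise interactions grow as n * (n - 1) / 2 -- a simplification,
// but enough to show the curve.
function integrationSurfaces(n: number): { direct: number; pairwise: number } {
  return { direct: n, pairwise: (n * (n - 1)) / 2 };
}

for (const n of [1, 3, 5]) {
  const { direct, pairwise } = integrationSurfaces(n);
  console.log(`${n} integration(s): ${direct + pairwise} surfaces to test`);
}
// 1 integration: 1 surface. 3 integrations: 6. 5 integrations: 15.
```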
The single largest lever on total cost is simpler: where your team is based. A US-based agency can cost three to four times more per hour than a comparable Eastern European team, and more still compared to South Asian teams. Rate is not a reliable signal of quality. But if you are comparing proposals without knowing where each team is located, you are not comparing the same thing.
What surprises most first-time buyers is that unclear requirements often cost more than any of the above. Not because ambiguity is expensive in principle, but because vendors quote optimistically into it. When a requirement that was assumed turns out to be more complex than the vendor priced, that is not a mistake. It is a change order. And change orders arrive after kick-off, after commitment, when the cost of stopping exceeds the cost of paying.
With those variables in mind, it is possible to reason about relative cost before you talk to anyone. A simple internal tool with two or three features and no integrations is a fundamentally different build from a consumer-facing product with authentication, a core feature set, and one or two third-party connections. That is a different build again from an operational platform with multiple user roles, several integrations, an admin dashboard, and reporting. And all of those are materially simpler than a regulated-industry product in healthcare, financial services, or logistics, where compliance requirements, audit trails, and data sensitivity add significant surface area to every feature.
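To make that progression concrete, here is a deliberately crude tiering heuristic. The thresholds are invented for illustration only; real scoping weighs far more than four inputs.

```typescript
// A crude classifier for the four build categories described above.
// Thresholds are illustrative, not a real scoping model.
type Tier = "internal tool" | "consumer product" | "operational platform" | "regulated product";

interface BriefSignals {
  featureCount: number;
  userRoles: number;
  integrations: number;
  regulated: boolean; // healthcare, financial services, logistics
}

function roughTier(b: BriefSignals): Tier {
  if (b.regulated) return "regulated product";
  if (b.userRoles > 2 || b.integrations >= 3) return "operational platform";
  if (b.integrations >= 1 || b.featureCount > 3) return "consumer product";
  return "internal tool";
}

// A two-feature internal tool and a multi-role platform should never
// land in the same cost range:
console.log(roughTier({ featureCount: 2, userRoles: 1, integrations: 0, regulated: false }));
// -> "internal tool"
console.log(roughTier({ featureCount: 8, userRoles: 4, integrations: 3, regulated: false }));
// -> "operational platform"
```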
The progression matters because it tells you which category your project sits in before a vendor tells you. A brief that describes a two-feature internal tool should not produce a quote in the same range as a multi-role platform with integrations. If it does, something in the scoping conversation went wrong.
Where your team is based shifts the numbers significantly. A US-based or senior nearshore team costs considerably more per hour than an offshore team. That differential is real and can change total project cost by 40 to 60 percent. But it is a separate decision from scoping, and worth treating as one. Rate alone does not tell you what you are buying.
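The arithmetic behind that swing is simple enough to sketch. The hours and hourly rates below are illustrative placeholders, not market data.

```typescript
// Same scope, three different rates. Figures are invented to show the
// mechanics, not to benchmark any market.
const estimatedHours = 800; // a hypothetical, already-scoped build

const teams = [
  { label: "US agency", hourlyRate: 175 },
  { label: "senior nearshore", hourlyRate: 110 },
  { label: "offshore", hourlyRate: 50 },
];

for (const team of teams) {
  const total = estimatedHours * team.hourlyRate;
  console.log(`${team.label}: $${total.toLocaleString("en-US")}`);
}
// US agency: $140,000 / senior nearshore: $88,000 / offshore: $40,000.
// The rate moves the total by a multiple -- and says nothing about
// whether the 800 hours were scoped honestly in the first place.
```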
None of this accounts for what comes after launch. Maintenance, hosting, security updates, and future feature development are ongoing costs that rarely appear in an agency proposal. The build is the beginning of the spend, not the end of it.
Most first-time buyers treat the product brief as a formality. Something to send so the vendor has enough to go on. A few paragraphs about the idea, a rough feature list, a note about the industry. Then wait for the proposals.
The brief is actually the most important document in the entire project. Not the contract. Not the statement of work. The brief. Because every requirement you leave undefined is an assumption the vendor fills in on your behalf, and when their assumption differs from what you meant, that gap has a price. It shows up as a change order, after kick-off, once momentum makes walking away feel more expensive than continuing.
The brief is also the only moment in procurement where you have full control and no cost. Once you have signed, ambiguity belongs to the vendor.
What the brief really forces you to answer is not "have I given them enough to quote against?" It is "do I actually know what I am building?" Vendors rarely help you answer that second question. Their incentive is to start, not to scope.
If you cannot define what "done" looks like for a core feature, you are not ready to receive a quote. Not because an estimator will not produce a number (it will), but because any quote built against a vague brief is a number the vendor invented. Filled with assumptions you have not seen, against requirements that do not fully exist. You have no way to evaluate it, because there is nothing solid to evaluate against.
Understanding cost ranges gets you to the table. It does not protect you once you are sitting at it.
The standard agency model is time-and-materials: you pay for developer hours regardless of what gets built. If the project runs long because a requirement was underestimated, or a third-party integration behaved unexpectedly, or scope shifted mid-build, you absorb the cost. The vendor's margin is protected either way. There is no financial consequence for them when the project runs over. That consequence is entirely yours.
Outcome-based pricing works differently, and the difference matters before you sign anything. When cost is tied to defined, deliverable pieces of functionality rather than hours logged, the vendor's margin depends on building efficiently. That financial pressure changes the dynamic. It also forces transparent scoping earlier, because you cannot price a deliverable that has not been defined. That front-loaded clarity is where a significant amount of budget protection comes from. Not the contract language. Not the project manager. The definition of done that exists before kick-off.
This is worth asking about directly when you evaluate vendors. How do you price? What happens when a feature takes longer than estimated? What does a change order look like, and when does one get triggered? A vendor who can answer those questions clearly, before you have committed to anything, is a different proposition from one who cannot.
The Fraction estimator exists for the gap between "I have an idea" and "I am sitting across from a vendor who has already decided what it costs."
Feed it your product brief. It returns a structured breakdown by feature area, with cost bands and assumption flags. It does not replace a vendor quote. It gives you an independent reference point before you negotiate one, so you are no longer evaluating a number in a vacuum. You are comparing it against a structure you built before anyone was trying to sell you anything.
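To make "structured breakdown" concrete, here is one hypothetical shape such a reference point could take. This is an illustration of the idea, not Fraction's actual output format; the field names are invented.

```typescript
// A hypothetical shape for the independent reference point: cost bands
// per feature area, each carrying the assumptions it had to make.
// Invented for illustration -- not the estimator's real schema.
interface CostBand {
  low: number;  // USD
  high: number; // USD
}

interface FeatureEstimate {
  featureArea: string;       // e.g. "authentication", "admin dashboard"
  band: CostBand;
  assumptionFlags: string[]; // gaps in the brief the estimate had to fill
}

const example: FeatureEstimate = {
  featureArea: "patient intake form",
  band: { low: 8_000, high: 45_000 },
  assumptionFlags: [
    "user roles undefined",
    "audit logging unspecified",
    "HIPAA-compliant storage assumed",
  ],
};
// A wide band with many flags is the estimator telling you where a
// vendor will quote optimistically into your ambiguity.
```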
Where it earns its keep is before you talk to anyone. Feed it your brief and it shows you quickly which parts of your thinking are solid and which parts are still assumption fog. A well-defined feature returns a useful cost band. A vague one returns a range so wide it is meaningless. That is not a flaw. It is the signal. It tells you exactly where a vendor will quote optimistically into your ambiguity, and where you will pay to find out later. Clearing that up before you enter any vendor conversation is the cheapest work you will do on this project.
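A minimal sketch of that widening behaviour, assuming a band that spreads with each unresolved assumption. The multipliers are invented; only the shape matters.

```typescript
// The more of a feature that is left undefined, the wider the band.
// Multipliers are illustrative -- the point is the widening, not the values.
function costBand(baseEstimate: number, openAssumptions: number): { low: number; high: number } {
  const spread = 0.15 + 0.25 * openAssumptions;
  return {
    low: Math.round(baseEstimate * (1 - Math.min(spread, 0.8))),
    high: Math.round(baseEstimate * (1 + 2 * spread)),
  };
}

console.log(costBand(20_000, 0)); // { low: 17000, high: 26000 } -- "done" was defined
console.log(costBand(20_000, 5)); // { low: 4000, high: 76000 } -- assumption fog
```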
No estimate produced before a full discovery process will be perfectly accurate. Requirements shift. Technical constraints surface. Priorities change as the product gets closer to real users. That is normal, and anyone who tells you otherwise is either inexperienced or not being straight with you.
What is not normal, and should not be acceptable, is committing to a vendor relationship with no independent read on whether their number is reasonable. No visibility into the assumptions behind the quote. No shared definition of what "done" actually means.
The buyers who end up $100,000 over budget are not always the ones who received a bad quote. They are often the ones who had no way to recognise it.
You do not need to become technical to buy software well. You need cost certainty before you commit, and a signal that was not produced by the person selling to you.
What does a reasonable quote look like? It breaks down cost by feature or functional area, states the assumptions behind each estimate, and defines what "done" means for each deliverable. If your quote is a single number with a timeline and not much else, you have no way to evaluate it because there is nothing to evaluate against. The most useful thing you can do is build an independent reference point before you respond to any vendor. When you compare a structured estimate from a neutral source against what a vendor is proposing, gaps become visible: features priced significantly higher than expected, assumptions that were never discussed, scope that was quietly added or removed.
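That comparison step is mechanical enough to sketch. Everything below is invented for illustration: the function simply lines up a vendor's line items against your reference bands, feature by feature.

```typescript
// Line up a vendor quote against an independent reference, feature by
// feature. Names and figures are placeholders, not real data.
type Band = { low: number; high: number };

function flagGaps(reference: Map<string, Band>, quote: Map<string, number>): string[] {
  const flags: string[] = [];
  for (const [feature, band] of reference) {
    const price = quote.get(feature);
    if (price === undefined) {
      flags.push(`"${feature}" is missing from the quote: scope quietly removed?`);
    } else if (price > band.high) {
      flags.push(`"${feature}" is priced above the reference band: ask why`);
    }
  }
  for (const feature of quote.keys()) {
    if (!reference.has(feature)) {
      flags.push(`"${feature}" is not in your brief: scope quietly added?`);
    }
  }
  return flags;
}

const reference = new Map<string, Band>([
  ["authentication", { low: 6_000, high: 14_000 }],
  ["admin dashboard", { low: 10_000, high: 25_000 }],
]);
const quote = new Map<string, number>([
  ["authentication", 32_000],
  ["reporting module", 18_000],
]);
console.log(flagGaps(reference, quote));
// -> flags the overpriced authentication, the missing dashboard,
//    and the reporting module that was never in the brief.
```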
Why do three vendors quote three different numbers against the same brief? Because each vendor scoped the project differently. A brief that describes outcomes leaves significant room for interpretation at the component level. One vendor included a custom admin panel. Another assumed you would use an off-the-shelf solution. One priced three rounds of revisions. One priced none. Those decisions are rarely visible in the final number, which means you are comparing figures that do not reflect the same scope. Before you can meaningfully evaluate quotes, you need a shared definition of what is actually being built. Without that, choosing between three different numbers is closer to a guess than a decision.
How accurate is an AI-assisted estimate? Accurate enough to be useful, not accurate enough to be a contract. An AI-assisted estimator works by breaking your brief into feature areas and returning cost bands based on comparable builds. The accuracy depends heavily on the quality of the input: a well-defined brief produces a useful range, while a vague brief produces ranges wide enough to be meaningless, which is itself useful information because it shows you where your thinking needs more work before you talk to a vendor. What AI estimation does well is consistency. It applies the same logic across every feature without the optimism bias that often shapes a vendor's opening quote. Use it as a calibration tool, not a substitute for a properly scoped engagement.