AI Opportunity Assessment: How to Pick the Right First AI Project

March 24, 2026

The most expensive AI mistake is not building the wrong thing.

It is building the right thing in the wrong order.

A company with 20 potential AI use cases that picks the flashiest one instead of the highest-ROI one burns budget, loses organizational confidence, and makes the second AI project harder to fund. The opportunity assessment is the step that prevents this. And most companies skip it entirely.

McKinsey's 2025 State of AI report found that of 25 organizational attributes tested, fundamental workflow redesign had the single strongest correlation with EBIT impact from AI. But only 21% of organizations have redesigned even some of their workflows. The other 79% are layering AI on top of existing processes without asking whether those processes are the right place to start.

That gap between "we want AI" and "we know exactly where AI creates the most value" is what an opportunity assessment closes.

Why Most Companies Pick the Wrong First AI Project

Three selection biases show up again and again.

The shiny object bias. Leadership picks the most impressive-sounding use case. "Build us an AI agent" sounds better in a board meeting than "automate the 40 hours per week our team spends categorizing inbound requests." The second one has 10x the ROI. The first one gets funded.

The vendor-led bias. The company picks the use case that their AI vendor recommends. That recommendation is shaped by what the vendor's product is designed for, not by what has the highest business impact. The vendor sells what they have. You buy what you need. Those are rarely the same thing.

The technical-first bias. The engineering team picks the use case that is most technically interesting. Training a custom model on proprietary data is more exciting than connecting an off-the-shelf API to a manual workflow. But the off-the-shelf integration might save $200K per year. The custom model might produce a nice demo and a conference talk.

All three biases produce the same outcome: a technically working AI feature that nobody can connect to revenue or cost savings.

The AI Opportunity Assessment Framework

Here is the framework we use at Fraction before recommending any AI build. Five steps, and the order matters.

Step 1: Process inventory

List every business process that involves repetitive human judgment, information synthesis, or pattern recognition.

Do not filter yet. Just list. Customer support triage. Invoice processing. Sales lead qualification. Report generation. Data reconciliation. Quality checks. Scheduling. Document review.

The goal is a complete inventory, not a curated shortlist. Filtering too early is how companies miss the highest-value opportunities hiding in unglamorous workflows.

Step 2: Impact scoring

For each process, estimate four things:

  1. How many hours per week it consumes
  2. The error rate or quality issues
  3. The cost of those errors
  4. The revenue impact if the process were 50% faster or more accurate

Score each on a 1 to 5 scale. The processes that score highest here are your candidates, regardless of how exciting they sound.
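The scoring above can be sketched in a few lines of Python. This is an illustrative sketch, not Fraction's actual tooling: the equal weighting and simple sum of the four 1-to-5 ratings are assumptions (the framework does not prescribe a weighting), and the example processes and scores are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ProcessImpact:
    """One business process with its four 1-to-5 impact ratings."""
    name: str
    hours_per_week: int   # how many hours/week the process consumes
    error_rate: int       # severity of error rate / quality issues
    error_cost: int       # cost of those errors
    revenue_upside: int   # revenue impact if 50% faster or more accurate

    @property
    def impact_score(self) -> int:
        # Assumed aggregation: unweighted sum of the four ratings
        return (self.hours_per_week + self.error_rate
                + self.error_cost + self.revenue_upside)

candidates = [
    ProcessImpact("Invoice matching", 5, 4, 4, 3),
    ProcessImpact("Recommendation engine", 2, 1, 1, 4),
    ProcessImpact("Support triage", 4, 3, 2, 3),
]

# Rank candidates by total impact, highest first
for p in sorted(candidates, key=lambda p: p.impact_score, reverse=True):
    print(f"{p.name}: {p.impact_score}")
```

Even with made-up numbers, the pattern is the point: the unglamorous invoice-matching workflow outscores the flashier recommendation engine.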

A mid-market company we assessed last year had 14 potential AI use cases on the board. The one the CEO wanted to fund was an AI-powered customer recommendation engine. The one that scored highest on impact was automated invoice matching, a process that consumed 60 hours per week across three people, had a 12% error rate, and cost the company roughly $180K annually in corrections and delays.

The invoice matching project shipped in 6 weeks and paid for itself in 4 months. The recommendation engine is still on the roadmap, properly sequenced behind the project that funded it.


Step 3: Feasibility scoring

For each process, evaluate:

  1. Is the data available and structured?
  2. How complex is the integration with existing systems?
  3. Are there compliance constraints?
  4. Is there an off-the-shelf solution, or does this need custom development?

Score each on a 1 to 5 scale. RAND Corporation research identified data readiness as the second most common root cause of AI project failure. If the data is not accessible, the project will stall at the data engineering phase, regardless of how strong the business case is.

Step 4: Priority matrix

Plot impact versus feasibility. The top-right quadrant, high impact and high feasibility, is your starting shortlist.

High impact, low feasibility goes on the roadmap for later. Low impact, high feasibility is a quick win if you need an early proof point. Low impact, low feasibility gets cut.
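The quadrant logic can be made concrete with a small sketch. Assumptions here: impact and feasibility are each an average of the 1-to-5 ratings from Steps 2 and 3, and the 3.0 cutoff separating "high" from "low" is an illustrative choice, not part of the framework.

```python
def quadrant(impact: float, feasibility: float, cutoff: float = 3.0) -> str:
    """Classify a candidate process into one of the four matrix quadrants."""
    if impact >= cutoff and feasibility >= cutoff:
        return "Build first"        # top-right: the starting shortlist
    if impact >= cutoff:
        return "Roadmap for later"  # high impact, low feasibility
    if feasibility >= cutoff:
        return "Quick win"          # low impact, high feasibility
    return "Cut"                    # low impact, low feasibility

print(quadrant(4.0, 4.5))  # Build first
print(quadrant(4.5, 2.0))  # Roadmap for later
```

The value of writing it down this plainly is that the decision becomes mechanical: once the scores exist, no one gets to argue a pet project into the top-right quadrant.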

Step 5: Select one

Not three. One.

The first AI project is a proof of the operating model, not a transformation program. It proves that AI works in your environment, with your data, for your team. It builds organizational confidence. It creates the internal case study that funds everything after it.

Companies that launch three AI projects simultaneously split focus, compete for the same data engineering resources, and end up with three half-finished prototypes instead of one production feature.

What a Good AI Opportunity Assessment Delivers

The output should be a prioritized list of 3 to 5 AI opportunities, each with:

  • The business process it targets
  • The estimated business impact, in hours saved, error reduction, or revenue impact
  • The technical approach: buy, integrate, or build custom
  • The estimated cost and timeline
  • The data readiness status
  • The success metric

The deliverable should not be a 50-page report. It should be a 3-to-5-page document that a CEO can read in 15 minutes and a CTO can execute against immediately.

If your assessment deliverable requires a presentation to explain it, it is too complicated.

How Long an AI Opportunity Assessment Takes and What It Costs

A focused assessment for a mid-market company, 50 to 500 employees across 3 to 5 departments, takes 2 to 4 weeks and involves stakeholder interviews, data infrastructure review, and process mapping.

Cost varies. Typically $10K to $30K for an external assessment, or 40 to 80 hours of internal work if done in-house. The internal route is cheaper upfront but slower and prone to the same selection biases described above, because internal teams have existing assumptions about which processes matter most.

The ROI math is straightforward. A $15K assessment that prevents a $200K investment in the wrong use case is the best money you will spend on AI. A $15K assessment that identifies a $180K annual savings opportunity pays for itself before the build is finished.
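The payback arithmetic is worth making explicit. A minimal sketch using the figures from the invoice-matching example above ($15K assessment, $180K annual savings; both are the article's illustrative numbers, not a quote):

```python
assessment_cost = 15_000
annual_savings = 180_000

monthly_savings = annual_savings / 12           # $15K per month
payback_months = assessment_cost / monthly_savings

# The assessment alone is recouped in one month of the savings it identified
print(f"{payback_months:.1f} months to recoup the assessment")
```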

Why AI Opportunity Assessments Fail

Even when companies do an assessment, two patterns cause problems.

The assessment is done by the same team that will build. They have an incentive to recommend the project that is most interesting to build, not the one with the highest business impact. Separating assessment from build eliminates this bias. If the team doing the assessment also does the build, outcome-based pricing is the check: they only get paid for what ships and delivers value, so the incentive is to scope the right thing.

The assessment skips the data layer. The team identifies the highest-impact opportunity, scopes the build, starts development, and discovers 6 weeks in that the data they need lives in four systems, two of which have no API. The project stalls for months on data engineering. A proper assessment surfaces data readiness issues before a dollar is committed to the build.

The Difference Between Assessment and Strategy

An AI strategy tells you where AI fits in your business over the next 2 to 3 years. An AI opportunity assessment tells you what to build first and why.

Most companies that come to us asking for an AI strategy actually need an assessment. They do not need a roadmap for the next 3 years. They need clarity on the next 90 days.

The strategy can come later, once the first project proves the model works and the organization has real production data to plan around.

The companies that succeed with AI are the ones that spend 2 weeks choosing the right problem before spending 8 weeks building. The companies that fail are the ones that skip the assessment and start building whatever sounded exciting in the last vendor demo.


Fraction's AI audit starts with this exact framework. Before any technology is discussed, we map business processes, score them on the impact-feasibility matrix, and identify the 1 to 2 highest-ROI opportunities. The output is not a strategy deck. It is a scoped, costed plan for the first AI build. Book a free consultation to start your assessment.

Frequently Asked Questions

How do I find a good AI consultant for a small business?

Look for three things. First, ask if they have built AI for a company your size, not just for enterprises. Second, ask how they price. If the answer is only hourly billing with no scope estimate upfront, the incentive is for the project to take longer. Third, ask what happens after the build. A consultant who ships a feature and disappears leaves you with something nobody internal can maintain. The best consultants scope tightly, price transparently, and plan for handoff or ongoing support from day one.

What is the minimum budget needed to start using AI in a small business?

Zero, if you are starting with free tools like ChatGPT or Claude for content and research. $500 to $2,000 per month for off-the-shelf customer service or marketing AI. $5K to $15K for a focused assessment that tells you where AI will create the most value in your operations. $15K to $40K for a custom pilot build targeting your highest-ROI workflow. You do not need to spend $100K to start. You need to spend the right amount on the right problem.

Is AI consulting worth it for a business with fewer than 50 employees?

It depends on the problem, not the headcount. A 20-person company with a manual process that costs $150K per year in labor is a better candidate than a 200-person company with no clear workflow to automate. If you can name a specific process that consumes measurable time and money, and you are not sure whether to buy off-the-shelf or build custom, a consultant saves you from making the wrong bet.

What is the biggest mistake small businesses make with AI?

Buying tools before defining the problem. A small business owner hears about AI, signs up for five SaaS products, and six months later has spent $10K on subscriptions that nobody uses consistently. The tools are not the problem. The missing step is identifying which specific workflow those tools should serve and measuring whether they actually improve it.

Related: AI Leadership Blind Spot, AI Automation Consulting, and Building Agentic AI with a Problem-First Approach

Sources

McKinsey, "The State of AI in 2025" (November 2025). Workflow redesign is the strongest predictor of EBIT impact from AI, out of 25 organizational attributes tested. Only 6% of organizations qualify as AI high performers.

RAND Corporation, "The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed" (2024). Over 80% of AI projects fail to deliver business value. Data readiness is the second most common root cause of failure.

Gartner (June 2025). Over 40% of agentic AI projects predicted to be canceled by end of 2027, citing unclear business value as a primary driver.
