Practical Thinking
on AI for Business

No hype. No vendor pitch. Just clear-eyed frameworks and real-world perspective on where AI creates value, where it doesn't, and how to tell the difference before you invest.

Framework · 12 min read · Updated June 2025

Build vs. Buy: The AI System
Decision Framework

Every week we talk to business leaders who have already spent money on an AI tool that didn't solve their problem, or who are paralyzed trying to decide between a SaaS platform and a custom build. This framework is our attempt to give them a clear process.

Why This Decision Is Harder Than It Looks

On the surface, the build vs. buy question seems straightforward: buying is faster and cheaper upfront; building is slower and more expensive but you own it. The problem is that this framing misses the most important dimension — fit. A cheap tool that solves 60% of your problem is rarely a good investment. A custom build for a problem that an existing platform solves perfectly is almost always a waste of money.

The right question isn't "which is faster?" or "which costs less?" It's: "What exactly is the problem, how specific is it to our operation, and how much does the solution's precision matter to the outcome?"


The Three Variables That Actually Determine the Answer

After working through this decision with dozens of organizations, we've found that three variables predict the right answer more reliably than any other factors:

  • Problem specificity: Is the problem you're solving common across your industry, or does it involve a combination of factors unique to your operation? Scheduling for a hotel chain is largely a solved problem. Scheduling for a hotel chain with a custom loyalty tier structure, union labor rules, and three POS systems from different decades is not.
  • Data ownership requirements: Does the solution need to be trained on your proprietary data, or can it operate effectively on general models? If your competitive advantage lives in 10 years of customer behavior data, you typically can't hand that data to a SaaS platform and expect the same results you'd get from a system you own.
  • Integration depth: Does the AI need to be deeply integrated into your existing workflows — reading from and writing to your internal systems — or can it operate as a standalone layer? Deep integration usually means custom build; standalone usually means buy.

The Decision Matrix

Here's a simplified version of the framework we walk clients through. For each factor, score your situation on a 1–3 scale (1 = favors buying, 3 = favors building):

| Factor | Buy (score 1) | Build (score 3) |
| --- | --- | --- |
| Problem specificity | Common across industry; solved by existing platforms | Unique combination of constraints specific to your operation |
| Data requirements | General data sufficient; no proprietary training needed | Requires training on your proprietary historical data |
| Integration depth | Standalone tool; minimal system connections needed | Deeply embedded in core operational workflows |
| Customization over time | Your needs are stable; vendor roadmap acceptable | Needs will evolve rapidly; you need control of the roadmap |
| Competitive sensitivity | Operational tool; competitors can use same platform | Proprietary advantage; cannot share with competitors |
| Time to value | Need results within weeks; can't wait for development | Willing to invest 3–6 months for a durable solution |
Total score 6–10: Strong buy signal. Total score 11–14: Hybrid approach often optimal. Total score 15–18: Strong build signal.
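The matrix above reduces to simple arithmetic. Here is a minimal sketch of the scoring logic; the threshold bands come from the table, but the factor names and the example scores are illustrative:

```python
# Minimal sketch of the scoring matrix above. The thresholds come from
# the framework; the example scores below are illustrative.

FACTORS = [
    "problem_specificity",
    "data_requirements",
    "integration_depth",
    "customization_over_time",
    "competitive_sensitivity",
    "time_to_value",
]

def recommend(scores: dict) -> str:
    """Each factor is scored 1 (favors buying) to 3 (favors building)."""
    if sorted(scores) != sorted(FACTORS):
        raise ValueError("score every factor exactly once")
    if any(not 1 <= s <= 3 for s in scores.values()):
        raise ValueError("each factor must be scored 1-3")
    total = sum(scores.values())
    if total <= 10:
        return f"{total}: strong buy signal"
    if total <= 14:
        return f"{total}: hybrid approach often optimal"
    return f"{total}: strong build signal"

# Example: a fairly unique problem that nonetheless needs results quickly.
print(recommend({
    "problem_specificity": 3,
    "data_requirements": 2,
    "integration_depth": 3,
    "customization_over_time": 2,
    "competitive_sensitivity": 1,
    "time_to_value": 1,
}))  # 12: hybrid approach often optimal
```

Note how easily a genuinely unique problem lands in the hybrid band: one or two "buy" factors (like urgent time to value) are often enough to pull the total out of pure-build territory.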

The Hybrid Path Most Organizations Miss

The build vs. buy framing creates a false binary. In practice, the most efficient solution is often a hybrid: use a foundation model or existing AI service as the intelligence layer, but build the integration layer, workflow orchestration, and data pipeline that make it actually useful in your specific context.

This approach gets you the benefit of not training a model from scratch (prohibitively expensive for most organizations) while still owning the system architecture that determines how the AI touches your data, your workflows, and your customers. The SaaS vendor provides the AI engine; you own the transmission and steering.
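One common way to realize this split in code is an adapter boundary: the vendor model sits behind a thin interface you define, while the orchestration logic stays in code you own. A minimal sketch, in which every name (the `IntelligenceLayer` protocol, the triage example, the stub) is hypothetical:

```python
# Sketch of the hybrid split: the vendor's model sits behind a thin
# interface; the workflow logic around it is code you own.
# All class and method names here are hypothetical, not a real vendor API.

from typing import Protocol

class IntelligenceLayer(Protocol):
    """Whatever provides the AI: a vendor API today, a different one tomorrow."""
    def complete(self, prompt: str) -> str: ...

class TicketTriage:
    """The layer you own: how the model touches your data and workflows."""
    def __init__(self, model: IntelligenceLayer):
        self.model = model

    def triage(self, ticket_text: str) -> str:
        # Prompt construction, routing rules, and logging live here, so
        # swapping vendors means changing one adapter, not the system.
        prompt = (
            "Classify this support ticket as tier-1, tier-2, or tier-3:\n"
            + ticket_text
        )
        return self.model.complete(prompt).strip().lower()

# A stub stands in for the vendor client during tests or vendor evaluations.
class StubModel:
    def complete(self, prompt: str) -> str:
        return "tier-1"

print(TicketTriage(StubModel()).triage("Password reset not working"))  # tier-1
```

The design point is that the interface, not the vendor SDK, defines what "the AI" means to the rest of your system, which is what keeps the transmission and steering in your hands.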


The Five Mistakes We See Most Often

  1. Buying before defining the problem precisely. "We need an AI chatbot" is not a problem statement. "We need to reduce first-response time on tier-1 support tickets from 4 hours to under 30 minutes without adding headcount" is. The more precisely you define the outcome, the clearer the build-vs-buy answer becomes.
  2. Underestimating integration cost on purchased tools. A $50K/year SaaS platform with $120K in integration work to connect it to your existing systems is not actually cheaper than a $100K custom build. Total cost of ownership over three years is the right metric.
  3. Treating "AI" as a single category. Different AI capabilities — document processing, predictive analytics, natural language interfaces, computer vision — have very different build-vs-buy economics. A rule for one doesn't apply to another.
  4. Ignoring the vendor lock-in multiplier. If you build your entire data pipeline around a single AI vendor's proprietary format, switching costs in year three can exceed the original project cost. Factor vendor dependency into the buy-side cost estimate.
  5. Optimizing for launch, not lifecycle. The right question isn't "how quickly can we have something running?" It's "what will this cost us to maintain, improve, and adapt over three years?" Organizations that optimize for launch often find they've made the wrong choice by month 18.
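The total-cost-of-ownership point in mistake 2 is easy to verify with arithmetic. A sketch using the figures from that example; the custom build's annual maintenance rate is an assumed placeholder, not a figure from the article:

```python
# Three-year total cost of ownership, using the figures from mistake #2.
# The build's annual maintenance rate ($20K) is an assumed placeholder.

def tco_saas(annual_license: float, integration: float, years: int = 3) -> float:
    """License fees recur every year; integration work is paid once."""
    return annual_license * years + integration

def tco_build(build_cost: float, annual_maintenance: float, years: int = 3) -> float:
    """The build is paid once; maintenance recurs every year."""
    return build_cost + annual_maintenance * years

saas = tco_saas(annual_license=50_000, integration=120_000)
build = tco_build(build_cost=100_000, annual_maintenance=20_000)
print(f"SaaS 3-year TCO:  ${saas:,.0f}")   # SaaS 3-year TCO:  $270,000
print(f"Build 3-year TCO: ${build:,.0f}")  # Build 3-year TCO: $160,000
```

Even with generous maintenance assumptions on the build side, the "cheap" SaaS option comes out well ahead in cost once integration work is counted, which is exactly why the sticker price is the wrong number to compare.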

A Quick Self-Assessment

Before your next AI vendor call or RFP, answer these five questions honestly:

  • Can we describe the exact business outcome we expect this AI system to produce, in measurable terms?
  • Have we identified which of our existing systems the AI needs to read from and write to?
  • Do we have a data governance plan for the data this system will access?
  • Who internally owns the AI system after it's live — and does that person have the authority and budget to maintain and improve it?
  • What does success look like at 90 days, 1 year, and 3 years, and are those metrics tracked today?

If you can't answer all five clearly, you're not ready to make a build-vs-buy decision — you're still in the problem definition phase. That's fine. It's actually the most important phase. The organizations that get it wrong almost always rushed through it.

More on AI for Business

🏪 AI for Small Business: Where to Start When You're Not Google

The mainstream AI conversation is dominated by enterprise case studies — billion-dollar budgets, dedicated ML teams, and data infrastructure most businesses will never have. This doesn't mean AI is off the table for smaller operations. It means the playbook is different.

At the 10–200 employee scale, the AI decisions that tend to pay off share three characteristics:

Start with a high-frequency, well-defined task

  • Tasks done dozens of times per day by multiple people
  • Tasks with clear success/failure criteria (not judgment-heavy)
  • Tasks where current quality is measurably inconsistent
  • Tasks where speed directly affects customer experience or revenue

Prioritize payback period, not ROI multiple

  • Target initiatives with 6–12 month payback at the start
  • Don't try to solve everything at once — one clear win first
  • Measure impact from day one, not month six

Build on existing tools before building custom

  • Most SMBs can get meaningful AI value from configuring tools they already pay for
  • CRM AI, email automation, document processing — often available in existing subscriptions
  • Custom builds make sense when these tools genuinely don't fit

Key takeaway: The best AI initiative for a small business is almost never the most ambitious one. It's the one with the clearest problem, the fastest feedback loop, and the most obvious before/after comparison.
⚠️ Why AI Projects Fail: The 6 Patterns We See Over and Over

Research consistently shows that 70–80% of AI projects don't deliver the expected value. In our experience, the failures aren't random — they cluster into six recognizable patterns. Understanding them before you start is worth more than any post-mortem.

The 6 failure patterns

  • The solution in search of a problem. Starting with "we need AI" rather than "we have this specific, costly problem." The technology gets implemented, but there's no clear outcome to measure against.
  • Data readiness overestimated. The AI system is designed around data that turns out to be incomplete, inconsistently formatted, siloed across systems, or simply not available. Most data cleanup takes 3–5× longer than estimated.
  • No ownership after launch. The project team disperses after go-live. No one owns the system's ongoing performance. Errors go unaddressed. Models drift as data distributions change. Within a year, the system is quietly abandoned.
  • Underestimating change management. The AI system is technically sound, but the people it was built to help don't trust it, don't use it, or actively work around it. Technology adoption is a human problem, not a technology problem.
  • Scope expansion mid-project. "While we're at it, can it also do X?" AI projects with clear, narrow scopes succeed at dramatically higher rates than projects that expand during development.
  • Measuring the wrong things. Success is defined as "the system is live" rather than "the problem we started with is measurably better." The project gets called a success before anyone checks whether anything actually changed.

Key takeaway: The most common AI project failure mode isn't a technology failure. It's a problem definition failure followed by a change management failure. Fix those two things and your success rate goes up dramatically.
⏱️ The Cost of Waiting: Why 'Not Yet' Is an Active Business Decision

The most common response to an AI proposal isn't "no." It's "not yet" — let's wait until the technology matures, until we have the budget, until after the next quarter, until a competitor proves the model. On the surface this seems cautious. In practice, it's a decision with real, compounding costs.

What "not yet" actually costs

  • Operational cost accumulation. If the problem you're considering AI for costs you $200K/year today, every month of delay is roughly $16,700 in costs you're choosing to continue paying.
  • Competitive gap widening. Your competitors who adopt AI-enabled operations first don't just get a temporary advantage — they get better training data, faster learning curves, and lower unit economics over time. The gap compounds.
  • Talent expectations shifting. High-performing employees increasingly expect modern tools. Falling behind on operational technology creates retention and recruiting headwinds that are hard to quantify but very real.
  • Implementation cost increases. AI implementation complexity and vendor pricing generally increase over time. "Wait until it's cheaper" is frequently the opposite of what actually happens.

When waiting IS the right answer

  • When the problem isn't yet well-defined
  • When your underlying data infrastructure isn't ready
  • When you're mid-cycle on a related system replacement
  • When you don't yet have internal ownership identified

The point isn't that you should always move immediately. It's that "not yet" needs a specific trigger condition — "we'll move when X is true" — rather than an indefinite deferral with no exit criteria.

Key takeaway: "Not yet" isn't a neutral holding position. It's an active choice to continue paying the cost of the problem you're not solving. Make that choice consciously, with a defined trigger for when "yet" becomes "now."

Ready to apply these frameworks
to your actual business?

Frameworks are most useful when applied to a real problem with real constraints. Schedule a free discovery call and we'll work through the right starting point for your specific situation — no obligation, no sales pitch.