The posts on this blog test a hypothesis. This page draws the operational inference: what the evidence implies if you are running a business that exists, or building one that doesn't yet. The analysis is only as current as the hypothesis tracker — both update together.

For existing software businesses

Defending What You Have

Know which quadrant you're in

AlixPartners scored 500 PE-backed software companies on AI disruption risk. 25% are highly vulnerable; only 14% have strong moats — defined as genuine data depth combined with vertical specialisation tight enough that switching is structurally painful. The distribution isn't random: point-solution tools with generic data models and horizontal scope cluster in the exposed quadrant; systems of record with compliance obligations and proprietary data context cluster in the fortress.

The useful exercise isn't asking whether AI will affect you — it will — but locating yourself on that grid honestly. Vulnerable companies in the AlixPartners model face a $40B debt wall maturing in 2028 against revenue lines projected to shrink 15–35%. That's not a technology problem; it's a capital structure problem that technology is accelerating. Knowing your quadrant early is the difference between a managed repositioning and a forced one.

Evidence: The Great Sorting

Re-price before the market forces you

AI inference costs are structural, not transitional. When your product relies on AI to deliver outcomes, gross margins compress from 80–90% toward 50–60% — not because you're inefficient, but because COGS now scales with usage in a way seat licences never did. Intercom moved to $0.99 per resolved conversation and grew from $1M to $100M ARR. Freshworks charges $0.10 per session. The difference in those numbers encodes a bet about whose model survives.

The seat model breaks arithmetically once AI does the work: fewer humans in the loop means fewer licences, regardless of whether your product is any good. The companies that will survive the repricing cycle are those that move to consumption or outcome pricing before the compression shows up in renewal conversations. Deloitte forecasts that 40% of enterprise SaaS contracts will include outcome-based elements by 2030. The question is whether you arrive at that number on your terms or theirs.
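The two effects described above — licence revenue tracking headcount, and inference COGS tracking usage — can be sketched with a few lines of arithmetic. Every number below (seat counts, prices, per-unit costs) is an illustrative assumption, not a figure from any company cited here:

```python
# Illustrative sketch of the two pressures described above (all numbers assumed).

def seat_revenue(seats: int, price_per_seat: float) -> float:
    """Seat-licence revenue scales with headcount, not with work done."""
    return seats * price_per_seat

# Effect 1: when AI shrinks the team, licence revenue shrinks with it,
# even if the product is unchanged.
before = seat_revenue(seats=100, price_per_seat=1_200)  # 120,000
after = seat_revenue(seats=60, price_per_seat=1_200)    # 72,000: a 40% decline

def gross_margin(price: float, cogs: float) -> float:
    """Gross margin as a fraction of price."""
    return (price - cogs) / price

# Effect 2: AI inference puts meaningful COGS inside every unit of usage.
classic_saas = gross_margin(price=1.00, cogs=0.15)  # ~85%: hosting is cheap
ai_outcome = gross_margin(price=0.99, cogs=0.45)    # ~55%: inference scales with usage
```

The point of the sketch is that neither effect depends on product quality: the first is a multiplication over a shrinking seat count, the second a subtraction that grows with every unit delivered.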

Evidence: The Cost Problem That Broke the Pricing Model

Own the workflow or own the data — pick one

Snowflake grew revenue 30% full-year. Palantir grew 70% in Q4. Workday fell 40% on headcount-linked licence compression. The divergence is structural: companies that own the system of record, the compliance layer, or the proprietary data context that agents need to function are holding value. Point solutions sitting underneath an orchestration layer — where any sufficiently capable agent can route around them — are not.

The trap is trying to be both. A company that positions itself as the orchestration node and as the best-in-class point solution ends up defending two fronts with the resources of one. The Salesforce Agentforce data is instructive: $800M ARR growing 48% quarter-on-quarter, while the stock is still down 33% year-to-date. The market is rewarding the new revenue model, not the legacy licence base. Pick the layer where your data or workflow gravity is genuinely irreplaceable, and concentrate there.

Evidence: The Dispatch Layer

Fix your distribution before AI kills it

Monday.com's $500K+ ARR enterprise cohort grew 74% year-on-year. Its SMB self-serve motion collapsed. The cause wasn't AI replacing the product — it was Google's AI-powered search results replacing the organic traffic that drove self-serve signups. Co-CEO Roy Mann told investors that the cost to acquire and expand self-serve customers had risen sharply, with returns below historical levels. The product still works. The customers stopped arriving.

This is the least-discussed existential risk in the current cycle: AI disrupting distribution economics before it disrupts product value. If your acquisition model depends on SEO-driven search traffic, content-driven signups, or any channel where an AI summary can answer the intent that used to drive a click, the threat isn't a competitor building a better product — it's the channel itself evaporating. The audit is simple: map where your new customers come from, and ask whether those sources still exist in a world where AI answers the query.
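The audit described above can be made concrete as a back-of-envelope calculation. The channel names and shares below are hypothetical, not Monday.com's:

```python
# A back-of-envelope distribution audit (channel names and shares are assumptions).
# For each acquisition channel: its share of new customers, and whether the
# intent behind it can be absorbed by an AI-generated answer before the click.
channels = {
    "organic_search":    (0.55, True),   # AI summaries can answer the query
    "content_signups":   (0.20, True),   # same exposure as SEO content
    "outbound_sales":    (0.15, False),  # human-initiated, not query-driven
    "partner_referrals": (0.10, False),
}

at_risk = sum(share for share, exposed in channels.values() if exposed)
print(f"Share of acquisition exposed to AI-answered intent: {at_risk:.0%}")
```

A result like 75% doesn't mean the revenue disappears; it means three-quarters of the acquisition motion depends on a channel whose economics are changing underneath it.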

Evidence: The Phantom Repricing

For entrepreneurs and investors

What the Hypothesis Opens Up

Target the labour budget, not the software budget

Sequoia's framing is precise: every dollar spent on software corresponds to six dollars spent on services. AI autopilots don't compete with the software budget — they compete with the services budget. The target markets are outsourced verticals where human labour delivers outcomes at predictable per-unit costs: insurance claims processing, healthcare revenue cycle management, legal document review, accounting workflows. These markets are large ($50–200B in identifiable outsourced spend), they price on outcomes already, and they're not defended by the same switching-cost moats that protect SaaS systems of record.

The business model writes itself from the economics: charge per resolved claim, per processed document, per completed reconciliation, and deliver at a cost structure that legacy BPO providers cannot match. The constraint isn't AI capability; it's regulatory tolerance and audit-trail requirements, and the audit trails, at least, are a solvable engineering problem. The companies building here aren't SaaS companies; they're service businesses with software economics.
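The per-unit comparison against a legacy BPO can be sketched directly. The prices and costs below are illustrative assumptions, chosen only to show the shape of the wedge:

```python
# Per-unit economics of an AI-native service vs. a legacy BPO (assumed numbers).

def unit_margin(price_per_unit: float, cost_per_unit: float) -> float:
    """Margin on each resolved claim / processed document / reconciliation."""
    return (price_per_unit - cost_per_unit) / price_per_unit

# A BPO prices a processed claim around the cost of human labour.
bpo = unit_margin(price_per_unit=8.00, cost_per_unit=6.00)       # 25% margin
# An AI-native provider can undercut the price and still widen the margin.
ai_native = unit_margin(price_per_unit=5.00, cost_per_unit=1.50)  # 70% margin
```

The wedge works in both directions at once: the customer pays less per outcome, and the provider keeps more of each dollar — which is what "service businesses with software economics" means in practice.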

Evidence: Selling the Work, Not the Tool

Build in the fortress quadrant from day one

The AlixPartners data shows only 14% of existing software companies sit in strong-moat territory — deep data combined with vertical specialisation. Most niche verticals with genuine data complexity haven't been claimed by an AI-native builder. The advantage of starting now rather than repositioning is that you can architect for data depth from the first line of code, rather than retrofitting it onto a horizontal tool that was never designed to hold proprietary context.

The playbook: find a vertical where the data model is genuinely complex and hard to replicate externally, embed deeply enough in the workflow that switching requires migrating institutional knowledge, and price on outcomes from day one so the margin structure never depends on seat counts. The companies in this quadrant — Palantir and Snowflake are the public examples — trade at premiums that the exposed segment will never recover. Building there from the start is worth more than a decade of repositioning later.

Evidence: The Great Sorting

The $40B distressed stack

PE-backed software faces a specific capital structure problem: $40B in debt matures in 2028, concentrated in companies whose revenue lines are projected to shrink 15–35% before that maturity arrives. AlixPartners flagged the mechanism explicitly. IGV closed at 3.6x forward EV/Sales — the lowest since 2011 — and the public valuation compression is already flowing into private credit markets. Some of this software is genuinely broken. Some of it is exposed-but-fixable: good underlying workflow value, wrong pricing model, wrong cost structure.

The opportunity in the distressed segment is acquiring businesses with real workflow gravity — customers who renew because switching is painful, not because the product is loved — and restructuring them around AI-native delivery. The target profile: high net revenue retention, low NPS, high COGS, seat-based pricing, horizontal market. The intervention: rebuild delivery on AI, move pricing to outcomes, cut the cost structure accordingly. The debt wall creates a time-bounded acquisition window between now and 2028.
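The target profile above can be written down as a toy screening function. Every threshold below is an illustrative assumption, not an AlixPartners criterion:

```python
# A toy screen for the "exposed-but-fixable" acquisition profile described above.
# All thresholds are illustrative assumptions.

def exposed_but_fixable(nrr: float, nps: int, gross_margin: float,
                        seat_priced: bool, horizontal: bool) -> bool:
    """High retention (workflow gravity), low love (low NPS), heavy cost
    structure, legacy pricing, and a horizontal market position."""
    return (nrr >= 1.05          # customers renew: switching is painful
            and nps < 20         # but the product isn't loved
            and gross_margin < 0.65  # high COGS leaves room to rebuild delivery
            and seat_priced      # pricing model is the fixable part
            and horizontal)

# A candidate matching the profile passes the screen:
exposed_but_fixable(nrr=1.10, nps=5, gross_margin=0.55,
                    seat_priced=True, horizontal=True)  # True
```

The screen encodes the thesis of the section: buy the workflow gravity, then change everything around it — delivery, pricing, cost structure — before the 2028 maturity wall closes the window.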

Evidence: The Great Sorting

The governance and trust layer

Microsoft open-sourced the Agent Governance Toolkit covering all OWASP Top 10 agentic risks — auditability, access control, prompt injection, data leakage, liability attribution. That Microsoft shipped this as open-source is a signal: the capability problem is largely solved; the trust and compliance infrastructure around it is not. Enterprise adoption of AI agents is currently blocked less by what agents can't do than by what enterprises can't audit, certify, or defend in a regulatory context.

The market being created here isn't governance tooling for its own sake — it's the infrastructure that unlocks enterprise AI agent adoption at scale. Every agent deployment needs an audit trail, a compliance boundary, and a liability model. None of that ships in the model. The companies that build the monitoring, certification, and governance layer around agent deployments will occupy a position structurally similar to what security vendors occupied in the early cloud transition: not optional, not fungible, and priced accordingly.

Evidence: The Week the Market Said It Out Loud

Last updated: 2026-04-10 · hypothesis tracker