The Empty Chair Problem: When Agents Take the Seat
Why your pricing model is about to break
Here’s something I’ve been puzzling over.
Enterprise software has been priced the same way for thirty years: per seat. You pay for the number of humans who use the thing. Salesforce charges per sales rep. Zendesk charges per support agent. GitHub charges per developer.
The logic is simple. More humans using the product means more value extracted. More value extracted means you should pay more. The seat is a proxy for value.
But what happens when the thing sitting in the seat isn’t human?
The Seat as Economic Fiction
Let me walk through how seat-based pricing actually works.
A company buys Zendesk for their support team. They have 50 customer service reps. Zendesk charges $89 per (human) agent per month for its Professional tier. That's $4,450/month, or $53,400/year.
Why does Zendesk charge per (human) agent? Because an agent handles tickets. More (human) agents mean more tickets handled. More tickets handled means more customer value delivered. The seat is a proxy for throughput.
Now here’s the implicit math that makes this work:
Average CSR handles ~50 tickets/day
That’s ~1,000 tickets/month per seat
Cost per ticket: $89 ÷ 1,000 = $0.089
The company isn’t really paying $89/seat. They’re paying roughly 9 cents per resolution. The seat is just a convenient fiction that makes the math predictable.
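That implied per-ticket price falls out of two numbers. A quick sketch of the arithmetic, using the article's illustrative throughput figures (not Zendesk's actual data):

```python
# The implied price per resolution hiding inside a per-seat price.
# Throughput numbers are illustrative assumptions from the text above.

def cost_per_ticket(seat_price_monthly: float, tickets_per_day: float,
                    workdays_per_month: int = 20) -> float:
    """Effective cost per ticket resolved, given a seat price and rep throughput."""
    tickets_per_month = tickets_per_day * workdays_per_month
    return seat_price_monthly / tickets_per_month

print(f"${cost_per_ticket(89, 50):.3f} per ticket")  # → $0.089 per ticket
```

The seat price only looks like the unit of sale; the throughput assumption baked into it is what the buyer is actually paying for.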
Same thing happens in developer tools. GitHub charges $21/developer/month for their Enterprise tier. A developer might make 200 commits, open 30 PRs, and review 40 others in a month. The seat is a proxy for code velocity.
Same thing in sales. Salesforce charges $165/user/month for Enterprise. A sales rep might manage 100 opportunities, send 500 emails, log 200 calls. The seat is a proxy for pipeline activity.
The entire enterprise software economy runs on this fiction: humans occupy seats, seats produce outcomes, outcomes justify price.
What happens when that fiction breaks?
The AI Agent in the Chair
Let’s go back to Zendesk.
A company deploys an AI agent to handle Tier 1 support. The agent runs 24/7. It doesn’t take breaks. It doesn’t have bad days. It handles tickets in 90 seconds instead of 8 minutes.
Conservative estimate: one AI agent handles the throughput of 10 human agents.
Now the math gets weird.
Old model:
50 human agents × $89/seat = $4,450/month
50,000 tickets/month
Cost per ticket: $0.089
New model (if priced the same):
5 AI agents × $89/seat = $445/month
50,000 tickets/month
Cost per ticket: $0.0089
The vendor just lost 90% of their revenue while the customer got the same outcomes.
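Here's that collapse as a sketch, using the hypothetical figures above (one AI agent standing in for ten humans at an unchanged seat price):

```python
# Revenue collapse when throughput per seat rises 10x but the seat
# price stays flat. All figures are the article's hypothetical example.

SEAT_PRICE = 89          # $/seat/month
TICKETS = 50_000         # monthly ticket volume, unchanged

human_seats, ai_seats = 50, 5   # assumption: 1 AI agent ≈ 10 human agents

old_revenue = human_seats * SEAT_PRICE
new_revenue = ai_seats * SEAT_PRICE

print(f"vendor revenue: ${old_revenue}/mo -> ${new_revenue}/mo")
print(f"revenue lost: {1 - new_revenue / old_revenue:.0%}")  # → 90%
print(f"cost per ticket: ${old_revenue / TICKETS:.4f} -> ${new_revenue / TICKETS:.4f}")
```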
No software company is going to accept that. So what do they do?
Three Pricing Futures
I see three ways this plays out. Each has different implications for buyers and sellers.
Future 1: The Seat Gets Expensive
The simplest move: charge more per AI seat.
If one AI agent does the work of 10 humans, charge 10x per seat. The customer pays the same total, the vendor keeps their revenue, everyone’s happy.
The math:
5 AI agents × $890/seat = $4,450/month
Same revenue for vendor
Same cost for customer
Same outcomes delivered
This is the “nothing changes” scenario. The seat remains the unit of sale, it just costs more when an agent occupies it.
Problem: This only works if you’re the only game in town. The moment a competitor offers AI seats at $500, you’re in a race to the bottom. And the moment an open-source agent framework gets good enough, the floor falls out entirely.
Seat-based pricing for AI agents is a transitional fiction. It won’t hold.
Future 2: The Outcome Becomes the Product
Here’s where it gets interesting.
Instead of selling seats, sell outcomes. Don’t charge per agent—charge per ticket resolved, per lead qualified, per PR reviewed.
The math:
50,000 tickets/month × $0.15/ticket = $7,500/month
Vendor revenue increases roughly 69%
Customer pays more, but only when they get more
This is the consumption model. Snowflake did it for data warehousing. Twilio did it for communications. The unit of sale is the unit of value.
For the seller: Your revenue scales with customer success. If your AI agent gets better and handles more tickets, you make more money. Aligned incentives.
For the buyer: You pay for what you get. No shelfware. No unused seats. But also no predictability. Your support costs now fluctuate with ticket volume.
Problem: Budgets don’t work this way.
A VP of Support has a headcount budget. They know they can afford 50 CSRs at $60k/year. That’s $3M in labor. They can plan around that number.
But if support costs are now variable—$0.15/ticket, volume unknown—how do they budget? How do they forecast? How do they explain to the CFO why support costs doubled in Q4 because of a product issue that spiked ticket volume?
Outcome-based pricing is economically elegant and organizationally terrifying.
Future 3: The Hybrid Emerges
This is what I think actually happens.
Vendors create a new pricing primitive: the “agent allocation” or “outcome commitment” or whatever branding they land on.
The structure:
Base platform fee: $2,000/month (access, integrations, dashboards)
Included outcomes: 25,000 tickets/month
Overage: $0.10/ticket beyond commitment
The math:
Base: $2,000
Customer uses 50,000 tickets
Overage: 25,000 × $0.10 = $2,500
Total: $4,500/month
This gives buyers predictability (they know the floor) and sellers upside (they capture value when usage grows).
It’s basically what cloud infrastructure did. You commit to a base, you pay extra for burst.
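The billing logic is a few lines of arithmetic. A minimal sketch, using the illustrative tier numbers above:

```python
# Hybrid "commit + overage" bill: flat base fee, a block of included
# outcomes, and a per-ticket rate beyond the commit. Tier numbers are
# the article's illustrative figures.

def hybrid_bill(tickets: int, base: float = 2_000,
                included: int = 25_000, overage_rate: float = 0.10) -> float:
    """Monthly bill: base fee plus per-ticket overage beyond the included commit."""
    overage = max(0, tickets - included) * overage_rate
    return base + overage

print(hybrid_bill(50_000))  # → 4500.0 (base 2000 + 25,000 overage tickets @ $0.10)
print(hybrid_bill(20_000))  # → 2000 (under the commit: the buyer's price floor)
```

The `max(0, ...)` is the whole design: the buyer's downside is capped at the base fee, and the vendor's upside opens up only when usage exceeds the commit.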
The Budget Reallocation Problem
Here’s the part nobody’s talking about.
Enterprise budgets are organized around humans. You have headcount budgets for sales, support, engineering, marketing. Each function gets a number of bodies and a compensation range.
Software budgets live in a different bucket—usually “tools” or “infrastructure” or “IT.” They’re a fraction of headcount budgets. A company might spend $3M on 50 support reps and $100k on their support software.
Now imagine that same company can replace 40 of those reps with AI agents. Where does the money come from?
Scenario:
Current: 50 CSRs × $60k = $3M labor + $100k software = $3.1M total
Future: 10 CSRs × $60k = $600k labor + ??? AI agents
If the AI agents cost $500k/year (much cheaper than $2.4M in saved labor), the company saves $2M. Great outcome.
But whose budget pays for the AI agents?
It’s not the software budget—that was $100k. The AI solution costs 5x the entire previous software allocation.
It’s not the headcount budget—you can’t buy software with headcount dollars. Different GL codes. Different approval chains. Different stakeholders.
The buying center breaks.
The VP of Support who controlled $3M in headcount now controls $600k. They lost 80% of their budget and 80% of their team. Are they going to champion this transition?
The CIO who controlled $100k in support software now needs to find $500k. Where does it come from? Their budget didn’t grow.
This is the underrated friction in AI agent adoption. It’s not that the economics don’t work—they work beautifully. It’s that the organizational plumbing wasn’t built for this flow of money.
Napkin Math: The Buyer’s Dilemma
Let’s make this concrete from a buyer’s perspective.
You’re the VP of Customer Success at a mid-market SaaS company.
Current state:
30 CSRs, fully loaded cost $70k each = $2.1M/year
Handle 360,000 tickets/year (1,000/rep/month)
Cost per ticket: $5.83
Current software (Zendesk): $32k/year
You’re evaluating an AI agent solution. The vendor proposes:
Platform fee: $3,000/month ($36k/year)
Included: 300,000 tickets/year
Overage: $0.08/ticket
Scenario A: AI handles Tier 1 (60% of volume)
AI handles 216,000 tickets
Humans handle 144,000 tickets
You need 12 CSRs (down from 30)
New costs:
Labor: 12 × $70k = $840k
AI platform: $36k + (0 overage) = $36k
Total: $876k
Savings: $2.1M - $876k = $1.22M/year (58% reduction)
Cost per ticket: $876k ÷ 360k = $2.43 (58% reduction)
Scenario B: AI handles Tier 1 + Tier 2 (85% of volume)
AI handles 306,000 tickets
Humans handle 54,000 tickets
You need 5 CSRs (down from 30)
New costs:
Labor: 5 × $70k = $350k
AI platform: $36k + (6,000 × $0.08) = $36.5k
Total: $386.5k
Savings: $2.1M - $386.5k = $1.71M/year (82% reduction)
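Both scenarios fall out of one parameterized calculation. A sketch using the assumptions above ($70k fully loaded per CSR, 12,000 tickets per rep per year, the vendor's proposed tiering):

```python
# Buyer-side napkin math for the two scenarios above. Defaults mirror
# the article's assumptions; none of them are real vendor pricing.
import math

def buyer_scenario(total_tickets: int, ai_share: float,
                   csr_cost: float = 70_000, tickets_per_csr: int = 12_000,
                   platform_fee: float = 36_000, included: int = 300_000,
                   overage_rate: float = 0.08) -> dict:
    """Annual cost of a support org where AI handles `ai_share` of volume."""
    ai_tickets = int(total_tickets * ai_share)
    human_tickets = total_tickets - ai_tickets
    csrs = math.ceil(human_tickets / tickets_per_csr)   # round headcount up
    labor = csrs * csr_cost
    ai_cost = platform_fee + max(0, ai_tickets - included) * overage_rate
    return {"csrs": csrs, "labor": labor, "ai": ai_cost, "total": labor + ai_cost}

print(buyer_scenario(360_000, 0.60))  # Scenario A: 12 CSRs, total $876,000
print(buyer_scenario(360_000, 0.85))  # Scenario B: 5 CSRs, total $386,480
```

Sweeping `ai_share` from 0 to 1 in this function is exactly the analysis a VP of Support will be asked to run; the discontinuities come from the `ceil` on headcount.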
The economics are undeniable. But now you have to:
Lay off 25 people
Find $36k in software budget (easy) while removing $1.75M from headcount budget (hard)
Convince your CFO that support costs should be partially variable
Bet your career on the AI actually working
The math works. The politics are murder.
Napkin Math: The Seller’s Dilemma
Now let’s flip it. You’re building the AI agent platform.
Option A: Seat-based pricing
You charge $500/agent/month. Customers deploy 5 agents to replace 30 humans.
Revenue per customer: $2,500/month, $30k/year
Customer saves $1.2M
Your share of value created: 2.5%
That’s a terrible capture rate. You created $1.2M in value and kept $30k.
Option B: Outcome-based pricing
You charge $0.15/ticket resolved.
Customer does 360,000 tickets/year
Revenue: $54k/year
Your share of value created: 4.5%
Better, but still leaving most of the value on the table.
Option C: Value-based pricing
You charge 10% of labor savings.
Customer saves $1.2M/year
Revenue: $120k/year
Your share of value created: 10%
Now we’re talking. But how do you verify the savings? How do you prove causation? How do you prevent the customer from gaming the baseline?
Option D: The hybrid
Platform fee based on company size + outcome-based overage + success fee on verified savings.
Platform: $24k/year (based on ticket volume tier)
Outcome overage: $0.05/ticket above tier
Success fee: 5% of verified annual savings (audited)
This gives you predictable base revenue, upside on usage, and alignment on outcomes. But it requires sophisticated contracting, transparent measurement, and trust.
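Pulling the four options into one comparison. Assumptions: $1.2M annual value created and 360k tickets, per the figures above; Option D's included-ticket tier isn't specified, so the sketch assumes a 300k-ticket commit for the overage term.

```python
# Seller-side capture rates for the four pricing options. Value created
# and ticket volume are the article's figures; Option D's included tier
# (300k tickets) is an assumption, since the text leaves it unstated.

VALUE_CREATED = 1_200_000   # annual labor savings delivered to the customer
TICKETS = 360_000           # annual ticket volume

options = {
    "A: seats (5 agents x $500/mo)": 5 * 500 * 12,
    "B: outcome ($0.15/ticket)": 0.15 * TICKETS,
    "C: value (10% of savings)": 0.10 * VALUE_CREATED,
    "D: hybrid (base + overage + 5% success fee)":
        24_000 + 0.05 * max(0, TICKETS - 300_000) + 0.05 * VALUE_CREATED,
}

for name, revenue in options.items():
    print(f"{name}: ${revenue:,.0f} ({revenue / VALUE_CREATED:.1%} of value)")
```

Under these assumptions the hybrid lands between the outcome and value models on capture rate, while keeping a predictable base, which is why it's the structure to bet on.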
Most enterprise sales orgs aren’t built for this. They’re built for: “How many seats do you need?”
The Perception Shift
Here’s what I think is the deepest change.
For thirty years, buyers have evaluated software by asking: “How much does it cost per person?”
That question made sense when the software was a tool that made people more productive. Photoshop makes designers faster. Salesforce makes reps more organized. Excel makes analysts more capable.
But AI agents aren’t tools. They’re workers.
The evaluation question shifts from “how much per person?” to “what’s the fully loaded cost of this outcome?”
When you hire a CSR, you don’t ask “what’s the cost per ticket?” You ask “can this person do the job, and what’s their total compensation?” Then you back into the unit economics.
AI agents invert this. You start with the unit economics—$0.15/ticket—and ask whether that’s cheaper than the alternative.
The buyer’s mental model has to shift from:
“I have 50 seats budgeted, which tools fit?”
To:
“I need 500k tickets resolved, what’s the cheapest way to do that?”
This is a procurement revolution disguised as a pricing problem.
What This Means
For sellers:
Your pricing model is a transitional structure. Seats will hold for 18-24 months while buyers figure out how to reallocate budgets. Then the pressure toward outcome-based pricing will become irresistible.
Start building the measurement infrastructure now. You need to prove value delivered, not just features shipped. The companies that can demonstrate ROI with audit-grade precision will command premium prices. The ones that can’t will race to the bottom.
For buyers:
Your org chart is about to become a liability. Budgets organized by headcount will create friction against AI adoption. The companies that move fastest will be the ones that create “AI outcome budgets” that can absorb both labor savings and software costs.
Start thinking about outcomes, not seats. When you evaluate an AI agent platform, don’t ask “what’s the per-seat cost?” Ask “what’s my cost per resolved ticket today, and what will it be tomorrow?”
For everyone:
The seat was always a fiction. A convenient proxy for value that let buyers budget and sellers forecast. That fiction is breaking.
What replaces it will be messier, more variable, and more honest. You’ll pay for what you get. And the companies that can measure what you got—precisely, transparently, verifiably—will own the next era of enterprise software.
The agent is in the chair now. The question is whether your pricing model—and your budget model—can handle what happens next.
This is part of my series on AI economics. Previously: The Revenue Trap: When AI Differentiation Masquerades as Defensibility, The Token Trap, The Economics of Organizational AI.


