Written by Argos Multilingual
Published on 15 Oct 2025

If you’ve ever compared localization quotes, you know how confusing they can be. That’s because vendors don’t all use the same methods to calculate costs. Some rely on word counts and discounts, others add hourly tasks, and still others offer bundled or subscription packages. The result is that they aren’t all pricing the same work. That split goes back to the way pricing models were built.

“Pricing in our industry was built on assumptions that stuck,” says Erik Vogt, Solutions & Innovations Director at Argos Multilingual. “Over time, those assumptions became standard practice, even if they didn’t exactly match the real effort.”

Those conventions shaped the way localization has been priced for decades, and they’re the reason buyers still struggle to make sense of quotes today.

How Traditional Pricing Took Hold

For more than two decades, most localization quotes followed the same formula. Vendors set a per-word base rate for new content, applied sliding discounts when text repeated or closely matched past translations, and added hourly charges for project management, engineering, or QA. Later, machine translation post-editing tiers were layered in.

Some providers also rely on translation management system (TMS) auto-quoting. These systems apply translation memory tiers automatically, locking in vendor-specific match thresholds and discount ladders. The math looks precise, but it means two “per-word” quotes can rest on different assumptions before anyone reviews the content.
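To see why two “per-word” quotes can diverge, here is a minimal sketch of discount-ladder arithmetic. The match bands, rates, and vendor ladders below are invented for illustration; they are not any real vendor’s pricing.

```python
# Hypothetical TM discount ladders: match band -> fraction of the base rate billed.
# These tiers and percentages are illustrative only, not real vendor pricing.
VENDOR_A = {"new": 1.00, "fuzzy_75_94": 0.60, "fuzzy_95_99": 0.30, "repetition": 0.10}
VENDOR_B = {"new": 1.00, "fuzzy_75_94": 0.70, "fuzzy_95_99": 0.25, "repetition": 0.25}

def quote(word_counts, ladder, base_rate):
    """Apply a per-word base rate, discounted by translation-memory match band."""
    return sum(words * base_rate * ladder[band] for band, words in word_counts.items())

# The same job and the same $0.20/word base rate -- only the ladder differs.
job = {"new": 5000, "fuzzy_75_94": 3000, "fuzzy_95_99": 1500, "repetition": 2500}

print(f"Vendor A: ${quote(job, VENDOR_A, 0.20):,.2f}")  # -> Vendor A: $1,500.00
print(f"Vendor B: ${quote(job, VENDOR_B, 0.20):,.2f}")  # -> Vendor B: $1,620.00
```

Identical content, identical headline rate, an 8% gap in the total, and neither number says anything about how much effort the fuzzy matches will actually take.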

This approach looked predictable, but it was never backed by hard data. Translators don’t work at a fixed pace, and repeated text doesn’t always reduce the effort needed to get a usable result. Still, those formulas were copied into contracts and became the industry’s default, creating an impression of precision that rarely matched the effort behind them.

“Those percentages were never proven,” says Erik. “They were rough estimates that became the norm.”

Why Quotes Aren’t Comparable

Localization pricing has split into different systems. Some vendors still use the old per-word model with discounts from translation memory. Others present bundled packages. More recently, quotes have started to include line items for AI oversight or governance. They all arrive looking like cost estimates for the same job, but each one is built on different assumptions. Per-word pricing also persists because it’s embedded in procurement and reporting systems, not because it reflects actual effort. But that’s where comparisons fall apart.

“Two quotes that look almost identical may be pricing completely different work,” Erik says.

One vendor may count fuzzy matches differently, producing a lower rate in the quote. Another may leave connector setup or SME review outside the unit price. A third may commit to stricter quality gates, while competitors only promise a basic pass/fail.

What AI Changed (and What It Didn’t)

AI highlighted the weaknesses in traditional pricing. Match percentages were never a reliable measure of effort, and once machine output entered production workflows, the gap between word counts and actual work became impossible to ignore.

Some vendors now split their quotes into clear parts. One covers machine-generated drafts and routing steps. Another covers quality assurance work: terminology checks, brand review, and risk controls. Some also list AI-specific line items, such as prompt engineering, AI draft generation, AI-assisted LQA with human acceptance, and security or governance measures like model policies and audit trails.

“AI can bring speed, but it can also bring new costs. Aside from the cost in tokens, security, oversight, and governance have to be part of the price,” Erik says. “This can include data localization—or where the data is allowed to be stored and processed—data persistence, backup and explainability needs, and tracking human review in new ways.”

Where Pricing Is Headed

More teams are scoping at the program level, pairing portfolio-based pricing with explicit service levels and usage reporting. Some add difficulty scoring tied to content type, regulatory exposure, and turnaround so the model reflects real conditions.

Quality assurance is also becoming a defined part of the price, connected to measurable results such as MQM targets, brand review, or in-market validation. At Argos, newer pricing models are set up to evolve with LLM performance data. A pilot phase establishes baseline edit distance and cycle times, and scope is adjusted as those metrics stabilize.
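One way such an edit-distance baseline could be computed (a sketch under simple assumptions, not Argos’s actual tooling) is a normalized Levenshtein distance between the machine draft and the final, human-approved translation, averaged over pilot segments:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost))
        prev = curr
    return prev[-1]

def normalized_edit_distance(draft: str, final: str) -> float:
    """0.0 means the draft was accepted as-is; 1.0 means a full rewrite."""
    if not draft and not final:
        return 0.0
    return levenshtein(draft, final) / max(len(draft), len(final))

# During a pilot, average this over segments to establish the baseline,
# then adjust scope as the metric stabilizes.
print(normalized_edit_distance("The quick brown fox", "The quick brown fox jumps"))  # -> 0.24
```

A score near zero suggests the draft tier is doing the throughput work; scores creeping upward flag content that belongs in a heavier review tier.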

For regulated content, that quality assurance may include dual review and auditable quality scores, as required by standards such as ISO 17100. For marketing content, it often involves brand protection and in-market acceptance. Both add effort, but for different reasons, and quotes should name those gates explicitly.

A credible quote should make clear what drives cost: content difficulty, risk class, edit distance, and applied controls. It should also show how those factors will be measured and reported.

“The future is about pricing outcomes. If a control reduces risk or speeds delivery, it should be part of the scope and the price,” Erik explains.

What Buyers Can Do Now

Getting an accurate quote starts with the information in the request for quotation (RFQ). The more detail vendors have, the more realistic the price will be. That means including content samples that reflect the real mix of work, such as product UI, legal material, and marketing copy, along with volumes and formats. It also means flagging risk class and compliance requirements, since regulated content often demands dual review or auditable MQM scores.

Quality targets need to be explicit. Setting MQM thresholds or linking the work to business KPIs makes it clear what level of assurance is expected. Scope also needs to reflect the principle of right work, right worker. Machines handle throughput; linguists, reviewers, and SMEs handle judgment and acceptance. Clean translation memories, termbases, and style guides, supported by stable connectors, act as a cost flywheel. They reduce rework over time and should be reflected in updated pricing.

Quotes themselves should return the same level of clarity. They should state risk class, quality gates, acceptance criteria, and the reporting cadence that shows how pricing adjusts as baseline metrics improve.

“Price follows the controls in scope, not the word count. If you want higher assurance, you should see exactly how it shows up in the quote,” says Erik.

Evidence, Not Assumptions

For decades, localization pricing was guided by habits that looked predictable but weren’t grounded in data. Word counts, fuzzy discounts, and flat percentages created the appearance of precision while masking the real effort behind projects. AI has widened those cracks. It has shown that match rates are a poor measure of work and has introduced requirements for oversight, security, and measurable assurance.

Evidence-based pricing replaces assumptions with proof. Quotes should surface the cost drivers above and report against them over time. Clean assets lower costs quarter by quarter, and clear role assignment ensures machines are used for throughput and experts for judgment.

Misconceptions about pricing persist. Buyers may assume fuzzy discounts reflect effort, MTPE tiers guarantee quality, or that AI can be added at no cost. Each of these hides the real work involved. Evidence-based pricing replaces them with measured difficulty, clear roles, and explicit quality gates.

“It’s time to replace guesswork with measured difficulty, clear risk controls, and shared metrics. That’s how we make pricing fair and reliable,” says Erik.

Explore how Argos makes localization pricing fair and reliable. Contact us today to start the conversation.
