KPIs every translation agency should track to stay profitable

Track the translation agency KPIs that change how you price deals, staff projects, and pick clients, plus a dashboard agency owners can build this week.

We spend a lot of time with translation agency owners who feel like their business is running them instead of the other way around. The symptoms repeat. Projects close without anyone knowing whether they made money. Linguist rates creep up while client rates stay flat for three years. A handful of accounts produce most of the revenue, but no one tracks which of them quietly lose money every quarter. This post is about the translation agency KPIs we actually recommend tracking, based on what we have seen work in agencies that stay profitable past a dozen people. We skip the generic lists here and focus on numbers that tie to real decisions about pricing, staffing, and which clients to keep.

Why most translation agency KPI lists miss the point

Most KPI lists written for translation agencies read like reporting templates copied from a multinational LSP. Dozens of metrics, every operational cut, every possible slice by language pair and service line. For an agency billing under roughly USD 5M per year, that level of reporting is more burden than benefit. The numbers take hours to assemble, nobody reads them, and the decisions you make every week do not depend on most of them.

The practical question is different. Which four or five numbers, if you looked at them once a month, would actually change how you price deals, how you staff projects, and which clients you chase or drop? That is the working definition of a useful metric for a small or mid-sized agency. Everything else is optional reporting.

In our experience, agencies get the most out of a short list. Three or four financial metrics, two or three operational metrics, one or two quality and retention metrics. That is it. The difference between a ten-metric dashboard and a thirty-metric dashboard is usually not more insight. It is more noise.

We also split metrics into two types. Lagging indicators tell you what happened: gross margin last quarter, revenue per translator for the year, client churn since January. Leading indicators tell you what is likely to happen next: proposal win rate this week, days to client feedback this month, translator utilization right now. Both matter. Most agencies only track the lagging kind and wonder why problems show up too late to fix.

One warning before the list. A metric only works when the data behind it is clean. If project profitability comes from three different systems that do not reconcile, the number will quietly mislead you for months.

Gross margin per project: the single most useful financial metric

If we had to pick one number for an agency owner to watch every month, it would be gross margin per project. Revenue minus direct linguist and reviewer costs, divided by revenue. Not net profit, not EBITDA. Just the contribution margin that each project puts toward overhead and profit.

A healthy gross margin for a translation agency running a mix of MTPE and human translation usually lands between 45% and 60%. The range depends on domain mix, language pair, and whether the agency uses staff translators or works fully with freelancers. Agencies that consistently run below 35% are usually competing on a price point they cannot sustain, or paying certain language combinations at rates they cannot earn back. Agencies running above 60% are often specialized in high-margin domains like patent or clinical work. Some of them are cutting corners on review that will show up as quality problems later.

One agency we worked with found that a large government client, which looked like their top account on revenue alone, was running at 28% gross margin because the contract included unlimited revision rounds. Another client at one-tenth the revenue was running at 61%. The financial picture changed once gross margin entered the conversation, and the agency ended up restructuring the revision clause at renewal instead of chasing more volume from that account.

Tracking this metric requires discipline about cost allocation. Freelancer payments on a project are direct. Project manager time usually is not, unless you bill it separately. Review and QA time for in-house staff typically gets allocated using an hourly cost rate. The point is to be consistent rather than perfect. The absolute number matters less than whether it trends up or down over time, and how it varies across clients, language pairs, and service lines.

For agencies that do not have this data in a ready-to-analyze form yet, a spreadsheet export from the project management tool is enough to start. We usually suggest pulling six months of completed projects and sorting by margin. What shows up at the bottom of that list is almost always surprising.
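As a sketch, that six-month export pass can be a few lines of Python. The column names (`project_id`, `revenue`, `linguist_cost`, `review_cost`) and the figures are assumptions for illustration, not fields from any particular PM tool; adapt them to whatever your export actually contains.

```python
def project_margins(rows):
    """Gross margin per project, sorted worst-first.

    Margin = (revenue - direct linguist and review cost) / revenue,
    matching the definition above. PM time is deliberately excluded.
    """
    results = []
    for row in rows:
        revenue = float(row["revenue"])
        direct_cost = float(row["linguist_cost"]) + float(row["review_cost"])
        margin = (revenue - direct_cost) / revenue if revenue else 0.0
        results.append((row["project_id"], round(margin, 3)))
    return sorted(results, key=lambda pair: pair[1])

# Illustrative rows, as if read from a CSV export of completed projects.
rows = [
    {"project_id": "P-101", "revenue": "12000", "linguist_cost": "7800", "review_cost": "860"},
    {"project_id": "P-102", "revenue": "3400", "linguist_cost": "1200", "review_cost": "150"},
]
print(project_margins(rows))  # → [('P-101', 0.278), ('P-102', 0.603)]
```

The worst-first sort is the whole point of the exercise: the projects at the top of the printed list are the ones to investigate first.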

Revenue per translator and per project manager

Revenue per linguist and revenue per project manager are two of the most honest productivity numbers an agency can track. They are simple. Total revenue divided by the number of active translators (or project managers) in the same period. They are hard to game. And they connect to one of the biggest drivers of agency profitability: how efficiently your people turn project work into billable output.

Industry benchmarks vary. Slator and Nimdzi reports over the last several years have shown revenue per employee at larger LSPs ranging from roughly USD 120K to USD 200K, with the higher end correlating with more MT-assisted work and more technology or subscription revenue. Smaller agencies tend to sit below that, because their project mix leans toward smaller, more bespoke projects that carry more overhead per unit of revenue.

A practical way to use this metric is directional. If revenue per translator is flat or falling while headcount grows, something is off. Often project mix, often pricing, sometimes utilization. If it climbs, whatever combination of pricing, MT use, and workflow change you made is working.

One agency we know ran this calculation for the first time and realized their project manager team had grown faster than revenue for three consecutive quarters. They had been adding PMs every time a senior one complained about overload, instead of asking what had changed in the workload. A rebalancing of the client portfolio and one workflow automation project removed the need for two of the planned PM hires. Revenue per PM climbed by 23% over the next two quarters, and complaints about overload actually went down.

This metric works best alongside the next one. Total revenue alone hides volume effects that can disguise either a growing or a shrinking business.
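The directional check described above is simple enough to sketch in a few lines. The quarterly figures below are illustrative, not benchmarks; the pattern they show is the one to watch for, revenue growing while revenue per translator falls.

```python
def revenue_per_head(revenue, headcount):
    """Total revenue divided by active headcount for the same period."""
    return revenue / headcount

# Hypothetical quarters: (label, revenue, active translators).
quarters = [
    ("Q1", 610_000, 9),
    ("Q2", 640_000, 11),
    ("Q3", 655_000, 13),
]

# Revenue rises every quarter, but revenue per translator drops,
# which is the signal that headcount is outgrowing the work.
per_head = [(q, round(revenue_per_head(rev, n))) for q, rev, n in quarters]
print(per_head)  # → [('Q1', 67778), ('Q2', 58182), ('Q3', 50385)]
```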

Word output and throughput per person per day

Words translated per translator per day (and edited words per post-editor per day) sounds like a factory metric. Used badly, it is. Used well, it is the most practical throughput signal an agency can get.

The point is not to rank linguists. It is to understand capacity. If you know that your average technical post-editor moves 4,500 edited words per day on a specific domain, and a client asks for 50,000 words in five days, you instantly know whether the deadline is realistic and how many linguists you need. Without that baseline, project managers promise deadlines based on feel and then scramble to make them work.
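The staffing arithmetic from that example is worth making explicit. This is just the calculation described above, written out; the 4,500 words/day baseline is the hypothetical figure from the text.

```python
import math

def linguists_needed(total_words, deadline_days, words_per_day):
    """Linguists required to hit a deadline at a known daily throughput."""
    daily_demand = total_words / deadline_days
    return math.ceil(daily_demand / words_per_day)

# 50,000 words in five days at 4,500 edited words per linguist per day:
# 10,000 words/day of demand against 4,500 of capacity means three people.
print(linguists_needed(50_000, 5, 4_500))  # → 3
```

The value of the baseline is that this answer exists before the deadline is promised, not after.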

We also use this metric to catch workflow problems. If output per translator drops sharply on one project type, something in the preparation step is probably wrong. Bad source files. Missing glossary. Segment-level inconsistency that forces constant context switching. The drop is often the first visible signal that something earlier in the pipeline needs attention.

Two caveats. First, word counts mean very different things across domains. Legal translation at 2,000 words a day is not the same kind of work as marketing transcreation at 600 words a day, and comparing them distorts the picture. We recommend tracking this metric separately per domain or service line. Second, output numbers must never be used punitively. The moment translators believe their output is being ranked against peers, they will game the number, quality will drop, and you will lose the signal.

For agencies that want a structured way to think about operations overall, our translation agency operations guide goes deeper into how throughput fits inside a broader operations stack.

Quality KPIs that connect to client decisions

Quality metrics are the category where most agency KPI programs quietly fail. Everyone wants to measure quality. Very few agencies actually track it in a way that changes decisions.

The reason is that useful translation quality KPIs require a consistent QA process behind them. If every project manager uses a different review template and error categorization, the quality data is not comparable across projects. Before tracking any quality metric, the agency needs a shared error typology. Most agencies we work with use a simplified MQM-style framework with four or five categories (accuracy, fluency, terminology, style, formatting), each with three severity levels.

Once that structure exists, two quality numbers carry most of the load. First, errors per 1,000 words, tracked by severity. This is what shows up on the QA report and what editors actually look at. Second, client revision rate: the share of delivered projects a client sends back for revision after delivery. That second one is where quality meets business outcomes. A low errors-per-1000-words number means very little if clients are still sending work back. And a quiet client is not always a happy client. Some just stop sending work.

For agencies that want more depth on measurement approaches, our practical guide to measuring translation quality covers how to design a QA process that feeds into these metrics without slowing delivery.

Quality targets should be set per service line. Human translation for a regulated-industry client will (and should) aim for a lower errors-per-1000-words figure than a light MTPE project for an internal communications client. Holding them to the same standard penalizes the project type and skews staffing decisions in ways that hurt profitability later.

Client retention and revenue concentration

Financial and operational metrics describe how efficiently you deliver work. Retention and concentration describe whether you still have a business in eighteen months.

Two numbers matter here. Gross revenue retention tracks what percentage of last year's client revenue is still with you this year. A healthy agency sits between roughly 85% and 100%. Some churn is normal, and often healthy, when you are dropping unprofitable accounts on purpose. Anything below 75% points to a retention problem that will eventually outrun even a strong sales engine.

Revenue concentration is the share of total revenue coming from your top clients. We usually look at the top one, top three, and top five. An agency where the top client represents more than 30% of revenue is one contract change away from a very bad quarter. The GALA Worldwide LSP study from several years back flagged this as one of the most common vulnerabilities in mid-sized agencies, and we have not seen anything since that contradicts it.

Retention and concentration should always be reviewed together. An agency with 95% retention and 45% revenue concentration on a single client looks stable if you only look at the retention number. The concentration number tells you the real story.
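Both numbers are a few lines to compute from a revenue-by-client breakdown. A sketch, with illustrative client names and figures: the retention calculation here caps each client at last year's figure, so expansion revenue does not mask churn, which is the usual gross-retention convention.

```python
def gross_retention(last_year, this_year):
    """Share of last year's client revenue still billing this year.

    Each client is capped at last year's figure so growth from one
    account cannot hide the loss of another.
    """
    retained = sum(min(last_year[c], this_year.get(c, 0)) for c in last_year)
    return round(retained / sum(last_year.values()), 3)

def concentration(revenue_by_client, top_n):
    """Share of total revenue held by the top N clients."""
    top = sorted(revenue_by_client.values(), reverse=True)[:top_n]
    return round(sum(top) / sum(revenue_by_client.values()), 3)

last_year = {"Acme": 400_000, "Birch": 150_000, "Cobalt": 90_000}
this_year = {"Acme": 420_000, "Birch": 110_000, "Delta": 60_000}

print(gross_retention(last_year, this_year))   # → 0.797
print(concentration(this_year, top_n=1))       # → 0.712
```

This illustrative agency retains roughly 80% of last year's revenue, which looks acceptable, while a single client carries 71% of this year's revenue, which is exactly the pairing the paragraph above warns about.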

One agency we know only realized how concentrated they were when their largest client went through an acquisition and froze translation spend for six months. They had been telling themselves they were a growing business. The retention line said yes. The concentration line, if they had been checking it, would have said they were a single-client consultancy with extra steps.

This is also where lead and pipeline metrics belong. Monthly qualified leads, proposal win rate, and average sales cycle all feed into retention and concentration over time. Agencies that only track delivery metrics tend to find out about pipeline problems too late.

A minimum viable KPI dashboard you can build this week

Most agencies we talk to do not need a new analytics tool. They need fewer numbers, tracked more reliably, reviewed on a regular cadence by people who can actually act on them.

A monthly KPI dashboard for a translation agency of under 30 people can fit on one page. Eight to ten metrics, refreshed monthly, reviewed in a standing meeting.

A version we recommend starting with:

  • Gross margin per project, with breakdowns by top five clients and top three service lines.
  • Revenue per translator and revenue per project manager, trended over six months.
  • Words delivered per translator per day, by service line, trended month over month.
  • Errors per 1,000 words, severity-weighted, by service line.
  • Client revision rate, with a flag for any client above 15%.
  • Gross revenue retention, trailing twelve months.
  • Revenue concentration for top one, top three, top five.
  • Qualified leads and proposal win rate, trended over the last six months.
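To make one of these concrete: the client revision-rate flag from the list above reduces to a threshold check. A minimal sketch, with hypothetical client names and rates; the 15% threshold is the one suggested in the list.

```python
# Flag threshold from the dashboard list: any client above a 15% revision rate.
REVISION_FLAG_THRESHOLD = 0.15

def flag_revision_rates(revision_rates):
    """Clients whose share of delivered projects sent back exceeds the threshold."""
    return sorted(
        client for client, rate in revision_rates.items()
        if rate > REVISION_FLAG_THRESHOLD
    )

rates = {"Acme": 0.04, "Birch": 0.22, "Cobalt": 0.17}
print(flag_revision_rates(rates))  # → ['Birch', 'Cobalt']
```

The other rows of the dashboard follow the same shape: one small, boring calculation per metric, refreshed monthly from the same export.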

Everything else (employee satisfaction, utilization by language pair, turnaround time by project type) can live in supporting reports reviewed quarterly.

The discipline is not in the dashboard. It is in the standing meeting. If a number moves the wrong way two periods in a row, it needs a named owner and a specific action. A metric that nobody feels responsible for is just decoration.

For most agencies, the first 90 days of running this kind of dashboard tell them more about the business than the previous two years of financial reporting did. Not because the data is new. Because the decisions the data forces are new. Start narrow, keep the list short, and add metrics only when a specific decision actually requires them.
