How to run a pre-delivery QA check without slowing down your team
Pre-delivery translation QA does not have to eat your deadline buffer. This guide lays out a structured, risk-tiered approach that agencies can run consistently and fast.

Pre-delivery translation QA is where projects either hold together or fall apart, and it happens under the worst conditions: at the end of the project, when the deadline is closest and the team is most stretched. In our work with translation agencies, the teams that deliver consistently clean work are not necessarily doing more QA than their competitors. They are doing QA that has a shape. The review starts somewhere specific, follows a defined sequence, and stops at a defined point. This guide is about how to build that shape without adding hours to every delivery cycle.
Why most pre-delivery QA processes are slower than they need to be
The core inefficiency in most pre-delivery translation QA is that the review is open-ended. A reviewer sits down with source and target documents and reads until they feel confident or run out of time. There is no defined scope, no tiered priority, and no clear stopping condition.
The problem with open-ended review is that it scales with reviewer anxiety rather than with document risk. On a low-stakes marketing summary, a nervous reviewer might spend two hours checking every sentence. On a high-stakes regulatory filing, the same reviewer might spend thirty minutes because the deadline is close. Neither outcome reflects the actual risk of the document.
We have also seen the inverse: review scope that is too narrow by default. Teams pressed for time settle into a pattern of checking the first and last page of every document, which catches opening and closing errors but leaves the middle, often where the densest technical content lives, largely unreviewed.
The fix is not to do more review overall. It is to do calibrated review: more attention where document risk is highest, less where it is not. That requires a risk-tiered model and a defined checklist for each tier. Getting to that structure is the subject of this guide.
What pre-delivery translation QA should cover and what it should not
Pre-delivery QA is not a full re-translation check. It is a structured pass over a completed translation to confirm that the output is ready for delivery. Getting clear on what it covers, and what it does not, is the first step toward making it faster.
Pre-delivery translation QA should cover: accuracy of numeric content (dates, quantities, measurements, reference codes), terminology consistency against the approved glossary, absence of untranslated source segments, formatting integrity (headers, tables, lists, footnotes), and overall register appropriateness for the target audience. For AI-translated documents, it should also include a spot-check for hallucination-type errors in high-risk sections.
Pre-delivery QA should not be: a full post-editing pass, a style rewrite, a client preference review, or a catch-all for anything that does not feel quite right. Those are legitimate tasks, but they belong in the revision stage. Conflating them with QA is one of the main reasons pre-delivery review expands to fill available time without a corresponding improvement in deliverable quality.
Defining the scope explicitly, ideally in a written checklist that the reviewing team uses consistently, is what separates a structured QA pass from an open-ended revision session.
A risk-tiered model for pre-delivery review
Not every document needs the same pre-delivery QA check. A risk-tiered model lets you calibrate review effort proportionately, which speeds up low-risk deliveries without reducing rigor on high-risk ones.
Tier 1 (high risk): Legal contracts, medical or pharmaceutical content, regulatory filings, financial disclosures, and any document where an error carries legal, health, or financial consequences. These get a full pre-delivery QA pass: numeric verification, glossary check, entity consistency check, spot-check of all definitional and exclusion clauses, and a read of the opening and closing sections for register accuracy.
Tier 2 (medium risk): Technical documentation, product manuals, B2B marketing content, corporate communications. These get a targeted QA pass: numeric verification, glossary spot-check for domain terms, confirmation that all sections are translated, and a review of sections with the highest technical density.
Tier 3 (lower risk): Internal communications, general informational content, marketing summaries with no contractual or regulatory content. These get a focused check: untranslated segment scan, numeric spot-check, and a quick read of the introduction and conclusion.
The tiers do not change the type of errors you are looking for. They change how much of the document you review to find them. A Tier 1 document warrants 10 to 15 percent of its translation time in QA. A Tier 3 document warrants 2 to 5 percent.
This classification should happen at project intake, not at delivery. If a reviewer has to determine which tier applies when they sit down to do QA, you have already introduced unnecessary delay.
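To make the intake-time classification concrete, here is a minimal sketch of the tier model as data. The document types, checks, and QA-time percentages mirror the tiers above; the structure itself and the names (TIERS, classify_tier) are illustrative assumptions, not a prescribed schema.

```python
# Minimal sketch of the risk-tier model as data. Document types, checks, and
# QA-time percentages follow the tiers described above; the structure and
# names are illustrative, not a prescribed schema.

TIERS = {
    1: {  # high risk: errors carry legal, health, or financial consequences
        "doc_types": {"legal_contract", "medical", "regulatory_filing", "financial_disclosure"},
        "qa_time_pct": (10, 15),
        "checks": ["numeric_verification", "glossary_check", "entity_consistency",
                   "definitional_clause_spot_check", "register_read"],
    },
    2: {  # medium risk: technical and B2B content
        "doc_types": {"technical_doc", "product_manual", "b2b_marketing", "corporate_comms"},
        "qa_time_pct": (5, 8),
        "checks": ["numeric_verification", "glossary_spot_check",
                   "completeness_check", "dense_section_review"],
    },
    3: {  # lower risk: internal and general informational content
        "doc_types": {"internal_comms", "informational", "marketing_summary"},
        "qa_time_pct": (2, 5),
        "checks": ["untranslated_segment_scan", "numeric_spot_check", "intro_conclusion_read"],
    },
}

def classify_tier(doc_type: str) -> int:
    """Assign the QA tier at intake; unknown types default to the strictest tier."""
    for tier, rules in TIERS.items():
        if doc_type in rules["doc_types"]:
            return tier
    return 1
```

Encoding the tiers this way means the project manager assigns the tier once at intake, and the reviewer inherits the checklist and time budget along with it instead of making a judgment call at delivery.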
QA tools that speed up the process and what they miss
Automated QA tools, whether built into your CAT tool or run as a standalone check, are the most reliable way to accelerate the mechanical parts of pre-delivery QA.
Smartcat's built-in QA checks flag glossary violations, missing translations, number discrepancies, tag errors, and formatting issues automatically. Running these before any human review step removes the mechanical pass from the human reviewer's workload. There is no good reason a human should spend attention confirming that all segments are translated or that numeric formatting is consistent. Tools handle both faster and more reliably than manual checking.
What automated tools do not catch is semantic accuracy: whether the translation means the same thing as the source. A QA report with no flags tells you the structure is intact and defined rules were satisfied. It says nothing about whether a sentence was incorrectly translated while remaining structurally valid.
The right sequence: run automated QA first, resolve all flags, then start human review. Human reviewers should not be spending attention on issues the tool should have caught. If they are, either the tool is not configured correctly or it is not being run before review starts. Our complete guide to translation QA for agencies and freelancers covers what automated QA addresses and where its limits are.
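As a minimal sketch of that gate, assuming your QA tool can export its flags in some structured form (the AutomatedFlag shape and field names below are hypothetical stand-ins for whatever your CAT tool actually produces):

```python
from dataclasses import dataclass

@dataclass
class AutomatedFlag:
    segment_id: int
    rule: str       # e.g. "missing_translation", "number_mismatch", "tag_error"
    resolved: bool

def ready_for_human_review(flags: list[AutomatedFlag]) -> bool:
    """Human review starts only when every automated flag has been resolved."""
    unresolved = [f for f in flags if not f.resolved]
    for f in unresolved:
        print(f"segment {f.segment_id}: unresolved {f.rule}")
    return not unresolved
```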
For teams running AI translation on Smartcat bilingual DOCX files, SnapIntel includes a QA report with each completed translation job, giving reviewers a document-level quality signal to start from rather than auditing cold.
How to run a fast numeric and entity check
Numeric verification is the highest-return manual check in pre-delivery QA. It is fast, unambiguous, and catches errors with serious consequences.
The method: extract all numeric values from the source document and confirm their presence and correctness in the target. For short documents this is a manual two-column comparison; for longer documents, a short script that extracts the numeric tokens from each segment and compares the two sets handles it in seconds (sketched below). The check covers dates, measurements, currency values, percentages, version numbers, and reference codes. When formats differ by language convention, such as decimal separators or date ordering, confirm the conversion rule is correct, not just that a number appears.
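A minimal sketch of that script in Python. The regex and the comma-to-dot normalization are deliberately naive placeholders: real checks need locale-aware handling of thousands separators and date ordering, which is exactly the conversion-rule caveat above.

```python
import re
from collections import Counter

NUM_TOKEN = re.compile(r"\d+(?:[.,]\d+)*")

def numeric_tokens(text: str, decimal_sep: str = ".") -> Counter:
    """Collect numeric tokens, normalized to dot-decimal form."""
    tokens = NUM_TOKEN.findall(text)
    if decimal_sep == ",":
        # naive: assumes comma is only ever a decimal separator
        tokens = [t.replace(",", ".") for t in tokens]
    return Counter(tokens)

def numeric_diff(source: str, target: str, target_decimal_sep: str = ".") -> tuple[Counter, Counter]:
    """Return (numbers missing from target, numbers in target but not in source)."""
    src = numeric_tokens(source)
    tgt = numeric_tokens(target, decimal_sep=target_decimal_sep)
    return src - tgt, tgt - src

# Example: a German target using comma decimals
# numeric_diff("Dose: 1.5 mg, due 12.05.2024.",
#              "Dosis: 1,5 mg, fällig am 12.05.2024.",
#              target_decimal_sep=",")  # -> (Counter(), Counter()), i.e. clean
```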
For named entities, scan the target document for all proper nouns and confirm they match the source's approved forms. Product names, company names, regulatory body names, and personal names all need to be consistent. In AI-translated documents particularly, entity inconsistency is a common failure mode: the model transliterates a name one way in the second paragraph and a different way in the seventh. Automated QA does not catch this because there is no single rule to validate against.
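A cheap heuristic can still surface candidates for a human to judge: collect capitalized tokens and flag pairs that are similar but not identical. The sketch below uses a plain string-similarity ratio; the regex, the 0.85 threshold, and the Latin-script assumption are all illustrative, and it will miss lowercase brand names and non-Latin scripts.

```python
import re
from difflib import SequenceMatcher

def entity_variant_suspects(text: str, threshold: float = 0.85) -> list[tuple[str, str]]:
    """Flag capitalized tokens that look like inconsistent spellings of one entity."""
    candidates = sorted(set(re.findall(r"\b[A-Z][A-Za-z\u00C0-\u024F-]{2,}\b", text)))
    suspects = []
    for i, a in enumerate(candidates):
        for b in candidates[i + 1:]:
            ratio = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if a.lower() != b.lower() and ratio >= threshold:
                suspects.append((a, b))  # e.g. ("Zelensky", "Zelenskyy")
    return suspects
```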
Glossary verification follows: for documents with an approved termbase, confirm that domain-critical terms appear in their approved target forms. You do not need to verify every glossary entry. Spot-check the ten to fifteen most critical terms per document. If those are consistent, broader glossary compliance is generally solid.
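A document-level sketch of that spot-check. The CRITICAL_TERMS entries are made-up stand-ins for a termbase export, and real checks usually run segment-level inside the CAT tool; this version only confirms that each approved target form appears somewhere when its source term does.

```python
CRITICAL_TERMS = {
    # source term -> approved target form (illustrative entries, not a real termbase)
    "service level agreement": "Service-Level-Vereinbarung",
    "load balancer": "Lastverteiler",
}

def glossary_violations(source: str, target: str, terms: dict[str, str]) -> list[str]:
    """List source terms whose approved target form never appears in the target."""
    violations = []
    for src_term, tgt_term in terms.items():
        if src_term.lower() in source.lower() and tgt_term.lower() not in target.lower():
            violations.append(f"'{src_term}' in source, '{tgt_term}' missing from target")
    return violations
```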
These three checks together (numerics, entities, and glossary spot-check) take fifteen to thirty minutes for a 5,000-word document. They cover the majority of high-consequence errors without requiring a full segment-level review.
Building a pre-delivery QA checklist your team will actually use
A QA checklist only works if the team uses it consistently, and consistency requires that the checklist itself is short and specific enough to be actionable under deadline pressure.
Checklists that fail in practice tend to have one of two problems: they are too long (so reviewers mark items complete without doing the work), or they are too vague ("check for accuracy" without specifying what that means operationally). An effective pre-delivery QA checklist has specific, observable steps that a reviewer can definitively complete or not.
A working Tier 2 checklist might look like this: automated QA clean with no unresolved flags; numeric verification done with source numbers listed and targets confirmed; glossary spot-check done with ten domain terms verified; entity scan done with proper nouns consistent across the document; untranslated segment check done with zero remaining; opening and closing sections read for register. Six items, each specific enough that a reviewer either completed it or did not.
Build the checklist once per document tier, attach it to your project intake template, and require it to be returned with delivery. The checklist is not a formality. It is the stopping condition for QA. When the checklist is complete, QA is done.
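One way to make that stopping condition mechanical is to keep the checklist as data and gate delivery on its completion. A minimal sketch using the Tier 2 items from above; the representation is illustrative, not a prescribed format.

```python
TIER2_CHECKLIST = [
    "automated QA clean, no unresolved flags",
    "numeric verification done, source numbers confirmed in target",
    "glossary spot-check done, ten domain terms verified",
    "entity scan done, proper nouns consistent",
    "untranslated segment check done, zero remaining",
    "opening and closing sections read for register",
]

def qa_complete(completed: set[str], checklist: list[str] = TIER2_CHECKLIST) -> bool:
    """True exactly when every checklist item is done: the QA stopping condition."""
    remaining = [item for item in checklist if item not in completed]
    for item in remaining:
        print(f"open: {item}")
    return not remaining
```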
For teams tracking translation quality metrics over time, the checklist also produces data: which checks find the most errors, which document types need the most review time, and where the QA process has gaps.
Limiting scope so reviews actually end on time
The last part of structured pre-delivery QA, and the most difficult in practice, is accepting that the checklist defines the scope: when the checklist is done, the QA pass is done.
This sounds obvious. In practice, reviewers often continue past the defined scope because something else looks questionable or because there is anxiety about delivery. That is understandable, but it defeats the purpose of the checklist. If the same style or phrasing issues recur across projects, they belong in a style guide update or a translator feedback session; pre-delivery QA is not the stage to catch issues that should have been resolved upstream.
Set a time budget for QA at project intake, based on document tier and word count. Tier 1: 10 to 15 percent of translation time. Tier 2: 5 to 8 percent. Tier 3: 2 to 5 percent. When that budget is spent and the checklist is complete, delivery proceeds. If the checklist surfaces issues requiring more work, that is a revision task with its own time estimate, not an expansion of the QA pass.
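As a sketch of that intake-time calculation, using the percentage bands above; taking the midpoint of each band is an illustrative default, not a rule.

```python
QA_TIME_PCT = {1: (10, 15), 2: (5, 8), 3: (2, 5)}  # tier -> percent of translation time

def qa_budget_minutes(tier: int, translation_minutes: float) -> float:
    """QA time budget set at intake, as a share of translation time."""
    low, high = QA_TIME_PCT[tier]
    return translation_minutes * (low + high) / 2 / 100

# Example: a Tier 2 job translated in 10 hours
# qa_budget_minutes(2, 600)  # -> 39.0 minutes of QA
```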
The goal of structured pre-delivery QA is not to eliminate every possible error before delivery. It is to catch the errors that matter most, consistently, without unpredictable time overruns. A QA process that reliably delivers that outcome is worth more to a translation team than one that aspires to perfection and misses deadlines.