Our focus on linguistic quality assurance combines multiple quality metrics into a single system that measures all quality assurance processes, stores the results for future reference, and allows us to design unique workflows that deliver error-free translations.
The overarching principle behind QaS is the notion of common responsibility. At Argos Multilingual, it isn’t only the job of the quality assurance department to maintain appropriate levels of quality – it’s the obligation of every stakeholder involved in the process.
We consider everyone involved in a project to be a member of the quality assurance department, and we expect all contributors to check their work for errors and implement corrections before the work proceeds to the next stage. The results? Fewer errors, faster delivery, and satisfied clients.
Linguistic Quality Assurance
Another factor that sets us apart is our strict focus on Linguistic Quality Assurance, which combines multiple quality metrics into a single quality management system (QMS). Our QMS measures all quality assurance processes, storing the results for future reference and providing us with data for feedback and quality control.
Our quality assurance managers can examine areas that need improvement, while the system shows the results of past changes. Within this systematic approach, we have also incorporated Six Sigma principles and methodologies that analyze and categorize errors to determine how far error rates deviate from our targets and which corrective actions need to be taken.
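A Six Sigma-style check of this kind can be sketched as a simple control-limit test: establish a baseline error rate from past work, then flag any batch that deviates too far above it. The error rates, the per-1,000-words unit, and the three-sigma limit below are illustrative assumptions, not our actual process data.

```python
import statistics

# Hypothetical Six Sigma-style control check: establish a baseline error rate
# (errors per 1,000 words) from past batches, then flag any new batch whose
# rate exceeds the upper control limit (mean + 3 standard deviations).
# All figures are illustrative, not real project data.

def control_limit(baseline_rates):
    mean = statistics.mean(baseline_rates)
    sigma = statistics.stdev(baseline_rates)
    return mean + 3 * sigma

def flag_outliers(baseline_rates, new_rates):
    """Return the new batches whose error rate breaches the control limit."""
    limit = control_limit(baseline_rates)
    return [rate for rate in new_rates if rate > limit]

baseline = [1.2, 0.8, 1.0, 1.1, 0.9]   # historical errors per 1,000 words
print(flag_outliers(baseline, [1.3, 2.0]))  # → [2.0]
```

Deviations flagged this way point reviewers at the batches that most need a corrective action, rather than treating every fluctuation as a problem.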
Combining all quality measures under one QMS means that we can design unique workflows that enable us to deliver error-free translations every time:
Hybrid Quality Model
By following a combined process that incorporates Six Sigma principles, multiple ISO standards, and translation metrics such as SAE J2450 and TAUS DQF-MQM, we make sure that the translation projects we complete meet your quality expectations as well as the strictest international quality standards currently in place.
LQA Process Controls
Our LQA process controls are performed by a team of elite linguistic specialists to ensure consistency. A 100% sample check verifies the quality of translated projects, measures linguist performance, and identifies areas for improvement and rework. The outcome of the LQA check determines whether a project passes or fails. If the translation is shown to be error-free, it is flagged as “passing” and the project is sent to the next step in the translation process.
If the LQA check uncovers translation errors, the project is marked as “failing” and both the translation and the LQA report are sent back to the translator for rework. What constitutes a “failing” project varies – the score calculated for an error considers the level of severity assigned, and all scores are automatically converted into an overall weighted score during review. We have set a rigorous passing threshold within our organization, but we’re open to setting different acceptable thresholds with client quality teams.
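The severity-weighted scoring described above can be sketched as follows. The category names, severity weights, per-1,000-words normalization, and passing threshold are all hypothetical placeholders, not our actual values:

```python
# Illustrative LQA scoring sketch: each error carries a penalty based on its
# assigned severity, penalties are normalized per 1,000 words, and the
# resulting weighted score is compared against a passing threshold.
# All weights and the threshold are hypothetical.

SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}
PASS_THRESHOLD = 99.0  # hypothetical passing score

def lqa_score(errors, word_count):
    """errors: list of (category, severity) tuples found during review."""
    penalty = sum(SEVERITY_WEIGHTS[severity] for _, severity in errors)
    # Normalize the penalty to a per-1,000-word basis, then subtract from 100.
    return 100.0 - (penalty / word_count) * 1000

def verdict(errors, word_count):
    score = lqa_score(errors, word_count)
    return ("pass" if score >= PASS_THRESHOLD else "fail"), round(score, 2)

result, score = verdict([("terminology", "minor"), ("accuracy", "major")], 12000)
print(result, score)  # → pass 99.5
```

Keeping the threshold as a single configurable value is what makes it easy to agree on different acceptance levels with individual client quality teams.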
QA Checkers
At their most basic, QA Checkers are plug-ins that we’ve customized to extend the range of checks in automatic QA tools. When we use the available tools in default mode, we can catch incorrect numerical values, inconsistencies within a bilingual document, and non-adherence to approved glossaries. This would be enough for most companies in our industry, but we’ve decided to push the envelope.
Because some errors and anomalies still cannot be detected in default mode, we’ve created client- and language-specific rule sets for the automated QA tools. Our ability to enhance QA tools individually for each client has proven effective and efficient, eliminating errors that are often missed by both the human eye and off-the-shelf QA tools.
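A custom rule set of this kind can be thought of as a list of checks run over each source/target segment pair. The two rules below are deliberately simplified, hypothetical examples of the default-mode checks mentioned above (number consistency and glossary adherence), not our production rules:

```python
import re

# Hypothetical QA-checker rules: each rule is a predicate over a
# (source, target) segment pair. Real rule sets are client- and
# language-specific; these are toy illustrations.

def numbers_match(source, target):
    """True when the numeric values in source and target agree."""
    pattern = r"\d+(?:[.,]\d+)?"
    return sorted(re.findall(pattern, source)) == sorted(re.findall(pattern, target))

def glossary_respected(source, target, glossary):
    """True unless a glossary source term appears in the source while its
    approved target equivalent is missing from the target."""
    for src_term, tgt_term in glossary.items():
        if src_term.lower() in source.lower() and tgt_term.lower() not in target.lower():
            return False
    return True

glossary = {"quality assurance": "Qualitätssicherung"}  # illustrative entry
source = "Run 2 quality assurance checks."
target = "Führen Sie 2 QS-Prüfungen durch."

issues = []
if not numbers_match(source, target):
    issues.append("number mismatch")
if not glossary_respected(source, target, glossary):
    issues.append("glossary term not used")
print(issues)  # → ['glossary term not used']
```

Because each rule is an independent predicate, adding a client-specific check means appending one function rather than reworking the whole pipeline.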
Neural Machine Translation Quality Assurance
Our approach to neural machine translation (NMT) is all about making sure that it’s right for a client’s content. To do that, we first run a pilot project where we test both the content and the engine to determine the quality of the output generated by the engine and the amount of post-editing work needed.
Our first step in a pilot project is to select the appropriate NMT engine. We then run the sample content through multiple MT engines that provide raw output for tested language pairs. If we’re provided with enough bilingual content, we can also build a customized MT engine.
Automated checks then allow us to eliminate the engines that provide low quality for specific content and language combinations. Once we shorten the list of potential engines, we run human post-editing tests to evaluate the productivity of post-editing for predefined quality levels.
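The engine-elimination step above can be sketched as scoring each engine's raw output against reference translations with an automatic metric and keeping only the engines above a cutoff. The crude character-trigram similarity below is a stand-in for real MT metrics such as BLEU or chrF, and the engine names, outputs, and cutoff are all hypothetical:

```python
# Toy engine-shortlisting sketch. The similarity function is a rough
# chrF-like stand-in for real MT evaluation metrics; engine names,
# outputs, and the cutoff value are hypothetical.

def char_trigram_f1(hypothesis, reference):
    """F1 score over the sets of character trigrams in the two strings."""
    def trigrams(s):
        return {s[i:i + 3] for i in range(len(s) - 2)}
    hyp, ref = trigrams(hypothesis), trigrams(reference)
    if not hyp or not ref:
        return 0.0
    overlap = len(hyp & ref)
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(hyp), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

def shortlist(engine_outputs, reference, cutoff=0.5):
    """Keep only the engines whose raw output scores above the cutoff."""
    return [name for name, out in engine_outputs.items()
            if char_trigram_f1(out, reference) >= cutoff]

outputs = {
    "engine_a": "The system stores results for future reference.",
    "engine_b": "Totally unrelated machine output xyz.",
}
reference = "The system stores the results for future reference."
print(shortlist(outputs, reference))  # → ['engine_a']
```

Only the engines that survive this automated filter proceed to the human post-editing tests, which keeps the expensive human evaluation focused on a short list of viable candidates.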