Translation quality

Significantly better than generic models

Unlimited number of users

Easy integration into editorial systems

High availability of services

Data Protection & Data Security

Adaptable to your company

Quality: 70% better

Translation quality

Significantly better than generic models

We tested:

How powerful is a custom-trained model compared to a generic model from DeepL?

Small spoiler – 70% better translations.


Test case (DE-EN):

The performance of the model trained by wonk.ai was tested against DeepL using the customer’s data.

In the customer's domain and the customer-specific language, the model trained by wonk.ai clearly outperformed DeepL.

2,500 sentences from the customer domain were scored.

365,691 sentences obtained for training

Data Sources: Customer Websites & TMS Export

BERTScore used to measure semantic similarity to the customer's reference translations (a sketch of this scoring approach follows the results below)

Better variant DeepL – number and percentage of sentences with the higher BERT score

Better variant wonk.ai – number and percentage of sentences with the higher BERT score

Same rating – number and percentage of sentences with the same BERT score
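To illustrate how such a sentence-level comparison can be scored, here is a minimal sketch using the open-source bert-score package. The sentences and language code are placeholders for illustration, not the customer data from this test case.

```python
# Minimal sketch: comparing two MT outputs against reference translations
# with BERTScore (pip install bert-score). All sentences below are
# placeholders, not the customer data from the test case.
from bert_score import score

references = ["The pump must be vented before commissioning."]   # customer reference translations
deepl_out  = ["The pump has to be bled before start-up."]        # generic model output
custom_out = ["The pump must be vented before commissioning."]   # custom-trained model output

# F1 is the usual headline BERTScore; higher means closer to the reference.
_, _, f1_deepl  = score(deepl_out,  references, lang="en")
_, _, f1_custom = score(custom_out, references, lang="en")

# Count, sentence by sentence, which system is closer to the reference.
better_custom = int((f1_custom > f1_deepl).sum())
better_deepl  = int((f1_deepl > f1_custom).sum())
same          = len(references) - better_custom - better_deepl
print(f"custom better: {better_custom}, DeepL better: {better_deepl}, same: {same}")
```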

We have mathematical models to assess translation quality.

But in the end, translations are communication between your business and the world – so let your team decide.

You decide on the quality.

In your own evaluation environment.

After the model training, all your stakeholders and translation managers get their own access to your separate evaluation environment. There they can assess translation quality without being influenced by the source of each translation.

In this way, you as the project manager receive the highest level of independent evaluation and, as a consequence, a very high level of acceptance of the trained models.

And if the evaluation is not positive?
This can happen – usually when the base models are weak and there is little training data. Then everyone involved knows from the start that the models still need to improve.
With our continuous process of collecting and enriching training data, you can gather enough data over time to train your own model successfully.

Here’s how it works  – Phase Plan Training & Operations

From data to translation

01

Data exchange

wonk.ai receives the list of languages to be translated and access to the data sources available for training.

02

Data Checkup

wonk.ai checks the quality of the language data and how many language pairs can be covered, and provides feedback on feasibility.

03

Specifying the testers

The client determines which stakeholders and experts in the company will review the language models and evaluate the results – compared to previous translations or alternative solutions.

04

Training of language models

wonk.ai extracts the language data, validates and cleans the training set, and trains the language models with mathematical evaluation.

05

Evaluation of the results

The customer’s testers evaluate the language models within their own assessment environment and provide feedback on individual results.

06

Commissioning

Once the language models have been initially approved, they can be put into operation directly and used via the customer's own web environment. The trained models can also be integrated into third-party systems via the API, making them usable across the entire system landscape.
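As a rough illustration of such an integration, here is a minimal sketch of calling a trained model over HTTP from a third-party system. The endpoint URL, authentication scheme, and request fields are assumptions for illustration only and do not describe wonk.ai's actual API.

```python
# Hypothetical sketch of calling a trained translation model over an HTTP API
# from a third-party system. The endpoint URL, authentication header, and
# field names are illustrative assumptions, not wonk.ai's documented API.
import requests

API_URL = "https://translate.example.com/api/v1/translate"  # placeholder endpoint
API_KEY = "your-api-key"                                    # placeholder credential

payload = {
    "source_lang": "de",
    "target_lang": "en",
    "text": "Die Pumpe muss vor der Inbetriebnahme entlüftet werden.",
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())  # e.g. {"translation": "..."} in this sketch
```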

The quality fits…

That’s exciting.