Microsoft Translator Hub User Guide
October 2016
Microsoft Translator Hub allows you to customize a language pair for a specific domain (area of
terminology and style) or to build automatic translation for a language that is not yet covered by
Microsoft Translator. You can access the customized translation systems you created using the
Microsoft Translator API, and through applications that make use of the Microsoft Translator API,
such as most of the leading translation memory providers.
Microsoft Translator is a statistical machine translation system, which learns how to translate from
previously translated documents. The translation logic it has learnt from such previously translated
documents is stored in a so-called statistical model. Microsoft Translator comes with pre-built models
for more than 100 language pairs, which are in fact the same ones used for Bing Translator. When you
train a custom translation system you build an additional statistical model just from the documents you
uploaded and included in a training. Microsoft Translator allows you to use the models coming from
Microsoft and your own models in combination, giving you much wider coverage than you could
achieve with your documents alone, and much better specialization to your area of work than the
generic Microsoft models would give you.
This User Guide will now take you through the step-by-step process of building your custom translation
system using the Microsoft Translator Hub, referred to as the Hub from here on.
1.1 Audience
This guide will benefit any person who is interested in building a custom translation system using the
Hub. A deeper background in machine translation is not essential to use the Hub.
2.1 Workspace
Microsoft Translator Hub provides users with a workspace to enable creation of customized translation
systems.
A workspace is a work area for composing and building your custom translation system, either alone or
with a community of collaborators who you can invite into your workspace. Each workspace is separate
from every other workspace; nothing connects them. You may create or become a member
of multiple workspaces, but no documents you upload are shared between workspaces, and your
management of collaborators is unique per workspace. Each workspace is identified by a unique name,
the Workspace Name, and has a unique ID known as the Workspace ID.
You can create a workspace simply by visiting the Microsoft Translator Hub web site. The person
creating the workspace is the owner of the workspace. An Owner can then invite more people to be
members of this workspace instance and can designate them either as additional Owners or as Reviewers.
The permissions associated with each role are further discussed in Section 2.5.
Documents in one workspace are not visible to users belonging to another workspace.
For example, data in Contoso's workspace is not visible to users belonging to the Hmong community's
workspace or Fabrikam's workspace. This design isolates each workspace's data from the others, so your
data is always safe.
If you are a member of multiple workspaces, you can choose the workspace to work with in a dropdown
at the top right of the screen, next to your name.
The workspace ID is the central component of the category ID, used when accessing Hub systems
programmatically via the Microsoft Translator API. See the Hub API guide for details. A category ID consists
of the workspace ID, the project label, and the category code (format: Workspace ID-Project Label Category Code).
The workspace ID helps you to connect Adobe Experience Manager to your custom trained translation
systems.
2.3 Project
Within a workspace instance, you can create a number of translation projects for translating from one
language to another. A project consists of a series of trainings, with their associated training documents.
The language to translate from is called the Source language and the language to translate to is called
the Target language. If you are building a domain specific translation system, the Hub allows you to
associate a category like Sports or Medicine with your project.
The Category
The category identifies the domain (the general area of terminology) you want to use for your project.
Please choose a category that is most appropriate and relevant to your type of documents. In some
cases, your choice of the category directly influences the behavior of the Hub:
- If you choose Technology and choose to use Microsoft models, the Hub will use a different set of underlying models than for all other categories.
- If you choose Speech, the Hub will use models that are optimized for processing the output of speech recognition systems.
- Any other category selection uses the general Microsoft models, and is used as an identifier in your category ID.
In the same workspace, you may create projects for the same language pair in different categories. The
Hub prevents creating a duplicate project with the same language pair and category. Applying a label to
your project lets you work around this restriction. We recommend NOT using a label unless you
are building translation systems for others, for instance for multiple clients.
Trusted documents
http://office.microsoft.com/hi-in/excel-help/HA010354384.aspx?CTT=3
http://office.microsoft.com/zh-cn/publisher-help/HA010354384.aspx?redir=0
http://office.microsoft.com/en-us/publisher-help/trusted-documents-HA010354384.aspx?redir=0
2. Monolingual documents:
Monolingual documents in the target language help a translation system decide which of the
alternative translations under consideration is the most appropriate in context, the most fluent, and
inflected the right way. For the target language documents to have an effect, you also
need parallel documents to generate a set of translation candidates in the first place. But even if
you do not have parallel documents to generate the candidates yourself, you can expect that
the underlying Microsoft models can generate the candidates, and the target language
documents will help pick the right ones, if your tuning set has examples of them.
If your project is domain specific, your documents, both parallel and monolingual, should be consistent
in terminology with that category. The quality of the resulting translation system depends on the
number of sentences in your document set and the quality of the sentences. The more examples your
documents contain of diverse usages for a word, the better job the system can do when translating
something it has not seen before.
We recommend that you have a minimum of 10,000 parallel sentences for full trainings. As a best
practice, you can continuously add more parallel content and retrain, to improve the quality of your
translation system.
Microsoft requires that documents uploaded to the Hub do not violate a third party's copyright or
intellectual property. For more information, please see the Terms of Use. Uploading a document to the
Hub does not alter the intellectual property rights in the document itself.
Documents uploaded are private to each workspace. Sentences extracted from your documents are
stored separately in your workspace's repository as plain Unicode text files, and are available for you to
download. The number of sentences extracted from each document is reported as Extracted Sentence
Count in the Hub.
In order to detect parallel documents in your set, the Hub requires that you follow a document naming
convention
<document name>_<language code>.ext
where
document name is the name of your document.
language code is an ISO language ID, indicating that the document contains sentences in that language.
The language code information is displayed in the Upload Document dialog.
ext is the extension of the document, which must be one of the supported document formats.
Note that there must be an underscore (_) before the language code. For example, contract_en.docx and
contract_fr.docx form an English-French parallel pair.
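As an illustration of how this convention groups documents into parallel pairs, here is a minimal Python
sketch. The file names in it are hypothetical examples, and the grouping logic is our reading of the
convention above, not the Hub's actual code:

    # Minimal sketch: group uploads into parallel pairs using the Hub's
    # <document name>_<language code>.ext convention. File names are
    # hypothetical; this mirrors the convention, not the Hub's own code.
    import os
    from collections import defaultdict

    files = ["contract_en.docx", "contract_fr.docx", "notes_en.txt"]

    groups = defaultdict(dict)
    for f in files:
        base, ext = os.path.splitext(f)
        name, _, lang = base.rpartition("_")  # underscore precedes the language code
        if name:
            groups[name + ext][lang] = f

    for doc, versions in sorted(groups.items()):
        kind = "parallel" if len(versions) > 1 else "monolingual"
        print(f"{doc}: {kind} {sorted(versions)}")
    # contract.docx: parallel ['en', 'fr']
    # notes.txt: monolingual ['en']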
Documents can be uploaded one at a time or they can be grouped together into a single zip file and
uploaded. The Hub supports popular zip file formats (ZIP, GZ and TGZ).
If the zip file has a nested folder structure, the Hub prefixes the folder names to the document names
when they are displayed in the UI.
2.6 Training
You have the following three options for trainings.
Dictionary only training: You can train a custom translation system with just a dictionary and
no other parallel documents. There is no minimum size for that dictionary; one entry is enough. Just
upload the dictionary, which is an Excel file with the language identifier as column header, include it in
your training set, and hit train. The training completes very quickly, and you can then deploy and use your
system with that dictionary. The dictionary applies the translation you provided with 100% probability,
regardless of context. This type of training does not produce a BLEU score, and the option is only available
if Microsoft models exist for the given language pair.
Training with 1000 parallel sentences only: You can train a custom system with only 1000 parallel
sentences. Use 500 sentences for the tuning set and 500 sentences for the test set. The Hub will build a
system based on Microsoft models, and will tune the models to your tuning set, giving you a better
adjusted system than the generic translation system. You may add target language documents in the
desired domain as you like; the Hub will use them to build your custom target language model, but they
are not required. For the training to succeed, the 1000 parallel sentences must be unique and pass the
Hub filtering. To be safe, supply 1100 or more sentences. With fewer than 1000 parallel sentences,
you can still build a system with only a dictionary.
Full training: you will need to assemble training, tuning, and test data for a full training. The training
setup tells the Hub which documents you want to use to build the translation system and how you
want to use them. When setting up a training, the Hub allows you to partition your documents
among three mutually exclusive data sets:
1. Training data set:
Sentences of parallel and monolingual documents included in this set are used by the Hub as the
basis for building your translation system. You can take liberties in composing your set of
training documents: include documents that you believe are of tangential relevance, and
exclude them again in the next training run. As long as you keep the tuning set and test set
constant, feel free to experiment with the composition of the training set; it is your most
effective lever for modifying the quality of your translation system once you have settled on
the tuning set and test set.
2. Tuning data set:
The tuning set is used during training to adjust all parameters and weights of the translation
system to their optimal values. Choose your tuning set carefully, to be optimally representative of
the content of the documents you intend to translate in the future. The tuning set has a major
influence on the quality of the translations produced. Tuning enables the translation system to
provide translations that are closest to the samples you provide in the tuning data set. Only
bilingual documents can be part of the tuning data set. You do not need more than 2500
sentences in the tuning set. We recommend selecting the tuning set manually, in order to
achieve the most representative selection of sentences.
When you pick the tuning set manually, choose sentences that are neither too long nor too short, and
use words and phrases representing the variety of words and phrases you intend to
translate, in the approximate distribution that you expect in your future translations. In practice,
a sentence length of 8 to 18 words will produce the best results, because these sentences
contain enough context to show inflection, and provide a phrase length that is significant
without being overly complex.
A good description of the type of sentences to use in the tuning set is prose: actual fluent
sentences. Not table cells, not poems, not lists of things, not sentences consisting only of punctuation
or numbers - just regular language.
When you let the system choose the tuning set automatically, it will use a random subset of
sentences from your bilingual training documents, and exclude these sentences from the
training material itself. When you let the system choose the tuning set, please review it, to make
sure it indeed is composed of non-trivial sentences and satisfies the criteria above.
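The length guidance above is easy to check mechanically. Here is a minimal sketch that flags candidate
sentence pairs whose source side falls in the 8-to-18-word range; the whitespace tokenization and the
file names are simplifying assumptions:

    # Minimal sketch: pre-filter parallel sentences for a manually picked
    # tuning set, keeping pairs whose source side has 8-18 words.
    # Whitespace tokenization and file names are assumptions.
    def tuning_candidates(src_lines, tgt_lines, lo=8, hi=18):
        for src, tgt in zip(src_lines, tgt_lines):
            if lo <= len(src.split()) <= hi:
                yield src.strip(), tgt.strip()

    with open("Doc0010_en.txt", encoding="utf-8") as s, \
         open("Doc0010_fr.txt", encoding="utf-8") as t:
        candidates = list(tuning_candidates(s, t))
    print(len(candidates), "candidate pairs for manual review")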
3. Testing data set:
Only bilingual documents can be part of the testing data set. You do not need more than 2500
sentences in the testing set. When you let the system choose the testing set automatically, it will
use a random subset of sentences from your bilingual training documents, and exclude these
sentences from the training material itself.
4. Dictionary (optional)
The dictionary determines the translation of phrases with 100% probability. Use it to define
proper names and product names exactly the way you want to see them translated. See the section
Using Dictionaries for a description of the dictionary.
You can run multiple trainings within a project and compare the resulting BLEU scores across all the
training runs. You would use the best training to deploy your system into production use.
During the training execution, sentences present in parallel documents are paired, or aligned, and the Hub
reports the number of sentences it was able to pair as the Aligned Sentence Count in each of the data
sets. For a training run to succeed, the table below shows the minimum number of extracted sentences and
aligned sentences required in each data set. Please note that the suggested minimum number of
extracted sentences is much higher than the suggested minimum number of aligned sentences, to
account for the fact that sentence alignment may not be able to align all extracted sentences
successfully.
Microsoft Translator Hub will automatically align the sentences in bilingual documents with the same
base name you uploaded. The base name is the part of the file name before the underscore and
language identifier. It will fail doing correct sentence alignment when the number of sentences in the
documents differ, or when the documents you supply are in fact not 100% translations of each other.
You can perform a cursory check by verifying the number of extracted sentences: if they differ by more
than 5%, you may not have a parallel document.
If you know you have parallel documents, you may override the sentence alignment by supplying
pre-aligned text files: extract all sentences from both documents into text files, organized one
sentence per line, and upload them with an .align extension. The .align extension signals Microsoft
Translator Hub that it should skip sentence alignment.
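Here is a minimal sketch of preparing such pre-aligned files, including the count check mentioned above;
the file names, and keeping the _<language code> suffix on the .align files, are assumptions:

    # Minimal sketch: check that two sentence-per-line files match line for
    # line, then save them with the .align extension so the Hub skips its
    # own sentence alignment. File names are hypothetical.
    import shutil

    src, tgt = "manual_en.txt", "manual_de.txt"
    n_src = sum(1 for _ in open(src, encoding="utf-8"))
    n_tgt = sum(1 for _ in open(tgt, encoding="utf-8"))

    if n_src != n_tgt:
        # Pre-aligned files must match line for line; as a cursory check for
        # ordinary documents, a difference above ~5% suggests the files are
        # not true translations of each other.
        raise SystemExit(f"Not aligned: {n_src} vs {n_tgt} sentences")
    shutil.copyfile(src, "manual_en.align")
    shutil.copyfile(tgt, "manual_de.align")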
1. Owner:
An Owner is authorized to do the following activities, in addition to the activities that can be
done by a Reviewer:
- Create and remove projects
- Conduct Trainings
- Clone Trainings
- Upload and remove documents
- Invite Reviewers and other Owners into the workspace
- Change the role of an existing person from Owner to Reviewer or Reviewer to Owner.
- Assign Reviewers to review translations of sentences in the test data set.
- Request the deployment of a translation system resulting from a successful training run.
- Invite Community Members to post-edit sentences translated by custom translation system
belonging to the workspace instance.
- Approve/Reject alternative translations submitted for deployed translation systems.
- Remove other Owners or reviewers from the workspace.
2. Reviewer:
A Reviewer is a person who has been invited into the workspace by an Owner.
A Reviewer is authorized to do the following activities:
- Review translations of sentences in the test data set and submit review comments
- Test Translations of deployed translation systems.
- Review documents containing sentences translated by deployed custom translation systems
belonging to the workspace instance and submit alternate translations
3. Community Member:
This role refers to a person who is not a part of the workspace and who has been invited by an
Owner to review documents translated using the deployed translation system, and to submit
alternate translations. This role is useful if you want to share selected documents containing
generic sentences with a set of people without assigning them a Reviewer role.
Owners and Reviewers need to sign in with a Microsoft Account (formerly Live ID)
(http://www.microsoft.com/en-us/account/default.aspx) to perform the authorized tasks. If you are
using multiple Microsoft Accounts, please be sure to uncheck Keep me signed in on the login page, so
that you will be prompted to use the intended account with the Microsoft Translator Hub. Or invite your
other accounts as members into your project.
Community Members have the option to submit alternate translations as anonymous users or
authenticated users.
[Diagram: Your site or application calls the Microsoft Translator API. Translate() returns translations;
AddTranslation() submits corrections to the Collaborative Translations Store. Stored corrections are used
immediately in Translate() calls, and with a delay when the Microsoft Translator Hub trains with them.]
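For orientation, here is a rough sketch of submitting a correction through AddTranslation(). The token
service URL, the endpoint, and the parameter names reflect the V2 HTTP Translator API as commonly
documented in this period, and should be treated as assumptions to verify against the MSDN Translator
API article:

    # Rough sketch only: submit a community correction via AddTranslation().
    # The endpoint and parameter names are assumptions based on the V2 HTTP
    # API; verify against the MSDN Translator API article.
    import requests

    def get_token(subscription_key):
        # Cognitive Services token service (assumed URL)
        r = requests.post(
            "https://api.cognitive.microsoft.com/sts/v1.0/issueToken",
            headers={"Ocp-Apim-Subscription-Key": subscription_key})
        r.raise_for_status()
        return r.text

    token = get_token("YOUR_SUBSCRIPTION_KEY")  # hypothetical key
    r = requests.get(
        "https://api.microsofttranslator.com/v2/Http.svc/AddTranslation",
        params={"originalText": "pivot table",
                "translatedText": "Pivot-Tabelle",
                "from": "en", "to": "de",
                "user": "reviewer1",              # assumed parameter
                "category": "YOUR_CATEGORY_ID"},  # hypothetical category ID
        headers={"Authorization": "Bearer " + token})
    r.raise_for_status()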
2.10 Document Translator
Document Translator is an open-source tool which runs on Windows and translates Microsoft Office
documents and plain text, using your custom translation system and your API account.
Information, download and install: http://www.microsoft.com/en-us/translator/doctranslator.aspx.
[Workflow diagram: Upload Documents or Import Collaborative Translations; Train; Invite People;
Collect Corrections.]
3.1 Create a Workspace
Anyone can create a workspace. To create a workspace, visit https://hub.microsofttranslator.com, and
choose Build a translation system.
At this point, you will have to sign in with your Microsoft Account (formerly Live ID), or create a new
Microsoft Account. If you have a Hotmail, Xbox or Outlook.com email address, you already have a
Microsoft Account.
When logged in, you are transferred to your existing workspace, if you have one, or you get the chance
to create a new one. Or you can join an existing workspace, if an owner of that workspace has invited
you, by following the links in the invitation email.
Associating Microsoft Translator Subscription
When you create a workspace, you will have the option to associate your Hub workspace with your
Microsoft Translator subscription. It is recommended to do this association at this stage, but if you have
not subscribed to the Microsoft Translator API yet, you can do the association later. Without
this association, you will not be able to deploy your training or download your community translations.
Use the following steps to complete the association.
1. If you don't have a subscription to Microsoft Cognitive Services, follow the steps provided at
https://www.microsoft.com/en-us/translator/getstarted.aspx
2. Note down either Key 1 or Key 2
3. Navigate back to https://hub.microsofttranslator.com/Home/Settings
4. Enter the subscription key that you noted down in step 2
5. Click Save
6. Verify that the pricing tier shown matches your subscription
Create Project
1. In the Projects tab, select Add Project.
2. The system shows the Create Project page.
Enter the appropriate data into the following fields:
Project Name (required): Give your project a unique, meaningful name. It is not
necessary to mention the languages within the title, though you may find that useful.
Description: Write a short summary about the project.
Source Language (required): Select the language that you are translating from.
Target Language (required): Select the language that you are translating to.
Category: Select the category that is most appropriate for your project. The category
describes the terminology and style of the documents you intend to translate.
Project Label (optional, not recommended): The Project Label helps distinguish between
projects with the same language pair and category. As a best practice, use a label only if
you are planning to build multiple systems for the same language pair and category, and
do not use it if you are building systems for one category only. A project label is not
required, and is not helpful for distinguishing between language pairs. If you are using
a label, do not add a language ID to the label.
For example, the field of Technology has a number of specific terms and requirements that
require special attention: a word like cloud may have a very different meaning in
the field of technology than in meteorology. If you just want to work on generic language
support, choose General.
Category Descriptor: Use this field to better describe the particular field or industry in
which you are working. For example, if your category is Medicine, you might add a
particular topic, such as surgery or pediatrics, to qualify the category even
further. The descriptor has no influence on the behavior of the Hub or your resulting
custom system.
Note: You will have to type the desired language to see if it is available. If a desired language is not
available, click the Request a new language link. See section 4.5.
Invite Members
To invite people to become members of the workspace, follow the steps below.
1. Go to the Members tab, select Invite New Member.
2. In the Invite Member page, enter the following information:
Name
Email Address
Role (select Reviewer or Owner)
Edit the message as needed
Setup Training
1. To set up a training, select the project from the Projects tab and go to the Project Details page.
Click Train New System.
2. This opens the Train System page. Each system has a unique name. The Hub generates a default name,
which you can replace with a more descriptive name.
While loading this page, the Hub will search your workspace's document repository for
documents whose name contains a language code that matches the source or target language
of this training. Multilingual documents such as TMX or XLIFF files, as well as the dictionary
file, do not need a language identifier in the name; the language ID in the XML or Excel
content is sufficient.
If documents are found, the Hub will display them on this page as Parallel or Monolingual and
include them in your training data set by default. Parallel documents are paired, so only one
document name appears in the list. Please see the box titled How Hub displays documents in
the document selection grid? for an example.
a. If you are training an English-German system, the Hub will scan the workspace's
document repository for all documents with en and de in the document name
and display the following documents in the document selection grid (this example
assumes the repository contains File1_en.txt, File1_de.txt, File1_fr.txt, File2_en.txt,
File2_de.txt, and File3_de.txt):
File1.txt and File2.txt as parallel documents
File3.txt as a monolingual German document.
b. If you are training a French-English system, the Hub will scan the workspace's document
repository for all documents with fr and en in the document name and display
the following documents in the document selection grid:
File1.txt as a parallel document
File2.txt as a monolingual English document
c. If you are training a French-German system, the Hub will scan the workspace's document
repository for all documents with fr and de in the document name and display
the following documents in the document selection grid:
File1.txt as a parallel document
File2.txt and File3.txt as monolingual German documents.
If documents are not found, you will need to upload your documents now.
Please refer to Section 2.3 for more information on Parallel and Monolingual documents and
Section 2.4 for Document Naming Convention.
3. By default, the option to Use Microsoft models is checked in the Training tab, if a Microsoft
model exists for this pair (approximately 100 pairs, for most languages from and to English). The
effect of using this option depends on whether the source language and target language for the
training are currently supported by Microsoft Translator.
If there is no Microsoft model for your language pair, the option does not exist.
Using Microsoft models in training the system may make your translations more accurate and
more fluent. Microsoft models might not be available for some language pair and domain
combinations. You can do sequential trainings with and without Microsoft models. You may
get a higher score without Microsoft models if your training and test data are
within a very narrow domain (area of terminology and style), but the system will show worse
results when you break out of that narrow domain. Always make sure that both your test and
tuning sets are representative of what you are going to translate, which may be broader than
the material you already have; in that case you will almost always get better results with
Microsoft models.
Observe in the screenshot below, the documents named Doc005_en.pdf and Doc006_en.docx
do not have a corresponding document in Hmong (mww). Hence these documents will be
treated as monolingual documents for the purpose of this training, unless you upload the other
document in the pair at a later point in time.
14. If the results of sentence extraction look good, you can use the checkboxes to include or exclude
documents in the training data set. If you have a lot of documents, use the search box to
look for them by name. As you select documents, the Hub updates the values for Parallel
Sentences Selected, Parallel Documents Selected in Training Dataset, and Monolingual
Documents Selected in the Training Dataset. You will see a green tick mark next to Parallel
Sentences Selected as soon as the number of parallel sentences selected exceeds the suggested
minimum of 10,000.
15. Integration with the collaborative features (CTF) and the Translator API: You can directly
download data you have collected from users via the AddTranslation() method in the API. Use
the Get Community Translations option when you are composing a training; it is available
only while the training is in draft mode.
If you associated your Translator API account with your workspace, you can click Get
Community Translations to import the corrections and use them as training documents.
These sentences will be stored in a file named AlternateCommunityTranslations-<src>-
<tgt>.align as shown below.
16. With documents selected in the Training data set, click on the Tuning tab to select documents for
the Tuning data set. By default, Auto selection is enabled. This allows the Hub to randomly
select approximately 5% of the aligned sentences from the Training data set and use them
for tuning. For a training to succeed, the Hub requires a minimum of 500 sentences in the
Tuning data set. The Hub ensures that sentences used in the Tuning data set are not used to
train the translation system.
17. If you want to manually select documents for the Tuning data set, click the Manual selection
option. The Hub will display all documents that have not yet been selected in the Training data
set and Testing data set. If you cannot find documents in this list, you should switch to either
Training tab or Testing tab, unselect the desired document from that set and come back to the
Tuning tab.
You should select a parallel document for which the extracted sentence counts are equal. For
example, in the screenshot below, Doc0010.txt has 1,653 sentences in both the source language
version and the target language version. Assuming the sentences in this document are well
translated, the sentences in Doc0010.txt are suitable for inclusion in the tuning data set. In manual
selection mode, the Tuning tab currently allows you to select only one document. If you want to
combine sentences from different documents, we recommend that you create a separate
document containing those sentences and use it here.
To switch back to Auto selection, you can click on auto select Tuning Data.
The Hub will automatically remove the sentences in your Tuning set from the training set,
avoiding duplicates.
Even with auto selection, the tuning set remains unchanged when you add or modify the
training data, until you choose to re-generate the automatically selected tuning set.
18. Having selected documents in the Tuning data set, click on the Testing tab to select documents
in the Testing data set. By default, Auto selection is enabled. This allows the Hub to randomly select
approximately 5% of the aligned sentences from the Training data set and use them for
testing. For a training to succeed, the Hub requires a minimum of 500 sentences in the
Testing data set. The Hub ensures that sentences used in the Testing data set are not used to
train the translation system.
19. If you want to manually select documents for the Testing data set, click the Manual selection
option. The Hub will display all documents that have not yet been selected in the Training data
set and Tuning data set. If you cannot find documents in this list, you should switch to either
Training tab or Tuning tab, unselect the desired document from that set and come back to the
Testing tab.
In the testing data set, you should select a set of parallel documents for which the extracted
sentence counts are equal or almost equal. For example, the documents selected in the
screenshot below are good candidates for inclusion in the Testing data set.
To switch back to Auto selection, you can click auto select Testing Data.
The Hub will automatically remove the sentences in your Testing set from the training set,
avoiding duplicates.
Even with auto selection, the testing set remains unchanged when you add or modify the
training data, until you choose to re-generate the automatically selected testing set.
20. With the Training, Tuning, and Testing data sets well defined, you are now ready to train your
translation system for this language pair. To start the training, click the Start Training button.
If you want to save the configuration and modify it later, click the Save link.
21. Before proceeding to execute the training, the Hub will alert you if it finds any issues with the
documents selected in the Training, Tuning or Testing data set and also recommend fixes. You
can either ignore the warning or fix the issues and then resubmit the training.
22. After your training is submitted for execution, it may take up to several hours to build your
translation system, depending on the amount of data included in the training. The status will
show as In Training on the Project Details page, and you can see more details about the
training by clicking the system name on the Project Details page.
Using Dictionaries
You can specify a dictionary of terms that Microsoft Translator should use in translation, in addition to
your training data for the translation system.
Use of a dictionary has the potential to degrade the quality of the translations. Here are some
guidelines and hints:
- Training documents showing the terms used in context are better than a plain dictionary. Terms
used in sentence form teach the system the correct inflection and agreement better than a
dictionary can.
- The dictionary maps the dictionary term or phrase exactly to the given translated form.
- Try to limit the dictionary to the terms that the system does not already learn from the
training documents.
- The dictionary works well for compound nouns like product names (Microsoft SQL Server),
proper names (City of Hamburg), or features of the product (pivot table). It doesn't work
equally well for verbs or adjectives, because these are typically highly inflected in the source or
in the target language. Avoid dictionary entries for anything but compound nouns.
- Both sides of the dictionary are case sensitive. Each casing situation requires an individual entry
in the dictionary.
- You may create dictionary entries for longer phrases and expressions. In fact, entries of 2 words
or longer typically have better effects on the translation than single word entries.
- Chinese and Japanese are relatively safe with a glossary. Most other languages have a richer
morphology (more inflections) than English, so the quality will suffer if there is a glossary entry
for a term or phrase that the system already translates correctly.
c. After the upload completes, the Excel file will appear in the Dictionary tab as shown
below. Select the files listed in the Dictionary tab in order to use the terms defined in
those files in your training.
Multilingual Dictionaries
You can create dictionaries for a specific source language and multiple target languages. To do that,
simply add additional columns to the Excel sheet, as columns C, D, E and so on, following the same
guidelines as above, with a language ID in row 1.
Once you have uploaded this Excel file into your workspace, you can use it in any project for the respective
language pair, and the Hub will automatically use the appropriate language column. Column A is always
the source language, and columns B to ZZ are target languages.
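As a concrete illustration, here is a minimal sketch that builds such a workbook with the openpyxl
package; the entries are hypothetical:

    # Minimal sketch: a multilingual dictionary workbook with language IDs
    # in row 1, source terms in column A, and one target language per
    # further column. Requires openpyxl; the entries are hypothetical.
    from openpyxl import Workbook

    wb = Workbook()
    ws = wb.active
    ws.append(["en", "de", "fr"])  # language IDs in row 1
    ws.append(["pivot table", "Pivot-Tabelle", "tableau croisé dynamique"])
    ws.append(["Microsoft SQL Server"] * 3)  # product name kept unchanged
    wb.save("dictionary.xlsx")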
2. Click on a project name to open the Project Details page. Click on the name of the system
trained to open the page that shows the latest status of the training.
3. The Status field shows the latest status for the training. The Duration field shows how long it
took to train the system, from the time the training request was submitted until the
training completed.
If the training has succeeded, the BLEU Score field shows a value indicating how well the
machine translation correlates with human judgment. For all trainings conducted after 6-July-
2012, the Hub shows how the trained system compares with Microsoft's general domain translation
system by displaying the delta of the BLEU scores between the two systems. As seen below, the
score of the trained system is 8.16 points lower than Microsoft's general domain system for the
language pair. You may consider adding more training data or training the system with Microsoft
models to see if that improves the BLEU score.
4. If the training fails, click on the Training name to view more information. For example, you might
see:
Status: Training failed:
Number of sentences in the training set is too small.
This indicates that the system was not able to produce a usable translation system based on the
small amount of data provided; you should find additional documents.
5. Each of the three tabs shows the Aligned Sentence Count and Extracted Sentence Count.
The Hub displays three sentence counts:
a. Extracted sentence count: the number of sentences extracted from the document
b. Aligned sentence count: the number of sentences aligned
c. Used sentence count: the number of sentences used in building the system, after excluding
sentences that overlap with the tuning or testing set.
NOTE: For trainings conducted before 6-July-2012, the aligned sentence count will be the same as
the used sentence count.
6. If you notice a large difference between the aligned and extracted sentence counts, you have
the option to download the files and inspect them. To do so, click the checkbox next to the
document(s) and click download sentence files to download a zip file containing the extracted
sentence files and aligned sentence files corresponding to the document(s).
7. For a training that succeeds, you can click the Evaluate Results link. This opens the Evaluate
Results page, which shows the machine translation of sentences that were part of the test
data set.
8. The table on the Review Translations page has two columns - one for each language in the pair.
The column for the source language shows the sentence to be translated. The column for the
target language contains two sentences in each row. The first sentence, tagged with "Ref:", is
the reference translation of the source sentence as given in the test dataset. The second
sentence, tagged with "MT:", is the automatic translation of the source sentence done by the
translation system built after the training was conducted.
9. Click Download Translations link to download a zip file containing the machine translations of
source sentences in the test data set.
This zip file contains 4 files.
a. BaselineMT_Dataset_<target language code>.txt: contains machine translations of the
source language sentences produced by Microsoft's general domain system.
b. MT_Dataset_<target language code>.txt: contains machine translations of the source
language sentences produced by the system trained with your data.
c. Ref_Dataset_<target language code>.txt: contains the user-provided translations of the
source language sentences.
d. Src_Dataset_<source language code>.txt: contains the sentences in the source language.
10. Navigate through the list of sentences using the paging controls and compare the machine
translation against the original translation.
11. Click Assign Reviewers to invite reviewers and other owners to review the translations for
this training.
12. If you would like to retrain the system using new documents, click the Clone link. This creates a
copy of the existing system. You can then modify the training configuration and resubmit it.
Best Practices
- You can use the auto option for selecting a testing and tuning data set for the first couple of
trainings for a language pair. Then download the auto-generated tuning/test set from the
training results page. Review the downloaded sentences and modify them if required. We
recommend that you then switch to the manual option for selecting tuning and testing data sets,
so that over a period of time comparisons can be made as you vary the training data
while keeping the tuning and testing data unchanged. Both the tuning set and the test set
should be optimally representative of the documents you are going to translate in the future.
- In order to compare consecutive trainings for the same system, it is important to keep the
tuning set and testing set constant. This is particularly relevant for trainings where the parallel
sentence count is under 100,000 sentences. These sets should be made up of sentences you think
most accurately reflect the type of translations you expect the system to perform.
- If you elect to manually select your tuning data set, it should be drawn from the same pool of
data as the test set, but not overlap it. There should be no duplication of sentences
between training and tuning. The tuning set has a large impact on the quality of the translations
- choose the sentences carefully.
- When you engage in a series of training runs, you are likely to receive differing BLEU scores in a
given language pair. Though BLEU is not a perfect metric for accuracy, there is a very high
likelihood that the system with a higher score will provide a better translation in human
judgment. It is best to test the system or have bilingual reviewers evaluate the accuracy of a
particular training.
- When you clone an existing training, we recommend that you only make changes to the training
data set. In order to train a system that is comparable to the previous one, it is important to
leave the tuning and testing data sets unchanged. That way, the newly trained system will be
tested against the same set of sentences.
- A higher BLEU score often indicates that a translation matches the target language more closely.
Don't be disappointed if your scores aren't reaching 100 (the maximum); even human
translators are unable to reach 100% accuracy. For ideographic and complex languages you will
get a lot of utility out of a score between 15 and 20; for most Latin script languages a score of 30
gets you into a desirable range.
- When using dictionaries, try to reduce the dictionary to the terms that are absolutely necessary
to regulate. In a second pass, further reduce the entries to the ones that don't come out right in
a majority of test cases, and let the system choose the best option for the ones that mostly
come out right.
- If the auto-selected sentences in the tuning or testing data set are not of a suitable quality
in the last trained system, the Hub offers an option to reset the tuning and testing data in the
new training, which forces the system to resample them. If you add a large number of documents
to the Training data set, remember to check the Reset checkbox so that the test set and tuning
set are resampled. As soon as you change the test set, you lose the ability to compare the result
to a previous training. If you need to update the test set, perform a training before you make
any changes, and then perform another training after changing the test set and nothing else.
It doesn't matter whether the new score is lower or higher than your previous test: this is your
new baseline, and you can continue improving from here.
- You can use the dictionary, even if you don't use any other entries, to watermark your
translation system. Put a non-word entry in it, like mytranslatorversion, translated to
version20150928. You can then translate this word using your own or someone else's
application, to ensure that the application is using the correct category, and that you have
reached your own custom system.
3.4 Share & Translate
This section describes how you can test the translation system you have created and then share it with
people from your community so as to further improve the quality of your translation system.
Request Deployment
It may take several trainings in order to create an accurate translation system for your project.
After a set of systems have been trained, go to the Project Details page and select one with a good BLEU
score. You may want to consult with reviewers before deciding that the quality of translations is suitable
for deployment.
In the Request Deployment page of the selected system, click on Request Deployment.
If you did not associate your Translator API subscription when you created your workspace,
your training won't be deployed, and you will see a message asking you to associate your Translator
API subscription by clicking the Settings tab.
Please allow around 2 business days for the deployment request to be processed. You will receive an
email confirming the availability of your translation system as soon as the deployment has taken place.
The status of a training for which a translation system has been deployed appears as Deployed. You
can also use the Train and Deploy option if you want to train and deploy the system in one step.
As soon as the system is deployed, you can use it via the Microsoft Translator API, or any application
that uses the API. Be sure to identify the correct category ID in the API translation request.
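As an illustration, here is a rough sketch of a Translate() call that reaches a deployed custom system
through the category field, and doubles as a check of the watermark entry suggested under Best
Practices. The token service URL, the endpoint, and the parameter names follow the V2 HTTP API of this
period and are assumptions; verify them against the Hub API guide:

    # Rough sketch: call Translate() against a deployed custom system by
    # passing its category ID. Endpoint and parameters are assumptions
    # based on the V2 HTTP API; see the Hub API guide for specifics.
    import requests
    import xml.etree.ElementTree as ET

    def get_token(subscription_key):
        # Cognitive Services token service (assumed URL)
        r = requests.post(
            "https://api.cognitive.microsoft.com/sts/v1.0/issueToken",
            headers={"Ocp-Apim-Subscription-Key": subscription_key})
        r.raise_for_status()
        return r.text

    token = get_token("YOUR_SUBSCRIPTION_KEY")  # hypothetical key
    r = requests.get(
        "https://api.microsofttranslator.com/v2/Http.svc/Translate",
        params={"text": "mytranslatorversion",  # watermark check
                "from": "en", "to": "de",
                "contentType": "text/plain",
                "category": "YOUR_CATEGORY_ID"},  # hypothetical category ID
        headers={"Authorization": "Bearer " + token})
    r.raise_for_status()
    print(ET.fromstring(r.text).text)  # the translated string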
Test Translations
After the translation system generated by your training has been deployed, you will receive an email
from [email protected].
To verify the translation system after deployment
1. Click Projects tab.
2. From the Projects list, select the Project for which the translation system has been deployed and
select the translation system with status as "Deployed". Go to the Test system page.
3. Enter the test sentence into the From box and click Translate.
The translation of the source sentence(s) will be displayed in the To box. The value to use in the
category field of the Translator API, to reach this deployed system, is listed on this screen.
4. You can click on the name of the document to preview how the deployed translation system
translates source language sentence in the document to the target language. The source
language will be on the left, and the target language will be on the right. If you are intending to
share this document with Community Members (See section 2.6), who are not a part of the
workspace, please review the document to ensure that it does not contain sensitive
information.
5. To share the document you have previewed, select it and click Share
6. The document will now be listed under Shared Documents. Documents listed in the Shared
Documents tab are visible to all Reviewers in this workspace. Reviewers can click the document
name to open up the Review Documents page and submit alternate translations. To stop sharing
a document, click the stop sharing link in the Shared Documents tab.
7. If there are a lot of documents listed, use the search feature to locate the document, select
it, and click Email to invite a Community Member to review the document.
8. In the Send Email dialog, type in the email address of the community members.
9. Click Send.
4. The Hub will connect to the Microsoft Collaborative Translation Framework store, retrieve all the
alternate translations submitted by your community, and show you the Download Community
Translations dialog.
5. Click Download Sentence File to download a zip file containing these translations.
8. You can review the sentences in these documents, correct them if required, and re-upload them
for use in your training. Since these documents already follow the naming convention, there is
no need to rename them.
9. You can include the community-submitted documents in any of the data sets: Training, Tuning,
or Testing. Please refer to Section 3.2.1 for further information on setting up the training.
3. Click the Unused Documents tab. You will see a list of documents that have been uploaded but are
not used. These documents can be safely deleted by selecting them and clicking Remove
selected documents.
4. The search box at the top of the grid allows you to search by document name. You can also search
by the three letter language code, for example mww, to see a list of all Hmong language files present
in the workspace's document repository. You can further sort by any of the columns.
4.5 Manage Members
As an owner or co-owner, you can manage the membership of the workspace.
1. Go to Members tab
2. The system displays a list of all users of the workspace along with the roles assigned to them.
3. As an Owner or co-owner, you can select a role from the Role drop-down to change the role of an
existing member, and click Save Changes
4. To remove an existing member, select a person from the list and click Remove Selected
Member(s). You will see a confirmation dialog to confirm the deletion. Click OK to proceed with
it or Cancel to abort.
4.6 Request Language
MT Hub might not support the language that you want to use in your project. However, you can request
a new language for your project.
1. At the time of Creating a project, if you cannot find the language in the list, select Request New
Language.
2. This brings up a form.
5.1 FAQs
Q: Are the documents in my workspace visible to people outside the workspace? What is the document
retention policy?
A: Information generated and stored as a result of your project from the Hub belongs to you, but may be
stored for an indefinite period of time. Documents uploaded by you are only visible to authorized users
in your workspace. Your data will not be shared with anyone, except the people and organizations who
are tasked with the quality assurance of the Translator service. It is protected from access by anyone
else. We may use your data for translation to derive statistical and quantitative information about the
text being translated, in order to build and apply the most appropriate models to optimize the
translation.
For more information, please see the Terms of Use.
Q: What are the minimum requirements for training a language pair that is not yet supported by
Microsoft Translator?
A: Microsoft Translator Hub will fail if there are fewer than 10,000 sentences of parallel data. In order to
achieve a very basic level of understandability, you will generally need 10,000 or more sentences.
Q: What are the minimum requirements for building a category customization in a supported language
pair?
A: If you are allowing the use of Microsoft models in your training setup, the minimum is the tuning set
and the test set. This will give a minimal improvement for your documents at best. In order to improve
further, you can add parallel material, or target language material, or both. More material is generally
better, but try to stick to the category and terminology you want to improve.
Q: I selected the Technology category when creating a project. When I train the system for this project and
check the option Use Microsoft Models, will the training use Microsoft's Technology model?
A: No. At this moment the choice of Use Microsoft models always invokes the general training data,
the same as Bing Translator. The category selection serves only to identify your purpose, and has, as of now,
no influence on the behavior during training or translation. That will change in an upcoming release.
Q: What if I am from a country that is not supported by the Microsoft Azure Marketplace?
A: There are a number of possibilities to work around this for your project. Please contact
[email protected] for more information.
Q: What if I have documents in Word 97-2003 DOC format? Can I use them in a training?
A: You will need to use OFC.exe, which is a free download included with the "Microsoft Office Compatibility
Pack for Word, Excel, and PowerPoint File Formats"
(http://www.microsoft.com/download/en/details.aspx?id=3), to convert the DOC files to DOCX.
If you have Word 2007/2010, you can use it to convert DOC files to DOCX and then upload the
documents.
Q: After deployment of the trained system, there does not seem to be a way to upload a TMX file and
get it machine translated on the server side?
A: Most of the commercial and open source TM tools offer a way to translate TMX files using a custom
MT system. Microsoft does not directly offer a TMX translation tool.
Q: I trained and deployed a customized MT system last week, made a few translations, and this week I
notice that my translations are different than last week's.
A: Microsoft regularly updates the base translation models used in all translations that involve
Microsoft models, as well as the implementation of the translation engine itself. This may happen every
couple of weeks. Each of these base changes may change the translations produced
by your custom system.
This is not a cause for concern, because Microsoft makes sure that the majority of changes are positive.
If you notice your translations have changed, and you want to verify if they are positive even for your
own test set, you may go ahead and request a new training, which will produce a new BLEU score based
on the new Microsoft models. More often than not, the new score will be slightly higher than the
previous one.
It may happen that in particular cases the score is lower. The only certain thing is that on average the
scores will be higher. The changed translations are unavoidable when upgrading the basic translation
models and algorithms.
Q: I uploaded a TMX file today for training and I got the message that the file size exceeded the limit of
50 MB.
A: Yes, we have a 50 MB size limit for uploaded files. Zip the TMX file and retry the upload.
If your training fails with the message "An error occurred while building the translation system. Please
try again after some time.", we recommend waiting a few hours before re-submitting the system for
training. If you encounter these errors on a regular basis and the Hub team has not already
reached out to you, please send an email to [email protected].
Q: Is there a feature in MT HUB which would enable a project owner to approve all the submitted
translations?
A: Translations provided by the community or reviewers can be approved all at once. To approve the
translations, navigate to Community > Invite Reviews, click the Manage Translations link, and
select the Suggested radio button on the Manage Translations page. Please refer to section 3.3.3.2 of
the user guide.
Q: The PDF file I tried to upload failed with an error saying it might be corrupt. Why?
A: The PDF file that failed to upload may be a secure PDF file. Currently Hub cannot extract sentences
from a secured PDF file. Please include only PDFs in your training that are not secured with a password.
Q: How can I ensure skipping the alignment and sentence breaking step in MT Hub, if my data is already
sentence aligned?
A: MT Hub skips sentence alignment and sentence breaking for .tmx files and for text files with the .align
extension. .align files give users an option to skip MT Hub's sentence breaking and alignment process
for files that are perfectly aligned and need no further processing. We recommend using the .align
extension only for files that are perfectly aligned.
If the number of extracted sentences does not match between the two files with the same base name,
the Hub will still run the sentence aligner on .align files.
Q: Is there a way to upload a TMX file and get it machine translated on the server side?
A: The machine translations can be viewed via the test console or can be retrieved via an API. We do not
currently offer a direct TMX translation utility. Many commercial TM tools offer TMX translation.
Q: Why do the results from the Test Translation page of Microsoft Translator Hub differ from the ones
returned by the Microsoft Translator API with the Hub? Is it due to the difference between the two
content types, "text/plain" and "text/html"?
A: Yes, the web interface in the Hub uses contentType=text/plain. In plain text, tags that look like <one
letter><number> are left untouched and move with the word they are next to. This may result in tag
ordering that would be illegal in XML. Tags of other formats will not be treated as tags. The Hub forces all
tags it sees in the sample documents into the <one letter><number> format, but the API won't.
In text/html, proper HTML processing is done: tags will be in legal order and legal nesting. However, you
must pass balanced HTML, and self-closing tags will be expanded in the process. You will want to use
text/plain for most content, except when you have balanced HTML, or balanced XML that you can
transform to HTML. In contentType=text/html you may also exclude any span of text from translation by
using the notranslate attribute.
When using HTML, the engine does a better job at positioning the tags properly. If you use plain text and
have tags in there, you will need to ensure the correct tag placement yourself.
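For example, a text/html request of the kind described above might carry a payload like the following;
whether the exclusion is expressed as a class="notranslate" attribute is an assumption to verify against
the API documentation:

    # Rough sketch: translating balanced HTML with contentType=text/html,
    # excluding one span from translation. The class="notranslate" form
    # is an assumption; verify against the Translator API documentation.
    html = ('<p>Open the <span class="notranslate">PivotTable</span> '
            'dialog and choose a data range.</p>')
    params = {"text": html, "from": "en", "to": "de",
              "contentType": "text/html",
              "category": "YOUR_CATEGORY_ID"}  # hypothetical
    # pass params to the Translate() call sketched in section 3.4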
Q: How does BLEU work? Is there a reference for the BLEU score, like what is good, what the range is,
etc.?
A: BLEU is a measurement of the differences between an automatic translation and one or more human-
created reference translations of the same source sentence. The BLEU algorithm compares consecutive
phrases of the automatic translation with the consecutive phrases it finds in the reference translation,
and counts the number of matches, in a weighted fashion. These matches are position independent. A
higher match degree indicates a higher degree of similarity with the reference translation. Intelligibility
and grammatical correctness are not taken into account. BLEU's strength is that it correlates well with
human judgment by averaging out individual sentence judgment errors over a test corpus, rather than
attempting to devise the exact human judgment for every sentence.
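To make the mechanics concrete, here is a deliberately simplified BLEU sketch: modified n-gram precision
up to 4-grams with a brevity penalty, against a single reference and without smoothing. Real
implementations, including whatever the Hub uses, differ in detail:

    # Much-simplified BLEU: modified n-gram precision up to 4-grams plus a
    # brevity penalty, single reference, no smoothing. Illustrative only.
    import math
    from collections import Counter

    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    def bleu(candidate, reference, max_n=4):
        cand, ref = candidate.split(), reference.split()
        log_precision = 0.0
        for n in range(1, max_n + 1):
            c, r = ngrams(cand, n), ngrams(ref, n)
            matches = sum(min(count, r[g]) for g, count in c.items())
            if matches == 0:
                return 0.0
            log_precision += math.log(matches / sum(c.values())) / max_n
        brevity = min(1.0, math.exp(1 - len(ref) / max(1, len(cand))))
        return 100 * brevity * math.exp(log_precision)

    print(round(bleu("the cat sat on the mat", "the cat sat on a mat"), 1))  # 53.7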
A more extensive discussion of BLEU scores is here: https://youtu.be/-UqDljMymMg.
All that being said, BLEU results depend strongly on the breadth of your domain, the consistency of the
test data with the training and tuning data, and how much data you have available to train. If your
models have been trained on a narrow domain, and your training data is very consistent with your test
data, you can expect a high BLEU score. Please note that a comparison between BLEU scores is only
justifiable when BLEU results are compared with the same Test set, the same language pair, and the
same MT engine. A BLEU score from a different test set is bound to be different.
Q: Do the corpora need to be perfectly aligned at sentence boundaries? Though the corpora are aligned
by segment, they do not always match at the sentence level. For example, a given segment might be
one sentence in English, but two sentences in the target language.
A: In instances where a given segment might be one sentence in English but two sentences in the target
language, you should put them on one line and upload the file as an .align file. Sentences in an .align file
are not broken at sentence-ending punctuation like . or ;, so you can safely manage such cases via
.align files. In .align files, a line break (the Enter key) marks the end of a line/sentence.
Q: Uploading a gz file gives an error: The document has no extension. Please upload a document with a
supported file extension.
A: Certain versions of gz files are not supported by the MT Hub gz extractor. The workaround is to create
a new gz file with 7-Zip.
Documentation for the Translator APIs can be found at the MSDN Translator API article.
The same API is available for customized translation systems. Please see the MT Hub API Guide for
further details on how to use the Microsoft Translator APIs to access your translation system.
5.3 Glossary