TT Mid Notes

The document provides an overview of translation technology, defining key terms such as Machine Translation (MT), Fully Automatic High-Quality Translation (FAHQT), and the distinctions between Human-Aided Machine Translation (HAMT) and Machine-Aided Human Translation (MAHT). It discusses the evolution of translation theories and their application to machine translation, highlighting the roles of various stakeholders in the translation process. The text emphasizes the importance of technology in modern translation practices while addressing challenges and the future of translation technology.


Chapter 1:

Definition of Terms in Translation Technology

Machine Translation (MT)

Machine translation (MT) refers to the use of computers to translate text between languages. Initially, the goal was to achieve fully automatic high-quality translation (FAHQT) without human involvement. However, scholars like Bar-Hillel (1960) argued that this goal was unattainable due to the complexities of natural language. Modern MT systems prioritize fit-for-purpose output—translations that may not be perfect but are useful for practical applications. Examples of successful MT systems include SPANAM and ENGSPAN, which are widely used for translating between English and Spanish.

Fully Automatic High-Quality (Machine) Translation (FAHQT/FAHQMT)

FAHQT refers to the ideal goal of producing high-quality translations without human intervention. Despite extensive research, this remains an unrealistic objective due to the nuanced and context-dependent nature of human language.

Human-Aided Machine Translation (HAMT)

In human-aided machine translation, the machine performs the primary translation, while humans assist at various stages—before, during, or after the process. Because the machine drives the translation, HAMT is classified as a subset of machine translation rather than computer-aided translation (CAT).

Machine-Aided Human Translation (MAHT)

Machine-aided human translation involves human translators using computer tools to enhance their work. This category is often considered synonymous with computer-aided translation (CAT) because both emphasize human-led processes with technological assistance.

Computer-Aided Translation (CAT)

Computer-aided translation (CAT) involves using software to assist human translators. While some technology developers prefer the term machine-aided translation (MAT), CAT is the more commonly used term in Translation Studies and the localization industry. This book treats CAT and MAHT as equivalent categories.

Machine-Aided Translation (MAT)


MAT is an alternative term for CAT. Although both terms refer to technology-
supported translation, CAT is the preferred term in academic and professional
contexts.

Distinguishing Between HAMT and MAHT

The primary distinction between HAMT and MAHT lies in who leads the
translation process. In HAMT, the machine is the main translator with human
support. In MAHT, the human translator leads the process while using machines
as supportive tools. In practice, however, the boundaries between these
categories are becoming increasingly blurred as modern systems integrate
features from both approaches.

Modern Machine Translation Systems

Machine translation systems are also classified based on their intended users:

1. Home Systems – Designed for casual users with minimal translation expertise.

2. Online Systems – Specialized for translating web-based content and digital documents.

3. Professional Systems – Developed for expert translators using advanced software tools.

4. Organizational Systems – Large-scale systems used by corporations for handling extensive multilingual tasks.

Examples of widely used commercial systems include SPANAM and ENGSPAN, while experimental systems such as ALT-J/E and ALT-J/C focus on advanced language processing.

Human-Aided and Machine-Aided Translation

Human-Aided Machine Translation (HAMT)

Human-aided machine translation is a process where the machine performs most of the translation work, but human assistance is required at different stages. This assistance happens during pre-editing, post-editing, or during the translation process. Pre-editing involves checking the source text for errors, ambiguous phrases, and complex language that might confuse the machine. Post-editing is the process of refining the machine-generated translation to improve accuracy, style, and terminology. In some cases, human intervention is needed during the translation process to resolve ambiguous or unknown terms. This model is especially useful for technical texts like legal documents and manuals. Examples of HAMT systems include MaTra Pro and Lite, which translate English to Hindi, and systems used by the United States Patent and Trademark Office.

Pre-Editing in HAMT

Pre-editing is the first stage where a human editor reviews and corrects the
source-language text before it is processed by the machine. The aim is to
remove any problematic elements, such as unusual phrases or typographical
errors, that could cause difficulties during machine translation. Pre-edited texts
are easier for machines to handle and produce more accurate translations.

Post-Editing in HAMT

Post-editing is the final stage where a human editor reviews the machine’s
output and improves it to meet specific linguistic standards. This involves
correcting errors, improving the flow of the text, and ensuring proper use of
terminology. Post-editing is necessary because machine translations often lack
the nuance and context that human translators provide.

Machine-Aided Human Translation (MAHT)

In machine-aided human translation, human translators remain the primary agents of translation but use various software tools to assist their work. These tools include electronic dictionaries, terminology databases, translation memory, and spell-checkers. Such systems help translators manage large-scale projects while maintaining consistency in terminology. Popular MAHT systems include Trados Translator’s Workbench, SDLX Translation Suite, Transit by Star AG, and Déjà Vu by Atril.

Tools Used in MAHT

The main tools used in MAHT include databases of previous translations, electronic dictionaries, glossaries, and spell-checkers. These tools help translators access previous work, find correct terminology, and ensure grammatical accuracy. By using these resources, translators can improve both the speed and quality of their work.

Human Translators and Technology

Modern translators increasingly rely on computer-aided tools to meet the growing demand for fast and accurate translations. However, some translators are resistant to adopting new technology due to fears of machines replacing their work or a lack of technical knowledge. Despite these concerns, many translators who use such tools report improved productivity and translation quality. To address this resistance, universities worldwide are offering specialized training programs to teach future translators how to use translation technology effectively.

Challenges and Future of Translation Technology

One major challenge in integrating technology into translation is the lack of involvement from professional translators in the development of these tools. This has led to misunderstandings and resistance toward machine-assisted processes. However, as the industry evolves, translators are increasingly recognizing the value of technology in improving efficiency and accuracy. With continued education and adaptation, translation technology is becoming an essential part of the modern translation process.

The Localization Industry

The localization industry plays a crucial role in translation technology. Traditionally, it involved two main sectors: hardware and software manufacturers and localization service providers. Today, other sectors such as telecommunications, language service providers, and universities also contribute to this industry as businesses aim to reach a global audience. For more detailed discussions on localization, refer to Esselink (2000, 2003) and Pym (2004).

Localization refers to adapting a product, its documentation, or services to make them suitable for a particular locale—a group of people who share a language and culture. For example, Time magazine is translated and adapted into Portuguese and Spanish for Latin American readers, and these editions differ from the European versions. Localization involves changes like currency formats, date and time formats, and other culturally specific adjustments (Esselink 2002).

From a translation perspective, localization is primarily a linguistic task, aiming to make content as natural as possible in the target language. However, it also involves cultural adaptation. Different locales may interpret colors, icons, and symbols differently, making it necessary to adjust these elements for each market. Even regions speaking the same language may have variations that lead to misunderstandings.

For instance, in Indonesian Malay, the word “butuh” means “need”, but in Malaysian Malay, it is a vulgar term. Additionally, both varieties of Malay use different loanwords—“karcis” (from Dutch) in Indonesia and “tiket” (from English) in Malaysia both mean “ticket”. Even borrowed English terms can differ; “website” is translated as “situs web” in Indonesia and “laman web” in Malaysia.

Conclusion

Before the 1990s, when the Internet became widely available, the four
translation types identified by Hutchins and Somers (1992) were easy to
distinguish. However, as technology advanced, the boundaries between these
categories became blurred. Modern translation tools now perform multiple
functions, making old classifications less relevant.

This book reflects these changes by discussing machine translation in Chapter 3 and computer-aided translation tools in Chapter 4, while later chapters address new developments in translation technology.

Chapter 2

Translation Studies and Translation Technology

This chapter explores the relationship between Translation Studies and translation technology, focusing on how translation theories apply to machine translation. It discusses key areas such as translation theory, professional and academic groups, and John S. Holmes’ (1988/2000) model of applied Translation Studies, which connects translation to natural-language processing. The chapter also examines the translation process (including pre-editing and post-editing) and introduces the concept of controlled language—a simplified form of language used for machine translation. Roman Jakobson’s (1959/2000) semiotic classification is also discussed, offering another perspective on the translation process.

Translation Theory

Translation theory is a broad and loosely defined concept. According to Chesterman (2003), it includes models, beliefs, and concepts aimed at understanding how messages are transferred between languages. The basic idea of translation is the transfer of meaning from a source language (SL) to a target language (TL) (Newmark 1981).

Historical Development of Translation Theories

1. Word-for-Word vs. Sense-for-Sense (Pre-20th Century)

This debate, which started with Cicero and St. Jerome, contrasts word-for-word
translation (literal meaning) with sense-for-sense translation (overall meaning).
This debate influenced early translations of major works like the Bible, Greek,
and Buddhist texts (Munday 2001).

2. Linguistic Approaches (1950s–1960s)

Translation theory shifted toward linguistics, focusing on equivalence and translation procedures:

• Vinay & Darbelnet (1958/2000): Identified seven translation strategies, including borrowing and adaptation.

• Catford (1965): Studied different types of equivalence between languages.

• Nida (1964): Introduced formal (literal) vs. dynamic (natural) equivalence.

3. New Dichotomies (1970s–1980s)

Scholars introduced new distinctions similar to the word-for-word vs. sense-for-sense debate:

• House (1977): Overt (visible as a translation) vs. covert (reads like an original) translation.

• Newmark (1981): Semantic (close to the original) vs. communicative (natural for readers) translation.

4. Chomsky’s Influence (1957, 1965)

Noam Chomsky’s theory of transformational-generative grammar introduced the idea of deep (meaning) and surface (form) structures. This concept influenced early machine translation models, which used phrase structure rules to generate text.

5. Functional Approaches – Skopos Theory (1970s Onward)

Skopos theory, developed by Hans Vermeer (1996), focuses on the purpose of a translation. It suggests that translation strategies should depend on the intended use of the target text. This theory is useful in machine translation, where text quality depends on the final purpose (e.g., rough drafts vs. formal speeches).

Translation Theory and Machine Translation

• Pre-editing and post-editing are influenced by the purpose of the translation.

• Skopos theory aligns with the “fitness for purpose” approach, determining
whether to use human or machine translation.

• Chesterman (2000) argues that no single theory can explain all translation
phenomena due to its complexity. For machine translation, specialized theories
focusing on controlled language are needed (Melby & Warner 1995).

In conclusion, translation theories have evolved from simple literal vs. free
approaches to complex models influenced by linguistics and functional
approaches. These theories are essential for understanding and improving
machine translation systems.

Academic and Professional Groups in Translation

This section examines the relationship between four key groups involved in
translation—linguists, professional translators, translation theorists, and
scientists—and their different approaches to translation theory and practice.

1. Translators and Linguists

• Linguists’ Viewpoint:

• Focus on descriptive approaches—explaining how translation works rather than prescribing how it should be done (Halliday, 2001; Crystal, 1993).

• Consider translation a sub-field of applied linguistics (Baker, 2001).

• Translators’ Viewpoint:

• Prefer a prescriptive approach—emphasizing how translations should be for better results (Chesterman, 2000).

• View translation theory as a problem-solving tool to assist in real-world translation tasks.

The core difference lies in describing vs. prescribing translation practices, creating tension between the two groups.

2. Translators and Translation Theorists

• Theorists’ Focus:

• Emphasize meaning, translation methods, and linguistic problems like metaphors and cultural terms (Newmark, 1981).

• Their primary goal is to explain translation processes rather than directly solve practitioners’ problems (Fawcett, 1997; Chesterman, 2000).

• Translators’ Focus:

• Prioritize practical solutions for real-life translation challenges.

• Often lack the time or resources to contribute to theoretical work (Moore, 2002; Nogueira, 2002).

Theorists and translators often misunderstand each other’s work—translators want practical guidelines, while theorists are more interested in analyzing translation processes.

3. Linguists and Translation Theorists

• Shared Ground:

• Both fields focus on language and translation processes (Catford, 1965).

• Linguistic studies on social factors (e.g., gender, age) can inform translation
decisions.

• Tension Points:

• Linguists often view translation as a scientific process, while translation theorists may emphasize creativity and cultural nuance (Bell, 1991).

• Linguists may be skeptical of translation theory due to its subjectivity, while theorists criticize linguists for oversimplifying translation.

This tension arises because linguists value objectivity, while theorists recognize the artistic and cultural aspects of translation.

4. Linguists and Scientists

• Scientists’ Viewpoint:

• Focus on machine translation (MT) and applying linguistic theories to improve computer algorithms (Bennett, 2003).

• Prioritize computational efficiency over theoretical accuracy.

• Linguists’ Viewpoint:

• Focus on human language understanding, not on making it machine-readable.

• Remain reluctant to engage with machine simulation of brain processes due to their complexity (Hutchins, 1979).

The divide between these groups lies in their goals—scientists seek practical
machine applications, while linguists focus on human-centered language
analysis.

Conclusion

The four groups—linguists, professional translators, translation theorists, and


scientists—each approach translation from distinct perspectives, leading to
miscommunication and disagreements. Despite their differences, their combined
insights contribute to a richer and more comprehensive understanding of
translation theory and practice.

1. Early Machine Translation Systems (1940s–1960s):

• The primary approach was direct translation, which assumed a one-to-one correspondence between source and target language words.

• Minimal syntactic analysis (e.g., identifying nouns and verbs) was required.

• Example: Georgetown University System—produced poor-quality translations, revealing the need for deeper linguistic analysis (Hutchins, 1979).

2. Systran and the Evolution Beyond Direct Translation (1968):

• Systran (System Translation) improved on the Georgetown model by dividing linguistic and computational components into separate modules.

• However, it still lacked a foundation in any specific linguistic theory (Hutchins, 1979).

3. Linguistic Approaches in Machine Translation:

• Formal Approach:

• Focuses on describing morphological and syntactic structures.

• Easier to implement computationally and has had greater influence on machine translation (Crystal, 1993).

• Functional Approach:

• Considers the use of language and how words interact in social contexts.

• More challenging to apply to machine translation due to its pragmatic focus.

4. Chomsky’s Transformational-Generative Grammar:

• Became the dominant formal approach but proved unsuitable for machine
translation due to its complexity (Hutchins, 1979).

• Eugene Nida adapted deep structure analysis for translation purposes (Fawcett,
1997).

5. Formalisms and Rule-Based Systems:

• Formalisms—mathematical notations that describe linguistic structures—became the basis of rule-based machine translation systems.

• Examples of constraint-based grammars include:

• Tree Adjoining Grammar (Joshi et al., 1975)

• Lexical Functional Grammar (Kaplan & Bresnan, 1982)

• Generalized Phrase Structure Grammar (Gazdar et al., 1985)

• Head-driven Phrase Structure Grammar (Pollard & Sag, 1987)


6. Lexical Functional Grammar (LFG) in Machine Translation:

• C-Structure: Represents the constituent hierarchy of a sentence.

• Example for “Jane kicks David”:

S → NP VP

NP → N (Jane)

VP → V NP (kicks David)

• F-Structure: Describes the grammatical relationships using attribute-value matrices (e.g., subject, object, tense).

• Example: PRED (kicks) has attributes SUB (Jane) and OBJ (David).
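The f-structure for “Jane kicks David” can be sketched in code. Below is a minimal illustration that renders the attribute-value matrix as a nested Python dictionary; the attribute names (PRED, SUB, OBJ, TENSE) follow the text, but the encoding itself is illustrative and not the format of any actual LFG tool.

```python
# F-structure for "Jane kicks David" as a nested attribute-value matrix.
# The PRED value records the predicate together with its argument frame.
f_structure = {
    "PRED": "kicks<SUB, OBJ>",   # predicate with its grammatical arguments
    "TENSE": "present",
    "SUB": {"PRED": "Jane"},     # subject sub-matrix
    "OBJ": {"PRED": "David"},    # object sub-matrix
}

print(f_structure["SUB"]["PRED"])  # Jane
```

Nesting sub-matrices under SUB and OBJ mirrors how f-structures embed the attribute-value matrices of their constituents.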

7. Advantages of Constraint-Based Grammars:

• Reversibility Principle: Allows the same grammar to be used for both translation directions, making it easier to build bidirectional systems.

8. Lack of Translation Theory in Machine Translation:

• Machine translation development has been independent of translation theory.

• Concepts like Catford’s translation shifts and Vinay & Darbelnet’s procedures
are not integrated into machine translation systems.

• This separation is partly due to the absence of linguists and translators in the
machine translation field (Wilss, 1999).

9. Translator Involvement in Machine Translation:

• Professional translators are often excluded until the final stages of development (Bédard, 1993).

• Greater collaboration could result in translator-friendly systems and better output quality.

10. Machine Translation and Translation Studies Divide:

• Many translation studies texts (e.g., Gentzler, 1993; Riccardi, 2002) neglect machine translation.

• This division is problematic for students, as technical translation increasingly involves translation tools.

Summary of Holmes’ Schema in Translation Studies

John S. Holmes’ seminal 1972 paper at the Third International Conference of Applied Linguistics in Copenhagen established Translation Studies as a distinct academic discipline. His conceptual schema, later elaborated in 1988/2000, outlines the field through two major branches: Pure Translation Studies and Applied Translation Studies (Holmes 1988/2000: 183). This framework remains flexible, allowing for the integration of technological developments across both branches.

1. Pure Translation Studies

This branch focuses on describing and theorizing translation phenomena, answering questions like “why are translations the way they are?” and “what effects do they have on readers?” It is divided into two main categories:

A. Descriptive Translation Studies (DTS)

DTS investigates actual translations and is further divided into three research
approaches:

• Product-oriented DTS: Analyzes existing translations through:

• Text-focused translation description (individual translations).

• Comparative translation description (multiple translations of the same text).

• Function-oriented DTS: Examines how translations affect the target culture.

• Process-oriented DTS: Explores the cognitive processes behind translation using methods like think-aloud protocols (TAP).

Technological tools (e.g., translation memory, concordancing) play a key role in facilitating both product- and process-oriented research.

B. Theoretical Translation Studies (TTS)

TTS aims to build general or partial theories of translation. It is classified into six subcategories:

1. Medium-restricted theories: Analyzes oral (interpreting) vs. written (translation) texts, including modern developments like speech-to-text systems.

2. Rank-restricted theories: Focuses on translation at different linguistic levels (e.g., word, sentence, or text level).

3. Text type-restricted theories: Categorizes texts by function—informative (e.g., manuals), expressive (e.g., literature), and operative (e.g., advertisements).

4. Area-restricted theories: Examines language pairs, language families, or cultural contexts.

5. Problem-restricted theories: Investigates specific translation issues (e.g., grammatical errors).

6. Time-restricted theories: Studies translations across historical periods, which may involve digitizing older texts through technologies like OCR.

2. Applied Translation Studies

This branch is practice-oriented, focusing on improving translation quality, tools, policies, and critiques. It has four primary subcategories:

1. Translator Training: Methods for educating and training translators.

2. Translation Technology: Tools used to assist translation, including:

• Automatic translation tools: Stand-alone or networked machine translation (MT).

• Computer-aided translation tools:

• Translation tools: Translation memory (TM) and terminology management systems (TMS).

• Linguistic tools: Language-dependent (e.g., dictionaries) and language-independent (e.g., OCR, concordancers).

• Localization tools: Tools for adapting software and documentation for different markets.

3. Translation Policy: Establishes regulations and ethical guidelines for translators.

4. Translation Criticism/Evaluation: Analyzes and critiques translation quality and practices, including the assessment of translation technologies.

Holmes’ schema highlights the reciprocal relationship between Pure and Applied Translation Studies—each informs and enhances the other.

Technological Integration

Modern developments in translation technology, including machine translation and digital text analysis, have expanded Holmes’ original schema. Translation tools are now multifunctional and increasingly integrated into comprehensive systems, shaping both theoretical inquiry and practical application.

Summary: The Translation Process (Pre-editing, Machine Translation, and Post-editing)

In process-oriented Translation Studies, Holmes initially focused on investigating mental processes. However, in the context of translation technology, the translation process refers to the stages involved in automating translation, including pre-editing and post-editing. This model incorporates Jakobson’s (1959) categories of intralingual translation (within the same language) and interlingual translation (between languages).

Pre-editing

Pre-editing involves modifying the source-language (SL) text before it undergoes machine translation (MT). This process may include correcting errors, removing ambiguities, and restricting vocabulary and grammar to improve the quality of the machine-generated output. Pre-editing is especially useful when translating content into multiple languages simultaneously, such as in the localization industry (e.g., Nokia’s N-Gage QD documentation).

Controlled languages, a subtype of pre-editing, involve using a restricted, artificially constructed language to ensure clarity and compatibility with MT systems. This contrasts with sublanguages, which evolve naturally within specific fields (e.g., the Météo system translating weather forecasts). Effective pre-editing can reduce the need for post-editing but cannot completely eliminate it due to the inherent limitations of MT systems.

Post-editing

Post-editing refers to the process of refining and correcting MT outputs. While human translations undergo revision, the term “post-editing” is reserved for MT outputs. According to Allen (2003), it involves minimal intervention to preserve the machine’s output while making the text accurate and coherent.

There are two main types of post-editing:

1. Rapid Post-editing – Focuses on accuracy rather than style, suitable for content like meeting minutes.

2. Polished Post-editing – Aims to produce high-quality, publication-ready translations but is only feasible if the MT output is of reasonable quality.

Post-editing remains essential due to the current limitations of MT technology. Initiatives like the Post-Editing Special Interest Group (1999) work to develop guidelines, provide training, and promote post-editing education. Various organizations, including the European Commission Translation Services and General Motors, have established their own post-editing guidelines, though no universal standards currently exist.

Given the ongoing reliance on MT, pre- and post-editing are expected to remain
integral to professional translation practices, especially for technical texts.
Future translation training programs may incorporate these tasks to meet
industry demands.

Summary of Controlled Language

A controlled language is a restricted subset of a natural language with simplified vocabulary, grammar, and style designed to improve translation quality by humans or machines (Kaji 1999: 37). Its primary objective is to enhance clarity, conciseness, and ease of translation while reducing ambiguities and errors (Wojcik and Hoard 1997: 238). Controlled languages typically restrict vocabulary size and impose grammatical limitations, such as limiting sentence length to 20 words, avoiding the passive voice, and using only one instruction per sentence (Nyberg, Mitamura, and Huijsen 2003: 247).
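Restrictions like these lend themselves to automatic checking. The sketch below is hypothetical: it encodes just two of the rules mentioned above (a 20-word sentence limit and avoidance of the passive voice), and the passive-voice test is a crude heuristic (a form of “be” followed by an -ed word), not part of any published controlled-language standard.

```python
import re

def check_sentence(sentence, max_words=20):
    """Flag violations of two toy controlled-language rules."""
    problems = []
    words = sentence.split()
    if len(words) > max_words:  # rule: limit sentence length to 20 words
        problems.append(f"{len(words)} words (limit {max_words})")
    # rule: avoid the passive voice (crude heuristic, misses irregular participles)
    if re.search(r"\b(is|are|was|were|be|been|being)\s+\w+ed\b", sentence, re.I):
        problems.append("possible passive voice")
    return problems

print(check_sentence("The filler cap must be closed by the operator."))
print(check_sentence("Close the filler cap."))  # → []
```

Real controlled-language checkers also enforce approved vocabulary lists and one-instruction-per-sentence rules, which require deeper parsing than this word-level sketch.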

Historically, the concept originated in 1932 with Charles Ogden’s BASIC English, aimed at simplifying English for scientific and commercial communication (Macklovitch 1999: 75). Later developments included Caterpillar Technical English (CTE) for industrial use and AECMA Simplified English for the aerospace industry. AECMA Simplified English evolved into ASD Simplified Technical English, which standardizes terminology and ensures clarity for non-native English speakers.

Controlled languages also exist in other industries such as healthcare and telecommunications and are used globally to minimize translation errors and ensure legal compliance. For instance, ScaniaSwedish is a controlled language used for authoring truck-maintenance manuals in Swedish, which are then translated into other languages.

From a translation perspective, controlled language improves the quality of machine translation by reducing ambiguity and ensuring consistency. Pre-editing source texts using controlled language rules allows machine translation systems to produce better-quality target texts, which are later refined through post-editing. This process reduces translation costs, enhances comprehension, and ensures better accuracy across different languages. However, creating a controlled language for specialized fields can be expensive and time-consuming, requiring years of research and specialized training (Janowski 1998).

Despite its technical focus, controlled language must be applied pragmatically to preserve the clarity and usability of the text without over-simplification. As the demand for machine translation grows, controlled language is becoming increasingly essential in ensuring accurate and efficient cross-linguistic communication.

Chapter 4:

Computer-Aided Translation (CAT) Tools and Resources – Summary

This chapter focuses on the different tools and resources that help professional
translators, especially those working with specialized texts. Key tools include
translation memory systems (TMs) and terminology management systems,
which improve translation speed and accuracy. There are also standards like
Translation Memory eXchange (TMX) that ensure compatibility between
different systems.
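To make the TMX idea concrete, the sketch below builds and reads a minimal TMX 1.4 fragment with Python’s standard xml.etree library. The segment text and language codes are illustrative, and the fragment is simplified: a real TMX file also carries a DOCTYPE and fuller header metadata.

```python
import xml.etree.ElementTree as ET

# Minimal TMX 1.4 fragment: each <tu> (translation unit) holds one
# <tuv> (translation unit variant) per language, with the text in <seg>.
TMX_SAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
<tmx version="1.4">
  <header creationtool="example" srclang="en-US" datatype="plaintext"
          segtype="sentence" adminlang="en-US" o-tmf="none"/>
  <body>
    <tu>
      <tuv xml:lang="en-US"><seg>Close the filler cap.</seg></tuv>
      <tuv xml:lang="es-ES"><seg>Cierre el tapón.</seg></tuv>
    </tu>
  </body>
</tmx>"""

def read_units(tmx_text):
    """Return one {language: segment} dict per translation unit."""
    root = ET.fromstring(tmx_text)
    # xml:lang resolves to the predefined XML namespace when parsed
    xml_lang = "{http://www.w3.org/XML/1998/namespace}lang"
    units = []
    for tu in root.iter("tu"):
        unit = {tuv.get(xml_lang): tuv.findtext("seg") for tuv in tu.iter("tuv")}
        units.append(unit)
    return units

units = read_units(TMX_SAMPLE)
print(units[0]["en-US"])  # Close the filler cap.
```

Because the unit of exchange is the language-tagged segment pair, any TM system that can emit and consume this structure can share memories with another, which is the compatibility the standard provides.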

Workbenches

A workbench (or workstation) is an integrated system combining various translation tools. It usually includes:

• Translation memory systems (TMs)

• Alignment tools (for matching original and translated texts)

• Tag filters (to handle text with formatting codes)

• Electronic dictionaries

• Terminology management systems (to organize and store specialized terms)

• Spell and grammar checkers

Translation Memory Systems

These systems store and retrieve previously translated text segments to help
translators reuse past work. Unlike machine translation, TMs require human
input to approve or reject suggestions. They work best with technical documents
like legal texts, manuals, and reports that contain repeated phrases.

• Language Support: TMs are language-independent and support many international scripts, although some rare languages are not yet fully digitized.

• How It Works: TMs store aligned texts and retrieve matching or similar
phrases during new translations. These systems can also interact with
specialized terminology databases.

Key Features of Translation Memory Systems

1. Perfect Matching: When a new sentence is exactly the same as a stored sentence, it is called a perfect match. This allows translators to reuse the old translation without any changes.

• Example:

• Old: “Close the filler cap.” → “Cierre el tapón.”

• New: “Close the filler cap.” → “Cierre el tapón.”

2. Fuzzy Matching: When a new sentence is similar but not identical to a stored sentence, it is a fuzzy match. Translators must make small changes.

• Example:

• Old: “How to assemble the appliance.” → “Cómo ensamblar el aparato.”

• New: “How to operate the appliance.” → “Cómo operar el aparato.”


• Threshold Setting: Users can adjust the similarity percentage to control how
closely the system matches new and old segments.

• High threshold (90%): Suggests only very similar matches.

• Low threshold (10%): Allows even loosely related matches.

• Handling Ambiguous Words: For words with multiple meanings (e.g., “bow” – a ribbon, a weapon, or the front of a ship), the system provides several options, and the translator must choose the correct one.
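The matching behaviour described above can be sketched with Python’s standard difflib. This is a toy illustration, assuming a dictionary-backed memory and difflib’s similarity ratio as the match score; commercial TM systems use their own, more sophisticated scoring. The segment pairs echo the examples in the text.

```python
from difflib import SequenceMatcher

# Toy translation memory: stored source segment -> stored translation.
memory = {
    "Close the filler cap.": "Cierre el tapón.",
    "How to assemble the appliance.": "Cómo ensamblar el aparato.",
}

def lookup(segment, threshold=0.9):
    """Return (match_type, stored_source, stored_target) for a new segment."""
    if segment in memory:                        # perfect match: reuse as-is
        return ("perfect", segment, memory[segment])
    best, score = None, 0.0
    for src in memory:                           # score every stored segment
        r = SequenceMatcher(None, segment, src).ratio()
        if r > score:
            best, score = src, r
    if best is not None and score >= threshold:  # fuzzy match above threshold
        return ("fuzzy", best, memory[best])
    return ("none", None, None)                  # nothing close enough

print(lookup("Close the filler cap."))               # perfect match
print(lookup("How to operate the appliance.", 0.7))  # fuzzy match at 70%
```

Raising the threshold toward 1.0 suppresses loose suggestions; lowering it surfaces more candidates that the translator must then edit or reject, exactly the trade-off the threshold setting controls.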

Additional TM Features

1. Filters: Filters help translate texts in different formats (e.g., HTML web
pages) without accidentally altering important codes. This preserves the
document’s structure during translation.

2. Segmentation: The system divides the text into smaller segments (usually
sentences) for easy matching. This process works well for most languages but
requires special handling for languages like Chinese and Thai which lack clear
word boundaries.
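A naive version of this segmentation step can be sketched with a regular expression. The sketch assumes that sentence-final punctuation followed by whitespace marks a boundary; real CAT tools add exception lists (abbreviations, ordinals) and language-specific rules, which is why languages like Chinese and Thai need special handling.

```python
import re

def segment(text):
    """Split text into sentence-like segments at ., !, or ? plus whitespace."""
    return [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]

print(segment("Close the filler cap. Check the oil level."))
```

A rule like this already breaks on “Dr. Smith”, which is why production segmenters carry abbreviation lists rather than a single pattern.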

3. Alignment: Translators can add old translations to the memory through text
alignment, which pairs original and translated segments. This process improves
the database for future use.

Conclusion

Translation memory systems are powerful tools for improving translation efficiency and quality. They work best for technical and repetitive content and allow human translators to remain in control. These systems, combined with other tools like terminology databases and filters, significantly enhance professional translation workflows.
