TT Mid Notes
The primary distinction between HAMT and MAHT lies in who leads the
translation process. In HAMT, the machine is the main translator with human
support. In MAHT, the human translator leads the process while using machines
as supportive tools. In practice, however, the boundaries between these
categories are becoming increasingly blurred as modern systems integrate
features from both approaches.
Machine translation systems are also classified based on their intended users.
Pre-Editing in HAMT
Pre-editing is the first stage where a human editor reviews and corrects the
source-language text before it is processed by the machine. The aim is to
remove elements that could cause difficulties during machine translation, such as unusual phrasing or typographical errors. Pre-edited text is easier for the machine to process and generally yields more accurate output.
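As a simple illustration of what an automated pre-editing pass might look like, the Python sketch below normalises whitespace, expands a few hypothetical shorthand forms, and flags overly long sentences; the replacement table and the length threshold are illustrative assumptions rather than features of any particular HAMT system.

import re

# Hypothetical pre-editing rules: expand shorthand and fix a known typo so the
# source text is easier for the MT engine to process.
REPLACEMENTS = {
    "w/": "with",
    "approx.": "approximately",
    "teh ": "the ",
}

def pre_edit(source_text, max_words=25):
    """Normalise the source text and flag sentences that may confuse the MT engine."""
    text = re.sub(r"\s+", " ", source_text).strip()   # collapse stray whitespace
    for bad, good in REPLACEMENTS.items():
        text = text.replace(bad, good)                # apply simple substitutions
    too_long = [s for s in re.split(r"(?<=[.!?])\s+", text)
                if len(s.split()) > max_words]        # long sentences often translate badly
    return text, too_long

edited, flagged = pre_edit("The device ships w/  approx. 2 m of cable and teh manual.")
print(edited)   # "The device ships with approximately 2 m of cable and the manual."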
Post-Editing in HAMT
Post-editing is the final stage where a human editor reviews the machine’s
output and improves it to meet specific linguistic standards. This involves
correcting errors, improving the flow of the text, and ensuring proper use of
terminology. Post-editing is necessary because machine translations often lack
the nuance and context that human translators provide.
Conclusion
Before the 1990s, when the Internet became widely available, the four
translation types identified by Hutchins and Somers (1992) were easy to
distinguish. However, as technology advanced, the boundaries between these
categories became blurred. Modern translation tools now perform multiple
functions, making old classifications less relevant.
Chapter 2
Translation Theory
The long-running debate that began with Cicero and St. Jerome contrasts word-for-word translation (literal meaning) with sense-for-sense translation (overall meaning). It influenced early translations of major works such as the Bible, Greek classical texts, and Buddhist scriptures (Munday 2001).
• House (1977): Overt (visible as a translation) vs. covert (reads like an original)
translation.
• Skopos theory aligns with the “fitness for purpose” approach: the purpose of the translation helps determine whether human or machine translation is appropriate.
• Chesterman (2000) argues that no single theory can explain all translation phenomena because translation is too complex. For machine translation, specialized theories focusing on controlled language are needed (Melby & Warner 1995).
In conclusion, translation theories have evolved from simple literal vs. free
approaches to complex models influenced by linguistics and functional
approaches. These theories are essential for understanding and improving
machine translation systems.
This section examines the relationship between four key groups involved in
translation—linguists, professional translators, translation theorists, and
scientists—and their different approaches to translation theory and practice.
• Linguists’ Viewpoint:
• Translators’ Viewpoint:
• Theorists’ Focus:
• Their primary goal is to explain translation processes rather than directly solve
practitioners’ problems (Fawcett, 1997; Chesterman, 2000).
• Translators’ Focus:
• Shared Ground:
• Linguistic studies on social factors (e.g., gender, age) can inform translation
decisions.
• Tension Points:
• Scientists’ Viewpoint:
• Linguists’ Viewpoint:
The divide between these groups lies in their goals—scientists seek practical
machine applications, while linguists focus on human-centered language
analysis.
Conclusion
• Minimal syntactic analysis (e.g., identifying nouns and verbs) was required.
• Formal Approach:
• Functional Approach:
• Considers the use of language and how words interact in social contexts.
• Became the dominant formal approach but proved unsuitable for machine
translation due to its complexity (Hutchins, 1979).
• Eugene Nida adapted deep structure analysis for translation purposes (Fawcett,
1997).
S → NP VP
NP → N (Jane)
VP → V NP (kicks David)
• Example: PRED (kicks) has attributes SUB (Jane) and OBJ (David).
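Both representations above can be written out directly in code. The short Python sketch below (an illustration for these notes, not taken from any MT system) encodes the constituency tree produced by the rewrite rules and the corresponding predicate-argument structure for "Jane kicks David".

# Constituency structure following the rules S -> NP VP, NP -> N, VP -> V NP,
# written as nested tuples: (label, children...).
tree = (
    "S",
    ("NP", ("N", "Jane")),
    ("VP", ("V", "kicks"), ("NP", ("N", "David"))),
)

# Functional (predicate-argument) structure: the verb is the predicate and the
# noun phrases fill its SUB(ject) and OBJ(ect) attributes.
pred_structure = {"PRED": "kicks", "SUB": "Jane", "OBJ": "David"}

def leaves(node):
    """Read the sentence back off the tree by collecting its leaf strings."""
    if isinstance(node, str):
        return [node]
    label, *children = node
    words = []
    for child in children:
        words.extend(leaves(child))
    return words

assert leaves(tree) == ["Jane", "kicks", "David"]
print(pred_structure)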
• Concepts like Catford’s translation shifts and Vinay & Darbelnet’s procedures
are not integrated into machine translation systems.
• This separation is partly due to the absence of linguists and translators in the
machine translation field (Wilss, 1999).
• Many translation studies texts (e.g., Gentzler, 1993; Riccardi, 2002) neglect
machine translation.
• This division is problematic for students, as technical translation increasingly
involves translation tools.
Descriptive Translation Studies (DTS) investigates actual translations and is further divided into three research approaches: product-oriented, process-oriented, and function-oriented studies.
Technological Integration
Pre-editing
Post-editing
Post-editing refers to the process of refining and correcting MT outputs. While
human translations undergo revision, the term “post-editing” is reserved for MT
outputs. According to Allen (2003), it involves minimal intervention to preserve
the machine’s output while making the text accurate and coherent.
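One way to make the notion of minimal intervention concrete is to measure how much of the raw MT output survives into the post-edited version. The sketch below uses Python's difflib for a rough word-level comparison; it is a toy indicator rather than a standard industry metric.

import difflib

def post_edit_effort(mt_output, post_edited):
    """Return a rough 0-1 score of how much the post-editor changed, word by word."""
    mt_words, pe_words = mt_output.split(), post_edited.split()
    similarity = difflib.SequenceMatcher(None, mt_words, pe_words).ratio()
    return 1.0 - similarity   # 0.0 = output left untouched, 1.0 = completely rewritten

raw_mt = "The button red must be pressed for start the machine."
revised = "Press the red button to start the machine."
print(f"Estimated edit effort: {post_edit_effort(raw_mt, revised):.2f}")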
Given the ongoing reliance on MT, pre- and post-editing are expected to remain
integral to professional translation practices, especially for technical texts.
Future translation training programs may incorporate these tasks to meet
industry demands.
Chapter 4:
This chapter focuses on the different tools and resources that help professional
translators, especially those working with specialized texts. Key tools include
translation memory systems (TMs) and terminology management systems,
which improve translation speed and accuracy. There are also standards like
Translation Memory eXchange (TMX) that ensure compatibility between
different systems.
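For a sense of what TMX data looks like, the sketch below writes a single translation unit to a file using Python's standard xml.etree module. The header attribute values and the English-French segment pair are placeholders chosen for illustration; real tools record richer metadata.

import xml.etree.ElementTree as ET

# Minimal TMX 1.4 document with one translation unit (tu) holding the same
# segment in English and French. Header attribute values here are placeholders.
tmx = ET.Element("tmx", version="1.4")
ET.SubElement(tmx, "header", {
    "creationtool": "demo", "creationtoolversion": "0.1",
    "segtype": "sentence", "o-tmf": "demo", "adminlang": "en",
    "srclang": "en", "datatype": "plaintext",
})
body = ET.SubElement(tmx, "body")
tu = ET.SubElement(body, "tu")
for lang, text in [("en", "Press the power button."),
                   ("fr", "Appuyez sur le bouton d'alimentation.")]:
    tuv = ET.SubElement(tu, "tuv", {"xml:lang": lang})
    ET.SubElement(tuv, "seg").text = text

ET.ElementTree(tmx).write("sample.tmx", encoding="utf-8", xml_declaration=True)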
Workbenches
• Electronic dictionaries
Translation memory systems store and retrieve previously translated text segments so that translators can reuse past work. Unlike machine translation, a TM requires human input to approve or reject its suggestions. TMs work best with technical documents that contain repeated phrases, such as legal texts, manuals, and reports.
• How It Works: TMs store aligned texts and retrieve matching or similar phrases during new translations; these systems can also interact with specialized terminology databases. A minimal sketch of the matching step follows below.
• Example:
• Example:
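A minimal sketch of that matching step, assuming a tiny in-memory store and difflib-based fuzzy scoring (real TM systems use their own matching algorithms, thresholds, and databases):

import difflib

# A toy translation memory: previously approved source -> target segment pairs.
memory = {
    "Click the Save button to store your changes.":
        "Cliquez sur le bouton Enregistrer pour sauvegarder vos modifications.",
    "The warranty does not cover water damage.":
        "La garantie ne couvre pas les dégâts des eaux.",
}

def lookup(new_segment, threshold=0.75):
    """Return (stored source, stored translation, score) for the best fuzzy match."""
    best = None
    for source, target in memory.items():
        score = difflib.SequenceMatcher(None, new_segment.lower(), source.lower()).ratio()
        if score >= threshold and (best is None or score > best[2]):
            best = (source, target, score)
    return best   # None means no match: the translator starts from scratch

match = lookup("Click the Save button to keep your changes.")
if match:
    source, suggestion, score = match
    print(f"{score:.0%} match -> suggest: {suggestion}")   # translator accepts, edits, or rejects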
Additional TM Features
1. Filters: Filters help translate texts in different formats (e.g., HTML web
pages) without accidentally altering important codes. This preserves the
document’s structure during translation.
2. Segmentation: The system divides the text into smaller segments (usually sentences) for easy matching. This process works well for most languages but requires special handling for languages such as Chinese and Thai, which lack clear word boundaries.
3. Alignment: Translators can add old translations to the memory through text alignment, which pairs source and target segments. This process improves the database for future use (see the segmentation and alignment sketch after this list).
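As referenced in the list above, here is a minimal sketch of segmentation and alignment. The regular-expression split and the one-to-one pairing are simplifying assumptions; real tools apply language-specific segmentation rules and more robust alignment.

import re

def segment(text):
    """Naive sentence segmentation: split after ., ! or ? followed by whitespace."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s.strip()]

def align(source_text, target_text):
    """Pair source and target sentences one-to-one for import into the memory."""
    return list(zip(segment(source_text), segment(target_text)))

en = "Open the cover. Remove the old battery. Insert the new battery."
fr = "Ouvrez le couvercle. Retirez l'ancienne pile. Insérez la nouvelle pile."
for src, tgt in align(en, fr):
    print(f"{src}  <->  {tgt}")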
Conclusion