
The augmented interpreter: The future of simultaneous interpreting?

Simultaneous interpreting, rendering a speech live from one language into another, is a tough job. Topics are often complex and call not only for specialised knowledge but also for specific terminology. To translate technical terms accurately, interpreters rely on glossaries, whether printed, handwritten or displayed on their booth laptops. In a pilot study, a team of three researchers at the ZHAW is exploring whether augmented reality technology can support interpreters in the extra effort of looking up terms.

By Anne Catherine Gieshoff

Take an international gathering with experts from all over the world. Such meetings are often interpreted into several languages thanks to a team of conference interpreters. The interpreters’ job is to deliver a highly accurate and fluent rendition and to adopt the language and terminology of the delegates. In preparation, they have therefore compiled a glossary that they can consult during their assignment. Whenever a delegate uses an unfamiliar term, the interpreters quickly look down at the glossary on their laptops in search of the correct equivalent. However, retrieving the term interrupts visual contact with the delegate and, if only for a brief moment, disturbs the delicate balance between comprehending the source text and producing the target text. As a result, additional effort is needed to refocus attention on the delegate who is currently speaking.

Technical terms, numbers, proper names: Visual support for problem triggers

The example above not only illustrates that interpreters actively make use of visual information, but also shows how switching between different types of visual information, for instance to look up an unfamiliar term, can momentarily destabilise the interpreter. To support interpreters, computer-assisted interpreting (CAI) tools have been developed that use speech recognition to suggest translations of technical terms, numbers or proper names uttered by the speaker. These tools, however, do not solve the problem of attending to visual information. As project leader Anne Catherine Gieshoff puts it, “Regardless of whether the interpreter uses a glossary or a CAI tool, they still need to redirect their visual attention because the glossary or the CAI output is displayed on a computer screen, which is usually much lower in the interpreter’s field of vision than the speaker is.”
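To make the idea behind such tools more concrete, here is a minimal, purely illustrative TypeScript sketch of the glossary-matching step that a CAI tool might perform once a speech recogniser has delivered the speaker’s words. The data structure and function names are assumptions made for this example, not the interface of any existing product.

```typescript
// Minimal, hypothetical sketch of the glossary-matching step of a CAI tool.
// It assumes the speech recogniser already delivers a stream of recognised
// words; the glossary structure and function names are illustrative only.

interface GlossaryEntry {
  sourceTerm: string;  // term as the speaker says it, e.g. "Schalldämmung"
  targetTerm: string;  // equivalent to display, e.g. "sound insulation"
}

// Look up each recognised word in the prepared glossary and return the
// entries that should be shown to the interpreter.
function suggestTerms(recognizedWords: string[], glossary: GlossaryEntry[]): GlossaryEntry[] {
  const normalized = recognizedWords.map(w => w.toLowerCase());
  return glossary.filter(entry =>
    normalized.includes(entry.sourceTerm.toLowerCase())
  );
}

// Example: the recogniser hears "Schalldämmung", the tool suggests "sound insulation".
const glossary: GlossaryEntry[] = [
  { sourceTerm: "Schalldämmung", targetTerm: "sound insulation" },
];
console.log(suggestTerms(["die", "Schalldämmung", "verbessern"], glossary));
```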

Video: “The augmented interpreter: The future of simultaneous interpreting?”

Augmented reality: between reality and virtuality

One way to better integrate the visual input from glossaries or CAI tools with other relevant visual information is augmented reality technology. Augmented reality enhances the physical world with virtual content. Unlike virtual reality, augmented reality keeps the physical environment perceptible. “Augmented reality is not the same as virtual reality, where the physical world completely disappears. You can still interact with objects in the real world, but in addition you can also interact with virtual objects that would otherwise not be available. This is what makes augmented reality particularly useful for tasks that require a high degree of immersion, such as interpreting,” explains Martin Schuler, team member and head of the usability lab of the School of Applied Linguistics. Anne Catherine Gieshoff adds, “We believe that this may considerably improve the interpreters’ ergonomics, since they would no longer need to switch their visual attention back and forth between speaker and laptop.”

The project team: Zaniyar Jahany, Anne Catherine Gieshoff and Martin Schuler.

The augmented interpreter: A pilot study on augmented reality technology in interpreting

In order to test the potential of augmented reality technology in simultaneous interpreting, the project team is conducting a pilot study with professional conference interpreters. The participants interpret a technical talk while wearing augmented reality glasses. During the interpretation, they are presented with suggestions for technical terms, numbers or proper names, which pop up in the target language whenever the corresponding terms are uttered by the speaker in the source language. For this purpose, Zaniyar Jahany, team member and mixed reality expert, developed a dedicated application. “The words appear in a semi-transparent blue box. The box can be positioned anywhere in the room. So, you can ‘grasp’ it and place it wherever it is convenient for you. You can also change the font size if you like. At this stage, the application is still quite simple. It does not use speech recognition, but it imitates the features necessary to test the potential of augmented reality for simultaneous interpreting.” The results of the study are expected in fall 2023.
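To illustrate the behaviour described above, the following TypeScript sketch shows one way such a prototype could trigger term prompts from pre-timed cues rather than from speech recognition, with the box position and font size kept as adjustable settings. All names, types and values are assumptions for illustration; they are not the project’s actual application code.

```typescript
// Purely illustrative sketch: term prompts are triggered by pre-timed cues
// that match the rehearsed talk, imitating speech recognition. The display
// settings stand in for the semi-transparent, repositionable box.

interface TermCue {
  timeMs: number;      // when the speaker utters the term in the source speech
  targetTerm: string;  // suggestion shown to the interpreter
}

interface BoxSettings {
  position: { x: number; y: number; z: number }; // where the box is anchored in the room
  fontSizePt: number;                            // adjustable by the interpreter
}

// Schedule each cue relative to the start of the talk and hand the term
// to whatever routine renders the box in the AR display.
function playCues(
  cues: TermCue[],
  settings: BoxSettings,
  show: (term: string, s: BoxSettings) => void
): void {
  for (const cue of cues) {
    setTimeout(() => show(cue.targetTerm, settings), cue.timeMs);
  }
}

// Example usage with a console-based stand-in for the AR display.
const cues: TermCue[] = [{ timeMs: 12_000, targetTerm: "sound insulation" }];
playCues(
  cues,
  { position: { x: 0, y: 1.6, z: 2 }, fontSizePt: 18 },
  (term, s) => console.log(`Show "${term}" at ${s.fontSizePt}pt`)
);
```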

The project is supported by internal funds – ZHAW digital / Digital Futures Fund.



The IUED Institute of Translation and Interpreting is the ZHAW competence centre for multilingualism and language mediation. It is actively engaged in conducting research, offering degree programmes and continuing education courses, and in providing services and consulting in these fields.

The BA in Multilingual Communication and the specialisations in Professional Translation and Conference Interpreting within the MA in Applied Linguistics are practice-focused degree programmes for the communication experts of tomorrow.

The IUED has a strong international reputation. It is a member of prestigious international networks, such as CIUTI and EMT, and it has close ties (link in German only) with institutes and universities in Switzerland and abroad.

