CMU Researchers Help Interpreters Find “Le Mot Juste”

Friday, February 15, 2019 - by Scott Fybush

The Language Technologies Institute’s Computer-Aided Interpreter team (l to r): Graham Neubig, assistant professor in LTI; Craig Stewart and Nikolai Vogler, Ph.D. candidates in LTI. (Photo by Kevin O’Connell)

Think about all the challenges that go into interpretation: a human sitting in a booth, listening to live speech in one language and somehow smoothly repeating it in another language, sometimes for hours at a time. Then imagine our interpreter is working at a specialized conference where the speaker is using arcane terminology, forcing them to come up with rarely used words and phrases at a moment’s notice.

Add a computer into the mix, and our interpreter’s job gets easier, right? Not so fast, said Graham Neubig, an assistant professor in the Language Technologies Institute (LTI) who’s been leading a research project to determine how machine translation can help with live interpretation — and just as critically, where and how it gets in the way of smooth translation.

Neubig has been fascinated by the intersection of language and computers since he was a foreign exchange student in Japan while studying computer science at the University of Illinois in the early 2000s. He later returned to Japan to teach English, and brought his work on natural language processing to CMU’s LTI in 2016.

The project Neubig’s team is working on pairs human interpreters with a computer assistant that listens along with them, offering up just enough on-screen help with specialized terminology and staying silent when its services aren’t required.

The idea, said Neubig, is to let both human and computer do what they do best — and the challenge is to find how to strike that balance. Humans, he said, excel at understanding the sort of nuances that are still insurmountable obstacles for machine translation.

“I think human interpreters are incredibly good at dealing with adverse situations,” Neubig said. The precise choice of words can be sensitive, he notes, and the wrong choice of idiom can create an unwelcome incident. “If you interpret in a particular way, it would be very rude in some cultures, for example. So human interpreters know that, and they can adjust accordingly. All of these things would be huge problems if you tried to build a fully automatic system.”

Computers, of course, have their own strengths, especially when it comes to instantly retrieving an almost infinite list of stored words and phrases, no matter how obscure. How do you say “556” in Italian? Numbers, too, are something machine-assisted translation can do well.


Building a model

At this past summer’s annual meeting of the Association for Computational Linguistics (ACL), Neubig and his team presented some of the initial findings from that work with real-world translators. One key piece of that work, it turns out, focuses on the importance of listening to the interpreter’s output, using the quality of their work to help the computer determine how much help to provide and, just as importantly, when to leave them alone.

“The idea is that if people are struggling, then we should potentially be giving them more help, but if they’re doing fine, we don’t need to distract them by putting lots of stuff on the screen,” said Neubig.

So Neubig and his team started with an existing machine-learning model designed to assess the quality of automated output, then customized it with additional criteria to analyze the output of a human interpreter.

“We added other things, like how many pauses is the person making, how many times are they saying things like ‘um’ or ‘uh,’ and what is the difference between the number of words in the output and the number of words in the input,” he said. “Because if the output is much shorter than the input, then that’s an indicator that things are not going well.”
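The disfluency signals Neubig lists — pauses, filler words, and the gap between input and output length — could be computed along these lines. This is a minimal sketch, not the project’s actual code; the function name, the filler list, and the inputs are all invented for illustration:

```python
# Hypothetical feature extractor for one interpreted speech segment.
# Assumes the source and interpreter output are already tokenized into
# word lists and that a pause count comes from upstream audio processing.
FILLERS = {"um", "uh", "er"}

def fluency_features(source_words, interp_words, pause_count):
    """Return simple disfluency signals for one interpreted segment."""
    filler_count = sum(1 for w in interp_words if w.lower() in FILLERS)
    # An output much shorter than the input suggests the interpreter
    # is dropping content -- the indicator Neubig describes.
    length_ratio = len(interp_words) / max(len(source_words), 1)
    return {
        "pauses": pause_count,
        "fillers": filler_count,
        "length_ratio": length_ratio,
    }
```

In a setup like this, the features would feed into the quality-estimation model alongside whatever the underlying machine-translation assessment already produces.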

Whether the interpreter is struggling or cruising along smoothly, Neubig’s model still has plenty of work to do behind the scenes to determine what help it should or shouldn’t be putting on screen. For example, he said, the system has to know which words are common enough that they never need to be translated on screen.
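One simple way to encode “common enough that it never needs to be translated on screen” is a corpus-frequency cutoff. The sketch below is an assumption about how such a filter might work, not the team’s method; the counts and the threshold are invented:

```python
# Invented word counts standing in for any large background corpus.
corpus_counts = {"the": 120000, "conference": 450, "arbitrage": 3}

def needs_display(term, counts, min_count=100):
    """Surface a term on screen only if it is rarer than min_count
    occurrences in the background corpus (unseen terms count as 0)."""
    return counts.get(term.lower(), 0) < min_count
```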

“In order to learn this model,” said Neubig, “we have a database of interpreted speech, and we have the terms on the input side, and then we also have annotations of whether the interpreter actually got that term correct or not. This gives us yes/no labels about whether the interpreter is going to need help with that term. Then we train our model to try to accurately predict those yes/no labels.”
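A toy version of that supervised setup might look like the following: one hand-picked feature (how rare a term is) and the yes/no annotations Neubig describes, with training reduced to finding the decision threshold that best reproduces the labels. The real model uses far richer features and a learned classifier; every name and number here is hypothetical:

```python
# Hypothetical sketch: fit a rarity threshold that best predicts the
# annotated yes/no "interpreter needed help with this term" labels.

def train_threshold(term_rarity, needs_help):
    """term_rarity: list of floats (higher = rarer term).
    needs_help: parallel list of bools from the annotations.
    Returns the rarity cutoff with the highest label accuracy."""
    best_t, best_acc = 0.0, -1.0
    for t in sorted(set(term_rarity)):
        preds = [r >= t for r in term_rarity]
        acc = sum(p == y for p, y in zip(preds, needs_help)) / len(needs_help)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Toy data: the rarest terms were the ones the interpreter missed.
rarity = [0.1, 0.2, 0.8, 0.9]
labels = [False, False, True, True]
cutoff = train_threshold(rarity, labels)
```

At prediction time, any term scoring at or above the learned cutoff would be flagged as one the interpreter is likely to need help with.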


We’re going to need a bigger database

Putting together that database has been quite a challenge, Neubig said.

“It’s actually very difficult to get data. There are some databases of interpreter speech, but these are highly curated and they took a lot of time and effort to create. And they’re still, from a machine-learning perspective, very small. So getting something to learn what we want to learn properly from these relatively small collections of data has been the biggest challenge.”

Neubig and his team at CMU are already teaming up with outside partners, including the Department of Interpretation and Translation at Italy’s University of Bologna, to attempt to build a larger database — and then, perhaps, to try out his system in the real world with interpreters learning their craft.

“I think some form of what we’re building could become a commercial product relatively soon,” Neubig said. “The technology is kind of there, but we’re still working out the kinks.”

Because the system is listening to its human interpreter and assessing how well they’re doing, Neubig said it may eventually provide a side benefit, giving interpretation providers immediate feedback on how well (or how poorly) the interpreters they hire are doing their jobs for their clients. Several companies have already contacted Neubig’s team to express interest in that aspect of the technology, though he said no formal work on that part of the project is yet underway.

Along those same lines, Neubig said the technology could have huge benefits in training new interpreters, providing immediate feedback as they practice.

“They go along and interpret,” he said, “and then post facto, it goes in and says, ‘Maybe in this particular part of your interpretation you weren’t doing as well. Would you like to go back and review that and see if there was any way you could do better? Maybe these parts are the things that you should concentrate on.’ So I think that might be another direction that we want to pursue in the future.”

Working with interpretation schools has also allowed Neubig’s team to sharpen its focus, dropping some potential features that the schools said wouldn’t be of any use to their students. 

For More Information, Contact:

Bryan Burtner | bburtner@cs.cmu.edu | 412-268-2805