Why Google’s investment in artificial intelligence will not change how we approach professional translation

Last week, Google acquired London-based artificial intelligence (AI) company DeepMind Technologies for a reported $400 million.

The online giant made its motives clear: to improve and speed up its search and translation functions.

Language translation has long been considered the holy grail of AI. In an article for The Atlantic published last year, computer programmer James Somers provides an in-depth history of machine translation. The initial method involved bringing professional linguists together in a room as developers tried to “translate” their knowledge into a set of rules that a computer program could understand.

This rule-based approach inevitably failed, because language is “too big and too protean; for every rule obeyed, there’s a rule broken.”

IBM developed a more successful approach to machine translation in the late 1980s with a project called Candide, using a technique known as “machine learning,” which has since become the cornerstone of AI.

Essentially, you feed the machine data – in this case millions of sentences, in both the source and target languages – assign the right translation to each word, develop an algorithm that tracks how often one word follows another, and test it over and over again. For every mistake the machine makes, corrections are fed in and the algorithm is adjusted.
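To make that counting step concrete, here is a minimal Python sketch – a toy illustration, not Candide’s or Google Translate’s actual code – of a bigram model: it counts how often one word follows another in a sample corpus, then uses those frequencies to score how “fluent” a candidate sentence looks. The corpus, smoothing constant and scoring function are all invented for illustration.

```python
from collections import defaultdict

# Toy target-language corpus; a real system would use millions of sentences.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat sat on a mat",
]

# Count how often each word follows another (a bigram model),
# using <s> and </s> as sentence-boundary markers.
bigram_counts = defaultdict(lambda: defaultdict(int))
unigram_counts = defaultdict(int)

for sentence in corpus:
    words = ["<s>"] + sentence.split() + ["</s>"]
    for prev, curr in zip(words, words[1:]):
        bigram_counts[prev][curr] += 1
        unigram_counts[prev] += 1

def fluency(sentence: str) -> float:
    """Score a candidate sentence by multiplying its bigram frequencies.

    Unseen bigrams get a small smoothed probability so the score
    never collapses to zero (additive smoothing, chosen arbitrarily here).
    """
    words = ["<s>"] + sentence.split() + ["</s>"]
    score = 1.0
    for prev, curr in zip(words, words[1:]):
        count = bigram_counts.get(prev, {}).get(curr, 0)
        total = unigram_counts.get(prev, 0)
        score *= (count + 0.1) / (total + 0.1 * len(unigram_counts))
    return score

# The model prefers word orders it has seen before:
print(fluency("the cat sat on the mat"))   # relatively high score
print(fluency("mat the on sat cat the"))   # far lower score
```

A statistical translation system combines something like this fluency score with per-word translation probabilities, picking the candidate sentence that best balances the two; the corrections mentioned above amount to updating these counts and weights.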

Google Translate essentially runs on the same technique, only feeding the machine virtually unfathomable amounts of data. (As Somers points out, who, after all, owns more data than Google?) So far, this seems to have worked well enough – established machine learning techniques matched with newly adapted algorithms and literally trillions of word and sentence combinations have given us the Google Translate we know today – which is, well, serviceable, but by no means of a professional standard. Phrases translated with the aid of Google’s translation algorithms still have a knack for jumping off the page.

The search giant’s acquisition of DeepMind Technologies signals its intention to adapt new machine learning algorithms for its search and translation functions, delivering faster and more accurate results. Google said of its work on language, translation and speech processing:

“In all of those tasks and many others, we gather large volumes of direct or indirect evidence of relationships of interest, and we apply new algorithms to generalise from that evidence to new cases of interest.”

AI is still based on behavioural prediction, which works in games and simulations, but language is as much a behaviour as it is a product of intelligence and brain processing. Language, after all, is malleable: new metaphors and idioms are coined every day in every language, meanings shift, and nuances rarely sound as resonant in another language.

Google Translate will no doubt improve over the coming years, with demand for quick, free translation ever rising. However, based on interviews with Google Translate developers, Somers writes that the closer machine translation edges to the level of a professional translator, the steeper the road becomes.

Despite the advances made in AI, it remains outmatched by the human mind’s capacity for intelligence, comprehension and imagination. Our reliance on the internet may have made us lazier, or even dumbed us down, but for the professional linguist, language, writing, and the ability to decipher hidden meaning and articulate emotion remain an art form.

Don’t get me wrong, industries will benefit from new technologies and innovations, but professional human translation will remain the heart and soul of the language industry.