How Facebook’s ConvNet AI Is Redefining Online Translation


Facebook’s ConvNet AI translation method is nine times faster than the current leading approach built on recurrent neural networks (RNNs), while also delivering more accurate, better-localized translations.

According to Facebook Artificial Intelligence Research (FAIR), language translation is central to Facebook’s mission of creating a more open and connected world. For the 800 million Facebook users who rely on translated content every month, this can only be good news: sharing content with friends and family across the world just got a whole lot easier, thanks to FAIR’s ConvNet research and development.

Convolutional Neural Networks vs Recurrent Neural Networks

Facebook headed down a different path with language-recognition AI when it chose NYU professor Yann LeCun to run its new artificial intelligence lab, FAIR, back in 2014. LeCun has a long history of working with convolutional neural networks (ConvNets, or CNNs) and kept pursuing their use across a wide range of AI applications at a time when many others in the industry felt they had failed to deliver on the promise of faster processing of visual information.

While Facebook and LeCun focused on developing ConvNets, many others in the tech sector put their energies into recurrent neural networks (RNNs), a form of AI that uses internal memory to process arbitrary sequences of inputs. Applied to language translation, this means an RNN works through a sentence one word at a time: it translates the first word of the sentence before moving on to translate, or predict, the next word in the target language.
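To make this concrete, here is a minimal sketch of that word-by-word loop, written in PyTorch with a GRU cell standing in for the recurrent decoder. The vocabulary size, layer sizes, and greedy word selection are illustrative assumptions, not details of Facebook’s actual system.

```python
import torch
import torch.nn as nn

# Toy recurrent decoder: sizes and vocabulary are illustrative only.
VOCAB_SIZE, EMBED_DIM, HIDDEN_DIM = 1000, 64, 128

embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
rnn_cell = nn.GRUCell(EMBED_DIM, HIDDEN_DIM)  # one recurrent step at a time
project = nn.Linear(HIDDEN_DIM, VOCAB_SIZE)   # hidden state -> next-word scores

def translate_greedy(start_token, max_len=10):
    """Emit target words one at a time; each step waits on the previous one."""
    hidden = torch.zeros(1, HIDDEN_DIM)          # decoder state carried word to word
    token = torch.tensor([start_token])
    output = []
    for _ in range(max_len):
        hidden = rnn_cell(embed(token), hidden)  # must finish before the next word
        token = project(hidden).argmax(dim=-1)   # greedily pick the most likely word
        output.append(token.item())
    return output

print(translate_greedy(start_token=1))
```

The point to notice is the loop: the hidden state for word five cannot be computed until the hidden state for word four exists.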

This sequential system was initially thought to be the better choice for language recognition and translation, as it gave a higher level of accuracy than its then-underdeveloped sibling, the ConvNet.

ConvNet: A Better Fit For GPUs

A ConvNet makes the best use of a computer’s GPU – often described as the faster, more powerful workhorse of the system – because the GPU can carry out a large number of tasks in parallel. This makes it the perfect partner for ConvNet translation, which processes all the words in a sentence at the same time and so can capture complex relationships across the whole input, whether that input is natural speech or typed messages.
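As a rough illustration of what “all the words at the same time” means, the sketch below embeds a whole sentence and runs a single 1D convolution over it in one pass; the dimensions and random word IDs are purely illustrative assumptions, not Facebook’s production architecture.

```python
import torch
import torch.nn as nn

# Toy convolutional encoder: sizes are illustrative, not a production model.
VOCAB_SIZE, EMBED_DIM, HIDDEN_DIM, KERNEL = 1000, 64, 128, 3

embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
conv = nn.Conv1d(EMBED_DIM, HIDDEN_DIM, kernel_size=KERNEL, padding=KERNEL // 2)

sentence = torch.randint(0, VOCAB_SIZE, (1, 12))  # one 12-word sentence as word IDs

x = embed(sentence).transpose(1, 2)               # (batch, embed_dim, sentence_length)
features = torch.relu(conv(x))                    # every word position computed in one pass

print(features.shape)                             # torch.Size([1, 128, 12])
```

Because the convolution is applied to every position independently, a GPU can evaluate all twelve word positions simultaneously instead of stepping through them one by one.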

Because an RNN works in a linear way – from left to right, or vice versa, translating one word at a time – each step has to wait for the one before it, so it cannot make full use of GPU parallelization. Much of its work ends up as sequential processing of the kind better suited to the CPU: the slower, yet ‘smarter’, component of your computer.

Thanks to this more efficient use of hardware, Facebook and LeCun’s ConvNet approach is far better suited to scaling translation up across languages – potentially even to all 6,909 living languages spoken around the world.

ConvNet: The Future of Translation

It is easy to see why Facebook has settled on the ConvNet as the foundation for its future translation efforts. There is no doubt that it will play a positive role in the way users communicate on the social network, and it will potentially reach the other companies Facebook owns – WhatsApp, Instagram, and possibly even Oculus VR.

An additional benefit of ConvNet translation on Facebook is its ability to learn language in real time. Each time you ask for a translation of your Spanish-speaking friend’s post, you are teaching the algorithm to translate more accurately and helping it keep up with current language use.

Thanks to Facebook’s willingness to share its AI research, the completed ConvNet translation work has been made available to others as open-source code, allowing researchers and developers everywhere to build on these groundbreaking discoveries in AI translation.

 
