The number of connected wearable devices worldwide will pass the 1 billion mark this year, and wearable translation devices are a key part of that growth. Let’s take a look at some of the more interesting devices on the market, and explore the technology that makes them viable.
As any “Star Trek” aficionado worth their salt will recall, Captain Kirk used a futuristic-looking handheld translator to communicate with the Companion in the year 2267. The good news is that recent developments in the application of machine translation and language interpretation to the wearables market mean that we might get our hands on wearable technology that can perform accurate translations discreetly and in real time – almost 250 years earlier than the crew of the Starship Enterprise.
The translation processes of the latest generation of handheld devices use speech recognition, machine translation, and machine learning to render your words into a recognizable form in another language. Speech recognition technology is already widely used in voice search applications, virtual assistants, and speech-to-text applications. The engines that power it are constantly improving over time, making better and more accurate “on-the-fly” translations possible.
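The two-stage pipeline described above – speech recognition feeding machine translation – can be sketched in a few lines of Python. Both stages below are toy lookup-table stand-ins for illustration only; real devices run acoustic models and neural MT engines, and every function name here is hypothetical:

```python
# Toy sketch of the speech-recognition -> machine-translation pipeline.
# Both stages are lookup-table stand-ins, not real engines.

def recognize_speech(audio_id: str) -> str:
    """Stand-in ASR: map an audio clip identifier to transcribed text."""
    transcripts = {"clip_001": "where is the station"}
    return transcripts.get(audio_id, "")

def translate(text: str, target_lang: str) -> str:
    """Stand-in MT: phrase-table lookup into the target language."""
    phrase_table = {
        ("where is the station", "es"): "dónde está la estación",
    }
    return phrase_table.get((text, target_lang), text)

def translate_speech(audio_id: str, target_lang: str) -> str:
    """Chain the two stages, as a wearable's firmware would."""
    return translate(recognize_speech(audio_id), target_lang)

print(translate_speech("clip_001", "es"))  # dónde está la estación
```

The point of the structure is that each stage is swappable: better recognition or translation engines slot in behind the same interface, which is how these devices improve over time.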
Here are just a few of the systems that show the most promise.
Fujitsu’s newest translation device has been specifically designed to let people such as first responders and police officers communicate when their hands are occupied, and the technology is flexible enough to be adapted to the tourism, public services, and healthcare markets. The technology differentiates speakers by using small omnidirectional microphones, an ingenious modification of the shape of the sound channel, and improved speech detection technology that is highly resistant to background noise.
Similar in look and function to a hearing aid, Waverly Labs’ Pilot uses automatic speech recognition, machine translation, and speech synthesis to instantly translate speech using paired devices, allowing a user to have an “almost fluid” conversation with anyone speaking another language. The heart of the process is Waverly’s app, which both partners in a multilingual conversation need to download onto their phones (it’s free on both iOS and Android). Users then “sync” their conversation through a matching QR code on the app, press a button, and talk into the earpiece’s microphone to record what they want to say. The user’s voice is then piped through Waverly’s machine translation software, which converts it to text on the second user’s app. If that user also has their own earpiece, they will hear a translated version of what the first user said, albeit via a computerized voice.
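The paired-conversation flow the Pilot uses can be modeled as a small session object: two users join a shared session (in Waverly’s app this happens by matching a QR code), one speaks, and each partner receives the utterance translated into their own language. All class and method names below are hypothetical illustrations, and the translation step is a toy lookup:

```python
# Hedged sketch of a paired-conversation session: users join with a
# shared code, and each utterance is translated for the other party.
# Names are invented for illustration; the translate step is a toy.

def toy_translate(text: str, lang: str) -> str:
    table = {("hello", "fr"): "bonjour"}
    return table.get((text, lang), text)

class Session:
    def __init__(self, code: str):
        self.code = code    # shared pairing code (the QR payload in the app)
        self.users = {}     # user name -> preferred language

    def join(self, user: str, lang: str) -> None:
        self.users[user] = lang

    def say(self, speaker: str, text: str) -> dict:
        """Translate the speaker's utterance for every other participant."""
        return {
            user: toy_translate(text, lang)
            for user, lang in self.users.items()
            if user != speaker
        }

session = Session(code="QR-1234")
session.join("alice", "en")
session.join("bob", "fr")
print(session.say("alice", "hello"))  # {'bob': 'bonjour'}
```

In the real product the translated text is also synthesized to speech and played through the partner’s earpiece; the sketch stops at the text stage.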
Google’s Pixel Buds pair with the Pixel handset over Bluetooth to offer real-time translation in over 40 languages via the Google Translate app. All you have to do is say “Google, help me speak (language)” to launch conversation mode in the app. At that point, you’ll hear the translated message directly through the Pixel Buds. When it comes time to respond, you can use the Google Translate app to speak to another user in their language.
Designed specifically for travelers, Logbar’s Ili uses voice activation without the need for an internet connection and can repeat phrases back to a user in English, Japanese, Mandarin, or Spanish in as little as 0.2 seconds. The user simply pushes and holds a button, speaks into the built-in microphone after a beep, then releases the button, and the input is translated into one of the supported languages. Because the Ili is aimed at the travel sector, it’s designed for translation during typical travel activities such as shopping, dining, sightseeing, and riding in a taxi. It cannot translate more detailed conversations, technical words, or slang.
For something really outside the box, SignAloud’s award-winning gloves are designed to translate American Sign Language into text and speech. The gloves are equipped with sensors that record hand position and movement and send data wirelessly over Bluetooth to a computer, which then uses sequential statistical regressions to analyze the gesture data, much as an AI-powered neural network would. If the data matches a gesture, an associated word or phrase is spoken through a speaker. In the future, SignAloud hopes that the gloves could also be commercialized for use in other fields, like medical technology and enhanced dexterity in virtual reality.
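The matching step can be illustrated with a deliberately simplified stand-in: compare an incoming sensor reading against stored gesture templates and emit the associated word only when the closest template is within a threshold. The feature vectors and vocabulary below are invented for illustration; the actual gloves run sequential statistical regressions over streaming sensor data rather than this one-shot nearest-template check:

```python
# Simplified stand-in for glove gesture matching: nearest template
# within a distance threshold wins. Templates and readings are
# invented example data, not real SignAloud values.
import math

TEMPLATES = {
    "hello":     [0.9, 0.1, 0.4, 0.7],   # hypothetical flex/position features
    "thank you": [0.2, 0.8, 0.6, 0.3],
}

def classify_gesture(reading, threshold=0.5):
    """Return the word for the nearest template, or None if nothing is close."""
    best_word, best_dist = None, float("inf")
    for word, template in TEMPLATES.items():
        dist = math.dist(reading, template)  # Euclidean distance
        if dist < best_dist:
            best_word, best_dist = word, dist
    return best_word if best_dist <= threshold else None

print(classify_gesture([0.85, 0.15, 0.45, 0.65]))  # hello
print(classify_gesture([9.0, 9.0, 9.0, 9.0]))      # None
```

The threshold is what lets the system say “no match” instead of forcing every hand movement onto a word, which matters when the speaker is mid-transition between signs.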
For years, the general consensus has been that there is still a way to go before wearables can translate speech with the kind of accuracy and immediacy needed for high-level business meetings or medical emergencies. While the technology is still in its relative infancy, and most devices still struggle with complexities like cultural nuances, there’s no denying that the last few years alone have seen an exponential expansion in what wearable translation devices can do.
The Argos way
At Argos Multilingual, we’ve been at the forefront of translation technology since our founding in 1996, and we’ve always made sure that the translation technologies we create and adopt make life easier for our translators as well as for our clients. We’re well positioned to take a leadership role in the wearable translation movement, and we believe that our accumulated expertise, innovative mindset, and talented people will all play a key role in making the technology viable. As a matter of fact, our team has already been working with leaders in the natural language processing (NLP) field to provide quality data that leads to the improvement of voice recognition and automated translation technologies. This includes projects that train computers to understand challenging accents or individuals with speech impediments. Visit us to see why we’re confident in our ability to impact the future of translation technology – whatever it may be.