Cars

A Lincoln That Gets You: How SYNC® Understands Your Commands

Lincoln’s in-vehicle communications and entertainment system with voice control is continuously developed and updated to increase your productivity and connectivity. SYNC’s voice control recognises commands in different languages and accents thanks to its language model and decoder software. User-generated data from SYNC 3’s over-the-air diagnostics and analytics helps engineers tailor updates that streamline the operation of voice command.

When the intuitive SYNC® infotainment system launched 13 years ago, voice-controlled features revolutionised the way we interacted with our vehicles. Today, as SYNC 3, the innovative communications and entertainment system has evolved to allow more and more people globally to enjoy its growing number of advanced features.

Currently, SYNC supports voice commands in over 25 languages, with Lincoln and Ford’s Core Speech Technology team, based in Dearborn, Michigan, leading innovations for users around the globe. The team is headed by Yvonne Gloria, who has been involved with the system’s innovation since SYNC 3 debuted.

A software engineer by trade, Gloria experienced first-hand the need to simplify the usability of systems like SYNC.

“Not all users of our software are engineers. Just because I developed the software to do a specific task, the customer shouldn’t be forced to see it that way. This led me to study how people use computers and learn software, which made me think like a customer rather than an engineer,” says Gloria.


How Does SYNC Know What I’m Saying?

SYNC’s voice-activated system has a speech engine which acts kind of like a speech recognition “brain”, with a language model and decoder software within that brain to break down, analyse, and understand verbalised commands.

Delving even deeper, the language model is a vast bank of words or commands which are paired to specific tasks. For example, the command, “Call John Doe” will be listed in over 25 languages, with a large catalogue of commands corresponding to the voice-activated features within SYNC all listed in the language model.

The decoder software takes the sound characteristics of each command and matches them against the list of words in the language model. Using the same example, when “Call John Doe” is said, the decoder analyses the sound characteristics spoken into the system, then finds the closest matching set of characteristics within the language model.
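The language-model-plus-decoder pairing described above can be pictured as a lookup over a bank of known phrases. The sketch below is purely illustrative (the phrase bank, task names, and similarity scoring are assumptions, not Lincoln or Ford code), using simple text similarity in place of real acoustic matching:

```python
# Toy sketch of a language model + decoder, assuming a text-similarity
# stand-in for acoustic matching. All names and data are illustrative.
from difflib import SequenceMatcher

# "Language model": a bank of command phrases, each paired with a task.
LANGUAGE_MODEL = {
    "call john doe": "phone.dial",
    "play radio": "audio.radio",
    "navigate home": "nav.route_home",
}

def decode(utterance: str) -> str:
    """Match the heard phrase to the closest command in the bank."""
    best_task, best_score = "", 0.0
    for phrase, task in LANGUAGE_MODEL.items():
        score = SequenceMatcher(None, utterance.lower(), phrase).ratio()
        if score > best_score:
            best_task, best_score = task, score
    return best_task

print(decode("Call John Doe"))  # → phone.dial
```

A real decoder compares acoustic features rather than spelled-out text, which is also what lets it cope with accents: training the model on several dialects widens the set of characteristics a command can match.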

Ziad Chaaban, Engineering Supervisor, Product Development, Ford Direct Markets, said: “Of considerable importance to Lincoln’s Middle East customers was the addition of Arabic to the SYNC system in 2017. But different accents from the region needed to be taken into consideration also. With diverse dialects across the region, the language model is taught to understand Classical Arabic as well as Gulf, North African, and Levantine Arabic, which gives the decoder software the best chance of recognising a command.”

User-centric Development

Constant evolution has helped Core Speech Technology engineers to refine and expand SYNC’s functionality. By analysing the ways customers walk through SYNC, engineers are able to identify ways of making the system more intuitive, either by streamlining tasks or by making them easier to access.

With over-the-air diagnostics and analytics on SYNC 3, the engineers can get a steady flow of voice-recorded data showing how the customers advance through SYNC for different tasks. Engineers detect the common errors that users encounter and help streamline the tasks, rather than just leaving the user to figure it out for themselves.

“It’s a never-ending activity once the programme starts, until it goes to the end of its life cycle, because you’re constantly taking market feedback to create updates further down the road,” said Stephen Cooper, Voice Recognition Features lead, SYNC 3.

Thanks to this user-generated data, certain voice commands, such as searching navigation, have been cut from multiple steps to one. Between 80 and 85 per cent of voice commands are now completed in a single step thanks to the efforts of the Core Speech Technology team.


The Future of Voice Commands

As technology improves, and as buttons are reduced in number, or even eliminated entirely, in favour of bigger and more prevalent screens in future Lincoln vehicles, voice command technology will have an even bigger part to play.

The Core Speech Technology team believes the next step is the ability to interact naturally with the system, rather than sticking to a rigid set of commands. Wake words, phrases that prompt the system to take notice of a command, are something they are looking to incorporate into future SYNC systems.
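The idea behind a wake word can be shown with a minimal sketch: the system ignores speech until the wake phrase is heard, then treats what follows as a command. The wake phrase and function below are hypothetical, not an announced Lincoln feature:

```python
# Illustrative wake-word gate. "hey lincoln" is a hypothetical phrase
# chosen for the example, not a real product wake word.
WAKE_WORD = "hey lincoln"

def handle(utterance: str):
    """Return the command that follows the wake word, or None if the
    utterance was not addressed to the system."""
    text = utterance.lower().strip()
    if text.startswith(WAKE_WORD):
        command = text[len(WAKE_WORD):].strip(" ,")
        return command or None   # the command follows the wake phrase
    return None                  # no wake word heard; stay idle

print(handle("Hey Lincoln, call John Doe"))  # → call john doe
```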

“There is infinite room for the advancement of in-vehicle technology, especially as cars become more and more connected,” continued Chaaban. “To consider where SYNC started, to how advanced it has become, should invariably drive excitement for the future connectivity of Lincoln vehicles, and of their ability to have a deeper understanding of what to achieve from a simple command.”