Belfast technology firm Liopa launches first commercially available automated lip reader

BELFAST-based technology specialist and Co-Fund portfolio company Liopa has commercially launched the world’s first automated lip reader.

It will initially be used to prevent ‘spoofing’ in facial recognition systems, where an impostor attempts to compromise the system by presenting images or videos of the subject.

The initial application, called LipSecure, is currently in commercial trials with a number of facial recognition companies.

Liopa’s technology is based on visual speech recognition (VSR) and analyses a speaker’s lip movements to decipher what they are saying. It uses advanced artificial intelligence-based techniques and can be used on any device with a standard camera.

Liam McQuillan, chief executive at the former Queen’s University spin-out company, says: “There has been considerable research done in the VSR area over the last decade. This has grown exponentially lately, largely due to the increased use of voice to drive applications such as Siri, Cortana and Alexa.

“VSR has been shown to perform well in a lab setting, but getting it to work accurately in the real world, and on a mobile device, is incredibly difficult.

“Our technology can cope with speaker head movement, varying lighting conditions, poor resolution – all things that will happen in everyday use. Being the first to launch a VSR service is a fantastic achievement.”

Liopa’s technology is the product of more than 50 man-years of research into visual speech recognition conducted at Queen’s University.

The company was incorporated as recently as November 2015, and the following year was named best early-stage company at the regional final of the InterTradeIreland Seedcorn investor readiness competition.

Mr McQuillan added: “The initial LipSecure application only needs to support a very small vocabulary. As we extend this vocabulary we can do a lot more.

“We are launching a communication aid for patients who have impaired speech early next year and hope to be embedding our technology in lots of voice-driven applications throughout 2019.”