19th May 2017

By Karl Roberts – Head of Propositions, GCI.

Biometric security has been in the news today following a BBC investigation which appeared to show weaknesses in the Voice ID authentication process at a leading bank. A BBC reporter teamed up with his twin brother, and the non-account-holding twin was able to access his brother’s account (though not withdraw money from it) using the Voice ID function. One of the more interesting facets of the story is that the ‘fraudulent’ twin attempted to access the account seven times via voice and was declined each time; on the eighth attempt, however, he was successful.

Does this mean that voice recognition is now no better than the ‘my voice is my passport’ scene from the 1992 Robert Redford film ‘Sneakers’? No, of course not. Biometric systems such as Voice ID authentication are safe and secure provided they are designed with the necessary steps (and combinations of checks) in the process to handle unique risk factors. In this case, twins’ voices typically match far more closely than an ordinary impostor’s would, so special consideration should be given to such scenarios when setting thresholds.
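As a rough illustration of that point (and not a description of how any particular bank’s system works), a verification policy can simply apply a stricter matching threshold when a known risk factor, such as a twin on the account, is present. The scores and thresholds in this Python sketch are entirely hypothetical.

    # Minimal sketch of a threshold decision in voice verification.
    # Scores, thresholds and the 'high_risk' flag are hypothetical.
    DEFAULT_THRESHOLD = 0.80    # assumed ordinary acceptance threshold
    HIGH_RISK_THRESHOLD = 0.95  # stricter threshold, e.g. account holder has a twin

    def accept_voice_match(similarity_score: float, high_risk: bool) -> bool:
        """Accept the voiceprint only if it clears the appropriate threshold."""
        threshold = HIGH_RISK_THRESHOLD if high_risk else DEFAULT_THRESHOLD
        return similarity_score >= threshold

    # A twin's voice may score unusually close to the genuine customer's,
    # so the same score can pass the default threshold but fail the stricter one.
    print(accept_voice_match(0.88, high_risk=False))  # True
    print(accept_voice_match(0.88, high_risk=True))   # False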

The key lesson is that a single, or even dual, means of authenticating a user is never enough. Firms should always take a multi-layered approach to fraud. This includes metadata and network voice fingerprinting, behavioural characteristics and multi-factor authentication as well as biometrics, so that no single factor is relied upon. It also needs to be backed up with process intervention, with limits on attempted account access in place. Put simply, in this case the suspicious activity should have been flagged to a live agent (a person) well before the eighth, successful attempt.
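To make the limit on attempted account access concrete, here is a minimal sketch of an attempt-limit policy, assuming failed voice authentications are counted per account. The limit and the action names are hypothetical, not those of any real system.

    # Hypothetical attempt-limit policy: after a set number of failures,
    # stop automated retries and route the caller to a live agent.
    MAX_FAILED_ATTEMPTS = 3  # illustrative limit before human review

    def next_action(failed_attempts: int, voice_match: bool) -> str:
        if failed_attempts >= MAX_FAILED_ATTEMPTS:
            return "escalate_to_live_agent"   # suspicious: a person reviews the call
        if voice_match:
            return "grant_access"
        return "decline_and_record_failure"

    # In the BBC case the caller had already failed seven times; with a limit
    # in place the call would have been escalated long before attempt eight.
    print(next_action(failed_attempts=7, voice_match=True))  # escalate_to_live_agent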

Some of these methods should be invisible to the actual user. For example, the bank should know the number you usually call from, and it should have an idea of the time of day you usually call. Looking beyond Voice ID alone, it is possible to track the devices a user commonly uses, so if they log in from a different machine, or from a different country, a further level of authentication should take place.
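A simple way to picture these invisible checks is a contextual risk score that triggers step-up authentication. The signals and weights in the sketch below are purely illustrative assumptions.

    # Hypothetical contextual risk scoring: compare the current call or log-in
    # against the customer's usual number, hours, device and country.
    def requires_step_up(known_number: bool, usual_hours: bool,
                         known_device: bool, usual_country: bool) -> bool:
        risk = 0
        if not known_number:
            risk += 2  # call from an unrecognised phone number
        if not usual_hours:
            risk += 1  # outside the customer's normal calling times
        if not known_device:
            risk += 2  # log-in from an unfamiliar machine
        if not usual_country:
            risk += 3  # access from a different country
        return risk >= 3  # illustrative step-up threshold

    # A familiar number at an odd hour passes; a new device abroad does not.
    print(requires_step_up(True, False, True, True))   # False
    print(requires_step_up(True, True, False, False))  # True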

With voice, it is also possible to apply analytics to assess whether a caller is stressed, or is potentially misrepresenting who they are. Either way, in the BBC example the caller should have been either locked out of the system or forced to provide additional authentication. In this case, it appears that the deployment of the technology (the process) was at fault rather than the technology itself.
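Combining the checks above, the decision described here (lock the caller out, or demand additional authentication) could be sketched as follows. The flags, limits and outcomes are assumptions for illustration only.

    # Hypothetical post-call decision combining the voiceprint result with a
    # stress/misrepresentation flag from voice analytics and the attempt count.
    def post_call_decision(voice_match: bool, stress_flag: bool,
                           failed_attempts: int) -> str:
        if stress_flag and failed_attempts >= 3:
            return "lock_account"             # repeated failures plus suspicion
        if stress_flag or not voice_match:
            return "require_additional_auth"  # step up before granting access
        return "grant_access"

    print(post_call_decision(voice_match=True, stress_flag=True, failed_attempts=7))
    # -> lock_account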

It’s important to take a step back from the BBC report and recognise the role biometrics can and should play in our never-ending fight against fraud. Traditional defence methodology is simply not enough; banks need another powerful tool in the armoury against fraud, and Voice ID is one such solution. No technology is 100% foolproof, but used in combination with other methods it comes very close. Bear in mind that, according to the BBC report, up to 15 million customers are successfully using the service, and that cases of a ‘biological twin imposter’ breaching the security are going to be incredibly rare.

Looking ahead, Voice ID uses machine learning. Simply put, this means it gets better every single day, and whilst occasional slip-ups occur with every technology, these will be gradually eliminated as processes and technologies steadily evolve.

Karl Roberts was a panel speaker at UC Expo in London on 18th May 2017. He has designed and implemented Voice ID Solutions for some of the UK’s leading organisations. He is now Head of Propositions at GCI.

For more information please contact enquiries@gcicom.net