21-05-2019 | | By Paul Whytock
Be in no doubt, you are being watched. Londoners on average will be CCTV'd 300 times a day and across the UK there are about 4.3 million cameras in operation.
When it comes to crime and terrorist prevention this is a good thing, although there are of course plenty of whingeing, bleeding-heart lawyers who will maintain it infringes our civil liberties.
But having those lawyers around may not be a bad thing, because one thing is clear: the facial recognition opportunities that camera surveillance provides are flawed - a fact that has seen legislators in San Francisco ban the use of facial recognition systems by the city's police force, the first city in the USA to do so.
The algorithms at the heart of facial recognition, and there are 15 of them, are still not good enough at accurately identifying a known face.
Research and development work to resolve these flaws has been going on since the early 1960s but despite what can be justifiably described as good progress, significant problems relating to identification reliability still exist.
Variable ambient lighting plays a major but at times deceptive part in facial recognition. The same goes for the human face, which is capable of a multitude of expressions that actually alter facial dimensions. And we are not talking here about the extreme distortions of that weird hobby, face gurning (pictured), but everyday ones that express happiness, surprise, horror and so on.
But commercial pressure is on to resolve these inaccuracies in facial recognition algorithms because estimates put the market value for such systems at around £6 billion and growing fast.
Briefly, facial recognition works by comparing a picture of your face with stored images. The software reads the geometry of your face. Key factors include the distance between your eyes and the distance from forehead to chin. The software identifies facial landmarks, up to about 80 of them and then determines whether your face map matches that of a face image stored on a system database.
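The geometric idea behind this can be sketched in a few lines of Python. This is a toy illustration only, with made-up landmark names and coordinates - commercial systems extract up to about 80 landmarks automatically and use far more sophisticated matching - but it shows the principle: reduce a face to a set of ratios between landmark distances, then compare that "face map" against stored ones.

```python
import math

def face_signature(landmarks):
    """Turn raw landmark positions (x, y) into a scale-invariant face map.

    Distances are normalised by the inter-eye distance, so the same face
    photographed at different sizes yields the same signature.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    eye_gap = dist(landmarks["left_eye"], landmarks["right_eye"])
    return {
        "forehead_chin": dist(landmarks["forehead"], landmarks["chin"]) / eye_gap,
        "nose_chin": dist(landmarks["nose_tip"], landmarks["chin"]) / eye_gap,
    }

def match_score(sig_a, sig_b):
    """Lower is better: Euclidean distance between two signatures."""
    return math.sqrt(sum((sig_a[k] - sig_b[k]) ** 2 for k in sig_a))

# The same (hypothetical) face, captured at two different scales:
probe = face_signature({"left_eye": (100, 100), "right_eye": (160, 100),
                        "forehead": (130, 60), "chin": (130, 200),
                        "nose_tip": (130, 140)})
stored = face_signature({"left_eye": (200, 200), "right_eye": (320, 200),
                         "forehead": (260, 120), "chin": (260, 400),
                         "nose_tip": (260, 280)})
print(match_score(probe, stored) < 0.1)  # same geometry at 2x scale -> True
```

A real system would, of course, threshold this score against a whole database of stored face maps rather than a single template.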
Regarding the way in which ambient lighting can confuse facial recognition, algorithms that work with 3D recognition sensors are helping, because 3D recognition is able to look at a face from a variety of angles, including in side profile. The sensors work by projecting structured light onto the face, and up to 20 of these image sensors can form part of a CMOS-based processor, with each sensor capturing a different part of the light spectrum.
All that sounds pretty good, but two things can still confuse recognition systems: facial expressions, and skin colour and texture.
One particular technique called STA (skin texture analysis) turns the lines, patterns and other features, like spots and freckles, on a person's skin into a mathematical map.
STA does this by taking a picture of a section of skin, which is then subdivided into smaller sections. Each of these is analysed by an algorithm that creates the mathematical data used to search for a match against skin data already stored on the system.
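The subdivide-and-analyse step can be sketched as follows. This is a minimal illustration under the assumption that the "mathematical map" is a grid of per-cell texture statistics; the commercial STA algorithms are proprietary and far more elaborate.

```python
def texture_map(patch, cell=2):
    """Subdivide a greyscale skin patch (a list of pixel rows) into
    cell x cell blocks and record each block's mean intensity and contrast."""
    h, w = len(patch), len(patch[0])
    fmap = []
    for r in range(0, h, cell):
        for c in range(0, w, cell):
            block = [patch[r + i][c + j] for i in range(cell) for j in range(cell)]
            fmap.append((sum(block) / len(block),      # mean brightness
                         max(block) - min(block)))     # local contrast
    return fmap

def similarity(map_a, map_b):
    """Fraction of cells whose statistics agree closely (1.0 = identical)."""
    close = sum(1 for a, b in zip(map_a, map_b)
                if abs(a[0] - b[0]) < 5 and abs(a[1] - b[1]) < 5)
    return close / len(map_a)

# A made-up 4x4 patch of skin pixel intensities:
patch = [[120, 122, 90, 91],
         [121, 119, 88, 92],
         [60, 61, 130, 131],
         [62, 59, 129, 128]]
print(similarity(texture_map(patch), texture_map(patch)))  # identical patch -> 1.0
```

The fragility the article goes on to describe is visible even here: a new spot or rash changes the statistics of the cells it falls in, dragging the similarity score down for the very same person.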
This all sounds great but in reality it doesn't work too well. Personally, I'm not too surprised about that. Let's face it (sorry), skin spots come and go, as do rashes and skin irritations, and then there are of course those much-publicised anti-wrinkle creams that can change skin textures.
To prove the point, a recent paper published in the USA made it clear that when it comes to accurately recognising dark skin, facial recognition systems fail miserably.
In this paper American scientist Ms Buolamwini analysed the accuracy of three well-known recognition systems from Microsoft, IBM and Megvii by checking how well they identified the gender of people with different skin tones. Each of these companies provides facial recognition technology that has a gender recognition facility integrated into it.
The test studied nearly 1,300 faces, consisting of dark-skinned people from African nations and light-skinned people from Scandinavia. The results showed the way in which recognition systems can be seriously inadequate when it comes to accurately identifying dark skin tones.
Microsoft achieved an error score of 21% for dark-skinned women, while IBM's and Megvii's inaccuracy scores were close to 34%. Amazingly, when it came to identifying fair-skinned males the error rates showed massive improvement, dropping as low as 1%.
Over here in the UK there have been plenty of trials of automated facial recognition technology, in particular at very crowded events in England and Wales.
Back in 2017 at the UEFA Champions League final in Cardiff, South Wales Police deployed facial recognition technology using images provided by UEFA. The trial was an outstanding failure with more than 2000 incorrect facial matches being made that would have implicated innocent people.
OK, so we know that a lot more work has to be done to get those facial recognition algorithms working properly, but one thing that also needs to be controlled is the security risk of having your face on the databases that recognition systems use to compare captured images.
The question here is simple. Will hackers really want to steal your face? If your facial data can be used to commit fraud or turn a profit, the answer is yes.
Can they do it? The answer is also yes. Back in 2012 an app was developed that could put a false face onto a moving video image of another person's face.
Hackers only have to turn to the Internet or Facebook for images of faces and information about the people they belong to.
Going back to that 3D technology, researchers have already demonstrated how 3D rendering using digital 3D facial images based on publicly available photos and displayed using virtual reality (VR) technology can fool facial recognition systems.
All this means that Big Brother may not be quite so eagle-eyed as we are led to believe and, given today's visual imaging technology and hackers' criminal ingenuity, perhaps your face is not necessarily your own.