
Is ‘Big Brother’ really watching?

18 October 2019

Adrian Timberlake, Technical Director of Seven Technologies Group (7TG) and specialist in military, defence and law enforcement security solutions, examines the use of facial recognition, its flaws and how developers can eliminate bias.


Criticisms of facial recognition include concerns over privacy and data usage, as well as bias in the technology that could put certain groups of people, such as people of colour, at risk of unfair treatment.

The King’s Cross ‘scandal’ has resulted in campaign groups and politicians calling for a halt to facial recognition trials. The technology has been widely, almost exclusively, criticised in the media as an invasion of privacy and even as a tool to enable the creation of surveillance states, where ‘Big Brother’ is always watching.

But is there really any need to be concerned?

Facial recognition systems are widely used in China, in payment applications, surveillance and even housing security systems, and are generally accepted by Chinese citizens. Facial recognition technology has the potential to enhance security and public safety; it is another eye, with an eternal memory, that could provide the faces of criminals unseen or forgotten by police.

But what will it take for Britain to accept facial recognition technology?

The types of facial recognition

There are two types of facial recognition technology: targeted and general.

Targeted facial recognition

This type of facial recognition is used in military, defence and law enforcement operations. Its purpose is to catch known criminals or suspects. This type of facial recognition is programmed to only provide alerts when it recognises persons of interest. It still scans faces of passers-by, to match them against the list in the database, but it doesn’t record a face unless it believes it’s a match.
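
To make the 'alert only on a match' behaviour concrete, the sketch below shows how a watchlist check might work in principle, assuming faces have already been converted to fixed-length embedding vectors by a separate model; the threshold, identifiers and data are illustrative assumptions, not a description of any particular deployed system.

```python
# Illustrative watchlist-matching step for a targeted facial recognition system.
# Assumes a separate face-detection/embedding model has already produced
# fixed-length vectors; the threshold and identifiers below are made up.
from typing import Dict, Optional

import numpy as np

MATCH_THRESHOLD = 0.8  # assumed similarity cut-off; real systems tune this carefully


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def check_against_watchlist(face_embedding: np.ndarray,
                            watchlist: Dict[str, np.ndarray]) -> Optional[str]:
    """Return the ID of the best watchlist match above the threshold, else None.

    Faces that do not match are simply discarded: nothing is stored,
    mirroring the 'alert only on a person of interest' behaviour above.
    """
    best_id, best_score = None, 0.0
    for person_id, stored_embedding in watchlist.items():
        score = cosine_similarity(face_embedding, stored_embedding)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= MATCH_THRESHOLD else None


# Example usage with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
watchlist = {"person_of_interest_001": rng.normal(size=128),
             "person_of_interest_002": rng.normal(size=128)}
detected_face = rng.normal(size=128)
print(check_against_watchlist(detected_face, watchlist))  # None unless a match clears the threshold
```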

How does the technology know who to target?

The technology is programmed by military or law enforcement personnel, and the data fed into the technology already exists in the system. It could come from existing criminal records or investigations, for example.

What if the technology makes a mistake?

The technology is still evolving, and it may be necessary to accept that to reap the benefits of a new technology such as facial recognition, it first has to be developed and improved over a period of time. The problem with calling for a complete shutdown of the technology, in case it makes a mistake, is that it is difficult to develop the technology without testing it on a true user base.


The media has reported that facial recognition appears 'racist', 'sexist' and 'biased', and has issues identifying people of certain demographics. The reason for this is a lack of data, compounded by the fact that facial recognition trials have been limited and short-lived.

The technology uses deep learning, which could be compared to the way that human children learn; for the technology to better distinguish between facial features, it needs a wide pool of data to draw comparisons from. This would ideally be a large amount of data on male and female faces from every race, so that the technology can learn the intricacies of facial features and, additionally, how to recognise facial hair and make-up and still provide an accurate result. As Caucasian males make up the majority of the existing data available for developing facial recognition, the technology has more success in correctly identifying that particular demographic.
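
One way developers can surface this kind of skew is to break evaluation results down by demographic group. The short sketch below illustrates the idea; the group labels and results are invented for the example, not real benchmark data.

```python
# Illustrative per-demographic accuracy audit; the group labels and results
# below are invented for the example, not real evaluation data.
from collections import defaultdict

# (was_the_identification_correct, demographic_group) pairs from a hypothetical test set
results = [
    (True, "group_a"), (True, "group_a"), (True, "group_a"), (False, "group_a"),
    (True, "group_b"), (False, "group_b"), (False, "group_b"),
]

totals = defaultdict(int)
correct = defaultdict(int)
for is_correct, group in results:
    totals[group] += 1
    correct[group] += int(is_correct)

for group in sorted(totals):
    accuracy = correct[group] / totals[group]
    print(f"{group}: {accuracy:.0%} accuracy on {totals[group]} samples")

# A large accuracy gap between groups points to the training data being skewed
# towards one demographic, which is the imbalance described above.
```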

Restricting the use of facial recognition technology and trials severely limits access to the data needed to develop and improve it.

General facial recognition

This type of facial recognition camera is programmed to analyse everyone within range and identify objects. This is the type of facial recognition that has been hitting the news, the type that many argue is the real 'Big Brother', or at least the beginning of our descent into a totalitarian state.

However, there are arguments in support of this type of facial recognition technology. Because of its potential to improve public safety and security and to be a deterrent for crime, it’s no surprise that some police forces have been involved in facial recognition technology trials.

We’ve all seen pleas circulating in the news from police forces asking the public to help identify dangerous criminals. CCTV is limited in that even if it picks up a clear image of a perpetrator’s face, that person still needs to be identified. They may be unknown to police, who would then have to appeal to the public; but there is no guarantee that a member of the public who could identify a perpetrator would come forward, for fear of being targeted in revenge.

If facial recognition technology could identify perpetrators instantly, it would cut out the need for public appeals and reduce the risk of perpetrators having time to commit additional crimes before they are caught.

We need to think about facial recognition in terms of what it can do for societies

China is a world leader in technology such as artificial intelligence and facial recognition. While the majority of British citizens appear to consider facial recognition a science-fiction horror, Chinese citizens are used to innovation and change. But the difference in attitude to facial recognition may also lie in societal norms.

Communities and ‘community spirit’ are generally considered to hold more value in Eastern nations than they do in the West, where societies are more ‘individually focused’. If British people were to consider what facial recognition technology could do to improve safety and security for society as a whole, rather than how it could impact individual privacy, then facial recognition technology may be accepted and even welcomed.

The potential benefits of facial recognition for public security and safety are far greater than the impact of a mistake. In today’s mindset, if a facial recognition mistake led to an innocent person being stopped and searched by police, then released without arrest, it would be considered an outrage. However, the technology could also locate a killer on the run, and if it did, it would be called a triumph.

As long as those who use facial recognition technology are aware of any flaws or prejudices and take them into account when considering the results, we believe that facial recognition can be used ethically and for the benefit of society. Having ‘Big Brother’ watching over us may not be such a bad thing.

