18 Nov TECH UPDATE 2020: FACE RECOGNITION
vs Common-Sense Ethics!
You’ve probably got some understanding of how ‘classical’ programming works – maybe you’ve done some yourself. You take some input data, write some rules, and use those rules to generate answers. AI, or ‘machine learning’, works differently. Here you have the same input data plus some sample answers, and the machine learning works out the rules (or tries to). The system then applies those learned rules to new data to come up with valuable outputs (for example, answers it was never shown).
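The contrast can be sketched in a few lines of code. This is a deliberately toy example (classifying temperatures as ‘hot’), not anyone’s real system: the classical version has a hand-written rule, while the ‘learning’ version works the rule out from example answers.

```python
# Classical programming: a human writes the rule.
def is_hot_classical(temp_c):
    # Hand-written rule: anything above 30 degrees C is "hot".
    return temp_c > 30

# Machine learning (toy version): the rule is inferred from sample answers.
def learn_threshold(samples):
    """Given (temperature, is_hot) examples, pick the midpoint between
    the warmest 'not hot' example and the coolest 'hot' example."""
    hot = [t for t, label in samples if label]
    not_hot = [t for t, label in samples if not label]
    return (max(not_hot) + min(hot)) / 2

# Input data plus sample answers...
training = [(12, False), (18, False), (25, False), (31, True), (38, True)]
threshold = learn_threshold(training)  # ...and the machine works out the rule

def is_hot_learned(temp_c):
    return temp_c > threshold
```

Note that the learned rule is only as good as the sample answers it was given – which is exactly what goes wrong in the examples below.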
A good example is your online photo library, which probably once prompted you to name various family members, and then had a stab at naming people in some other photos – maybe new ones as you put them in. If you’ve used this you might have seen ‘false positive’ errors, where one child looks a bit like a parent, and people get mislabelled.
Early machine learning failure
A well-known early failure of machine learning applied to photos came when medical researchers tried to use the technology to identify cancerous skin tissue in photographs (having been given some labelled samples to start with). It turns out that dermatologists often put a ruler in the photo to give a sense of scale, and in some cases the ‘machine’ ended up identifying photos showing rulers as cancer cases. They’d built a ruler-recogniser, rather than a cancer-recogniser.
Interesting but concerning
Of course skin colour is something that can be identified too, and this has led to concerns in various parts of the world. A New York Times article this June explained how Robert Williams, who works at an auto supply company in Michigan, was ‘wrongfully accused by an algorithm’. He received a call from his local police station asking him to ‘report to be arrested’, along with a photo of him attached to a ‘felony warrant’. In the event he was detained overnight and had his fingerprints and DNA taken – even though he was nowhere near the crime scene at the time. A combination of poor technology and poor police work led to the wrongful arrest of an innocent black man. It happened because a ‘probe image’ – a selected still from CCTV footage – was uploaded to a database of around 50 million photos and incorrectly returned what looked like a match. Research has shown that Asian and African-American faces are between 10 and 100 times more likely to be mis-identified by the 100 most popular facial recognition systems in use by US police departments.
This is both interesting and concerning, of course. One worries what happens when bad technology is in the hands of good (but busy) people and, of course, what happens when good technology is in the hands of bad people.
We bump into this problem increasingly often. One of our developer clients has trialled facial recognition technology (in conjunction with the Metropolitan Police) on one of their larger schemes – and caught the attention of the Information Commissioner’s Office.
‘Liveness’ Vs ‘Likeness’
SMC have long used biometrics (personal biological information) in access control systems – retina scans, then iris scans, as well as fingerprint recognition. Facial recognition is more attractive because today’s high-resolution CCTV cameras can capture sufficient detail without a subject needing to stand still in front of a reader for two or three seconds. Staff members could be identified if they were in the wrong part of a property, and unknown characters persistently ‘lurking’ near a property can be spotted too. But all this needs to be accurate and reliable for it to prove useful. So how do the best systems work? Like the facial recognition on your phone, say, that unlocks the device almost instantly?
Well, in all cases they use a ‘liveness’ test (not just ‘likeness’!) to check you are a real person, not just a photo held up in front of an entryphone! They typically use more than one image, taken consensually (perhaps with glasses and without), and are normally searching a small but quality-assured database of known faces. In this way you can minimise false negatives and avoid false positives.
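The matching step can be sketched as follows. This is a simplified illustration, not any vendor’s actual code: the face images are assumed to have already been converted into numeric ‘embeddings’ (here just made-up three-number lists), and the names, gallery, and threshold value are all hypothetical. The key idea is that a strict similarity threshold makes the system say ‘unknown’ rather than guess – trading a few false negatives for far fewer false positives.

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Small, quality-assured gallery of known faces (hypothetical embeddings).
gallery = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.2, 0.8, 0.5],
}

THRESHOLD = 0.95  # strict cut-off: below this, refuse to name anyone

def identify(probe):
    """Return the best-matching name, or 'unknown' if no match is close enough."""
    best_name, best_score = None, -1.0
    for name, embedding in gallery.items():
        score = cosine_sim(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= THRESHOLD else "unknown"
```

Contrast this with the police case above: a probe image searched against 50 million photos, with a loose enough match criterion that *someone* will always look similar – the opposite of a small gallery with a strict threshold.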