Despite these concerns about accuracy, official standards for human-computer collaborations have yet to be developed. Nicole Spaun, a former FBI image examiner who has published several leading studies on forensic facial identification, has been surprised by how many police departments have installed face recognition systems without also instituting the proper training to go along with them. To this end, Spaun is currently working at MorphoTrak, a biometrics vendor, to develop training so that the users of any computer system are a priority rather than an “afterthought.” “There’s a lot of people being thrown at face recognition systems who may know how to use a camera but don’t know anything about the science of imaging,” she explained. While the temptation to make a definitive identification is strong, Spaun says that most of her courses involve explaining to people that certainty is rare. “Looking at my own driver’s license photo I barely see the moles that I know are there,” she said. “What I find myself saying to a lot of people is, ‘No you’re not going to be able to positively identify a person.’”

According to an FBI spokesperson, FAVIAU does not plan to replace humans with automated facial recognition searching, but it may use systems to locate “better potential candidate matches than the current subject of the examination, which could provide support for eliminating the current subject from consideration.” But even with automated facial recognition technology in place, false matches will, according to Jain, continue to be “a valid concern.” “Biometrics systems can make errors so we should be open to someone complaining that they are put in the wrong place at the wrong time,” he said. “The ‘fingerprints don’t lie’ attitude has to change. If someone is claiming that they have a perfect system, that attitude needs to be corrected.”

Jennifer Lynch, an attorney at the Electronic Frontier Foundation who works on face recognition, is concerned that human biases may even exacerbate the errors of technology. The seemingly unassailable combination of human expertise and technology may create legal situations where the burden of proof shifts onto the defendant, she explained. False matches end up forcing citizens to prove that they aren’t who examiners (and, increasingly, their algorithmic partners) say they are. In other words, what happened to Steve Talley could happen to others again and again.

The entire article can be found at: