Police responding to the mass shooting at the Capital Gazette newspaper in Annapolis faced a perfect storm of problems when they took the suspected gunman into custody: the man had no identification, he wouldn’t speak to investigators, and a fingerprint database was not returning any matches.
But they had a backup plan: investigators ran his photo in Maryland’s state-of-the-art facial recognition database. The system quickly returned a match.
Maryland police were able to identify Jarrod Ramos, the man who murdered five Capital Gazette staff members, by feeding his picture into the Maryland Image Repository System (MIRS).
The system uses algorithms to scan for a match across tens of millions of images from driver’s licenses, offender photos and mug shots from an FBI database.
The Annapolis case represents a highly successful deployment of the controversial technology, saving investigators critical time as they scrambled to identify a suspect.
It could also boost arguments from law enforcement in favour of facial recognition at a time when systems such as Maryland’s have fallen under intense criticism from privacy advocates and civil rights groups, who say they could be used to surveil innocent people or reinforce racial profiling.
Proponents of the technology are pointing to it as a compelling example of the value such systems could offer police departments.
“This sensational case will probably awaken and create an awareness that will bring a lot more attention” to facial recognition systems among law enforcement agencies that haven’t adopted them, said Tom Joyce, a former lieutenant commander with the New York City Police Department’s cold case squad.
But critics of the technology say not every case is as straightforward as this one. They also note that facial recognition systems tend to misidentify African-Americans more often than whites and could allow police to conduct real-time surveillance against people not suspected of crimes.
“While the method seems to have performed well in Ramos’s case, there are still significant civil liberties concerns around the way police use MIRS,” wrote Russell Brandom of The Verge.
“Police are supposed to remove people who were arrested but found innocent, but since the system is rarely audited, it’s hard to say if that’s actually happening. There are also racial justice concerns, given racial disparities in rates of arrests, compounded by higher error rates for African-Americans in many facial recognition algorithms,” Mr Brandom added.
Samuel Sinyangwe, a prominent racial justice activist and data scientist, pointed out there are few checks on how police use systems such as Maryland’s.
“Note that half of all US adults are in facial recognition databases and there is very little oversight, testing for accuracy, or limits on how police use this software,” he said.
Even if flawless facial recognition technology were achieved, it would still raise major privacy concerns, said the Electronic Frontier Foundation (EFF), a digital rights organisation.
“Let’s say facial recognition improves – that it produces correct matches 100% of the time. Then what? Well, it means we can’t walk around without the government knowing who we are, where we are, and who we’re talking to,” added EFF’s Jen Lynch.