On Thursday, the American Civil Liberties Union provided a good reason for us to think carefully about the evolution of facial-recognition technology. In a study, the group used Amazon’s (AMZN) Rekognition service to compare portraits of members of Congress to 25,000 arrest mugshots. The result: 28 members were mistakenly matched with mugshots of people who had been arrested.
The ACLU isn’t the only group raising the alarm about the technology. Earlier this month, Microsoft (MSFT) president Brad Smith posted an unusual plea on the company’s blog asking that the development of facial-recognition systems not be left up to tech companies.
Saying that the tech “raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression,” Smith called for “a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission.”
But we may not get new laws anytime soon. The nuances are complex, while Congress remains as reluctant as ever to regulate privacy. We may find ourselves stuck struggling to agree on norms well after the technology has redefined everything from policing to marketing.
My face is my passport
The underlying problem is simple: the proliferation of connected cameras, databases of facial images and software linking the two has made this technique not just cheap but increasingly unavoidable.
Or as Nicol Turner-Lee, a fellow at the Brookings Institution in Washington, put it: “We live in an economy of images.”
Of course, this can be good. Encrypted, on-device facial-recognition systems such as Apple’s (AAPL) Face ID and Microsoft’s Windows Hello let you sign into a phone or laptop easily without putting your facial characteristics in a cloud-hosted database.
Or it can be beyond your control. Maybe your state shares its driver’s-license database with other government agencies—a 2016 study by Georgetown Law’s Center on Privacy and Technology found that at least 26 states had opened those databases to police searches, covering the faces of more than 117 million American adults. Or the passport-control checkpoint at an international border—where you already have minimal rights—may demand you look into an automated camera.
What rules should we have?
The problem is that you can’t know when a camera and its software are identifying you, whether their recognition algorithms are accurate, or whether the underlying databases are secure.
Facial recognition is often done clandestinely. Its accuracy is iffy, especially among non-white populations: some 39% of the false matches in the ACLU test involved legislators of color, who account for only 20% of Congress. What’s more, companies can’t seem to stop data breaches from happening.
As Microsoft’s Smith noted, those conditions usually lead to government intervention. What should that look like?
Turner-Lee and Michigan State University professor Anil K. Jain agreed on some basic principles.
Neither a police department nor a private company should use facial-recognition software to put names to random faces passing by. As Jain said, “comparing you to social media is a no-no.”
Using the same software to look for specific people could be okay. But a police department should need some level of judicial permission, while a company should get your opt-in.
That commercial opt-in shouldn’t be a blanket approval. Both Turner-Lee and Jain, for instance, agreed that Facebook (FB) should let users choose separately between having the social network use facial recognition to spot fake accounts and having it find you in friends’ photos.
(In a December corporate blog post, Facebook deputy chief privacy officer Rob Sherman wrote that “most people would find it easier to manage one master setting.”)
You should see some notice that you’re in an area where facial-recognition technology can be used—although your ability to spot that in a distraction-filled business such as a casino is another thing.
But those principles still allow for enormous flexibility. For instance, does a government or company have to identify what databases it uses in its facial-recognition regime? An announcement last week from Gov. Andrew Cuomo (D-N.Y.) of facial-recognition scanning at some New York bridges and tunnels said nothing about that.
And how long should a camera feed be kept around? Some police departments already set surprisingly short limits for footage not classified as crime evidence: 30 days in New York, 10 business days in Washington. And for now, processing power limits long-term archival; as Jain said, “the cost of storing is not that much, but the cost of searching could become prohibitive.” But history suggests those limits will keep expanding.
And who will write them?
Because the consequences of a false positive can be so much higher in a law-enforcement context, we may see rules for government use first.
That Georgetown study, for instance, proposed model legislation that would, among other things, require that arrest-photo databases remove images of those found innocent, and demand court approval for police queries of driver’s-license databases or real-time searches of a camera feed.
That proposed bill, however, doesn’t cover commercial facial recognition.