California could become first to limit facial recognition technology

A routine traffic stop goes dangerously awry when a police officer’s body camera uses its built-in facial recognition software to misidentify a motorist as a convicted felon.

Guns are drawn. Nerves fray. At best, lawsuits are launched. At worst, tragedy strikes.

That imaginary scenario is what some California lawmakers are trying to avoid by supporting Assembly Bill 1215, the Body Camera Accountability Act, which would ban the use of facial recognition software in police body cams – a national first if it passes a Senate vote this summer and is signed by Gov. Gavin Newsom.

California law enforcement agencies do not currently use the technology to scan people in officers’ line of sight. But some police officials oppose the bill on the grounds that a valuable tool could be lost.

The tug of war over high-tech surveillance methods comes in the wake of San Francisco’s Board of Supervisors banning city government use of facial recognition software last month. Oakland and Berkeley council members are considering similar bans.

California’s AB 1215 reflects growing concerns nationwide about the darker side of tech – when the same software that allows iPhone X users to unlock their devices with a glance could wrongfully finger you as a criminal or keep tabs on you for Big Brother.

“There’s been an increased focus on privacy issues generally by state legislatures this year,” says Pam Greenberg of the National Conference of State Legislatures.

Lawmakers in Massachusetts, New York and Washington are considering bills that scrutinize and curtail the use of biometric and facial recognition systems on the grounds that the still-flawed technology presents an Orwellian threat to civil liberties.

Congress also is weighing in. After hearings on the technology on May 22 and June 4, a bipartisan U.S. House Oversight and Reform Committee unanimously agreed to push for a nationwide facial recognition ban while more legal and regulatory guidance was sought.

“Even if this tech were to one day work flawlessly, do we want to live in a society where the government knows who you are, where you’re going, the expression on your face?” says Matt Cagle, tech and civil liberties attorney with the ACLU of Northern California.

“Consider also that the history of surveillance is one of it being turned against the most vulnerable communities,” Cagle adds.

San Francisco City Supervisor Aaron Peskin, who was instrumental in the city’s ban, says no possible benefit of facial recognition systems “outweighs its demonstrable harms, and the fact that we have seen this spread to other cities and to congressional hearings so rapidly is evidence of the emerging consensus around that fact.”

‘Rolling surveillance cameras’

Assemblymember Phil Ting (D-San Francisco), sponsor of AB 1215, sees fundamental freedoms being encroached upon if police use facial recognition tech.

“If you turn on facial recognition, you have rolling surveillance cameras,” he says. “And I don’t think anyone in America wants to be watched 24/7.”

What’s more, AB 1215 supporters say facial recognition would undermine the very reason body cams were introduced in the wake of police shootings, which is to build trust with community members through accountability.

“Adding facial recognition is a perversion of the purpose of body cams,” says Harlan Yu, executive director of Upturn, a Washington, D.C.-based advocacy group promoting justice in technology. “And it doesn’t help that this software often has a harder time differentiating faces when it comes to people of color.”

While acknowledging that the tech is still in its infancy, some police officials say a wholesale ban is premature.

“Facial recognition could be a valuable tool for us, helping identify felons or even abducted children,” says Detective Lou Turriaga, director of the Los Angeles Police Protective League, which represents 10,000 officers.

“I understand trying to seek a balance between civil liberties and law enforcement, but a wholesale ban doesn’t help us protect anybody,” he says. “Why remove that tool from law enforcement? It just doesn’t make sense.”


Also opposed to AB 1215 as written is the California Police Chiefs Association, although the organization declined to specify why.

Two police departments have made waves recently by testing facial recognition software in limited ways.

In Florida, the Orlando Police Department remains in a “test phase” of Amazon’s Rekognition software, and the technology is “not being used for investigative or public purposes,” says Sgt. Eduardo Bernal, the department’s public information officer.

And in Washington County, Oregon, the sheriff’s department uses its facial recognition software to compare photos of people suspected of crimes with a database of jail booking photos. The department does not use the tech for live or mass surveillance.

“We are beyond the piloting phase, but continue to make sure we’re using the technology as responsibly as possible, and we’ve made small tweaks to our practices over time,” says public information officer Deputy Brian van Kleef of the Washington County Sheriff’s Office.
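
For readers curious what that kind of after-the-fact photo search looks like in code, here is a minimal sketch against Amazon Rekognition’s face-search API, the same class of product named above. The collection ID, image file and threshold below are illustrative assumptions, not the county’s actual configuration.

```python
import boto3

# Assumes booking photos were previously indexed into a Rekognition
# collection via IndexFaces; "booking-photos" is a hypothetical name.
rekognition = boto3.client("rekognition", region_name="us-west-2")

with open("suspect.jpg", "rb") as f:  # hypothetical probe photo
    response = rekognition.search_faces_by_image(
        CollectionId="booking-photos",
        Image={"Bytes": f.read()},
        FaceMatchThreshold=90,  # return only candidates scoring 90%+ similarity
        MaxFaces=5,             # cap the number of leads returned
    )

# The API returns ranked candidates with similarity scores, not a yes/no
# answer; a human investigator still has to vet each lead.
for match in response["FaceMatches"]:
    print(match["Face"]["FaceId"], f"{match['Similarity']:.1f}%")
```

A one-off search like this is categorically different from pointing the same service at a live camera feed – the scenario AB 1215 is written to prevent.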

Careful use of facial recognition is admirable, but it doesn’t prevent such accumulated data from being hacked by a third party, says the ACLU’s Cagle.

“With just a few lines of code and a connection to those cameras, you could potentially turn all that against the community,” he says.

‘Not ready for prime time’

Of greater concern to some privacy watchdogs is what Assemblymember Ting calls the “not ready for prime time” nature of facial recognition tech.

While an iPhone X’s camera works well repeatedly scanning the same face for the same topographical features in order to unlock the smartphone – a one-to-one match against a single enrolled face – that’s different from asking the technology to pick a real face out of a large database of two-dimensional photos.
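
To make that difference concrete, here is a toy sketch contrasting one-to-one verification with one-to-many identification over face embeddings. The random vectors stand in for the descriptors a real model would compute, and the threshold is invented; real systems like Face ID also use depth data rather than plain photos.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face-embedding vectors, in [-1, 1]."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
enrolled = rng.normal(size=128)                     # the phone owner's enrolled face
probe = enrolled + rng.normal(scale=0.1, size=128)  # a fresh scan of the same face
gallery = rng.normal(size=(25_000, 128))            # e.g., a mugshot database

# One-to-one verification (Face ID-style): a single comparison.
VERIFY_THRESHOLD = 0.8  # invented value; vendors tune this empirically
print("verified:", cosine_similarity(probe, enrolled) > VERIFY_THRESHOLD)

# One-to-many identification (police-search-style): one comparison per
# gallery entry, so a small per-comparison false-match rate compounds
# across the whole database.
scores = gallery @ probe / (np.linalg.norm(gallery, axis=1) * np.linalg.norm(probe))
best = int(np.argmax(scores))
print(f"best gallery match: index {best}, score {scores[best]:.2f}")
```

Verification answers one question about one enrolled face; identification asks the same question tens of thousands of times, which is why error rates that are tolerable on a phone become alarming in a police database.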

Last year, the ACLU conducted an experiment in which it used Rekognition software to compare photos of all 535 members of Congress with 25,000 mugshots. The software falsely flagged 28 members as criminals – a false-match rate of roughly 5%.

A growing number of examples cut both ways for facial recognition software.

On the one hand, Google Photos once labeled two African-Americans as “gorillas.” On the other, law enforcement used the tech last year to identify the perpetrator in the shooting deaths of five at the Capital Gazette in Maryland, and in India, about 3,000 missing children were identified in four days using the software.

If there is one cautionary tale that surfaces repeatedly in discussions of this technology, it is China’s zealous policing of the Uighurs, a largely Muslim minority in the western part of the country, through an array of cameras equipped with facial recognition software.

“This all is a lot bigger than police body cams – it’s cameras in buildings and on streets, in drones. We’re reaching a critical mass now and haven’t been paying attention,” says Brian Hofer, chairman of the City of Oakland’s privacy commission, which is pushing council members to adopt a ban.

“There’s something visceral about facial recognition, something creepy,” he says. “We have seen the horrors of using the system to target a population, as in China, and yet we have this ridiculous belief surveillance will be used in a friendly manner.”

Problems with accuracy

Many privacy advocates note that Microsoft’s president, Brad Smith, recently cast doubt on the state of facial recognition tech, which his company develops.

The Redmond, Washington, company declined to sell its tech to an unnamed California police agency, which wanted to use it to scan faces via cameras in cars and on officers.

“We said this technology is not your answer,” Smith said at a Stanford conference on artificial intelligence in April, noting that the software had been trained largely on photos of white men.

Researcher Joy Buolamwini of the Massachusetts Institute of Technology made a similar point after her experiments earlier this year showed that the technology has particular difficulty accurately identifying women of color.

And last summer, Rick Smith, CEO of body camera manufacturer Axon, said he wasn’t ready to make the tech available for body cameras because the “accuracy thresholds” aren’t where they need to be.

Such concerns from technologists themselves give many privacy advocates pause; they warn that if lawmakers don’t act soon at the city, state or national level, it could be too late.

Which is why tackling the issue now is crucial, says Robin Feldman, director of the Institute for Innovation Law at University of California-Hastings College of the Law.

“Police body cams touch on information integrity in two ways,” she says. “One, if I use the tech, can I trust the information I am getting? And two, when the tech uses my information, can I trust what happens to it? And in both cases here, it’s problematic.”

Feldman says humans have a tendency to romanticize the benevolent, magical powers of technology, at their own peril.

“The great computer in the sky is not foolproof,” she says. “I don’t think it’s possible to put the tech genie back in the bottle. Which is why when it comes to facial recognition software, we need sound policies right now over how it is used.”

Source: citifmonline.com
