Facial recognition firm sued for scraping 3 billion faceprints


New York facial recognition startup Clearview AI – which has amassed a huge database of more than three billion images scraped from employment sites, news sites, educational sites, and social networks including Facebook, YouTube, Twitter, Instagram and Venmo – is being sued in a potential class action lawsuit that claims the company gobbled up photos out of “pure greed” to sell to law enforcement.

The complaint (posted courtesy of ZDNet) was filed in Illinois, which has the nation’s strictest biometrics privacy law – the Biometric Information Privacy Act (BIPA).

The suit against Clearview was just one chunk of shrapnel that flew after the New York Times published an exposé about how Clearview has been quietly selling access to faceprints and facial recognition software to law enforcement agencies across the US, claiming that it can identify a person based on a single photo, revealing their real name and far more. From the New York Times:

The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew.

Clearview told the Times that more than 600 law enforcement agencies have started using its technology in the past year, and that it has also sold the technology to a handful of companies for security purposes. Clearview declined to provide a list of its customers.

Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University, told the newspaper that the “weaponization possibilities” of such a tool are “endless.”

Imagine a rogue law enforcement officer who wants to stalk potential romantic partners, or a foreign government using this to dig up secrets about people to blackmail them or throw them in jail.

The secretive company “might end privacy as we know it,” the Times predicted in its headline. From the report:

Even if Clearview doesn’t make its app publicly available, a copycat company might, now that the taboo is broken. Searching someone by face could become as easy as Googling a name. Strangers would be able to listen in on sensitive conversations, take photos of the participants and know personal secrets. Someone walking down the street would be immediately identifiable – and his or her home address would be only a few clicks away. It would herald the end of public anonymity.

The complaint claims that Clearview’s technology gravely threatens civil liberties.

Constitutional limits on the ability of the police to demand identification without reasonable suspicion, for instance, mean little if officers can determine with certainty a person’s identity, social connections, and all sorts of other personal details based on the visibility of his face alone.

The lawsuit claims that Clearview isn’t just selling this technology to law enforcement: it’s also allegedly sold its database to private entities including banks and retail loss prevention specialists; has “actively explored” using its technology to enable a white supremacist to conduct “extreme opposition research”; and has developed ways to implant its technology in wearable glasses that private individuals could use.

Clearview thus joins Facebook and Vimeo in being accused of violating BIPA by amassing biometric data without people’s consent.

Outrage

Representatives of Facebook, YouTube, Twitter, Instagram and Venmo told the Times that their policies prohibit this type of scraping. Twitter said that it has explicitly banned the use of its data for facial recognition. Last week, Twitter also sent a cease-and-desist letter to Clearview, telling it to stop collecting its data and to delete whatever data it now has.

In interviews with the Times, Clearview founder Hoan Ton-That shrugged at the notion that scraping data violates site policies:

A lot of people are doing it. Facebook knows.

US lawmakers have expressed outrage. Senator Ron Wyden said on Twitter that Clearview’s possible use of its technology to suppress media interest was “troubling”.

Senator Edward J. Markey echoed the Times’s “end of privacy as we know it” prediction, sending a letter to Clearview on Thursday in which he suggested that its technology could “facilitate dangerous behaviors and effectively destroy individuals’ ability to go about their lives anonymously”.

Clearview’s product appears to pose particularly chilling privacy risks, and I am deeply concerned that it is capable of fundamentally dismantling Americans’ expectation that they can move, assemble, or simply appear in public without being identified.

Markey called on Clearview to provide a list of all the law enforcement and intelligence agencies it has talked to about acquiring its technology, as well as which of them are currently using it.

Not the first time

This is far from the first time that facial recognition has threatened the end of anonymity, mind you. In May 2019, we heard about a programmer who claimed to have cross-referenced 100,000 faces of women appearing in adult films with photos in their social media profiles.

Three years before that, porn actresses and sex workers were being outed to friends and family by people using a Russian facial recognition service to strip them of anonymity. In early April 2016, users of an imageboard called Dvach began using the “FindFace” service to match explicit photos with images posted to VK, the Russian equivalent of Facebook.

