New Facebook AI fools facial recognition


Facebook is embroiled in privacy struggles over its use of facial recognition and is working to spread the technology far and wide, yet it’s also coming up with ways to flummox that same technology so it can’t match an image of a person to one stored in image databases.

On Sunday, Facebook Research published a paper proposing a machine learning method for de-identifying individuals in videos: it subtly distorts face images so they’re still recognizable to humans, but not to machines.

Other companies have done similar things with still images, but the researchers say this is the first technology that works on video to thwart state-of-the-art facial recognition systems.

The researchers’ demo shows it in action, with before-and-after videos of celebrity faces that many of us will recognize but that automatic facial recognition (AFR) systems can’t identify.

This, from the holder of the world’s biggest face database?

Why would Facebook do this, when it’s been so keen to push facial recognition throughout its products, from photo tag suggestions to patent filings that describe things like recognizing people in grocery store checkout lines so the platform can automatically send them a receipt?

An approach that’s resulted in bans on facial recognition in Europe and Canada, and at least one $5 billion class action lawsuit?

Facebook last month turned off the default setting for tag suggestions – the feature that automatically recognizes your friends’ faces in photos and suggests name tags for them – while also expanding facial recognition to all new users.

In the de-identification paper, researchers from Facebook and Tel Aviv University said that the need for this type of artificial intelligence (AI) technology has been precipitated by the current state of facial recognition’s adoption and evolution. That state is a mess: a growing number of governments use facial recognition and other AI to surveil their citizens, and abuse of the technology to produce deep fakes adds to the confusion over what’s real and what’s fake news.

From the paper:

Face recognition can lead to loss of privacy and face replacement technology may be misused to create misleading videos.

Recent world events concerning the advances in, and abuse of, face recognition technology invoke the need to understand methods that successfully deal with deidentification. Our contribution is the only one suitable for video, including live video, and presents quality that far surpasses the literature methods.

VentureBeat spoke with one of the researchers, Facebook AI Research engineer and Tel Aviv University professor Lior Wolf. Wolf said the AFR fooler works by pairing an adversarial autoencoder with a classifier network.

It enables fully automatic video modification at high frame rates, “maximally decorrelating” the subject’s identity while leaving the rest of the image unchanged and natural-looking, including the subject’s pose and expression and the video’s illumination. In fact, the researchers said, humans often recognize identities by non-facial cues, including hair, gender and ethnicity. Their AI therefore leaves those identifiers alone and instead shifts parts of the image in a way that the researchers say is almost impossible for humans to pick up on.
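To make that concrete, here’s a minimal sketch, in PyTorch, of the kind of adversarial objective Wolf describes: a small autoencoder learns to perturb a face so that a frozen face recognizer’s identity embedding drifts away from the original, while a reconstruction loss keeps the frame looking unchanged. The networks, sizes and losses below are hypothetical stand-ins, far simpler than the paper’s actual architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerturbingAutoencoder(nn.Module):
    """Toy autoencoder that outputs a subtly modified face image."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                                 nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Tanh())

    def forward(self, x):
        return self.dec(self.enc(x))

class FaceRecognizer(nn.Module):
    """Frozen toy classifier standing in for a production AFR embedding network."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(8, 128))

    def forward(self, x):
        return F.normalize(self.features(x), dim=1)

autoenc = PerturbingAutoencoder()
recognizer = FaceRecognizer()
for p in recognizer.parameters():
    p.requires_grad_(False)          # the recognizer is the fixed adversary

opt = torch.optim.Adam(autoenc.parameters(), lr=1e-4)
faces = torch.rand(4, 3, 64, 64) * 2 - 1   # stand-in batch of face crops in [-1, 1]

for step in range(100):
    out = autoenc(faces)
    # Keep the frame looking like the original (pose, expression, lighting)...
    reconstruction_loss = F.l1_loss(out, faces)
    # ...while pushing the recognizer's identity embedding away from the original.
    id_similarity = F.cosine_similarity(recognizer(out), recognizer(faces)).mean()
    loss = reconstruction_loss + id_similarity   # low similarity = decorrelated identity
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Minimizing the cosine similarity between the two embeddings is one way to “decorrelate” identity; the reconstruction term is what keeps pose, expression and illumination intact, per the researchers’ description.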

This could be used to create video that can be posted anonymously, Wolf said:

So the autoencoder is such that it tries to make life harder for the facial recognition network, and it is actually a general technique that can also be used if you want to generate a way to mask somebody’s, say, voice or online behavior or any other type of identifiable information that you want to remove.

It’s comparable to how face-swapping apps work: the de-identification AI uses an encoder-decoder architecture to generate both a mask and an image. To train the system, an image of a person’s face is distorted by rotating or scaling it before being fed into the encoder. The decoder outputs an image that’s compared with the initial, undistorted image. The more obfuscation, the less natural-looking the face.
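As a rough illustration of that training recipe, and assuming the autoencoder from the earlier sketch were modified to return an (image, mask) pair, one training step might look like the following. The helper names, the perturbation ranges and the mask-blending scheme are illustrative assumptions, not the paper’s exact method:

```python
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def training_step(autoenc, face, opt):
    """One sketch of a de-identification training step.

    `autoenc` is assumed to return a (generated_image, mask) pair;
    `face` is a (B, 3, H, W) batch of clean face crops.
    """
    # Distort the input with a small random rotation and scale...
    angle = float(torch.empty(1).uniform_(-10, 10))
    scale = float(torch.empty(1).uniform_(0.9, 1.1))
    distorted = TF.affine(face, angle=angle, translate=[0, 0],
                          scale=scale, shear=[0.0])

    generated, mask = autoenc(distorted)          # mask values in [0, 1]
    # ...blend the generated pixels into the frame via the mask...
    output = mask * generated + (1 - mask) * distorted
    # ...and compare the result against the initial, undistorted image.
    loss = F.l1_loss(output, face)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```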

A Facebook spokesperson told VentureBeat that the company currently has no plans to apply this AFR-roadblock technology to any of Facebook’s apps, but that methods like this could enable public speech that remains recognizable to people but not to AI systems.

Where could this come in handy?

I can think of at least two scenarios in which face de-identification would come in handy when it comes to government use of facial recognition technology. First, it might have the potential to replace the facial recognition-enhanced police bodycams that recently got outlawed statewide (for three years) in California; Oregon and New Hampshire already had similar laws on the books, as do various US cities.

Second, it could conceivably help the forgetful who fail to scrub the faceprints off their agency’s files when privacy experts come calling with Freedom of Information Act (FOIA) requests, as happened when the New York Police Department (NYPD) handed over unredacted files in April… and then had to ask to get them back.

Whether those are pluses or minuses vis-à-vis privacy rights is a worthy discussion. But there’s one case that comes to mind in which use of face de-identification technology could plausibly lead to undeniable privacy harm: the theft of women’s images to create deep fake porn.

These days, you can churn those things out fast and cheap, with little to no programming skills, thanks to open-sourcing of code and commodification of tools and platforms. For all the women, particularly celebrities, whose likenesses have been stolen and used in porn without their permission, there are no quick, easy-to-use, affordable tools to spot their faceprints and identify deep fakes as machine-generated.

Would de-identification technology make it even tougher to find out when you’ve been unwillingly cast in nonconsensual porn? Is there any reason why deep-fake creators couldn’t, or wouldn’t, get their hands on this so they can keep cashing in on their fakery work?


from Naked Security https://ift.tt/2pqtDBc
