Friday, 24 January 2020

Troubles Mount For Clearview AI, Facial Recognition Firm

According to a report by The Verge, Clearview AI is facing challenges to both its credibility and the legality of the service it provides.

The New York Times originally reported that Clearview AI has amassed more than three billion photos scraped from social media platforms and millions of websites, drawing Twitter's ire in the process. On the heels of those reports, it appears the company has not been honest about its background, its capabilities, or the extent of its successes.

A BuzzFeed report points out that Clearview AI’s predecessor program, Smartcheckr, was specifically marketed as being able to “provide voter ad microtargeting and ‘extreme opposition research’ to Paul Nehlen, a white nationalist who was running on an extremist platform to fill the Wisconsin congressional seat of the departing speaker of the House, Paul Ryan.”

Further hurting the company's credibility is an example it uses in its marketing, about an alleged terrorist who was apprehended in New York City after causing a panic by disguising rice cookers as bombs. The company cites the case as one of thousands of instances in which it has aided law enforcement. The only problem is that the NYPD said it did not use Clearview in that case.

“The NYPD did not use Clearview technology to identify the suspect in the August 16th rice cooker incident,” a spokesperson for the NYPD told BuzzFeed News. “The NYPD identified the suspect using the Department’s facial recognition practice where a still image from a surveillance video was compared to a pool of lawfully possessed arrest photos.”

That last statement, regarding “lawfully possessed arrest photos,” is particularly stinging as the company is beginning to face legal pushback over its activities.

New York Times journalist Kashmir Hill, who originally broke the story, recounted asking police officers she was interviewing to run her face through Clearview's database. "And that's when things got kooky," Hill writes. "The officers said there were no results — which seemed strange because I have a lot of photos online — and later told me that the company called them after they ran my photo to tell them they shouldn't speak to the media. The company wasn't talking to me, but it was tracking who I was talking to."

Needless to say, such an Orwellian use of the technology is not sitting well with some lawmakers. According to The Verge, members of Congress are beginning to voice concerns, with Senator Ed Markey sending a letter to Clearview founder Hoan Ton-That demanding answers.

“The ways in which this technology could be weaponized are vast and disturbing. Using Clearview’s technology, a criminal could easily find out where someone walking down the street lives or works. A foreign adversary could quickly gather information about targeted individuals for blackmail purposes,” writes Markey. “Clearview’s product appears to pose particularly chilling privacy risks, and I am deeply concerned that it is capable of fundamentally dismantling Americans’ expectation that they can move, assemble, or simply appear in public without being identified.”

The Verge also cites a recent Twitter post by Senator Ron Wyden, one of the staunchest supporters of individual privacy, commenting on the disturbing episode described above, in which Clearview monitored Ms. Hill's interactions with police officers.

“It’s extremely troubling that this company may have monitored usage specifically to tamp down questions from journalists about the legality of their app. Everyday we witness a growing need for strong federal laws to protect Americans’ privacy.”

—Ron Wyden (@RonWyden) January 19, 2020

Ultimately, Clearview may well provide the impetus for lawmakers to craft a comprehensive, national-level privacy law, something even tech CEOs are calling for.




