Tech for Evil Case Study: Clearview AI

You may have heard about Clearview AI in the news, especially around facial recognition. But who is this company, and what do they actually do?

Setting the Stage

First, a bit of background to this post. Four years ago I wrote about Tech for Evil during the Standing Rock protests against the Dakota Access Pipeline. Law enforcement used drones and LRADs (long-range acoustic devices) against peaceful protesters, all new tech that has evolved rapidly in the last six years.

Democracy Now! covering the Standing Rock protests.

Here we are, four years later. People seem to have forgotten about Standing Rock, if they even heard about it when it happened. Now, we’re in the midst of a global outcry against police violence against Black people, which ironically is being met with police violence.

How has law enforcement innovated its use of technology to suppress dissent since the Standing Rock protests? Let’s review one of the most popular surveillance technologies being used by law enforcement across the US: Clearview AI.

Competitive Analysis of Clearview AI

If I put my product marketing hat on, there are three general things I try to know about a product to put it into market:

  1. What is the product?
  2. What will this product do for customers?
  3. What does the product do (technical details)?

These are the questions that would also guide me as I researched competitors. So let’s start there.

What is the Product?

This is the product description from Clearview.AI’s home page. I emphasized what I found interesting.

Clearview AI is a new research tool used by law enforcement agencies to identify perpetrators and victims of crimes. Clearview AI’s technology has helped law enforcement track down hundreds of at-large criminals, including pedophiles, terrorists and sex traffickers.

It is also used to help exonerate the innocent and identify the victims of crimes including child sex abuse and financial fraud.

Using Clearview AI, law enforcement is able to catch the most dangerous criminals, solve the toughest cold cases and make communities safer, especially the most vulnerable among us.

Clearview.AI main page

What Will the Product Do for Customers?

Clearview AI’s primary customer is law enforcement. The product has been sold to 2,200 law enforcement agencies in 27 countries. Other market segments they have sold to include retail, with big chains such as Macy’s, Walmart, and Best Buy (source), and banking.
Interestingly, it is sold in the typical SaaS way: via a trial. Clearview AI gives law enforcement agencies free access to the product for 60 days (source). Now think about this: their stated product goal is to help solve heinous crimes. If a case is being built upon a search done with their tool, the agency will probably need access longer than 60 days. That’s a pretty sweet way to achieve stickiness.

Here is how I would write a message map based on their marketing:

  1. Clearview AI identifies perpetrators of crimes. The examples of perpetrators given are pedophiles, terrorists, and sex traffickers: dangerous criminals. In fact, the customer quote they use is from a sex crimes unit in Canada:

    The product touts itself as the tool law enforcement needs to take the really bad guys off the streets.
  2. Clearview AI identifies victims of crimes: victims of child sex abuse and financial fraud, cold cases. The company is telling us that this product was made to protect the most vulnerable among us.
  3. Clearview AI is Compliant. The company tells law enforcement that they can use this product to “accurately, reliable, and lawfully identify subjects”, and that the product has been independently tested for accuracy and legal compliance by “nationally recognized authorities”.

What Does the Product Do (Technical Details)

Clearview AI is facial recognition software. According to founder Hoan Ton-That, the product is a search engine for faces (source). Anyone in law enforcement can upload a picture, and find any other picture in their database that matches that uploaded face. Their vision is to help law enforcement solve crimes.

The reason given for their success is the vast amount of data they have for matching: billions and billions of photos scraped from millions of public websites, which they claim yields few false positives. They do this via an algorithm that is trained on millions of examples of faces in different angles, lighting, poses, facial hair, etc. The algorithm “learns” a person by focusing on features that stay the same across years, weight gain or loss, etc., instead of measuring and comparing distances between features like traditional facial recognition software.

Because the algorithm is taught to learn and compare features, typical ways to trick measurement-based facial recognition software won’t work. And since they scrape billions of photos from millions of public websites, the algorithm has plenty of data with which to work.
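To make the distinction concrete, here is a minimal, hypothetical sketch of how this kind of matching works. In a real system, a neural network maps each photo to an embedding vector such that different photos of the same person land close together; the numbers below are made up for illustration, and the threshold is an assumption, not Clearview AI's actual implementation.

```python
import math

def cosine_similarity(a, b):
    """Compare two face embeddings; values near 1.0 suggest the same person."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(probe, database, threshold=0.9):
    """Return indexes of database embeddings that match the probe photo."""
    return [i for i, emb in enumerate(database)
            if cosine_similarity(probe, emb) >= threshold]

# Hypothetical embeddings. Different photos of the same person (different
# angle, lighting, beard) produce nearby vectors, so the match survives
# changes that would defeat simple landmark-distance comparison.
alice_photo_1 = [0.90, 0.10, 0.30]
alice_photo_2 = [0.88, 0.12, 0.31]  # same person, different conditions
bob_photo     = [0.10, 0.90, 0.20]

database = [alice_photo_2, bob_photo]
print(search(alice_photo_1, database))  # → [0]: only Alice matches
```

The key point for the discussion above: because the comparison happens in this learned feature space rather than on raw facial measurements, disguises that shift individual measurements tend not to move the embedding far enough to break the match.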


If I were to work on competitive talking points for Clearview.AI, I would focus on the testing claims.

Clearview AI claims to be independently tested. This doesn’t mean they conform to any regulations, or even the usual SaaS security certifications (from what I can see on their site). They can’t conform to regulations because there aren’t any yet. So what are these “independent tests”?

  • Paul Clement, former Solicitor General of the United States under George W. Bush, provided a legal opinion on the legality of the app (statement here, part of a NY Times exposé of Clearview AI). He argues for the app’s legality under the 4th Amendment of the US Constitution, which protects US citizens’ “expectation of privacy”.
    There is a glaring mismatch between the opinion and how the software actually works (as described by the founder). The opinion states (emphasis mine):

Clearview does not itself create any images, and it does not collect images from any private, secure, or proprietary sources. Clearview links only to images collected from public-facing sources on the Internet, including images from public social media, news media, public employment and educational websites, and other public sources.

Paul Clement legal opinion for Clearview AI via NY Times

This is not how the software works. Clearview AI scrapes billions and billions of images from millions of websites. They create copies of these images, put them in a database, and feed them to their algorithm. If Mr. Clement’s understanding of how the software works is flawed, so is his legal opinion.

  • Clearview claims to have stellar results from MegaFace at the University of Washington (a widely used facial recognition benchmark). But Clearview isn’t listed on the MegaFace website, and according to this Buzzfeed article Ton-That wouldn’t say whether they had even submitted their results. A MegaFace representative confirmed that Clearview AI’s accuracy metric has not been validated by MegaFace.
    In other words, even though they claim to have a better rating than Tencent or Google, they don’t have the proof to back up the claim.
  • Clearview claims to have achieved a 100% accuracy rating in a review based on methodology used by the ACLU. The ACLU says “The report is absurd on many levels and further demonstrates that Clearview simply does not understand the harms of its technology in law enforcement hands” (via Buzzfeed).

It is pretty clear that any claims made by Clearview AI need to be scrutinized thoroughly by law enforcement officials making the purchasing decisions.

Real Talk

Clearview AI is pushing the limits of the 1st and 4th Amendments, and according to the CEO they are looking forward to testing this in court. In the meantime, US officials are pressing the company to disclose whether law enforcement is using the tool to identify protesters at the huge protests after George Floyd’s death.

The concern is that police can take pictures of the crowd, run them through Clearview, and intimidate individual protesters after the protests, which is a clear infringement of 1st Amendment rights. Indeed, the mere fact that police are photographing crowds and have access to such a tool is probably a violation of that right.

Taking this back to Standing Rock, the tools available to law enforcement have just become more sophisticated and targeted. While it would be nice if the tools were only used to target the most heinous criminals and protect the most vulnerable in our communities, it seems as though they are being used more as tools to control communities in general.

What do you think about this? How concerned should we be about tech being used for evil? And let me be plain, Clearview is a small disruptor, there are bigger, scarier tools available to law enforcement. Should I do a competitive review of those as well? Let me know in the comments!
