The Face of Surveillance

When the iPhone X debuted last month, Apple introduced much of the American public to a new technology: facial recognition. The company’s flagship device includes a feature, called Face ID, that allows users to unlock the phone by looking at it.
Although Face ID is more secure than competing systems, the underlying concept has been around for years. Facebook already uses facial recognition to tag photos and is prototyping facial-recognition-based account recovery. Microsoft has developed an algorithm that can recognize a user’s face and try to guess their mood. Walmart has tested anti-theft facial-recognition systems. But one of the technology’s biggest customers is not in the private sector: American law enforcement.
Law enforcement has a compelling interest in the development of facial recognition. It allows police to identify suspects caught on camera, or to comb security footage of a terrorist attack for its perpetrators. It is a remarkably powerful tool.
But with this power comes the potential for abuse. Unlike technologies that target cell phones or computers, facial recognition is difficult to avoid. You can’t leave your face at home, nor can you turn it off. Facial recognition operates surreptitiously: because it can scan your face without your knowledge or consent, you cannot know where—or when—you are being watched. Facial recognition is also far cheaper than alternative methods of surveillance. Traditionally, law enforcement had to invest significant time and resources to monitor someone’s activity, which narrowed the scope of surveillance it could practically conduct. However, “the automation of [surveillance] allows it to happen for even minor crimes,” explained Alvaro Bedoya, Executive Director of the Center on Privacy and Technology at Georgetown Law and an expert on facial recognition. It extends surveillance, he says, even to “trespassing, obstructing the entrance to a building, disorderly conduct—the sort of things that happen at a peaceful protest.”
Especially under the current administration, even attending a peaceful protest could expose you to felony charges. As the technology’s capacities rapidly advance, facial recognition could be used to unmask protestors, activists, or journalists en masse and in real time, chilling free speech and expression. It could also be used to monitor the activity of entire communities deemed dangerous to the state. “Democracy by its nature relies on anonymity of some activities and interactions,” writes Jake Laperruque. Facial recognition threatens to eliminate that anonymity.
The infrastructure needed for these abuses is largely in place. Roughly half of American adults already have their faces stored in a facial-recognition database, and the government holds not only mugshots but also driver’s license, visa, and employment photos. Facial-recognition searches of these databases are run routinely at all three levels of government.
Although there are situations when facial recognition could be used legitimately, no legal or statutory restrictions limit it to such circumstances. The public knows almost nothing about it, and Congress has just begun to grapple with the implications of its increasingly pervasive use.
The Technology
Facial recognition works in three phases. In the first phase, law enforcement feeds the system an image of an unidentified face, known as a “probe shot.” This is the person police are trying to identify. The image could be taken by an undercover officer, extracted from a video feed, or pulled from a social media post. The system recognizes that a face is present, and begins the analysis.
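To make this first phase concrete, here is a minimal sketch in Python of the detection step, using OpenCV’s stock frontal-face detector. The image path is a placeholder, and real law-enforcement systems use far more capable detectors; this only illustrates how a system recognizes that a face is present.

```python
import cv2  # pip install opencv-python

# Load a probe shot (the path is a placeholder) and convert to grayscale.
image = cv2.imread("probe_shot.jpg")
if image is None:
    raise SystemExit("could not read the probe shot")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# OpenCV ships a pre-trained frontal-face detector with the library.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

# Each detection is an (x, y, width, height) box around a candidate face.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Detected {len(faces)} face(s)")
```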
In the second phase, the system extracts information from the probe shot. Most law enforcement systems superimpose a nodal map onto the unidentified face in order to measure its features, such as the distance between the eyes, the width of the chin, and the height of the brow. The resulting profile of measurements is called a “face print.” When facial recognition is used on crime shows like CSI, it’s usually the face print that makes it on-screen.
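The sketch below shows what a face print might look like in code: it reduces a handful of invented landmark coordinates to a list of pairwise distances. Real systems locate dozens of nodal points automatically and use far richer measurements; this is only meant to show how a face becomes a comparable numeric profile.

```python
import math

# Hypothetical landmark coordinates (in pixels) located on a probe shot.
# A real system would detect many more such nodal points automatically.
landmarks = {
    "left_eye":  (120, 95),
    "right_eye": (180, 96),
    "nose_tip":  (150, 140),
    "chin":      (150, 200),
}

def face_print(points):
    """Reduce named landmarks to a list of pairwise distances.

    Measuring relationships between points, rather than raw positions,
    makes the profile insensitive to where the face sits in the frame.
    """
    names = sorted(points)
    return [
        math.hypot(points[a][0] - points[b][0], points[a][1] - points[b][1])
        for i, a in enumerate(names)
        for b in names[i + 1:]
    ]

print(face_print(landmarks))  # six distances for four landmarks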
In the final stage, the extracted profile is compared against previously acquired profiles stored in a database. These comparisons are run for one of two purposes: verification or identification. For a verification check, the system runs a “one-to-one” comparison between the unidentified face and a known profile. This is how Apple’s Face ID works. The user takes a series of self-portraits, and the iPhone calculates a mathematical representation of the face. When someone tries to unlock the phone, it compares the user’s face with the face it has in storage; if the two profiles match, the phone will unlock. Law enforcement might use facial recognition to verify the identity of someone already in custody. For instance, U.S. Customs and Border Protection is piloting a system that matches travelers’ faces against the photos stored in their U.S. electronic passports.
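A one-to-one verification check can be sketched as a single distance comparison against one stored profile. The face prints and the threshold below are illustrative values, not parameters of Face ID or any real system:

```python
def distance(a, b):
    """Euclidean distance between two face prints (lists of measurements)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def verify(probe_print, enrolled_print, threshold=12.0):
    """One-to-one check: accept only if the two profiles are close enough.

    The threshold is an arbitrary illustrative value; choosing it is the
    central accuracy-versus-security trade-off of any verification system.
    """
    return distance(probe_print, enrolled_print) <= threshold

# A probe that closely matches the enrolled profile "unlocks the phone."
print(verify([60.0, 75.2, 104.9], [60.4, 74.8, 105.3]))  # True
```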
But more often, law enforcement uses it to identify an unknown individual. In this case, the facial-recognition system will perform a “one-to-many” comparison between the unidentified probe shot and the many profiles stored in a collection, currently estimated to include roughly half of American adults. If law enforcement wants to identify an unknown suspect caught on a CCTV camera, for example, their system will compare footage of the suspect’s face with photos stored in a database. The system will return a list of most-likely matches, known as a candidate list. Candidates are ranked by probability, based on how similar their facial profiles are to that of the person in the video feed.
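An identification search is the same comparison run against every profile in the database, with the results sorted into a candidate list. Here is a minimal sketch reusing the distance idea above; the identifiers and measurements are invented:

```python
def distance(a, b):
    """Euclidean distance between two face prints."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def candidate_list(probe_print, database, top_k=3):
    """One-to-many search: rank every enrolled profile by similarity.

    The list always contains top_k names, whether or not the person in
    the probe shot is enrolled at all, which is why an unreviewed
    candidate list can implicate entirely innocent people.
    """
    ranked = sorted(database, key=lambda name: distance(probe_print, database[name]))
    return ranked[:top_k]

# Hypothetical enrolled profiles (identifiers and measurements invented).
database = {
    "subject_001": [60.1, 75.0, 105.0],
    "subject_002": [52.3, 80.1, 98.7],
    "subject_003": [61.0, 74.5, 104.2],
    "subject_004": [70.2, 66.9, 110.4],
}
print(candidate_list([60.0, 75.2, 104.9], database))
```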
Databases, Databases, Databases
In order to run effective identification searches, law enforcement must have access to a lot of facial profiles. A number of different federal, state, and local databases store these photos. In total, law enforcement has access to around 411 million images held in databases across the country. That figure counts photos, not individual people, which is why it exceeds the population of the United States.
Some state and local law enforcement agencies have facial-recognition systems of their own, and others work in collaboration with federal authorities at the FBI. Depending on the database, these searches may be conducted by federal, state, or local authorities.
At the FBI, the Facial Analysis, Comparison, and Evaluation (FACE) Services Unit is responsible for overseeing facial-recognition searches. For this purpose, the FBI has assembled a series of databases known as the Next Generation Identification (NGI) system.
Within NGI, the FACE Unit can directly search the Interstate Photo System (NGI-IPS), a federal database network that includes over 30 million photos. One partition of NGI-IPS holds photos of people who have been subjected to the criminal justice system, though not necessarily convicted of a crime; it includes mugshots and probation photos. Another subsection, the Unsolved Photo File (UPF), is populated with law enforcement photos of unidentified individuals taken during investigations. In 2010, NGI-IPS was quietly expanded to include photos taken pursuant to government background or employment checks. These photos, indisputably civil in nature, account for approximately 20 percent of the database.
The FACE Unit conducts hundreds of searches of NGI-IPS per day. Seven states can also search NGI-IPS using software known as the Universal Face Workstation.
There are a number of other databases accessible to federal law enforcement, including a large database of visa and passport photos held by the State Department. In the aggregate, however, the largest photo repositories available to law enforcement are held by state Departments of Motor Vehicles (DMVs).
According to a report issued by the Government Accountability Office (GAO) in May of 2016, at least sixteen states grant the FBI access to their DMV databases, which hold photos from driver’s licenses and other state IDs. In total, these databases hold over 235 million images. The same report stated that the FBI was negotiating Memorandums of Understanding (MOUs) with eighteen additional states in order to gain access to their DMV photos. Then something strange happened: the FBI denied that it was negotiating the MOUs. The agency told Diana Maurer, the Director of Homeland Security and Justice Issues at the GAO and the author of the report, that FBI employees had merely provided information about the federal program to state officials, and were not seeking data-sharing agreements. Whatever the case, the GAO removed those states from the report and reissued it in August. The true status of these negotiations remains unclear.
Law enforcement access to DMV databases is controversial for a number of reasons. From a civil-liberties perspective, allowing law enforcement to run searches of civil databases sets a dangerous precedent: without adequate oversight, police could search for people engaging in constitutionally protected activity. When the GAO report was released in May of 2016, the ACLU of Vermont sued, alleging that the Vermont DMV’s program violated a state law prohibiting “biometric identifiers” from being used to identify Vermonters. The ACLU won its case, and the program was shut down.
The practice is also controversial because of the way DMV facial-recognition systems were designed. The databases are optimized for “one-to-one” comparisons, meant to prevent identity theft or fraud, not the “one-to-many” comparisons run by law enforcement. This means there is a higher chance the system will return a list of entirely innocent candidates.
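The statistical intuition is simple: even a per-comparison false-match rate tuned to be negligible in a one-to-one check compounds across millions of comparisons. The numbers below are illustrative, not measured values for any real system:

```python
# Probability that a one-to-many search returns at least one false match,
# assuming independent comparisons. Both numbers are illustrative.
false_match_rate = 1e-6       # per-comparison rate acceptable for 1:1 checks
database_size = 30_000_000    # roughly the scale of NGI-IPS

p_any_false_match = 1 - (1 - false_match_rate) ** database_size
print(f"{p_any_false_match:.6f}")  # ~1.000000: a false hit is near-certain
```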
Furthermore, searches of DMV databases are not conducted by facial recognition experts. The FBI says members of its FACE Unit receive extensive training in the use and limits of state-of-the-art facial recognition technology. But these experts are not permitted direct access to state DMV databases. Rather, state and local authorities are responsible for conducting searches and returning these results to the FBI. These authorities are unlikely to have the expertise, training, or manpower to properly conduct and interpret the searches. The person in charge of the search “could be completely untrained, somebody who spent two hours with the system,” said Clare Garvie, an Associate at the Georgetown Law Center on Privacy & Technology.
State and local authorities also lack the expertise required to review the candidate list. At the federal level, at least two trained facial examiners review every match returned by the FBI’s system. The standards for this review process are set by the Facial Identification Scientific Working Group (FISWG), a working group of academics and practitioners studying the use of facial recognition by law enforcement. They endorse morphological analysis, or the comparison of mathematical facial measurements between subjects, a technique meant to prevent human reviewers from relying on shortcuts, like glasses or hairstyle, when assessing matches. At the state and local level, such a sophisticated review is not required. The entirety of the candidate list, even candidates that could have been eliminated by human review, may be passed on to law enforcement. Police could easily misidentify someone from this faulty line-up, and then arrest them. In such a case, an erroneous computer determination would be solely to blame for a serious constitutional violation.
A Lack of Oversight
The rationale required to run a search in the NGI system is as overbroad as its scope. To run a bulk search of millions of Americans’ faces, the FBI must merely allege that it may find information indicating “potential criminal activity.” This is because the FBI and Department of Justice take the position that Americans have no Fourth Amendment protection for what they display in public. Since you display your face in public, the argument goes, they may search it. The FBI has offered no further explanation, and it doesn’t have to: no federal law explicitly regulates facial recognition, and no major court decision limits it.
Patchy search protocols are not the only cause for concern. The NGI system is also insulated from external oversight. When the FBI began integrating facial profiles into the NGI-IPS database in 2010, it should have filed a System of Records Notice (SORN) and a Privacy Impact Assessment (PIA). This was required under the Privacy Act of 1974, a federal law meant to prevent the creation of secret databases about American citizens. The FBI failed to file these documents for over five years, but faced no penalty from the Department of Justice. “The FBI hasn’t even gotten a slap on the wrist for not having been transparent,” said Lee Tien, a lawyer at the Electronic Frontier Foundation.
In 2015, Senator Al Franken requested the aforementioned GAO report on the FBI’s use of facial recognition. The report, titled “Face Recognition Technology: FBI Should Better Ensure Privacy and Accuracy,” recommended that the FBI assess why the SORNs and PIAs were not completed on time, conduct an audit of its biometric systems, and run tests to ensure that facial recognition is accurate enough to be used by law enforcement. The Department of Justice dismissed these recommendations, which it said represented a needless “checkbox approach” to privacy; internal oversight, it maintained, was sufficient.
Less than a year later, on the same day the FBI filed its first SORN and PIA, the agency moved to exempt the NGI system from the Privacy Act entirely, on the grounds that the law “interfered with their role as a law enforcement agency.” Public outcry was swift and intense. “This is an extraordinarily broad proposal,” read a letter from forty-four different privacy, civil liberties, and immigrants’ rights organizations, “and the system it affects is extraordinarily sensitive—particularly for the communities it may affect the most.” Yet even if the FBI were required to comply with it, the Privacy Act was written decades before the creation of facial recognition, and contains no explicit provisions regarding its use. “The entire Privacy Act is riddled with problems,” acknowledged Tien. “The saga of the NGI database is like a cautionary tale, a failure in non-compliance.” In any case, these blind spots in the law are now moot: the Department of Justice approved the FBI’s request on August 31, 2017.
Even members of Congress have struggled to get information on law enforcement’s use of facial recognition. In March of 2017, Kimberly Del Greco, the Deputy Assistant Director of the Criminal Justice Information Services Division of the FBI, told a House oversight committee that “Law enforcement has performed photo lineups and manually reviewed mugshots for decades… We only search the criminal mugshots that we have in our repository.” This is misleading to the brink of outright falsehood. Although the FBI can only directly search NGI-IPS and a database from the State Department, its system is designed to give the bureau access, through intermediaries, to millions of other photos. Several members of Congress were outraged by Del Greco’s mischaracterization. Representative Elijah Cummings (D-Maryland) reprimanded her for her dishonesty: “I usually don’t do this, but it kind of left me not feeling very good.” Representative Stephen Lynch (D-Massachusetts) told Del Greco that he has “zero confidence” in the ability of the FBI and Department of Justice to keep the system in check.
The Future of Facial Recognition
Current facial-recognition systems are somewhat hobbled by inaccuracy. If the probe shot (the photo of the unidentified person) lacks sufficient resolution or field of view, law enforcement might not be able to use it in a search. Even if the photo is clear, the subject’s pose, the illumination, and their expression (known collectively as PIE) can all undermine accuracy. After the Boston Marathon bombings, for example, facial recognition failed to identify either suspect, despite the fact that their photos were in the database. False positives are also a problem. In 2011, the FBI found that, when its system was calibrated to return a candidate list of 50 potential matches, it returned a list of entirely innocent candidates about 14 percent of the time. A technology that fails 14 percent of the time should not be trusted by agencies capable of subjecting citizens to the criminal justice system. Furthermore, false positives have tarnished biometric evidence in the past, with both hair samples and fingerprints. These supposedly foolproof technologies have led to numerous false arrests and convictions, particularly among people of color. There is no reason to believe that facial recognition will be any different.
At the same time, highly accurate facial-recognition systems pose risks of their own, and they are on the horizon. Cities are fast replacing their old CCTV cameras, which produce grainy or dark footage, with state-of-the-art high-definition systems. And the government is rapidly learning from the private sector, which has developed more sophisticated facial-recognition systems for a variety of marketing and entertainment purposes. Over the past decade, facial recognition technology has made big leaps forward.
The most powerful and potentially dangerous development is the growing capability of facial-recognition systems to search ever larger databases in real time—on the spot, using live video, and with immediate results. Unlike conventional, verification-driven facial recognition, real-time systems can extract faces from a live video feed and compare them against the people that law enforcement is looking for. In other words, real-time technologies run “one-to-many” searches in rapid-fire succession, continuously scanning and probing. “Real-time face searches are dragnet searches,” explained Bedoya. This technology has already been purchased and implemented by several police departments and fusion centers.
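Structurally, a real-time system is just the detection and identification steps sketched earlier, run in a loop over a live feed. In this hypothetical sketch, search_database is a stand-in for the one-to-many comparison; everything else uses OpenCV’s stock components:

```python
import cv2  # pip install opencv-python

def search_database(face_image):
    """Hypothetical stand-in for the one-to-many search sketched earlier;
    a real deployment would return a ranked candidate list per face."""
    return []

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
capture = cv2.VideoCapture(0)  # 0 = default camera; a CCTV stream URL also works

# Rapid-fire "one-to-many": every face in every frame triggers a search.
while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        candidates = search_database(frame[y:y + h, x:x + w])

capture.release()
```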
Were such technologies to become sufficiently sophisticated, law enforcement could use them to identify every face in a crowd, rather than searching for an individual suspect. This could give authorities “arrest-at-will” power: the ability to quickly single out and arrest individuals with outstanding warrants, even for petty offenses. These warrants are common, especially in communities of color. In 2015, for example, the Department of Justice found that three-quarters of the residents of Ferguson, Missouri, had outstanding arrest warrants. The vast majority were for minor offenses like “High Grass and Weeds” or “Barking Dog” violations.
The miniaturization of real-time technologies is also concerning. In Russia, a smartphone app called FindFace allows users to snap a photo of a stranger in order to find their profile on VKontakte, a Russian social-networking site. This technology is now being used on CCTV cameras to conduct real-time surveillance in Moscow. Ntech, the lab that developed the software, told The Intercept that it is already compatible with police body cameras, and an American company known as FaceFirst sells similar technology to businesses in the United States. “By placing [the technology] on a standard SLR camera,” boasts an advertisement from 2010, “the user is able to photograph people, and literally identify everyone in an entire crowd.”
Body-camera manufacturers have taken notice. In March, a Justice Department-funded survey found that nine of thirty-eight body-camera manufacturers either offer facial-recognition capabilities or are capable of integrating them. Taser, the leading manufacturer of body cameras, has openly acknowledged its desire for real-time surveillance capabilities.
If real-time technology is integrated with the databases held at the FBI and elsewhere, its power will be multiplied exponentially. Law enforcement will be able to use a miniaturized, real-time system to track entire communities of people as they walk down public streets. This will be a watershed moment for the ongoing privacy debate in America: the question will no longer be whether the benefits to law enforcement outweigh the costs to anonymity and privacy. “If that happens,” said Bedoya, “you have to start asking yourself a much more fundamental question. Does that look like America?”
Harrison recently graduated from Middlebury College, where he studied political science and Arabic. At Middlebury, he was an editor of the student-run newspaper, The Campus; he also volunteered through the college’s chapter of Amnesty International. A Massachusetts native, Harrison has previously worked as a research intern at the Council on American-Islamic Relations in Boston as well as a security researcher at the Arab Institute for Security Studies in Jordan, where he also studied abroad. At The Century Foundation, Harrison will be working with the surveillance and privacy team on a variety of research and writing projects related to surveillance law, the targeting of vulnerable populations, and spy technologies.