At least two U.S. law enforcement departments—and Motorola, which sells equipment to the government—have already purchased access to Amazon’s “Rekognition” system. This technology combines facial recognition and artificial intelligence to identify people and track their movements, including in crowds.

Among the many civil-liberties implications of programs like these is the real possibility that people in the United States facing imprisonment or deportation will never learn about law enforcement’s use of such systems during investigations, thanks to the U.S. government practice known as “parallel construction.” This means the constitutionality of such activities could go unchallenged by defendants and unexamined by judges, who are essential to providing checks on police powers.

As Human Rights Watch noted in a report published earlier this year, parallel construction involves deliberately concealing an investigative source or method by creating an alternative explanation for how a case began or evidence was found. Although the nature of parallel construction makes instances difficult to document conclusively, Human Rights Watch concluded that available information suggests the government routinely engages in this practice.

Several past revelations point to a connection between parallel construction and technologies or surveillance the government does not wish to publicize, but that have likely been the sources of information in prosecutions. In just one example, an FBI non-disclosure agreement included language directing local police to find “additional and independent investigative means and methods” to avoid revealing the use of cell-site simulators (often known as “Stingrays”), which behave like cell-phone towers and can capture location and other data from nearby phones.

Where facial recognition is concerned, Georgetown Law’s Center on Privacy & Technology reported in 2016 that although Pinellas County, Florida, had searched a face-recognition database of people’s photographs thousands of times, the local public defender’s office had never received any disclosure of the technology in response to motions for information that could assist defendants (which the government must turn over under Brady v. Maryland). While the lack of disclosures does not prove that parallel construction took place, it further illustrates the risk of a gap between the technology the government may use and the investigative measures that are actually revealed to defendants.

Someone steeped in Fourth Amendment doctrine might argue that facial-recognition systems such as “Rekognition” don’t carry out “searches” for the purpose of that amendment—after all, many of us freely show our faces without seeming to demonstrate much of an expectation of privacy. And a police officer can often match a person to a photo without any special equipment. Based on this, some might contend that facial-recognition programs are clearly constitutional and that it does not matter if the government decides not to disclose them to defendants.

But such a view allows the executive branch to usurp the judge’s role. In a criminal prosecution, it is for the court, not the government, to decide whether a measure is legal—after hearing from the defense as well as the government. And the case for the constitutionality of police uses of facial-recognition software is not necessarily clear-cut.

For example, software could attempt to detect subtle indicators of mood or draw inferences about non-visible personal characteristics such as sexual orientation (although Amazon does not claim Rekognition can do these things)—a technological “assist” that could have Fourth Amendment privacy implications in addition to creating other rights risks. It could combine facial-recognition databases with other, perhaps even more sensitive, data. As with a polygraph, its accuracy could also be open to debate, especially where images of people of color and women are concerned; an MIT study found that these groups were underrepresented in the data used to train facial-recognition AI and assess its reliability.

Questions about new technology have long prompted the evolution of search-and-seizure and evidentiary doctrines. This happened as early as the 1967 Katz decision, when the Supreme Court found that police need a warrant to wiretap a call from a public phone booth, and has continued through more recent judgments such as Jones (requiring a warrant for attaching a GPS tracking device to a vehicle) and Riley (imposing a warrant requirement for searches of cell phones). The forthcoming Carpenter decision will similarly address the warrantless gathering of cell-phone records.

The European Court of Human Rights has found that collecting data in a public place, or concerning a person’s public movements, can interfere with privacy rights in some circumstances. It seems possible that U.S. courts could someday adopt a similar view, including where systems like “Rekognition” are concerned.

The government’s use of facial-recognition technologies could also harm people’s rights to free expression and freedom of association. For example, law enforcement might view someone as suspicious simply for appearing in a photo with friends or family members who have been caught up in one of the country’s error-ridden gang databases. Police could also begin monitoring peaceful protests through drones equipped with facial recognition and then decide to investigate attendees. Indeed, a pending bill in Illinois concerning police use of drones could bring this one step closer to reality.

Thus, thanks to parallel construction, people could receive unfair trials in which they never learn that their cases involved constitutionally questionable—or plainly illegal—methods. This is particularly worrisome given the longstanding “fruit of the poisonous tree” doctrine, under which courts are generally supposed to exclude evidence that police obtained through unlawful conduct.

Parallel construction can also create inequities in the justice system as a whole, and a recent case we discovered thanks to a public defender in the U.S. southwest shows how this can happen. In the spring of 2016, a woman driving with her children was pulled over by a local sheriff’s deputy who was a member of a Department of Homeland Security task force, ostensibly for changing lanes unsafely. The deputy began pressing the woman about whether she had any illegal firearms, urging her to tell him where “it” was, and she eventually told him she was bringing her cousin a pistol that the cousin had bought from her husband. She was arrested and indicted, and, as an undocumented immigrant, faced deportation.

However, the woman’s public defender was aware of the practice of parallel construction and prepared a motion contending that the deputy seemed to have known in advance that the woman had the gun and asking the court to order the government to disclose any information it had had about her before the deputy pulled her over. Such information, the motion suggested, could include not only tips from human sources, but National Security Agency reports or the products of other surveillance.

Although the government never revealed whether it had such advance information, the prosecutors dismissed the charges after the defense attorney shared a copy of this motion with them. But other defendants, especially those whose lawyers are less familiar with parallel construction, may not be so fortunate. In this way, an unfair practice could lead to an unfair justice system.

For courts to play their vital role in ensuring that any government investigative measures—including sophisticated emerging technologies—are lawful, both judges and the defense need to know what law enforcement is doing. Congress should require the government to disclose complete information about the methods used to obtain evidence—and in the meantime, judges should strongly consider doing the same. The digital age, with its unprecedented capabilities to catalogue intimate details about our lives, is no time to relax our vigilance in defending rights.