What the No Fly List Teaches Us About Big Data

Shirin beat me to the punch in her excellent discussion of the court’s order in the first-ever no fly list case to be decided on the merits (an issue I previously discussed here).  But while I agree with her analysis, I think she possibly overlooks the key import of the case: what it tells us about the stickiness of errors and the risks of big data.

In this regard, there are two things worth noting:

First, as we knew previously, the Terrorist Screening Center, which maintains the No Fly List and the underlying Terrorist Screening Database, reviews the complaints of those who have gone to the airport with ticket in hand and been prohibited from boarding a plane. But what we did not know until yesterday is that in the course of this review the TSC “does not undertake additional fieldwork.” It bases its review on the information already in its possession and “may (or may not)” contact the nominating agency for further derogatory information. As a result, it’s not even clear how an error like the one that occurred in Ms. Ibrahim’s case – caused by checking the wrong box on a form – is ever detected.

Second, while we don’t know for sure whether or how the 2004 error that landed her on the No Fly List continues to haunt her, the implication is that it does.  While the government reportedly corrected the error with respect to the No Fly List back in 2005, it appears from the opinion that the (erroneous) derogatory information worked its way into multiple databases, and may even be contributing to her ongoing visa troubles some eight years later.  Expressing a “reason to doubt that the error and all of its echoes have been traced and cleansed from all interlocking databases,” the court has ordered that a long overdue purging process take place now.

A worthwhile read for anyone thinking about surveillance, error, and big data. 

About the Author(s)

Jennifer Daskal

Associate Professor at American University Washington College of Law. Follow her on Twitter (@jendaskal).