In the wake of the mass shootings in Dayton, Ohio, and El Paso, Texas, the Trump Administration floated the creation of a new governmental agency named HARPA, the Health Advanced Research Projects Agency, modeled after DARPA, the Defense Advanced Research Projects Agency, that could explore novel ways of curtailing gun violence. For an administration unwilling to entertain serious legislation addressing gun violence in the United States, HARPA offered a way to appear to be doing something about the problem. HARPA, advocates maintained, could house a project called SAFEHOME, an acronym for “Stopping Aberrant Events by Helping Overcome Mental Extremes.” SAFEHOME would use “breakthrough technologies with high specificity and sensitivity for early diagnosis of neuropsychiatric violence”; the proposal would draw on data from Apple Watches, Fitbits, Amazon Echo, and Google Home to predict when someone might be on the cusp of mass violence (Alemany 2019). The guiding assumption of SAFEHOME is that surveillance of this biophysical data, combined with extant surveillance of text messaging, search patterns, social networking sites, and discussion boards, would alert law enforcement officials to a prospective shooter. Think Minority Report (2002, Steven Spielberg), with digital surveillance technology playing the role of the psychic precogs. SAFEHOME is probably (hopefully) a nonstarter in serious conversations about gun violence, given the tenuous link between mental health, physical disposition, and violence; the inevitability of data-profiling being articulated to minoritized subjects and of false positives (imagine the first time SAFEHOME sics a SWAT team on someone having sex); and obvious concerns about such an invasive surveillance regime. But the very fact that a program like SAFEHOME is posed as a potentially credible solution points to a dimension of surveillance that complements this forum’s discussion of ubiquity: granularity.