

An Excerpt from Predatory Data

Anita Say Chan

February 11, 2025


To shape the future of tech, we must look to the past and draw from the traditions of the researchers, artists, and activists who have envisioned futures that defy probability, writes scholar Anita Say Chan.

For years, activists, academics, and tech industry insiders have warned the public of the insidious elements underlying tech. We have seen widespread instances of algorithm-driven discrimination and techno-surveillance that disproportionately affect marginalized communities worldwide. It may often feel that technological advancements are inevitable and that we must simply adapt to this new state of the world. However, just as the negative parts of tech have their roots in the past, so too is there a rich history of people rising up and resisting harmful societal norms.

In her new book, Predatory Data, feminist and decolonial scholar Anita Say Chan uncovers the connection between the anti-immigration and eugenics movements of the nineteenth century and today’s technological advancements. In this excerpt, she encourages readers to refuse the dominant narrative about the universality of tech and remain hopeful by believing in alternative worlds and potential futures rooted in justice.

◊◊◊◊◊

Tactic 5: Defending Improbable Worlds

Despite all odds, improbable worlds exist all around us. They are the statistically or politically minoritized contexts, conditions, and outcomes that, in their emergence and existence, defy probability and the metrics of scale. In their minoritized status, however, they find a means to thrive in the face of given, dominant systems, drawing support from unlikely and unpredictable resources and allies and cultivating new solidarities for such ends. Although they might exist improbably, with other outcomes more likely by numerical or political measure to emerge, they are no less valuable or meaningful. Whether in the emergence of unlikely outcomes, such as planet Earth in a universe largely hostile to life, or in the thriving of minoritized communities where dominant forces would condition assimilation or incorporation, improbable worlds powerfully shape the heterogeneity and plurality of possible ways of life and being.

This has been harder to notice, however, in a world increasingly defined by digital systems’ amplified projections of probable outcomes and futures. This, after all, has been the impact of new prediction-driven AI systems as they have grown into mundane, loudly self-signaling fixtures of everyday environments. The ever more prolific real-time recommendations such systems deliver are based on calculations over a given dataset: the most probable solution sought by users at scale or (less often) by a particular user over time. They arrive via numerous mapping and consumer platforms, large language models and social media, and digital identification and self-driving technologies, among many other AI-based prediction systems that now operate across varied everyday ecologies. Probable world solutions, for this reason, skew toward the reproduction of dominant worldviews and of whatever has been or can be most represented statistically in a dataset.

While such systems can, in some cases, offer recommendations that appear to be the safest bet, there are many others in which projecting probable outcomes as futures empirically fails against real-world results. In other situations, probability-based outcomes are undesirable because they reproduce majoritarian worldviews, biases, and discriminatory hierarchies. The over- or underrepresentation of majority or minoritized populations can lead AI systems to over- or underpredict real-world outcomes, for instance. This was the case when COMPAS, an AI system used by judges in several US states, was found to wrongly overpredict Black individuals’ and underpredict White individuals’ likelihood of committing future crimes (Angwin et al. 2016). It was the case, too, when students of color and students with visual impairments at the University of Illinois were overflagged as cheating by the facial recognition and online proctoring platform Proctorio (Flaherty 2021). On music and arts platforms, AI systems’ probabilistic readings of user tastes have even led creative producers to critique how such systems encourage formulaic, predictable approaches to composition, narrowing the possibilities for artistic expression as producers are nudged toward designing for tastes measured at scale (Jax 2023). In the meantime, creative producers must grapple with the numerous other possible forms of expression and creation being extinguished through the quiet work of automated prediction.
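To make the statistical mechanism concrete, here is a minimal sketch, written in Python purely for illustration. It is not COMPAS or Proctorio (whose models are proprietary); the outcome rates and the duplicate-record assumption are invented. It shows how overrepresentation in training records can, by itself, push a model’s predictions for one group above that group’s real-world rate.

```python
# A minimal synthetic sketch of how skewed representation alone can
# produce over-prediction for one group. This is NOT the COMPAS model
# (which is proprietary); every number here is an invented assumption.
import random

random.seed(1)
TRUE_RATE = 0.30  # identical real-world outcome rate for both groups

def sample_outcomes(n):
    """1 = outcome occurred, 0 = it did not, at the same true rate."""
    return [1 if random.random() < TRUE_RATE else 0 for _ in range(n)]

reality = {"A": sample_outcomes(10_000), "B": sample_outcomes(10_000)}

# Skewed record-keeping: positives in group A enter the training data
# twice (a stand-in for, e.g., heavier historical surveillance of that
# group), while group B is recorded faithfully.
records = {
    "A": reality["A"] + [o for o in reality["A"] if o == 1],
    "B": list(reality["B"]),
}

# A model that absorbs each group's base rate from its records (as any
# predictor calibrated to its training data will) then over-predicts
# risk for group A while remaining accurate for group B.
for group in ("A", "B"):
    learned = sum(records[group]) / len(records[group])
    actual = sum(reality[group]) / len(reality[group])
    print(f"group {group}: predicted rate {learned:.0%}, actual rate {actual:.0%}")
```

Run as written, the predicted rate for group A lands well above its actual 30 percent while group B stays calibrated; nothing differs between the groups except how they were recorded.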

Such increasingly narrow, monoculturalist terms for inclusion, legibility, and existence within predictive, AI-driven platforms are among the new pathologies publics now navigate as technological evolution and intelligence return to the majoritarian, probable world terms of techno-eugenicists. Then, as now, however, accepting such terms is far from inevitable. There continue to be signs and spaces indicating just how deeply a defense of improbable worlds, ones that enable and multiply minoritized worldviews, would be embraced. I have also argued that such inflated cries of existential crisis and xenophobic paranoia are not only age-old strategies used to justify authoritarian practices and resecure majority populations’ dominance. The book signals, too, the rising influence of a powerful new generation of techno-eugenic promoters whose darkly cast depictions of present technological decline now operate alongside the more familiar celebratory hype that for decades made industry enthusiasm the dominant force in public framings of technology. Both, however, depend on the spread of a probable world, and on the continued empowerment of dominant classes, as the outcome of AI systems. Little wonder that growing publics have come to call for resistance to systems that increase bias and limit the creative possibilities of an independent, unprescribed future.

This book is a reminder, then, of the vast ecologies of multivalent, multitemporalized forms of data work, practice, and studies that resist the monofuturist projections of AI and big data temporalities through explicitly data pluralist practices in defense of improbable worlds. The diversity of relationalities represented across their multisited, multimethod approaches not only defends data pluralism as a vibrantly active feature of research practices that exceed the norms of knowledge and innovation centers but also works to retemporalize and diversify dominant data regimes. Across such spaces we’ve seen researchers, artists, and activists cultivate local data relations within a multiplicity of transnational sites, interfacing diverse epistemologies and representing pluriversal possibilities. Responding to local needs, projects can take on a variety of aspects and forms. And bringing data together requires the patience and careful labor of committed relationship building across lines of difference, a labor that defies big data’s restless adherence to an urgent, production- and extraction-demanding innovation time.

Data pluralist commitments emerge from the recognition of the irreducibly varied data methods, formats, tempos, and histories long cultivated and still sustained by practitioners across local worlds. Calling out the false conceit of technological revolution’s—and now big data and AI’s—projected universalism, they take seriously not only the violence enacted when probable world readings and reductions attempt to deny or disguise the full diversity of data, information, and knowledge possible. They also remind us of the situated nature of alternative justice-oriented data practices and of the varied improbable worlds they support. They remind us that seeing data from below and grounded within local contexts, and rejecting what Donna Haraway called the “god’s eye view from nowhere” (1988), is a necessary ethical stance. It may indeed be our best bet for centering relations of accountability and collaborative being in data work and diverse local worlds.

 

Excerpted from Predatory Data: Eugenics in Big Tech and Our Fight for an Independent Future by Anita Say Chan, published by University of California Press. © 2025 by Anita Say Chan.

 

About the Author

Anita Say Chan is a feminist and decolonial scholar of Science and Technology Studies and Associate Professor of Information Sciences and Media Studies at the University of Illinois, Urbana-Champaign.

