reflection about film and media

Description

Please check the attached doc for more details.


Unformatted Attachment Preview

3–4 pages (double-spaced, standard fonts and formats, etc.).
How “They” “See” “Us” was the stated theme of Week 4, but it just as well
applies to many of the topics, texts, and discussions from Week 5 and Week 6.
For this Reflection, unpack the terms used here and how they relate to the
materials we’ve examined during these three weeks. I’m looking not only for
an explanation of the role of surveillance in datafication and digitization
processes, but some effort to describe and assess how each of these terms, “They,” “See,” and “Us,” becomes slippery, confused, and harder to pin down as
algorithms come to play an ever larger role in shaping our identity,
communities, and cultures.
Be as specific as you can. To get to 10 points, you should draw on 2–3 ideas
developed in lecture and/or class discussion (4 points possible), directly quote
a passage from each of 2–3 course readings (4 points possible), and
elaborate a common thread, central thesis, or main takeaway of all this (4
points possible).
All of which culminates in one main takeaway for you, expressed as a
concluding statement regarding the above material. Consider, e.g.: What’s
the common thread across this material? What do you think is the most
important thing you’ve learned these few weeks? The most
urgent theoretical problem? The most illuminating theoretical framework or
motif? Why? How would you like to see it developed? (3 pts possible)
Please do not plagiarize or use any AI tools for this analysis. Cite every
source you used. Very important!
Week 4 / How “They” “See” “Us”
1. Shoshana Zuboff, “Big Other: Surveillance Capitalism and the Prospects of an Information
Civilization.” (2015, journal article)
2. Thomas Brewster, “Meet the Secretive Surveillance Wizards Helping the FBI and ICE
Wiretap Facebook and Google Users,” (2022, newspaper)
https://www.forbes.com/sites/thomasbrewster/2022/02/23/meet-the-secretive-surveillance-wizards-helping-the-fbi-and-ice-wiretap-facebook-and-google-users/?sh=4de022133f0f
3. Michael Steinberger, “Does Palantir See Too Much?” (2020, newspaper)
https://www.nytimes.com/interactive/2020/10/21/magazine/palantir-alex-karp.html
4. Kate Crawford, “Classification,” Atlas of AI (2021, monograph chapter); read only
chapter 4, “Classification”
Week 5 / “The Human Use of Human Beings”
1. Wendy Hui Kyong Chun, “Algorithmic Authenticity,” Discriminating Data (2021,
monograph chapter); read only chapter 3, “Algorithmic Authenticity,” in the attached doc
2. Sarah Roberts, “Your AI is a Human,” Your Computer Is On Fire (2021, anthology chapter);
read only chapter 2, “Your AI is a Human”
3. Josh Dzieza, “AI Is a Lot of Work” (2023, magazine article)
https://www.theverge.com/features/23764584/ai-artificial-intelligence-data-notation-labor-scale-surge-remotasks-openai-chatbots
Week 6 / AI: Normativity and Racialization
1. Halcyon Lawrence, “Siri Disciplines,” Your Computer Is On Fire (2021, anthology chapter);
read only chapter 8, “Siri Disciplines”
2. Media object/intervention: Lior Zalmanson, Excess Ability (2014, video art)

3. Safiya Umoja Noble, “Your Robot Isn’t Neutral,” Your Computer Is On Fire (2021,
anthology chapter); read only chapter 9, “Your Robot Isn’t Neutral”
Atlas of AI
Atlas of AI
Power, Politics, and the Planetary Costs
of Artificial Intelligence
KATE CRAWFORD
New Haven and London
Copyright © 2021 by Kate Crawford.
All rights reserved.
This book may not be reproduced, in whole or in part, including
illustrations, in any form (beyond that copying permitted by
Sections 107 and 108 of the U.S. Copyright Law and except by
reviewers for the public press), without written permission from
the publishers.
Yale University Press books may be purchased in quantity for
educational, business, or promotional use. For information, please
e-mail [email protected] (U.S. office) or [email protected]
(U.K. office).
Cover design and chapter opening illustrations by Vladan Joler.
Set in Minion by Tseng Information Systems, Inc.
Printed in the United States of America.
Library of Congress Control Number: 2020947842
ISBN 978-0-300-20957-0 (hardcover : alk. paper)
A catalogue record for this book is available from the British
Library.
This paper meets the requirements of ANSI/NISO Z39.48-1992
(Permanence of Paper).
For Elliott and Margaret
Contents
Introduction
One. Earth
Two. Labor
Three. Data
Four. Classification
Five. Affect
Six. State
Conclusion. Power
Coda. Space
Acknowledgments
Notes
Bibliography
Index
Introduction
The Smartest Horse in the World
At the end of the nineteenth century, Europe was
captivated by a horse called Hans. “Clever Hans”
was nothing less than a marvel: he could solve math
problems, tell time, identify days on a calendar, differentiate musical tones, and spell out words and sentences.
People flocked to watch the German stallion tap out answers
to complex problems with his hoof and consistently arrive at
the right answer. “What is two plus three?” Hans would diligently tap his hoof on the ground five times. “What day of the
week is it?” The horse would then tap his hoof to indicate each
letter on a purpose-built letter board and spell out the correct
answer. Hans even mastered more complex questions, such as,
“I have a number in mind. I subtract nine and have three as a
remainder. What is the number?” By 1904, Clever Hans was an
international celebrity, with the New York Times championing
him as “Berlin’s Wonderful Horse; He Can Do Almost Everything but Talk.”1
Hans’s trainer, a retired math teacher named Wilhelm
von Osten, had long been fascinated by animal intelligence.
Von Osten had tried and failed to teach kittens and bear cubs
cardinal numbers, but it wasn’t until he started working with
his own horse that he had success. He first taught Hans to
count by holding the animal’s leg, showing him a number, and
then tapping on the hoof the correct number of times. Soon
Hans responded by accurately tapping out simple sums. Next
von Osten introduced a chalkboard with the alphabet spelled
out, so Hans could tap a number for each letter on the board.
After two years of training, von Osten was astounded by the
animal’s strong grasp of advanced intellectual concepts. So he
took Hans on the road as proof that animals could reason.
Hans became the viral sensation of the belle époque.
But many people were skeptical, and the German board
of education launched an investigative commission to test von
Osten’s scientific claims. The Hans Commission was led by
the psychologist and philosopher Carl Stumpf and his assistant Oskar Pfungst, and it included a circus manager, a retired
schoolteacher, a zoologist, a veterinarian, and a cavalry officer.
Yet after extensive questioning of Hans, both with his trainer
present and without, the horse maintained his record of correct answers, and the commission could find no evidence of
deception. As Pfungst later wrote, Hans performed in front of
“thousands of spectators, horse-fanciers, trick-trainers of first
rank, and not one of them during the course of many months’
observations was able to discover any kind of regular signal”
between the questioner and the horse.2
The commission found that the methods Hans had
been taught were more like “teaching children in elementary
schools” than animal training and were “worthy of scientific
examination.”3 But Stumpf and Pfungst still had doubts. One
finding in particular troubled them: when the questioner did
not know the answer or was standing far away, Hans rarely
gave the correct answer. This led Pfungst and Stumpf to consider whether some sort of unintentional signal had been providing Hans with the answers.
[Figure: Wilhelm von Osten and Clever Hans]
As Pfungst would describe in his 1911 book, their intuition was right: the questioner’s posture, breathing, and facial
expression would subtly change around the moment Hans
reached the right answer, prompting Hans to stop there.4
Pfungst later tested this hypothesis on human subjects and
confirmed his result. What fascinated him most about this
discovery was that questioners were generally unaware that
they were providing pointers to the horse. The solution to the
Clever Hans riddle, Pfungst wrote, was the unconscious direction from the horse’s questioners.5 The horse was trained
to produce the results his owner wanted to see, but audiences
felt that this was not the extraordinary intelligence they had
imagined.
The story of Clever Hans is compelling from many angles:
the relationship between desire, illusion, and action, the business of spectacles, how we anthropomorphize the nonhuman,
how biases emerge, and the politics of intelligence. Hans inspired a term in psychology for a particular type of conceptual
trap, the Clever Hans Effect or observer-expectancy effect, to
describe the influence of experimenters’ unintentional cues on
their subjects. The relationship between Hans and von Osten
points to the complex mechanisms by which biases find their
ways into systems and how people become entangled with the
phenomena they study. The story of Hans is now used in machine learning as a cautionary reminder that you can’t always
be sure of what a model has learned from the data it has been
given.6 Even a system that appears to perform spectacularly in
training can make terrible predictions when presented with
novel data in the world.
This opens a central question of this book: How is intelligence “made,” and what traps can that create? At first glance,
the story of Clever Hans is a story of how one man constructed
intelligence by training a horse to follow cues and emulate
humanlike cognition. But at another level, we see that the practice of making intelligence was considerably broader. The endeavor required validation from multiple institutions, including academia, schools, science, the public, and the military.
Then there was the market for von Osten and his remarkable
horse—emotional and economic investments that drove the
tours, the newspaper stories, and the lectures. Bureaucratic authorities were assembled to measure and test the horse’s abilities. A constellation of financial, cultural, and scientific interests had a part to play in the construction of Hans’s intelligence
and a stake in whether it was truly remarkable.
We can see two distinct mythologies at work. The first
myth is that nonhuman systems (be it computers or horses)
are analogues for human minds. This perspective assumes that
with sufficient training, or enough resources, humanlike intelligence can be created from scratch, without addressing the
fundamental ways in which humans are embodied, relational,
and set within wider ecologies. The second myth is that intelligence is something that exists independently, as though it were
natural and distinct from social, cultural, historical, and political forces. In fact, the concept of intelligence has done inordinate harm over centuries and has been used to justify relations
of domination from slavery to eugenics.7
These mythologies are particularly strong in the field of
artificial intelligence, where the belief that human intelligence
can be formalized and reproduced by machines has been axiomatic since the mid-twentieth century. Just as Hans’s intelligence was considered to be like that of a human, fostered
carefully like a child in elementary school, so AI systems have
repeatedly been described as simple but humanlike forms of
intelligence. In 1950, Alan Turing predicted that “at the end of
the century the use of words and general educated opinion will
have altered so much that one will be able to speak of machines
thinking without expecting to be contradicted.”8 The mathematician John von Neumann claimed in 1958 that the human
nervous system is “prima facie digital.”9 MIT professor Marvin
Minsky once responded to the question of whether machines
could think by saying, “Of course machines can think; we can
think and we are ‘meat machines.’”10 But not everyone was
convinced. Joseph Weizenbaum, early AI inventor and creator
of the first chatbot program, known as ELIZA, believed that
the idea of humans as mere information processing systems is
far too simplistic a notion of intelligence and that it drove the
“perverse grand fantasy” that AI scientists could create a machine that learns “as a child does.”11
This has been one of the core disputes in the history of
artificial intelligence. In 1961, MIT hosted a landmark lecture
series titled “Management and the Computer of the Future.”
A stellar lineup of computer scientists participated, including
Grace Hopper, J. C. R. Licklider, Marvin Minsky, Allen Newell,
Herbert Simon, and Norbert Wiener, to discuss the rapid advances being made in digital computing. At its conclusion,
John McCarthy boldly argued that the differences between
human and machine tasks were illusory. There were simply
some complicated human tasks that would take more time to
be formalized and solved by machines.12
But philosophy professor Hubert Dreyfus argued back,
concerned that the assembled engineers “do not even consider
the possibility that the brain might process information in
an entirely different way than a computer.”13 In his later work
What Computers Can’t Do, Dreyfus pointed out that human
intelligence and expertise rely heavily on many unconscious
and subconscious processes, while computers require all processes and data to be explicit and formalized.14 As a result, less
formal aspects of intelligence must be abstracted, eliminated,
or approximated for computers, leaving them unable to process information about situations as humans do.
Much in AI has changed since the 1960s, including a
shift from symbolic systems to the more recent wave of hype
about machine learning techniques. In many ways, the early
fights over what AI can do have been forgotten and the skepticism has melted away. Since the mid-­2000s, AI has rapidly
expanded as a field in academia and as an industry. Now a
small number of powerful technology corporations deploy AI
systems at a planetary scale, and their systems are once again
hailed as comparable or even superior to human intelligence.
Yet the story of Clever Hans also reminds us how narrowly we consider or recognize intelligence. Hans was taught
to mimic tasks within a very constrained range: add, subtract,
and spell words. This reflects a limited perspective of what
horses or humans can do. Hans was already performing remarkable feats of interspecies communication, public performance, and considerable patience, yet these were not recognized as intelligence. As author and engineer Ellen Ullman
puts it, this belief that the mind is like a computer, and vice
versa, has “infected decades of thinking in the computer and
cognitive sciences,” creating a kind of original sin for the field.15
It is the ideology of Cartesian dualism in artificial intelligence:
where AI is narrowly understood as disembodied intelligence,
removed from any relation to the material world.
What Is AI? Neither Artificial nor Intelligent
Let’s ask the deceptively simple question, What is artificial
intelligence? If you ask someone in the street, they might
mention Apple’s Siri, Amazon’s cloud service, Tesla’s cars, or
Google’s search algorithm. If you ask experts in deep learning, they might give you a technical response about how neural nets are organized into dozens of layers that receive labeled
data, are assigned weights and thresholds, and can classify data
in ways that cannot yet be fully explained.16 In 1978, when discussing expert systems, Professor Donald Michie described AI
as knowledge refining, where “a reliability and competence of
codification can be produced which far surpasses the highest
level that the unaided human expert has ever, perhaps even
could ever, attain.”17 In one of the most popular textbooks on
the subject, Stuart Russell and Peter Norvig state that AI is the
attempt to understand and build intelligent entities. “Intelligence is concerned mainly with rational action,” they claim.
“Ideally, an intelligent agent takes the best possible action in
a situation.”18
Each way of defining artificial intelligence is doing work,
setting a frame for how it will be understood, measured, valued, and governed. If AI is defined by consumer brands for
corporate infrastructure, then marketing and advertising have
predetermined the horizon. If AI systems are seen as more reliable or rational than any human expert, able to take the “best
possible action,” then it suggests that they should be trusted to
make high-stakes decisions in health, education, and criminal justice. When specific algorithmic techniques are the sole
focus, it suggests that only continual technical progress matters, with no consideration of the computational cost of those
approaches and their far-reaching impacts on a planet under
strain.
In contrast, in this book I argue that AI is neither artificial nor intelligent. Rather, artificial intelligence is both
embodied and material, made from natural resources, fuel,
human labor, infrastructures, logistics, histories, and classifications. AI systems are not autonomous, rational, or able to
discern anything without extensive, computationally intensive
training with large datasets or predefined rules and rewards. In
fact, artificial intelligence as we know it depends entirely on a
much wider set of political and social structures. And due to
the capital required to build AI at scale and the ways of seeing
that it optimizes, AI systems are ultimately designed to serve
existing dominant interests. In this sense, artificial intelligence
is a registry of power.
In this book we’ll explore how artificial intelligence is
made, in the widest sense, and the economic, political, cultural, and historical forces that shape it. Once we connect AI
within these broader structures and social systems, we can escape the notion that artificial intelligence is a purely technical domain. At a fundamental level, AI is technical and social
practices, institutions and infrastructures, politics and culture.
Computational reason and embodied work are deeply interlinked: AI systems both reflect and produce social relations
and understandings of the world.
It’s worth noting that the term “artificial intelligence”
can create discomfort in the computer science community.
The phrase has moved in and out of fashion over the decades
and is used more in marketing than by researchers. “Machine
learning” is more commonly used in the technical literature.
Yet the nomenclature of AI is often embraced during funding application season, when venture capitalists come bearing
checkbooks, or when researchers are seeking press attention
for a new scientific result. As a result, the term is both used
and rejected in ways that keep its meaning in flux. For my purposes, I use AI to talk about the massive industrial formation
that includes politics, labor, culture, and capital. When I refer
to machine learning, I’m speaking of a range of technical approaches (which are, in fact, social and infrastructural as well,
although rarely spoken about as such).
But there are significant reasons why the field has been focused so much on the technical—algorithmic breakthroughs,
incremental product improvements, and greater convenience.
The structures of power at the intersection of technology, capital, and governance are well served by this narrow, abstracted
analysis. To understand how AI is fundamentally political, we
need to go beyond neural nets and statistical pattern recognition to instead ask what is being optimized, and for whom,
and who gets to decide. Then we can trace the implications of
those choices.
Seeing AI Like an Atlas
How can an atlas help us to understand how artificial intelligence is made? An atlas is an unusual type of book. It is a
collection of disparate parts, with maps that vary in resolution from a satellite view of the planet to a zoomed-in detail
of an archipelago. When you open an atlas, you may be seeking specific information about a particular place—or perhaps
you are wandering, following your curiosity, and finding unexpected pathways and new perspectives. As historian of science
Lorraine Daston observes, all scientific atlases seek to school
the eye, to focus the observer’s attention on particular telling
details and significant characteristics.19 An atlas presents you
with a particular viewpoint of the world, with the imprimatur
of science—scales and ratios, latitudes and longitudes—and a
sense of form and consistency.
Yet an atlas is as much an act of creativity—a subjective,
political, and aesthetic intervention—as it is a scientific collection. The French philosopher Georges Didi-Huberman thinks
of the atlas as something that inhabits the aesthetic paradigm
of the visual and the epistemic paradigm of knowledge. By
implicating both, it undermines the idea that science and art
are ever completely separate.20 Instead, an atlas offers us the
possibility of rereading the world, linking disparate pieces differently and “reediting and piecing it together again without
thinking we are summarizing or exhausting it.”21
Perhaps my favorite account of how a cartographic approach can be helpful comes from the physicist and technology critic Ursula Franklin: “Maps represent purposeful endeavors: they are meant to be useful, to assist the traveler and
bridge the gap between the known and the as yet unknown;
they are testaments of collective knowledge and insight.”22
Maps, at their best, offer us a compendium of open pathways—shared ways of knowing—that can be mixed and combined to make new interconnections. But there are also maps
of domination, those national maps where territory is carved
along the fault lines of power: from the direct interventions of
drawing borders across contested spaces to revealing the colonial paths of empires. By invoking an atlas, I’m suggesting that
we need new ways to understand the empires of artificial intelligence. We need a theory of AI that accounts for the states and
corporations that drive and dominate it, the extractive mining that leaves an imprint on the planet, the mass capture of
data, and the profoundly unequal and increasingly exploitative
labor practices that sustain it. These are the shifting tectonics of power in AI. A topographical approach offers different
perspectives and scales, beyond the abstract promises of artificial intelligence or the latest machine learning models. The
aim is to understand AI in a wider context by walking through
the many different landscapes of computation and seeing how
they connect.23
There’s another way in which atlases are relevant here.
The field of AI is explicitly attempting to capture the planet
in a computationally legible form. This is not a metaphor so
much as the industry’s direct ambition. The AI industry is
making and normalizing its own proprietary maps, as a centralized God’s-eye view of human movement, communication,
and labor. Some AI scientists have stated their desire to capture the world and to supersede other forms of knowing. AI
professor Fei-Fei Li describes her ImageNet project as aiming
to “map out the entire world of objects.”24 In their textbook,
Russell and Norvig describe artificial intelligence as “relevant
to any intellectual task; it is truly a universal field.”25 One of
the founders of artificial intelligence and early experimenter
in facial recognition, Woody Bledsoe, put it most bluntly: “in
the long run, AI is the only science.”26 This is a desire not to
create an atlas of the world but to be the atlas—the dominant
way of seeing. This colonizing impulse centralizes power in
the AI field: it determines how the world is measured and defined while simultaneously denying that this is an inherently
political activity.
Instead of claiming universality, this book is a partial account, and by bringing you along on my investigations, I hope
to show you how my views were formed. We will encounter
well-visited and lesser-known landscapes of computation: the
pits of mines, the long corridors of energy-devouring data
centers, skull archives, image databases, and the fluorescent-lit
hangars of delivery warehouses. These sites are included not
just to illustrate the material construction of AI and its ideologies but also to “illuminate the unavoidably subjective and
political aspects of mapping, and to provide alternatives to
hegemonic, authoritative—and often naturalized and reified—
approaches,” as media scholar Shannon Mattern writes.27
Models for understanding and holding systems accountable have long rested on ideals of transparency. As I’ve written with the media scholar Mike Ananny, being able to see a
system is sometimes equated with being able to know how it
works and how to govern it.28 But this tendency has serious
limitations. In the case of AI, there is no singular black box to
open, no secret to expose, but a multitude of interlaced systems of power. Complete transparency, then, is an impossible
goal. Rather, we gain a better understanding of AI’s role in the
world by engaging with its material architectures, contextual
environments, and prevailing politics and by tracing how they
are connected.
My thinking in this book has been informed by the disciplines of science and technology studies, law, and political philosophy and from my experience working in both academia
and an industrial AI research lab for almost a decade. Over
those years, many generous colleagues and communities have
changed the way I see the world: mapping is always a collective
exercise, and this is no exception.29 I’m grateful to the scholars
who created new ways to understand sociotechnical systems,
including Geoffrey Bowker, Benjamin Bratton, Wendy Chun,
Lorraine Daston, Peter Galison, Ian Hacking, Stuart Hall,
Donald MacKenzie, Achille Mbembé, Alondra Nelson, Susan
Leigh Star, and Lucy Suchman, among many others. This book
benefited from many in-­person conversations and reading the
recent work by authors studying the politics of technology, including Mark Andrejevic, Ruha Benjamin, Meredith Broussard, Simone Browne, Julie Cohen, Sasha Costanza-­Chock,
Virginia Eubanks, Tarleton Gillespie, Mar Hicks, Tung-Hui
Hu, Yuk Hui, Safiya Umoja Noble, and Astra Taylor.
As with any book, this one emerges from a specific lived
experience that imposes limitations. As someone who has lived
and worked in the United States for the past decade, my focus
skews toward the AI industry in Western centers of power. But
my aim is not to create a complete global atlas—the very idea
invokes capture and colonial control. Instead, any author’s view
can be only partial, based on local observations and interpretations, in what environmental geographer Samantha Saville
calls a “humble geography” that acknowledges one’s specific
perspectives rather than claiming objectivity or mastery.30
Just as there are many ways to make an atlas, so there are
many possible futures for how AI will be used in the world. The
expanding reach of AI systems may seem inevitable, but this is
contestable and incomplete. The underlying visions of the AI
field do not come into being autonomously but instead have
been constructed from a particular set of beliefs and perspectives. The chief designers of the contemporary atlas of AI are a
small and homogenous group of people, based in a handful of
cities, working in an industry that is currently the wealthiest
in the world. Like medieval European mappae mundi, which
illustrated religious and classical concepts as much as coordinates, the maps made by the AI industry are political interventions, as opposed to neutral reflections of the world. This
book is made against the spirit of colonial mapping logics, and
it embraces different stories, locations, and knowledge bases to
better understand the role of AI in the world.
[Figure: Heinrich Bünting’s mappa mundi, known as The Bünting Clover Leaf Map, which symbolizes the Christian Trinity, with the city of Jerusalem at the center of the world. From Itinerarium Sacrae Scripturae (Magdeburg, 1581)]
Topographies of Computation
How, at this moment in the twenty-­first century, is AI conceptualized and constructed? What is at stake in the turn to artificial intelligence, and what kinds of politics are contained in
the way these systems map and interpret the world? What are
the social and material consequences of including AI and related algorithmic systems into the decision-­making systems of
social institutions like education and health care, finance, government operations, workplace interactions and hiring, com-
Introduction15
munication systems, and the justice system? This book is not a
story about code and algorithms or the latest thinking in computer vision or natural language processing or reinforcement
learning. Many other books do that. Neither is it an ethnographic account of a single community and the effects of AI on
their experience of work or housing or medicine—although
we certainly need more of those.
Instead, this is an expanded view of artificial intelligence
as an extractive industry. The creation of contemporary AI systems depends on exploiting energy and mineral resources from
the planet, cheap labor, and data at scale. To observe this in action, we will go on a series of journeys to places that reveal the
makings of AI.
In chapter 1, we begin in the lithium mines of Nevada,
one of the many sites of mineral extraction needed to power
contemporary computation. Mining is where we see the extractive politics of AI at their most literal. The tech sector’s
demand for rare earth minerals, oil, and coal is vast, but the
true costs of this extraction are never borne by the industry
itself. On the software side, building models for natural language processing and computer vision is enormously energy
hungry, and the competition to produce faster and more efficient models has driven computationally greedy methods that
expand AI’s carbon footprint. From the last trees in Malaysia
that were harvested to produce latex for the first transatlantic
undersea cables to the giant artificial lake of toxic residues in
Inner Mongolia, we trace the environmental and human birthplaces of planetary computation networks and see how they
continue to terraform the planet.
Chapter 2 shows how artificial intelligence is made of
human labor. We look at the digital pieceworkers paid pennies
on the dollar clicking on microtasks so that data systems can
seem more intelligent than they are.31 Our journey will take us
inside the Amazon warehouses where employees must keep in
time with the algorithmic cadences of a vast logistical empire,
and we will visit the Chicago meat laborers on the disassembly
lines where animal carcasses are vivisected and prepared for
consumption. And we’ll hear from the workers who are protesting against the way that AI systems are increasing surveillance and control for their bosses.
Labor is also a story about time. Coordinating the actions
of humans with the repetitive motions of robots and line machinery has always involved a controlling of bodies in space
and time.32 From the invention of the stopwatch to Google’s
TrueTime, the process of time coordination is at the heart of
workplace management. AI technologies both require and create the conditions for ever more granular and precise mechanisms of temporal management. Coordinating time demands
increasingly detailed information about what people are doing
and how and when they do it.
Chapter 3 focuses on the role of data. All publicly accessible digital material—including data that is personal or potentially damaging—is open to being harvested for training
datasets that are used to produce AI models. There are gigantic
datasets full of people’s selfies, of hand gestures, of people
driving cars, of babies crying, of newsgroup conversations
from the 1990s, all to improve algorithms that perform such
functions as facial recognition, language prediction, and object detection. When these collections of data are no longer
seen as people’s personal material but merely as infrastructure, the specific meaning or context of an image or a video
is assumed to be irrelevant. Beyond the serious issues of privacy and ongoing surveillance capitalism, the current practices
of working with data in AI raise profound ethical, methodological, and epistemological concerns.33
And how is all this data used? In chapter 4, we look at