Data-driven marketing enhances the potential of marketing campaigns

Description

“Data-driven marketing enhances the potential of marketing campaigns”. Discuss with appropriate examples. You are asked to write an essay of 4,000 words on this question, backing up your argument with concrete examples whenever possible. Be sure to discuss both sides of the argument before reaching your conclusion, and take care to cite all references carefully to avoid any possibility of inadvertent plagiarism. AI tools student disclosure and reflection statement: 400–500 words (details in the docx).



ESSAY
Title
“Data-driven marketing enhances the potential of marketing campaigns”. Discuss with
appropriate examples.
Notes
You are asked to write an essay of 4,000 words on this question, backing up your argument
with concrete examples whenever possible. Be sure to discuss both sides of the argument
before reaching your conclusion, and take care to cite all references carefully to avoid any
possibility of inadvertent plagiarism.
AI tools student disclosure and reflection statement
Whether the student is writing the essay or the report, each assignment should include a student disclosure and reflection statement. This should be 400–500 words, which is included in the 4,000-word total for the assignment.
There is increasing global interest in the capacities of generative artificial intelligence
technologies; ‘generative AI’ here refers to machine-learning based technologies that can
create ‘new’ content, such as text (e.g. ChatGPT) or images (e.g. Stable Diffusion). Students
producing assignments for a module, however, cannot pass off material created by generative
AI tools as their own work, because this would constitute academic misconduct. However, recent guidelines adopted by the college do allow for “the ethical use of generative AI” in the context of
assessments. Staff and students in our Department are particularly well placed to think
critically about the various impacts of artificial intelligence on our lives, societies and work.
For these reasons, we are now asking that all students add a short (400-500 word) statement
to the beginning of each final assignment (the end of semester essays). In this statement, we
ask you to write critically and reflectively about either: what AI tool(s) you used, and how and why you used them; or how and why you decided not to use AI tools.
For your assignment, if you decided to work with specific generative AI tools (including outputs of large language models, AI-powered translation tools, or image generators), you must be transparent and reflective about their use in your statement. Explain in your statement
which steps and processes you used for particular tools and how you worked with and adapted
the AI tool to optimise its outcome (for example how different prompts changed the results).
We also expect you to critically reflect on your usage of AI through a short explanation of how
you acknowledged and accounted for the known biases and risks of the tools, how this may
have affected your work, and what you did to mitigate these issues. We do NOT want to see
screenshots, but you may quote AI outcomes or prompts you used.
If you did not use specific generative AI tools, then you still must write this statement, but in
this case you should explain why you did not. Be honest about this: it may relate to your
personal habits and preferences, your first language, your existing knowledge/lack of
knowledge about these tools, or ethical/intellectual concerns relating to academic integrity or
problems with the tools themselves. Or you might have tried to use these tools, but were not
satisfied. If you wish, you may also reflect on how a wider range of ‘non-generative’ AI (and AI-related) technologies may have influenced your research or writing process (for example, search engines, social media algorithms, predictive text, translation/grammar tools).
The purpose of this statement is not to encourage you to use these tools, but rather to learn
about AI in our working lives and consider the wider impact of AI in scholarship. For example,
when reflecting on the impacts of AI we need to be especially aware of their limits and risks
of use – these are not fully reliable, neutral or unbiased tools, and they are rarely equally
available (or equally functional/safe) for different individuals and communities.
Please note that this AI declaration does count toward your assignment word count and is considered part of the assignment, contributing to the overall grade awarded. Failure to include this statement will lead to an automatic penalty of 10 marks. This penalty will be applied in the same way as the late submission policy and will not take the overall grade below the pass mark.
FURTHER INFORMATION
Russell Group principles on the use of generative AI tools in education
https://russellgroup.ac.uk/media/6137/rg_ai_principles-final.pdf
Algorithmic consumer culture
Massimo Airoldi & Joonas Rokka
To cite this article: Massimo Airoldi & Joonas Rokka (2022) Algorithmic consumer culture,
Consumption Markets & Culture, 25:5, 411-428, DOI: 10.1080/10253866.2022.2084726
To link to this article: https://doi.org/10.1080/10253866.2022.2084726
Published online: 03 Jun 2022.
CONSUMPTION MARKETS & CULTURE
2022, VOL. 25, NO. 5, 411–428
https://doi.org/10.1080/10253866.2022.2084726
Algorithmic consumer culture
Massimo Airoldia and Joonas Rokkab
aDepartment of Social and Political Sciences, University of Milan, Milan, Italy; bLifestyle Research Center, EMLYON Business School, Écully, France
ABSTRACT
This article conceptualizes algorithmic consumer culture, and offers a framework that sheds new light on two previously conflicting theorizations: that (1) digitalization tends to liquefy consumer culture and thus acts primarily as an empowering force, and that (2) digitalized marketing and big data surveillance practices tend to deprive consumers of all autonomy. By drawing on critical social theories of algorithms and AI, we define and historicize the now ubiquitous algorithmic mediation of consumption, and then illustrate how the opacity, authority, non-neutrality, and recursivity of automated systems affect consumer culture at the individual, collective, and market level. We propose conceptualizing “algorithmic articulation” as a dialectical techno-social process that allows us to enhance our understanding of platform-based marketer control and consumer resistance. Key implications and future avenues for exploring algorithmic consumer culture are discussed.
ARTICLE HISTORY
Received 4 August 2020; Accepted 27 May 2022
KEYWORDS
Algorithms; consumer culture; AI; digital platforms; surveillance; consumer resistance
The great festival of Participation is made up of myriad stimuli, miniaturised tests, and infinitely divisible question/answers, all magnetised by several great models in the luminous field of the code. (Baudrillard 1993, 70)
1. Introduction
Powerful machine learning applications and AI technologies increasingly filter, order and, ultimately, constitute everyday consumer experiences, as a sort of “technological unconscious” (Beer
2009). While digital technologies and the access-based economy were once heralded with promises of technology-enabled consumer empowerment, democratization, or the triumph of “collective intelligence,” such visions are turning out to be deceptively misleading in today’s world. Instead, it
looks like ongoing cultural production is headed in a bold new direction that is already challenging
(or at least partly replacing) human agency and intelligibility. As a result, it is worth asking if what
we buy online or eat for dinner has become less a matter of our choice and more the computational
result of digital platforms’ “production of prediction” (Mackenzie 2015). Also, to what extent is our
musical taste a mere consequence of YouTube or Spotify’s automated recommendations (Airoldi
2021a)? Are we truly becoming emancipated in the “networks of desire” that propagate automated,
calculated, and optimally triggered Instagram images (Kozinets, Patterson, and Ashman 2017)?
Despite the important resonance of such questions in the social sciences (Fourcade and Johns
2020; Benjamin 2019; Beer 2017, 2013; Mackenzie 2015; Bucher 2012a), the algorithmic mediation
CONTACT Massimo Airoldi [email protected] Department of Social and Political Sciences, University of Milan, Via Conservatorio 7, 20122 Milano, Italy
© 2022 Informa UK Limited, trading as Taylor & Francis Group
of consumer culture has not yet been adequately considered in consumer and marketing research.
Regarding online platforms and digital consumption, the consumer culture literature is conflicted
on explaining the liquefying and empowering implications of digitalization on the one hand, and
the potentially disempowering implications of a “datafied” consumer culture and manipulative surveillance capitalism on the other. In the first case, processes of digitalization and the platform economy are portrayed as enabling “liquid” forms of consumption (access-based, ephemeral, dematerialized, individualized), which potentially emancipate consumers from social and geographical boundaries while creating value (Bardhi and Eckhardt 2017; Bardhi, Eckhardt, and Arnould
2012; Hoffman and Novak 2018; Kozinets, Patterson, and Ashman 2017). In evident contrast to
this view, critical research on digital marketing and big data has highlighted that consumers’
empowerment may rest on an illusion powerfully maintained and facilitated by the marketers,
while in reality platform users are increasingly under companies’ data-driven control (Thompson
2019; Darmody and Zwick 2020; Zwick and Denegri-Knott 2009).
In this paper, we argue that this theoretical dilemma, which opposes consumer empowerment to
marketers’ control in the age of platforms, can be partly overcome by examining how algorithmic
systems “articulate” (du Gay et al. 1997) consumption and production processes within digital
environments. Instead of broadly discussing the digitalization of markets (Hagberg and Kjellberg
2020), or the many applications and implications of AI technologies (Amoore and Piotukh 2016;
Neyland 2019), we focus on the machine learning algorithms ordinarily encountered by digital platform users (e.g. on social media, search engines, streaming, and e-commerce services), and theorize
them as non-human mediators that actively shape consumer culture (Airoldi 2022; Fourcade and
Johns 2020; Morris 2015). By historically contextualizing, defining, and illustrating the algorithmic
mediation of digital consumption, bridging social science and marketing literatures on the topic,
our paper aims to conceptualize the rise of a new form of “algorithmic culture” (Striphas 2015),
one that dialectically embraces both platform-based marketer “nudges” and consumer agency
(Darmody and Zwick 2020). Inspired by the “circuit” model of consumer culture (du Gay et al.
1997), we contribute to consumer and marketing research by conceptualizing algorithmic articulation as a dialectical and techno-social negotiation process that allows us to enhance our understanding of platform-based marketer control and consumer resistance. Through examples from
existing literature, we then illustrate how algorithmic articulation works at the individual, collective,
and market level. Ultimately, we suggest that dynamic machine learning processes within digital
market infrastructures affect consumption in ways that cannot be entirely reduced to a top-down marketing manipulation, nor to a bottom-up consumer emancipation. In doing so, this
article contributes to earlier theoretical discussions by envisioning new directions in the study of
algorithmic consumer culture.
2. Rising platform economy and liquefying consumer culture?
Consumer research addressing the postmodern condition of consumer culture – now boosted by
the powerful forces of digitalization and the platform economy – has suggested several pathways
for considering the emerging logic of consumption and consumers’ role and agency1 within it.
In some contrast, recent critical commentaries have also stressed marketers’ increased power via
deployment of big data analytics, surveillance, and manipulative techniques by means of digital
and automated “intelligent” marketing systems.
Recent literature has suggested a profound shift towards liquid consumption, which is an
increasingly “ephemeral, access-based, and dematerialized” way of consuming (Bardhi and
Eckhardt 2017; Bardhi, Eckhardt, and Arnould 2012) that is enabled by digital media but also
encouraged by the global mobility of people, who seek instant and continuous access to products
1 The notion of agency here refers to “the physical or mental ability, skill or capability that enables actors to do something. The actor is assumed to proceed under his or her own volition, or at least without the permission of others” (Arnould 2007, 97).
and services wherever they go. In this novel paradigm, it is argued, the solid, stable, and physical
nature and materiality (of consumers’ possessions) are replaced by inherent fluidity, immateriality,
and instantaneity – by which consumption happens (Bardhi and Eckhardt 2017; Molesworth, Watkins, and Denegri-Knott 2016). Thereby, it should be underlined that increasing liquidity eliminates
and resists not only solidity, but also sources of security, stability, and value for the consumer.
Further, this suggests that consumers’ identities and even social positions and structures are likely
to liquefy – in other words, become more ephemeral and unstable, due to the weakening of traditional institutions and traditions in the sense that they can no longer serve as “frames of reference
for human actions and long-term projects” (Bauman 2007, 1). This development is inherent not
only in Bauman’s thinking, but also in the works of Featherstone (1995), Firat and Venkatesh
(1995), and Firat and Dholakia (2006) among others, who foreground the idea of a deterritorialized,
fragmented, empowered, and (more or less) “sovereign” consumer subject (Holt 2002) that is in
“control” of his/her consumption decisions, expressions, and identity (Denegri-Knott, Zwick,
and Schroeder 2006).
A related stream of research has further strengthened the idea of the liberation and emancipation
of consumer identity, desires, and experiences that have been expanded by computer-powered networks and AI connectivity (Belk 2013; Kozinets, Patterson, and Ashman 2017; Hoffman and Novak
2018; Puntoni et al. 2021). First, the work on the extended digital self (Belk 2013) presents a host of
new means and potentiality for consumers to agentically extend their identities via new digital platforms and devices. In this context, consumer desire has been framed as energetic, connective, systemic, and innovative impulses that drive and unleash the passion to consume, which is amplified
and liberated to the extreme when re-connected to the machinic circuits of digital technologies and
platforms (Kozinets, Patterson, and Ashman 2017). Similarly, new forms of online connectivity,
exemplified by the Internet-of-Things (IoT), are seen as revolutionizing consumer experiences
by decentering them as part of “intelligent” human-nonhuman networks of objects, services, and
brands (Hoffman and Novak 2018), with new kinds of “benefits and costs” for the individual (Puntoni et al. 2021). In terms of consumers’ agency, such interpretations of consumption assemblages
are viewed primarily in terms of their capacity to enable and liberate: they provide opportunities for
self-extension and communal self-expansion through digitalized networks of smart objects and
humans.
Other prior research has shown many compelling examples of the “empowering” effects of digital technologies and practices, including selfies (Kedzior, Allen, and Schroeder 2016), or the re-balancing of consumer–brand (power) relationships (Rokka and Canniford 2016). It has been found
that digital platforms exert influence on perceived personal empowerment (Tiidenberg and
Gòmes Cruz 2015) or constructions of gender (Burns 2015). Yet, while some of these “effects”
can be perceived and experienced by consumers, it seems likely that the mechanisms of influencing,
and more specifically the opaque power of algorithms and “black-boxed” automated systems (Pasquale 2015), remain poorly understood – by consumers but also by scholars. For example, the way
the Facebook feed is dependent on what kinds of content the user has interacted with will recursively impact the kind of content he/she will see in the future (Bucher 2012b); this is likely to create
cultural “filter bubbles” (Pariser 2011), with evident impact on unfolding cultural production (e.g.
types of content being privileged) and ways of relating (e.g. types of people, ideology involved),
which are orchestrated by the algorithm.
In marketing literature, however, critical perspectives on the subject have recently emerged.
First, there is agreement that increasing control and surveillance over datafied consumers is at
the heart of new digital marketing logics (Ball 2017; Cluley and Brown 2015; Deighton 2019;
Zwick and Denegri-Knott 2009; Thompson 2019; see also Zuboff 2019). This can lead to the
risks associated with consumers’ experiences of personal data exploitation, misunderstanding, or
alienation (Puntoni et al. 2021). The literature has also exposed the idea that marketers benefit
from and actively facilitate myths of the digitally “empowered consumer,” and that “good” and
autonomous consumer decisions are “decisions designed by computational marketing analytic
systems” (Darmody and Zwick 2020, 10). Consequently, as convincingly argued by Darmody and
Zwick (2020), the digital marketing era actually rests on the contradiction that increasing marketer
control produces an autonomous and agentic consumer subject.
This critical literature has mainly focused on theorizing the extraction of “big” consumer data
through forms of “surveillance” (Deighton 2019; Ball 2017; Zwick and Dholakia 2004; Thompson
2019), benefits and costs linked with AI consumer experience (Puntoni et al. 2021), or digital marketing practices (Darmody and Zwick 2020). Yet no theories discuss the broader impacts of automated systems on the level of consumer culture. So far, only a few articles have tackled this issue
more than tangentially, such as Wilson-Barnao’s work (2017) on how algorithmic recommendations come to shape consumers’ access to art collections in Australia and New Zealand.
In what follows, we wish to consider and develop a more holistic framework, one that would
enable cultural examinations to account for both the forms of control exerted by platform algorithms on consumers, and the spaces of resistance left in algorithmic consumer culture. In doing
so, we take inspiration from du Gay et al. (1997), who advocate that a more comprehensive analysis
of consumer culture would require the examination of “articulations” of connections in-between
consumption (including consumers’ identity negotiation and representation of meanings) and production of culture (marketers’ production of means and meanings of consumption). Here, “articulation” refers to the “form of the connections that make up a unity of two or more distinct elements,
under certain conditions” (1997, 3). Thus, rather than privileging one single narrative of digital consumption – such as the optimistic illusion of consumers’ sovereignty (Holt 2002) or the myth of big-data manipulation (Thompson 2019) – we shed light on the dialectical techno-social processes that
shape algorithmic consumer culture. The following section briefly outlines the historical grounds of
this algorithmic mediation of consumption, aiming to clarify the changing meanings attached to the
umbrella term “algorithm” (Seaver 2017), and illustrate why the sociotechnical evolution toward
platform-based machine learning systems bears enormous implications for consumer research
and the social sciences more broadly (Gillespie 2014).
3. Algorithms and consumers: a short history
Algorithms can be defined as computational recipes, that is, step-by-step instructions for transforming input data into a desired output (Gillespie 2014). Since their analog origin in the ancient
world (Chabert 1999), these mathematical procedures have seen a tremendous technological evolution and a parallel multiplication of application contexts, with important consequences for institutions, companies and, especially, consumers.
During the nineteenth century, algorithms were still executed manually by human professionals
known as “computers” (Chabert 1999), while electro-mechanical computing machines were about
to be developed, driven by a pressing scientific, administrative, and economic need for efficient
information processing. The diffusion of business accounting machines and calculators in the
early twentieth century brought algorithmic computation into ordinary people’s lives for the first
time. However, data were transformed and elaborated solely through analog means (e.g. punched
cards and paper tapes). It is only in 1946 that the modern electronic computer made its appearance,
making it possible to design algorithmic models, run them, read input data, and write output results
in digital format, as combinations of binary numbers stored as “bits” (Campbell-Kelly et al. 2013).
From that moment on, algorithms have been inextricably linked to a new discipline: computer
science. Technological innovations enormously increased the processing power of hardware, previously limited by material constraints. In the late 1970s, the development of microprocessors
and subsequent commercialization of personal computers fostered the popularization of computer
programming. Algorithms were not merely about numbers and abstract calculations anymore: the
digital storage of information, as well as its creation and circulation through novel networked channels, such as the nascent Internet-based communication technologies, were generating brand new
types of input data, deriving from the datafication of online behavior (Zwick and Denegri-Knott
2009). These “user-generated” data became commercially relevant starting in the mid-1990s, when
the rapid multiplication of web pages led to a pressing need for indexing solutions capable of overcoming the growing information overload experienced by Internet users. Systems for automatically
detecting “spam” emails were first developed. Page and Brin designed an algorithm capable of
autonomously finding “needles in haystacks” (MacCormick 2012, 25), which then became
known as PageRank and was used by Google Search. Meanwhile, e-commerce websites were experimenting with automated marketing strategies for targeting consumers and providing product recommendations (Ansari, Essegaier and Kohli 2000). By the 2000s, the World Wide Web had
substantially mutated into an “electronic marketspace,” that is “a vast network of consumer and
product databases” (Zwick and Dholakia 2004). The commercial Web 2.0 became populated by
active “prosumers” interacting on platforms such as MySpace, YouTube, Facebook, and Twitter.
Ads and content recommendations began to be tailored to the digital traces of consumers’ discourses and behaviors (Airoldi 2021b), automatically stored and computationally analyzed in
order to predict desires and elicit purchases or engagement (Mellet and Beauvisage 2020). Once
embedded in the networked infrastructure of the Internet, algorithms turned into “operational”
marketing devices (Mackenzie 2018): their output, i.e. predictions formulated based on platform
users’ behavioral data, were actualized in real time through the unsupervised ranking and filtering
of digital content, tacitly ordering consumer experiences (Zuboff 2019).
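The “needles in haystacks” principle behind PageRank can be sketched as a simple power iteration over a link graph. The code below is a minimal illustration of the core idea (a page is important if important pages link to it), not Google’s actual implementation:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Minimal power-iteration PageRank over a dict: page -> list of outlinks."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start from a uniform distribution
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for p, outlinks in links.items():
            if outlinks:
                share = damping * rank[p] / len(outlinks)
                for q in outlinks:
                    new[q] += share  # each page passes rank to its outlinks
            else:
                for q in pages:  # dangling page: spread its rank evenly
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

web = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(web)
```

In this toy graph, C is linked to by both A and B and therefore accumulates the highest score; production systems layer many refinements on top of this core recursion.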
Later, the ubiquitous diffusion of smartphones further increased the access rate to digital platforms worldwide. Algorithms started to be fed with behavioral traces extracted from novel sources,
such as sensor data derived from IoT objects (Hoffman and Novak 2018; Deighton 2019). The
unprecedented volume of data “mined” from digitally mediated activities, together with the
increased availability of “cloud” computational power, made possible a new “socio-technological
revolution.” That is, the “harnessing of human cognitive resources” by AI systems (Mühlhoff
2020), which are ordinarily trained on vast amounts of data produced by consumers who are largely
unaware of their unpaid digital labor (Casilli 2019). Advanced machine learning methods paved the
way for the development and commercial implementation of AI technologies which, far from
simply following top-down rules designed by programmers, inductively “learn” from consumers
(Airoldi 2022). This epistemological shift, from the “symbolic deduction” of rule-following algorithms to the “statistical induction” characterizing new machine learning systems (Pasquinelli 2017),
entails unexplored implications for consumer culture: rather than simply executing pre-determined
scripts, the artificial agents employed by Amazon, TikTok, Facebook, Google Search, Instagram,
Spotify, Netflix or YouTube dynamically interact with consumers, evolving based on their data patterns, and often producing surprising or inexplicable results (Campolo and Crawford 2020). The
resulting forms of governmentality and cultural production represent major subjects of critical
inquiry in the social sciences.
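The epistemological shift described above, from “symbolic deduction” to “statistical induction,” can be caricatured in a few lines of code. This toy contrast is our own illustration, not a description of any platform’s system:

```python
# Symbolic deduction: behaviour fixed in advance by a hand-written rule.
def rule_based_recommend(user_age):
    return "pop" if user_age < 30 else "classical"

# Statistical induction: behaviour inferred from the consumer's own data.
def learned_recommend(history):
    # The "model" here is just a frequency count learned from past
    # behaviour; change the data and the recommendation changes with it.
    return max(set(history), key=history.count)

print(rule_based_recommend(25))                    # -> pop
print(learned_recommend(["jazz", "pop", "jazz"]))  # -> jazz
```

The first function executes a pre-determined script; the second “learns” from consumers, which is why its outputs can evolve with users’ data patterns in ways no programmer explicitly scripted.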
4. Algorithms, culture and power
In the social sciences, a cross-disciplinary strand of literature known as “critical algorithm studies”
critically discusses the cultural roots and social implications of algorithms (Airoldi 2022; Neyland
2019). Here, we draw on these contributions to point towards four main controversial dimensions of the automated systems incorporated in digital services and consumer devices: their opacity, authority, non-neutrality, and recursivity.
4.1. Opacity
Being commonly developed by private companies for business purposes, the code and computational activities of platform-based algorithms are largely opaque and “immune from scrutiny”
(Pasquale 2015, 5). Secrecy is justified by the fact that these systems represent strategic assets for
companies (Hallinan and Striphas 2016). Not only “the criteria by which algorithms determine
evaluation are obscured” (Crawford 2016, 86), but the consumer surveillance activities producing
input and training data are as well (Zuboff 2019; Mühlhoff 2020). Opacity is also linked to computational complexity. In the case of advanced artificial intelligence techniques, such as neural networks,
the behavior of the machine is not entirely understandable, not even by developers (Campolo and
Crawford 2020; Pasquinelli 2017). For this reason, scholars and practitioners have launched calls for “opening the black box” and making algorithms accountable (Noble 2018). Even the output of computation
creates forms of opacity: for instance, digital content algorithmically judged as “irrelevant” will
become, as a result, invisible to the user (Amoore and Piotukh 2016, 5; Bucher 2012b).
4.2. Authority
Whether automated machines have the capability to express or enable forms of power and authority
is another issue debated in this literature (Beer 2017). Platform-based algorithms govern consumers’ digital experiences through automated classification and recommendation practices (Mackenzie 2006; Cheney-Lippold 2011; Morris 2015; Airoldi 2021a). Such an algorithmic authority is
believed to carry disruptive implications for the notion of agency, which necessarily becomes a
techno-social interplay: “an algorithm selects and reinforces one ordering at the expense of others.
Agency, therefore, is by definition contested in and through algorithms” (Mackenzie 2006, 44).
Algorithms are widely portrayed as powerful social actors that shape possibilities and limit agency
(Beer 2013, 69). This implies that algorithms not only mediate but “constitute” human lives (Beer
2009, 987), acting as “a kind of invisible structural force that plays through into everyday life in various ways” (Beer 2013, 69). In the literature, the opacity of algorithmic governance has often steered
comparisons with Foucault’s panopticist views of society (Cheney-Lippold 2011; Bucher 2012b).
However, recent empirical works have highlighted how consumers actively attempt to make
sense, in a bottom-up way, of the obscure functioning of platform-based technologies, such as
the recommender algorithm of Spotify (Siles et al. 2020) or the advertising systems of Facebook
and Instagram (Ruckenstein and Granroth 2020).
4.3. Non-neutrality
Algorithmic authority also derives from the fact that computational outputs are presented as the
mathematical outcome of a scientific, automated, and thus allegedly objective process (Mackenzie
2006; Beer 2013; Gillespie 2014). However, critical scholars have demonstrated that, far from being
neutral technologies, algorithmic systems heavily depend on cultural assumptions inscribed in
mathematical models and datasets (Neyland 2019; Benjamin 2019), as components of complex
sociotechnical “assemblages” (Schwennesen 2019; Seaver 2017). According to Mager, “capitalist
ideology gets inscribed in search algorithms” (2012, 770), and the same happens in the case of platform-based metrics, such as “like” buttons (van Dijck 2013). The non-neutral character of algorithms is particularly evident considering the human-generated data employed to train and
calibrate machine learning systems, which are largely derived from Internet sources, or produced
by low-paid “clickworkers” – for instance, those annotating texts, images or audio files on crowdsourcing platforms like Amazon Mechanical Turk (Casilli 2019; Mühlhoff 2020). The cultural biases
inscribed in training and input data eventually end up reinforcing gender, class, and racial discriminations (Benjamin 2019). For instance, Noble (2018) painstakingly illustrates how the stereotypical
social representations of African American women historically at the root of US culture are
amplified by Google Search results.
4.4. Recursivity
The automated iteration of procedures is one foundational characteristic of algorithms (Chabert
1999, 4). When the output of a computational process itself becomes embedded in the input of a
new iteration, the algorithm is called “recursive” (Beer 2013, 78–79). In the case of the algorithms
embedded in online infrastructures, some scholars have stressed how such recursivity may have
broad social and cultural implications (Hallinan and Striphas 2016; Beer 2013; Fourcade and
Johns 2020; Airoldi 2022). Consider the case of YouTube videos as an example. Two YouTube
videos are likely to be “related” by the platform’s recommender system if they are co-viewed by
many users. However, related videos are also the main source of video views (Airoldi, Beraldo
and Gandini 2016). Since users largely rely on them to decide what to watch next, this is likely
to generate a “closed commercial loop” that, iteration after iteration, strengthens past consumption
patterns (Hallinan and Striphas 2016, 6), eventually “normalizing” them (Mackenzie 2015, 442).
Similar feedback loops are also established in the case of other platforms and digital services
(Beer 2013, 81; Bucher 2012b), and lie at the very core of the “extractive” processes of surveillance
capitalism (Zuboff 2019, 68).
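The recursive “closed commercial loop” described in this section can be simulated as a simple rich-get-richer process: recommendations are drawn in proportion to past views, and each resulting view feeds back into the next draw. The sketch below is a stylized Pólya-urn illustration of our own, not a model of any platform’s actual recommender:

```python
import random

random.seed(42)  # fixed seed for a reproducible run
views = {"video_a": 1, "video_b": 1, "video_c": 1}  # near-identical start

for _ in range(1000):
    videos = list(views)
    # Recommend in proportion to past view counts (the recursive step)...
    pick = random.choices(videos, weights=[views[v] for v in videos])[0]
    # ...and the resulting view becomes input to the next iteration.
    views[pick] += 1

# Small early differences compound into a skewed distribution,
# "strengthening past consumption patterns" iteration after iteration.
print(sorted(views.values(), reverse=True))
```

Even though the three videos start out essentially identical, the feedback between output (recommendations) and input (view counts) reliably produces an unequal outcome, which is the mechanism behind the “closed commercial loop” cited above.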
This means that, as Kitchin and Dodge put it, “the models analyze the world and the world
responds to the models” (2011, 30). Whether the “world’s response” is intentional – i.e. SEO techniques aimed to deliberately please search engines (Mager 2012), or not – i.e. the unaware adaptation of musical taste to automated recommendations (Airoldi 2021a), the result is essentially
the same. That is, “the world starts to structure itself in the image of the capta and the code,”
and thus “a self-fulfilling, recursive relationship develops” (Kitchin and Dodge 2011, 41). The
societal result of this techno-social interplay has been referred to in the sociological literature as
an “algorithmic culture,” reduced to “the positive remainder resulting from specific information-processing tasks” (Striphas 2015, 406).
However, it is important to note that computational models “respond” to the world as well. In
fact, recursivity also works the other way around: consumers iteratively influence algorithmic
behavior through their explicit or implicit feedback – i.e. their (more or less aware) datafied reactions to algorithmic outputs, such as hiding a Facebook ad or skipping a recommended song (Fisher
and Mehozay 2019; Bucher 2017). Based on such real-time inputs, machine learning systems adjust
their models, aiming to provide outputs more aligned with consumers’ expectations – e.g. more