Synthetic Minds Podcast #3: Statistical Truths (A Genealogy)

Bani Brusadin & Vladan Joler

 

In the third episode of the Synthetic Minds podcast we talk with artist and researcher Vladan Joler about AI as a manifestation of power. Drawing insights from his recent project, Calculating Empires, Joler reflects on classification systems, the birth of the modern manager, the prisons of the 21st century, speed and adaptation, and AI as both an optical device and a projection machine, unveiling how a centuries-long imperial project of extraction and control actually resonates in the high-technology companies of today.

Vladan Joler is an academic, researcher and artist whose work blends data investigations, counter-cartography, investigative journalism, essays, data visualization, and critical design. He is co-founder of the SHARE Foundation and professor at the New Media department of the University of Novi Sad. Vladan Joler’s work is included in the permanent collections of the Museum of Modern Art (MoMA) in New York City, the Victoria and Albert Museum and the Design Museum in London, and also in the permanent exhibition of the Ars Electronica Center (Austria). His work has been exhibited in more than a hundred international exhibitions, including institutions and events such as: MoMA, ZKM (Karlsruhe), the Triennale (Milan), HKW (Berlin), the Vienna Biennale, Transmediale (Berlin), Ars Electronica (Linz), Biennale WRO (Wroclaw), Design Society Shenzhen, Hyundai Motorstudio Beijing, MONA (Tasmania), La Gaîté Lyrique (Paris), the Council of Europe in Strasbourg and the European Parliament in Brussels.

In a series of 5 episodes, the Synthetic Minds podcast embarks on a journey with contemporary artists and researchers who have devised unusual methodologies in order to navigate this emerging landscape. Their stories, thoughts, and research will help unearth ambiguities and reveal new strategies to reimagine and redesign a planetary society for the 21st century.

 

CREDITS

The Synthetic Minds Podcast is written and hosted by Bani Brusadin.

Sound editor Matías Rossi
Episode notes by Bani Brusadin
December 2023

Soundtrack of episode 3

Original sound: Matías Rossi

 

EPISODE NOTES

1’12”

“This generative AI will become a new operative system”

 

The idea that generative AI could become a new operating system (OS) refers to a potential shift in computing paradigms in which artificial intelligence systems take on a more central and pervasive role in managing and interacting with computer systems. This shift mirrors the impact of cloud computing, and generative AI is likely to become even more deeply integrated into the fabric of internet-based applications.

 

Gargantuan companies such as Amazon are already providing the basic building blocks to make generative AI an integral part of corporate environments, just as they did with the cloud in the first two decades of the 21st century. See for instance the Amazon Bedrock service: “Using Amazon Bedrock, you can easily experiment with and evaluate top FMs for your use case, privately customize them with your data using techniques such as fine-tuning and Retrieval Augmented Generation (RAG), and build agents that execute tasks using your enterprise systems and data sources.” (Source)
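The Retrieval Augmented Generation (RAG) technique mentioned in Amazon’s description can be illustrated with a minimal, library-free Python sketch. The toy corpus, the word-overlap scoring, and the function names below are illustrative assumptions, not Bedrock’s actual implementation; real systems use vector embeddings and a hosted foundation model.

```python
# Minimal RAG sketch: (1) score documents against a query,
# (2) retrieve the most relevant one, (3) prepend it as context
# to the prompt that would be sent to a language model.

def score(query: str, doc: str) -> int:
    """Toy relevance score: number of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str]) -> str:
    """Return the corpus document most similar to the query."""
    return max(corpus, key=lambda doc: score(query, doc))

def build_prompt(query: str, corpus: list[str]) -> str:
    """Augment the user query with retrieved context before generation."""
    context = retrieve(query, corpus)
    return f"Context: {context}\n\nQuestion: {query}"

corpus = [
    "Amazon Bedrock exposes foundation models through a single API.",
    "The Northern Bald Ibis was rewilded in Europe starting in 2013.",
]
print(build_prompt("What is Amazon Bedrock?", corpus))
```

The point of the pattern is that the model is never retrained: enterprise data is injected at query time, which is why RAG is offered alongside, rather than instead of, fine-tuning.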

1’51”

“His work blends different ingredients, data investigations, counter cartography, investigative journalism, writing, data visualization and cultural research.”

 

Vladan Joler is an academic, researcher and artist. He is SHARE Foundation co-founder and professor at the New Media department of the University of Novi Sad. He led the SHARE Lab team.

 

In 2018, in cooperation with Kate Crawford, he published Anatomy of an AI System, a visual investigation and long-form essay that dissects the Amazon Echo device as a terminal of a complex system of systems extracting and processing human labor, data and planetary resources.

 

 

 

A previous study, entitled Facebook Algorithmic Factory, included deep forensic investigations and visual mapping of the algorithmic processes and forms of exploitation behind the largest social network. 

 

Overview of Facebook Algorithmic Factory - Labs.rs

 

Vladan Joler’s work is included in the permanent collections of the Museum of Modern Art (MoMA) in New York City, the Victoria and Albert Museum and the Design Museum in London, and also in the permanent exhibition of the Ars Electronica Center. His work has been exhibited in more than a hundred international exhibitions, including institutions and events such as: MoMA, ZKM, XXII Triennale di Milano, HKW, Vienna Biennale, V&A, Transmediale, Ars Electronica, Biennale WRO, Design Society Shenzhen, Hyundai Motorstudio Beijing, MONA, Glassroom, La Gaîté Lyrique, the Council of Europe in Strasbourg and the European Parliament in Brussels.

3’50”

“In 2013, a scientific project began to rewild the Northern Bald Ibis”

 

In 2013, after 11 years of pre-studies, the Waldrapp project led by Johannes Fritz began to rewild the Northern Bald Ibis in Europe. The majority of birds have been equipped with GPS trackers and are monitored in real-time. In the first 9 years, the number of rewilded birds rose from zero to almost two hundred. 

 

 

See also To Stop an Extinction, He’s Flying High, Followed by His Beloved Birds (2023)

4’19”

“In the framework of Latent Spaces - Performing Ambiguous Data”

 

Latent Spaces. Performing Ambiguous Data is an arts-based research project whose starting points are “the concept of the latent space – the (technical / conceptual) realm in which different possibilities co-exist before one (or more) is then realized – and of ambiguity – the state in which different valid readings co-exist within a system of meaning. The practice of ‘performing’ in the project's subtitle refers to the fact that knowledge is generated not only through critical reflection and academic publications but also through artistic interventions/creations in a wide range of media and formats.” (Source)

 

The project is led by Felix Stalder and is hosted and funded by the Zurich University of the Arts.

4’37”

“...in which a migratory bird is becoming wild again”
 

According to current models, at least 357 birds are necessary for the population to be self-sustainable. This number is projected to be reached by 2028. (Source)
 

6’42”

“It's some kind of really weird cyberpunk reality”
 

For a slightly different approach to birds, data and machine learning, see: https://knowingmachines.org/publications/bird-in-hand
 

7’26”

“The political and cultural ecologies of contemporary technology are notoriously hard to see at scale”

 

Scale is used here in a rather different sense from technological scalability, a concept often invoked to explain the potential of aggressive technological solutions across different markets, geographies, or social realities.

 

On the contrary, the notion of scale is key to grasping the multi-layered, interconnected nature of contemporary technology, as well as its political and aesthetic consequences.

 

Several artists and authors have delved into these issues, some explicitly tackling the notion of scale, such as Anna Lowenhaupt Tsing in “On Nonscalability: The Living World Is Not Amenable to Precision-Nested Scales” (2012), Zachary Horton in The Cosmic Zoom: Scale, Knowledge, and Mediation (2021), or Dipesh Chakrabarty in “On Scale and Deep History in the Anthropocene” (2021). By the same author see also “Conflicts of planetary proportions – a conversation between Bruno Latour & Dipesh Chakrabarty” (2020) and The Climate of History in a Planetary Age (2021).

 

The 2023 edition of the transmediale festival (co-curated by the author of these notes) also delved into the question of scales:

 

“From the spatial to the temporal, the intimate to the geopolitical – scale and its many technological manifestations have long been means for measuring and organising. As machine learning and automated tools become common, the scaling of images and representations fabricates and circulates realities. The politics of scaling contests established hierarchies of information by favouring certain representations over others.” (From the introduction of transmediale 2023)

 

On these topics see the collection of essays A Short Incomplete History of Technologies That Scale (2023), edited by transmediale and Aksioma.

 

11’02”

“Of course, it's a spectrum”

 

“Synthetic minds” cannot be imagined without also considering the cognitive mechanisms in the lives of animals, plants, microorganisms, and in general the tangled nature of symbiotic posthumanist ecologies. 

 

Questions about animal consciousness in the Western tradition have their roots in ancient discussions about the nature of human beings, culminating in the Cartesian view of animals as passive beings or automata responding only to stimuli. (For a general introduction to these topics see these encyclopedic entries on Animal behavior - Cognitive mechanisms and the historical background of (non-human) animal consciousness.)

 

The observation of animals in their natural habitats by ethologists in the 19th and 20th centuries contributed to the recognition of the cognitive complexity and emotional experiences of non-human animals. Jane Goodall's groundbreaking research on wild chimpanzees in the 1960s challenged prevailing assumptions by providing extensive evidence of complex tool use, social structures, and emotions among primates. The cognitive revolution in psychology during the mid-20th century also shifted the focus from purely behaviorist approaches to the study of internal mental processes, which influenced the study of animal cognition.

 

Ongoing scientific research continues to uncover evidence of cognitive abilities, problem-solving skills, self-awareness, and complex social behaviors in a wide range of animal species. For instance, in Are we smart enough to know how smart animals are? (2016) ethologist and primatologist Frans de Waal reviews the rise and fall of the mechanistic view of animals and “opens our minds to the idea that animal minds are far more intricate and complex than we have assumed.”

 

This lineage of thought gradually produced posthumanist ecologies and a vast current of cultural critique around the notion of Anthropocene. Both share a common ground in challenging anthropocentrism and highlight the intricate connections between human and non-human elements in the context of environmental change, supporting the notion that humans are not separate from, but deeply embedded within, the environment. “Human nature - argues scholar Anna Tsing - is an interspecies relation.” On these topics, among many others, see Anna Lowenhaupt Tsing, The Mushroom at the End of the World: on the Possibility of Life in Capitalist Ruins, Princeton University Press, 2015; Donna Haraway, Staying with the Trouble: Making Kin in the Chthulucene, Duke University Press, 2016; Anna Lowenhaupt Tsing, Heather Anne Swanson, Elaine Gan, and Nils Bubandt (eds.), Arts of Living on a Damaged Planet: Ghosts and Monsters of the Anthropocene, University of Minnesota Press, 2017; Rosi Braidotti and Maria Hlavajova (eds.), Posthuman Glossary, Bloomsbury Academic, 2018.

The interconnection between the realm of animal intelligence and human technology was explored by media theorist Jussi Parikka, who in his groundbreaking work Insect Media (2010) analyzes how “insect forms of social organization—swarms, hives, webs, and distributed intelligence—have been used to structure modern media technologies and the network society”.

 

 

 

 

For recent and original perspectives on non-human cognition from outside of biology or philosophical studies, see Laura Tripaldi’s Parallel Minds (Urbanomic, 2022; Spanish translation here) as well as her blog Soft Futures.

 

 

And also James Bridle’s Ways of Being. Animals, Plants, Machines: The Search for a Planetary Intelligence (Penguin Books, 2022).

 

12’42”

Calculating Empires

 

Calculating Empires - A Genealogy of Technology and Power, 1500-2025 is an exhibition conceived by Kate Crawford and Vladan Joler that opened at Fondazione Prada in Milan on November 22, 2023. The exhibition brings together a cabinet of curiosities, a map room, and ephemera related to data and control spanning six centuries.

 

“The centerpiece of the exhibition is the Calculating Empires Map Room. Here the audience is immersed in a dark environment—like walking into a literal black box. Presenting itself as a codex of technology and power, Calculating Empires shows how the empires of the past 500 years are echoed in the technology companies of today. This detailed visual narrative extends over 24 meters and illustrates forms of communication, classification, computation, and control with thousands of individually crafted drawings and texts that span centuries of conflicts, enclosures, and colonizations.” (Source)

 

Exhibition view of Calculating Empires - Osservatorio Fondazione Prada, Milan
Photo: Piercarlo Quecchia – DSL Studio, Courtesy: Fondazione Prada


 

16’19”

“One map reveals…”

 

Calculating Empires includes two separate maps: Communication and Computation and Control and Classification, both presenting on the vertical axis a timeline spanning from 1500 to 2025. 

 

Communication and Computation begins with the evolution of Communication Devices (from the printing press, the microscope or the camera obscura, up to generative content systems), suggesting evolutionary paths that highlight interconnections with human memory, writing, aesthetics, and so on. 

 

 

The next sections illustrate the genealogy of human-machine Interfaces and the Communication Infrastructure that makes devices and interfaces operable and, at a more recent stage, interoperable.

 

 

In Calculating Empires Joler and Crawford suggest that Communication Infrastructures are the cornerstone of communication-based control systems. This becomes particularly clear when the map delves into Data Collection and Information Organization (as achieved by way of classification systems). Data Collection is the necessary step to acknowledge reality through technological mediation. Data has the potential to make the world ‘addressable’ (for a complex understanding of this notion, see the chapter about “Address Layer” in Benjamin Bratton’s The Stack).

 

The history of the processes and technological tools for ordering data - from scientific taxonomies to AI datasets - reveals how they provide a necessary foundation for observation with a twofold aim in mind: standardization and optimization.

 

 

The following section deals with the genealogy of Algorithms, which play a key role in making the organized information readable at scale. Algorithms are able to manage large amounts of data, find similarities, and establish probability patterns. 
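The similarity-finding step described above can be sketched in a few lines of Python. Cosine similarity is one common way algorithms compare items represented as feature vectors; the three-dimensional vectors below are invented for illustration, while real systems compute such scores over millions of high-dimensional items.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two non-zero feature vectors:
    1.0 means identical direction, 0.0 means nothing in common."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Three hypothetical items described by the same three features.
item_a = [1.0, 0.5, 0.0]
item_b = [2.0, 1.0, 0.0]   # same direction as item_a, different magnitude
item_c = [0.0, 0.0, 1.0]   # orthogonal to item_a: no shared features

print(cosine_similarity(item_a, item_b))  # 1.0 (proportional vectors)
print(cosine_similarity(item_a, item_c))  # 0.0
```

Because the measure ignores magnitude and keeps only direction, it lets a system declare two very differently sized records “similar”: a small illustration of how probability patterns get established over organized data.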

 

It is through the algorithmic process that taxonomies come to appear solid and objective, and that trust in apparently neutral, ‘scientific’ observation is built.

 

 

This leads to the specific archeology of knowledge that coalesces around Models, the systematized result of algorithmic processes that detect patterns and, most importantly, put them to use as management tools. Models not only detect, but also project. As actionable representations at scale, they become necessary for governance. They are governance.

 

 

The increasing intensity of human feedback and the development of artificial intelligence make the section about Human Computers an important link between Models and the two final sections: Hardware and Unconventional computing. In these areas of the map relatively simple computing systems evolve into planetary-scale computational systems; alchemy into wetware, quantum computing, and molecular informatics.

 

 

The second map, Control and Classification, explores how those technologies are woven into social practices of classification and control.

 

 

The map spans a very wide range of spheres:

 

Time

Education

Emotions and intelligence

Human bodies

Biometrics

Medical

Prison

Policing

Borders

Bureaucracy

Colonialism

Political and economic systems

Production

Energy and Resources

Lithosphere

Hydrosphere and atmosphere

Biosphere

Astrosphere

Spatial representation

Architecture

Surveillance infrastructure

Military doctrine

Military systems

 

Fragments of the Control and Classification map by Vladan Joler and Kate Crawford


 

21’42”

“A Foucauldian theology of power”

 

A reference to what the philosopher Michel Foucault described as the three main types of power - sovereign power, disciplinary power, and biopower - and the different forms and institutions that express and enact them. Foucault argued that biopolitical power is a form of control that doesn’t necessarily require explicit repression, but is rather a form of “governance of the self” and of bodily functions, emotions, sexuality, etc. It doesn’t usually target single individuals, but rather large sectors of society.

 

See Foucault’s works Naissance de la clinique: une archéologie du regard médical (PUF, 1963), Surveiller et punir: naissance de la prison (Gallimard, 1975), Histoire de la sexualité (Gallimard, 1976-1984), among others.

25’03”

“AI is an optical device”

 

This topic has been inspected in detail by Vladan Joler and Matteo Pasquinelli in The Nooscope Manifested - AI as Instrument of Knowledge Extractivism (2020).

 

“The purpose of the Nooscope map is to secularize AI from the ideological status of ‘intelligent machine’ to one of knowledge instruments. Rather than evoking legends of alien cognition, it is more reasonable to consider machine learning as an instrument of knowledge magnification that helps to perceive features, patterns, and correlations through vast spaces of data beyond human reach. In the history of science and technology, this is no news; it has already been pursued by optical instruments throughout the histories of astronomy and medicine. In the tradition of science, machine learning is just a Nooscope, an instrument to see and navigate the space of knowledge (from the Greek skopein ‘to examine, look’ and noos ‘knowledge’).”

 

Fragments of the Nooscope map by Vladan Joler and Matteo Pasquinelli


 

27’15”

“You don’t need some people in St. Petersburg”

 

Reference to the Internet Research Agency, an obscure Russian business based in St. Petersburg and owned by the late Yevgeny Prigozhin. The I.R.A. was accused of attempting to influence political events outside Russia, including the 2016 United States presidential election. Often described as a “troll farm”, the Internet Research Agency was first exposed by technology writer Adrian Chen in an early report for The New York Times, The Agency (2015). See Adrian Chen explaining the case at The Influencers festival in 2017.

30’00”

“It produced a policy paper titled The Bletchley Declaration”

 

“Particular safety risks arise at the ‘frontier’ of AI, understood as being those highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks [...] which match or exceed the capabilities present in today’s most advanced models. [...] These issues are in part because those capabilities are not fully understood and are therefore hard to predict.” (Source: the Bletchley Declaration)

30’51”

“Let’s look at AI”

 

TV excerpt taken from Literally No Place, a non-fictional video essay by Daniel Felstead and Jenn Leung (2023).
 

Co-produced by the renowned DIS collective for the DIS.art online platform, Literally No Place is a short, wildly entertaining but actually very precise exposé of both AI fetishism and hysteria, showing the ambiguities of the current utopian and doomer approaches to AI.

 

31’29”

“Nuances to this perspective were discussed at the Cosmic Brains meeting”

 

Cosmic Brains - Medialab Matadero, November 21-25, 2023

 

“Cosmic Brains brings cutting edge minds working in scifi, philosophy, AI, neuroscience, the arts, and design together in a sustained interdisciplinary conversation. We will focus on pivotal questions concerning AI, AGI, and ‘the alien’. Specifically, we probe whether alignment and communication with AI are feasible; delve into the boundaries of language, mathematics, and logic in AI development; explore the deep time of the evolution of human sapience, to search for parallel models that might assist us in the design of AI; test the multi-modal, synaesthetic and embodied approach to cognition through gesture, sound and music; and we ask what it might mean to imagine and design microworlds and pocket universes- either as ‘toy worlds’ for AI, or as alternate pathways and sanctuaries, in the eventuality that alignment with AI [and ‘the alien’] proves elusive.”

 

More information at https://www.medialab-matadero.es/en/activities/cosmic-brains 

31’50”

“Many are asking”

 

Many scholars, technologists, NGOs, journalists, activists, and even artists are raising questions about the reality and viability of Artificial General Intelligence.

 

Timnit Gebru (DAIR), Emily M. Bender (University of Washington), Angelina McMillan-Major (University of Washington), Margaret Mitchell (Hugging Face) responded to Future of Life’s “Pause Giant AI Experiments” open letter (March 2023) by arguing that:

 

“The harms from so-called AI are real and present and follow from the acts of people and corporations deploying automated systems. Regulatory efforts should focus on transparency, accountability and preventing exploitative labor practices.” (Source: Statement from the listed authors of Stochastic Parrots on the “AI pause” letter - March 2023)

 

In a concise blog post Emily M. Bender, computational linguist at the University of Washington and co-author of the famous 2021 paper “On the Dangers of Stochastic Parrots”, gives some ideological perspective to the utopian v. doomerist debate around AGI:

 

“I was asked about the apparent ‘schism’ between those making a lot of noise about fears inspired by fantasies of all-powerful ‘AIs’ going rogue and destroying humanity, and those seeking to illuminate and address actual harms being done in the name of ‘AI’ now and the risks that we see following from increased use of this kind of automation. Commentators framing these positions as some kind of a debate or dialectic refer to the former as ‘AI Safety’ and the latter as ‘AI ethics’. In both of those conversations, I objected strongly to the framing and tried to explain how it was ahistorical.” (Source: Talking about a ‘schism’ is ahistorical - July 2023)

32’21”

“The so-called effective altruism”

 

Effective Altruism is a movement mostly associated with technology-friendly philanthropy. It consists of a vague ethical approach to large-scale, long-term world problems, an elitist political theory disguised as hyper-rational thinking. The movement was co-founded by philosopher Will MacAskill. Elon Musk and cryptocurrency guru Sam Bankman-Fried (now in jail for fraud) have repeatedly shown support for the movement.

 

Effective altruism advocates the “earning to give” strategy, that is, trying to make as much money as one can in order to maximize one’s future charitable donations. Olúfẹ́mi O. Táíwò and Joshua Stein sum it up in a very convincing way: “MacAskill and some other effective altruist thought leaders advocate a view called ‘longtermism’, that we ought to prioritize mitigating low-probability but catastrophic possibilities in the far future. Whatever longtermism’s intellectual merits, it is a powerful rhetorical device allowing tech billionaires to sink money into pet projects under the guise of scientifically rigorous concern for humanity.”

(Source: Is the effective altruism movement in trouble?)

 

Rather than a simple clash between fast and slow approaches to AI, many interpreted the recent turbulence at OpenAI as an expression of an internal conflict within technological and financial elites, split between anti-regulation techno-accelerationists and longtermist “effective altruists”. See for instance The Wall Street Journal’s How a Fervent Belief Split Silicon Valley—and Fueled the Blowup at OpenAI.

 

See also Doom, Inc.: The well-funded global movement that wants you to fear AI on The Logic and Effective Altruism Is Pushing a Dangerous Brand of ‘AI Safety’ by Timnit Gebru on Wired.


 

36’30”

“The scary direction for me is the speed”

 

For this segment we used a sound reminiscent of a human heartbeat. Anthropomorphic references are sometimes useful for capturing phenomena whose abstraction, scale, or ramifications exceed our immediate capacity for comprehension.

37’35”

“Another famous patent that I kind of accidentally found”

 

“U.S. patent number 9,280,157 represents an extraordinary illustration of worker alienation, a stark moment in the relationship between humans and machines. It depicts a metal cage intended for the worker, equipped with different cybernetic add-ons, that can be moved through a warehouse by the same motorized system that shifts shelves filled with merchandise. Here, the worker becomes a part of a machinic ballet, held upright in a cage which dictates and constrains their movement.” (Source: Kate Crawford and Vladan Joler, “Anatomy of an AI System: The Amazon Echo As An Anatomical Map of Human Labor, Data and Planetary Resources,” AI Now Institute and Share Lab, September 7, 2018)

 

39’30”

“I just had a quite emotional, personal conversation with ChatGPT”

 

 

Lilian Weng is Head of Safety Systems at OpenAI (source).

43’00”

“This was Meredith Whittaker”

 

Meredith Whittaker is the President of Signal Foundation. She is the current Chief Advisor, and the former Faculty Director and Co-Founder of the AI Now Institute. Prior to founding AI Now, she worked at Google for over a decade, where she led product and engineering teams, founded Google’s Open Research Group, and co-founded M-Lab, a globally distributed network measurement platform that now provides the world’s largest source of open data on internet performance. She has advised the White House and many other governments and civil society organizations on artificial intelligence, internet policy, measurement, privacy, and security.

 

Source: What is AI? Part 1, with Meredith Whittaker | AI Now Salons

43’18”

“Mozilla released an open letter”

 

“We are at a critical juncture in AI governance. To mitigate current and future harms from AI systems, we need to embrace openness, transparency, and broad access. This needs to be a global priority.” Source: Joint Statement on AI Safety and Openness (October 31, 2023)

43’35”

“I cannot endorse its premise that ‘openness’ alone will mitigate…”

 

49’46”

“How could the same industry that is building AI weaponry want empathy?”

 

One prominent example among many: Palantir’s “AI-enabled technology to deter and defend”.

   

 


------------------------------

SYNTHETIC MINDS PODCAST: EPISODE 1 / EPISODE 2

Post type
Blog
Author
Medialab Matadero