Conference Description
Prediction is the new horizon of research on artificial forms of intelligence. Indeed, ChatGPT and generative AI have sparked a heated (and often alarmed) debate about their alleged intelligence, a debate that starts from what is in fact their ability to predict the most appropriate or likely continuation of a conversation. A crucial aspect of this new kind of prediction is that it differs profoundly from the probabilistic forms of forecast familiar to our society since modernity. Our conference brings together leading researchers to investigate its innovative features.
The predictive ability of algorithms seems to become increasingly independent of understanding. Advanced machine learning models work with large amounts of data that they do not understand, and are often themselves incomprehensible. Yet their results have immediate practical relevance because in many cases they do not refer to populations and averages but to specific individual cases: single patients, credit applicants, risky technologies. New forms of transparency are needed, and the emerging area of Explainable AI addresses this, but we also need reliability criteria that allow us to trust the results of processes we do not understand.
From a sociological perspective, this raises urgent problems of control, regulation, management of opacity, identification of bias, and of the consequences of bias in the implementation of results. The conference investigates these issues from an interdisciplinary perspective, combining the analysis of algorithmic prediction with current debates on its concrete impact on different social domains: on the medical, insurance, and legal fields, on policing practices, on public policy, and on the media. The goal is to provide elements for a comprehensive analysis of prediction in digital society.
By registration only.
Organized by
David Stark
David Stark is the Arthur Lehman Professor of Sociology at Columbia University where he directs the Center on Organizational Innovation. He uses a broad variety of research methods – ethnographic, network analytic, and experimental – to study processes of valuation and innovation. Stark has studied factory workers in socialist Hungary, new media employees in a Silicon Alley startup, derivative traders on Wall Street, electronic music artists in Berlin, bankers in Budapest, farmers in Nebraska, video game producers, and megachurches that look like shopping malls. His current research examines the principles of algorithmic management.
Elena Esposito
Elena Esposito is Professor of Sociology at Bielefeld University and the University of Bologna. A leading figure in sociological systems theory, she has published extensively on the theory of society, media theory, memory theory, and the sociology of financial markets. Her current research on algorithmic prediction is supported by a five-year Advanced Grant, PREDICT, from the European Research Council. Her latest book is Artificial Communication: How Algorithms Produce Social Intelligence (MIT Press, 2022; Egea 2022; Shanghai Jiaotong UP 2023).
Omar Besbes
Omar Besbes is the Vikram S. Pandit Professor of Business in Columbia Business School's Decision, Risk, and Operations Division. His primary research interests are in the area of data-driven decision-making with a focus on applications in e-commerce, pricing and revenue management, online advertising, operations management and general service systems. His research has been recognized by multiple prizes, including the 2019 Frederick W. Lanchester Prize, the 2017 M&SOM society Young Scholar Prize, the 2013 M&SOM best paper award and the 2012 INFORMS Revenue Management and Pricing Section prize. He serves on the editorial boards of Management Science and Operations Research.
Co-Sponsors & Co-Hosts
Conference Program
May 29th, 2024
Location: David Geffen Hall | 645 West 130th St., New York, NY 10027 | Room 420.
Breakfast and lunch will be provided in the Geffen Hall board rooms on the third floor.
Agenda
8:15am – 9:00am | Registration and coffee
9:00am – 9:15am | Omar Besbes, Columbia Business School, provides welcoming remarks
9:15am – 10:30am | Duncan Watts, University of Pennsylvania, "Integrating Explanation and Prediction in Computational Social Science" | Discussant: Dan Wang, Columbia Business School
10:30am – 11:00am | Break |
11:00am – 12:00pm | Elena Esposito, Bielefeld University/University of Bologna, "Prediction in Science, Prediction in Society" |
12:00pm – 1:00pm | Lucy Suchman, Lancaster University, "Militarism's Phantasmatic Interfaces" |
1:00pm – 2:00pm | Lunch |
2:00pm – 3:15pm | Donald MacKenzie, Edinburgh University, "The Material Political Economy of Prediction" | Discussant: Dan Wang, Columbia Business School
3:15pm – 3:45pm | Break |
3:45pm – 5:00pm | Jake Hofman, Microsoft Research, "An Illusion of Predictability in Scientific Results" |
5:00pm | David Stark, Columbia University, opens final session |
Speakers
Elena Esposito
Prediction in Science, Prediction in Society
Our research on the use of algorithmic forecasting in medicine, insurance, and policing has highlighted innovative uses of prediction that do not explain phenomena but nevertheless provide insights for acting effectively on a future event. This invites rethinking the idea that pure correlation-based forecasts, unlike explanatory predictions grounded in underlying causal mechanisms, do not enable action to change the outcome. My talk describes different social forms of using forecasting that combine statistical prognoses with algorithmic predictions to indicate how to act on a specific future outcome: in medicine one speaks of actionability, in insurance of proactivity, in policing of targeted forms of prevention. When what matters is the specific case at hand, scientific replicability and generalizability of predictions are not the priority. Controlling the accuracy and effectiveness of these predictions involves different criteria, now operationalized in a multiplicity of recently developed performance metrics.
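As a purely illustrative aside (not drawn from the talk), the sketch below shows the kind of case-level performance metrics such predictions are commonly evaluated with. It assumes scikit-learn and uses synthetic data; the specific metrics shown are generic examples, not those discussed in the abstract.

import numpy as np
from sklearn.metrics import roc_auc_score, brier_score_loss, precision_score

# Synthetic stand-in for case-level risk prediction: observed binary outcomes
# and the predicted probabilities a model assigned to them (both invented here).
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=500)
y_prob = np.clip(0.3 * y_true + rng.uniform(0.0, 0.7, size=500), 0.0, 1.0)

# Discrimination: how well predicted risks rank positive cases above negatives.
print("AUC:", round(roc_auc_score(y_true, y_prob), 3))
# Calibration-related: mean squared error between predicted risks and outcomes.
print("Brier score:", round(brier_score_loss(y_true, y_prob), 3))
# Decision-oriented: precision of acting on cases above a 0.5 risk threshold.
print("Precision at 0.5 cutoff:", round(precision_score(y_true, y_prob > 0.5), 3))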
Elena Esposito is Professor of Sociology at Bielefeld University and the University of Bologna. A leading figure in sociological systems theory, she has published extensively on the theory of society, media theory, memory theory, and the sociology of financial markets. Her current research on algorithmic prediction is supported by a five-year Advanced Grant, PREDICT, from the European Research Council. Her latest book is Artificial Communication: How Algorithms Produce Social Intelligence (MIT Press, 2022; Egea 2022; Shanghai Jiaotong UP 2023).
Jake Hofman
An Illusion of Predictability in Scientific Results
In many fields there has been a long-standing emphasis on inference (obtaining unbiased estimates of individual effects) over prediction (forecasting future outcomes), perhaps because the latter can be quite difficult, especially when compared with the former. Here we show that this focus on inference over prediction can mislead readers into thinking that the results of scientific studies are more definitive than they actually are. Through a series of randomized experiments, we demonstrate that this confusion arises for one of the most basic ways of presenting statistical findings and affects even experts whose jobs involve producing and interpreting such results. In contrast, we show that communicating both inferential and predictive information side by side provides a simple and effective alternative, leading to calibrated interpretations of scientific results. We conclude with a more general discussion about integrative modeling, where prediction and inference are combined to complement, rather than compete with, each other.
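To make the inference/prediction distinction concrete, here is a minimal illustrative sketch (not taken from the study, with invented parameters): with a large sample, the confidence interval for an average effect can be very narrow even though the interval for any single individual's outcome remains wide.

import numpy as np

# Simulated experiment: a small true average effect (0.2) against large
# individual-level variation (sd = 1.0), with a big sample in each arm.
rng = np.random.default_rng(0)
n, effect, sd = 10_000, 0.2, 1.0
treated = rng.normal(effect, sd, n)
control = rng.normal(0.0, sd, n)

# Inferential summary: 95% confidence interval for the average treatment effect.
mean_diff = treated.mean() - control.mean()
se_diff = np.sqrt(treated.var(ddof=1) / n + control.var(ddof=1) / n)
ci = (mean_diff - 1.96 * se_diff, mean_diff + 1.96 * se_diff)

# Predictive summary: 95% interval for one treated individual's outcome.
pi = (treated.mean() - 1.96 * treated.std(ddof=1),
      treated.mean() + 1.96 * treated.std(ddof=1))

print(f"95% CI for the average effect:    ({ci[0]:.2f}, {ci[1]:.2f})")  # narrow
print(f"95% interval for one individual:  ({pi[0]:.2f}, {pi[1]:.2f})")  # wide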
Jake Hofman is a Senior Principal Researcher at and founding member of Microsoft Research in New York City, where he works in the field of computational social science. Prior to joining Microsoft, he was a Research Scientist in the Microeconomics and Social Systems group at Yahoo! Research. He holds a B.S. in Electrical Engineering from Boston University and a Ph.D. in Physics from Columbia University. He is an Adjunct Assistant Professor of Applied Mathematics and Computer Science at Columbia University and runs Microsoft's Data Science Summer School to promote diversity in computer science. His work has been published in journals such as Science, Nature, and the Proceedings of the National Academy of Sciences and has been featured in popular outlets including The New York Times, The Wall Street Journal, The Financial Times, and The Economist.
Donald MacKenzie
The Material Political Economy of Prediction
Automated, machine-learning prediction is central to the digital economy, via platforms prominent in everyday life and what the paper calls their data “icebergs.” Achieving direct observational/interviewing access to these platforms is notoriously difficult, so the research reported here examines platforms “obliquely,” via the experiences of those who advertise on them, and follows materially disruptive episodes that render platform practices more visible.
One such episode, Apple’s 2021 App Tracking Transparency changes to iPhones, has fed into, and made visible, a major divide. On one side are material practices of data accumulation and prediction that rely on what participants call “identity resolution”: determining as far as possible which digital traces represent actions by the same human individual. On the other side are practices that materially block identity resolution (for instance by seeking to preserve what Apple calls “crowd anonymity”), thus forcing prediction to be de-individualized. The fierce public controversy of 2020-21 has now been replaced by “messy,” implicit, largely subterranean conflict between these two sets of material arrangements.
The paper draws on 111 interviews with 88 practitioners of digital advertising and related technical specialists, along with extensive participation in sector meetings and in, for example, a training course on advertising games and other apps.
Donald MacKenzie is a sociologist of science and technology at Edinburgh University. His research focuses on high-tech economic activities such as automated, high-speed financial trading and online advertising. His most recent book is Trading at the Speed of Light: How Ultrafast Algorithms are Transforming Financial Markets (Princeton University Press, 2021). His earlier books include Inventing Accuracy: A Historical Sociology of Nuclear Missile Guidance (MIT Press, 1990).
Lucy Suchman
Militarism’s Phantasmatic Interfaces
Enabled by massive investments in ‘datafication’, military imaginaries of command and control now materialize the 17th-century dream of ‘a sign system that linked all knowledge to a language and sought to replace all languages with a system of artificial symbols and operations of a logical nature’ (Foucault, The Order of Things, 1994: 63). Conjoining socio-technologies of surveillance, mapping, categorization, and enumeration, military ‘intelligence’ rests on a platform of ‘data foundations’ whose interfaces promise privileged access to warfighting’s worlds. In positioning data as the infrastructure of knowledge making, these imaginaries take for granted the translations through which worlds of interest are rendered as machine-detectable signals that feed data models at the back end and data visualizations at the interface. Treating data as given enables a focus on optimizing processing and transmission, while placing outside the frame the necessary reductions, erasures, and betrayals of datafication. This paper reports work in progress on tracking the material semiotic sleights of hand that sustain US militarism’s mythology of its own coherence and rationality, and the counter-narratives that testify to war’s senseless and ungovernable injuries and offer openings to resistance.
Lucy Suchman is Professor Emerita of the Anthropology of Science and Technology at Lancaster University in the UK. Before taking up that post she was a Principal Scientist at Xerox’s Palo Alto Research Center (PARC), where she spent twenty years as a researcher. Her current research extends her longstanding critical engagement with the fields of artificial intelligence and human-computer interaction to the domain of contemporary militarism. She is concerned with the question of whose bodies are incorporated into military systems, how, and with what consequences for social justice and the possibility of a less violent world.
Duncan Watts
Integrating Explanation and Prediction in Computational Social Science
Computational social science is more than just large repositories of digital data and the computational methods needed to construct and analyze them. It also represents a convergence of different fields with different ways of thinking about and doing science. The goal of this Perspective is to provide some clarity around how these approaches differ from one another and to propose how they might be productively integrated. Towards this end we make two contributions. The first is a schema for thinking about research activities along two dimensions—the extent to which work is explanatory, focusing on identifying and estimating causal effects, and the degree of consideration given to testing predictions of outcomes—and how these two priorities can complement, rather than compete with, one another. Our second contribution is to advocate that computational social scientists devote more attention to combining prediction and explanation, which we call integrative modeling, and to outline some practical suggestions for realizing this goal.
Duncan Watts is the Stevens University Professor and twenty-third Penn Integrates Knowledge University Professor at the University of Pennsylvania, where he holds primary faculty appointments in the Annenberg School of Communication, the Department of Computer and Information Science in the School of Engineering and Applied Science, and the Department of Operations, Information and Decisions in the Wharton School. He also holds a secondary appointment in the Sociology Department in the School of Arts and Sciences, and is the founder and director of the Computational Social Science Lab at Penn.