In my last blog post, I discussed the need for a Sociology of AI, sidestepping the thorny issue of what AI is. Is it a field of study? Is it a technology or collection of technologies? Is it a product? Is it nothing more than a marketing strategy?
A lot of ink has been spilled defining AI, and the definition varies depending on who provides it. My favorite comes from The Future Computed, which in plain language describes AI as,
“a set of technologies that enable computers to perceive, learn, reason and assist in decision-making to solve problems in ways that are similar to what people do.”
It underscores that AI is a mix of technologies that allows machines to approximate human intelligence. (I would add one minor edit: whether AI “assists” in decision-making, hinders it, or even replaces it is a matter of how we decide to build AI, not an inherent property.)
What distinguishes the AI of today from the AI of yesterday and from the other digital technologies we’ve become accustomed to over the last several decades? Machine learning. It is currently the dominant technique in AI and the source of much of the hubbub over AI and its ethical implications. Rather than relying on a pre-programmed expert system that is told what to do by its code, machine learning relies on algorithms that learn from data over time to do things like classify objects or make predictions.
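The contrast can be made concrete with a minimal sketch. Everything here is invented for illustration (the height data, the function names, the toy task); the point is only the difference in where the rule comes from — written into the code by a human, or derived from labeled examples.

```python
# A toy contrast between an "expert system" and a learned model.
# All data and names are hypothetical, chosen only for illustration.

def expert_rule(height_cm):
    # Pre-programmed knowledge: the rule is fixed in the code itself.
    return "tall" if height_cm > 180 else "short"

def train_threshold(examples):
    # "Learning": derive the decision boundary from labeled data
    # by taking the midpoint between the two class means.
    tall = [h for h, label in examples if label == "tall"]
    short = [h for h, label in examples if label == "short"]
    return (sum(tall) / len(tall) + sum(short) / len(short)) / 2

data = [(150, "short"), (160, "short"), (185, "tall"), (195, "tall")]
threshold = train_threshold(data)  # 172.5 for this toy dataset

def learned_rule(height_cm):
    return "tall" if height_cm > threshold else "short"

print(threshold)          # 172.5
print(learned_rule(178))  # tall
```

Feed the learner different data and it produces a different rule, with no change to the code; that data-dependence is precisely what makes questions about training data sociologically interesting.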
Object of Analysis
Given this definition of AI, what exactly is the object of study for the sociologist? The sociologist needs to examine AI from its most expansive conceptualization possible to trace its full reach into and across societies. I think of it as originating with the AI model, the difficult-to-get-to, blinding core of the sun. (Many think of this as the “black box,” but I like a slightly sunnier metaphor.) From there, the model emanates like sun rays all the way to the most macro-level social structures of human society, impacting everything it touches. (On second thought, given the countless negative “emanations” of AI, perhaps this metaphor is too sunny after all.) The point is that while the algorithm’s code is relatively bounded, its effects radiate far and wide. It is hard for most of us to make the mental leap from some lines of code to, say, the destruction of Western democracy. The sociologist can help us connect the dots across the necessary levels of analysis.
The Model Level
We can start at the AI model level as many computer scientists encourage us to do. They’d have us examine social values like fairness and how they can be included in the statistical underpinnings of machine learning (see Barocas et al., Fairness in Machine Learning). Similarly, we can also think about how to “program” ethical values into the algorithms as theoretical computer scientists Michael Kearns and Aaron Roth beseech us to do in The Ethical Algorithm. This area of research must examine all the steps involved in generating a model, from data collection to data labeling, feature selection, model training, deployment, and monitoring.
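To give a flavor of what a “statistical underpinning of fairness” looks like, here is a minimal sketch of one widely used criterion, demographic parity, which compares a model’s positive-prediction rates across groups. The prediction data and group names below are invented for illustration; this is one metric among many in the fairness literature, not a complete treatment.

```python
# A toy demographic-parity check. The criterion asks: does the model
# produce favorable outcomes (here, 1 = approved) at similar rates
# across social groups? Data and group labels are hypothetical.

def positive_rate(predictions):
    # Fraction of cases receiving the favorable outcome.
    return sum(predictions) / len(predictions)

def demographic_parity_gap(preds_by_group):
    # Largest difference in favorable-outcome rates across groups;
    # 0 means parity, larger values mean greater disparity.
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model outputs for two groups of loan applicants.
preds = {
    "group_a": [1, 1, 0, 1],  # 75% approved
    "group_b": [1, 0, 0, 0],  # 25% approved
}
gap = demographic_parity_gap(preds)
print(gap)  # 0.5, a large disparity by this metric
```

Even this tiny example shows why the model level cannot be studied in isolation: whether a 0.5 gap is unjust, and whether parity is even the right criterion, are questions the statistics alone cannot answer.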
The Human-Computer Interaction Level
We can then locate the model in whatever technology it will be productionized in (e.g., a customer service chatbot, search engine, email client, surveillance camera, app, or car) and examine how a human will interact with that technology. Importantly, the sociologist will push the envelope in this research tradition to examine how not just individuals but entire social groups will interact with, and be impacted differently by, the same technology. She’ll examine how interpretations and uses vary depending on social factors like race, gender, nationality, language, and socioeconomic status, thus vastly expanding the range of direct and indirect stakeholders traditionally considered.
The Institutional Level
Here is where it gets really fun and where sociologists’ strengths shine. We can push AI research, as some have started to do, to examine AI beyond any given socio-technical system and instead consider how entire institutions and industries are reshaped, ranging from banking, the criminal justice system, retail, housing, and credit to journalism, elections, warfare, employment, and the tech industry itself. At this level of sociological analysis we can ask questions like: how are AI models in human resource systems reshaping power dynamics between employers and employees? How is algorithmic knowledge production within the tech industry shaped by social forces far beyond tech’s walls, like structural racism, educational inequality, and free-market imperatives?
The World Systems Level
Finally, what is the effect of all of these institutional changes on world historical transformation and social change at the most macro level? What does the acceleration of AI mean for democracy worldwide? For example, will AI enable or stymie the expansion of democracy, and will it deepen or weaken democracy where it already exists? What does it mean for economic development and global inequality? For example, under what conditions will AI accelerate capitalist exploitation or alternatively reduce power differentials between global capital and labor and between the Global North and South? What does AI mean for the 21st century world? For example, through what kind of mechanisms or processes could AI improve global stability and strengthen international cooperation or alternatively drive wedges among traditionally allied states and exacerbate conflict and competition between adversaries?
While thinking in terms of these levels of analysis may be helpful, I admit identifying the object of analysis is still challenging. Even computer scientists disagree on what defines machine learning and distinguishes it from “mere” statistics. Mathematicians have been working with algorithms for centuries, but they disagree over when an algorithm “becomes” AI. Even if we settle on some starting point, how does the sociologist delimit the field of inquiry? How does she choose the appropriate level(s) of analysis? These are difficult methodological questions for any sociological inquiry. And the answers depend in large part on the research questions, the sociologist’s personal interests, and the research methods available. But AI poses a particular challenge because it is such an ephemeral, multiuse, and quickly changing technology, making its causal effects hard to trace across broad social systems.
Moreover, how would this Sociology of AI fit into the sociology of digital/information technology more broadly? And how does that sociology fit into the sociology of knowledge production in general and ultimately the field as a whole? As we stack these sociological Russian dolls, how do they relate to many other disciplines studying AI, like Science & Technology Studies, Gender & Sexuality Studies, African-American Studies, Communication and Media Studies, Anthropology, Psychology, Cognitive Sciences, and Behavioral Sciences, not to mention Computer Science itself?
Just as the development and deployment of AI are a bit of a Wild Wild West, so is the study of AI. This is why I propose a Sociology of AI with a capital “S”, a sociological subfield that can give AI the attention it deserves and coherently relay the research findings back into tech and society. To me, the end goal of a Sociology of AI is not just deeper understanding for its own sake, but ultimately the ability to shape technological innovation and inform the public and policymakers of AI’s potential and pitfalls. It will take all of us to harness the promise of AI and limit its shortcomings. Sociologists in particular have a lot to contribute.