As a sociologist in AI, I often wonder what sociology can do for tech. How can the development of responsible or ethical AI benefit from sociologists’ insights, perspectives, and research? To answer that question, like any decent researcher, I turned to the sociology of AI literature and quickly realized that there isn’t one. Why not? And why should you care?
Throughout the 20th century, the social study of AI was dominated by psychologists, cognitive scientists, philosophers, and, more recently, behavioral scientists. Given that the central question of interest was what distinguishes human intelligence from that of machines, it makes sense that the disciplines that focus on how humans learn and make sense of the world were involved. Needless to say, these disciplines have brought invaluable perspectives to the conversation. But one important discipline – sociology – sat that conversation out on the logic that machines are not humans and therefore are not social actors. And if they’re not social actors, then they’re not worthy of being studied by sociologists. The other disciplines “have got this.”
And so, there is little sociological “literature” or theory of AI to speak of. There are some social histories of AI and analyses of AI “discourse”. There are some dated and isolated pieces on the sociology of machines and software. And there is more and more sociological research on social movements and social identity in social media as well as the impacts of algorithmic bias on race, gender, and class inequalities. And there are some studies on the sociology of tech, automation, and AI in specific areas of social life like warfare and education. There is even work on how sociologists can use big data analytics and AI to do new and interesting social research. Yet there is no sociological theory of how AI or even information technology more broadly comes together to shape culture and society. I’ll admit “AI” is notoriously hard to pin down, but I’ll leave that discussion for another time.
And why do we care if sociologists study AI? Because by nature of their training, they will ask unique questions that more of us need to be asking and researching in-depth, like:
- How is the distribution of resources like access to information or opportunities changing over time due to algorithmic interventions?
- How are AI technologies reshaping interactions between social groups that previously interacted without the mediation of information technology?
- How are structural inequalities across race, class, gender, and sexuality reconstituted by virtue of AI interventions?
- Are institutions like democracy, the media, and the criminal justice system being categorically changed by AI, and in what ways? Or is the introduction of this new technology no different from the way prior technologies reformed those institutions in the past?
So I argue that we need a more robust sociology of AI not for its own sake and not just as a hot new sub-discipline to be professionalized within the American Sociological Association, but because a sociology of AI will help make AI better for society. AI is powerful. It is no longer impacting just a handful of individuals at the level of human-computer interaction. AI is having societal impacts on a global scale, which means we need to study society-computer interaction. And sociologists are perfectly poised to do that. I’ll admit, I am ambivalent about how much we really need a singular grand theory of AI. The days of “metatheory” are over and, in retrospect, perhaps weren’t as useful as we thought. Certainly we can entertain a more pluralistic social study of AI that still brings disparate studies together into a cohesive theory-building project. Whether we develop a “sociology of AI” or not, I believe that sociological insights from a variety of sociology subfields are woefully underutilized in AI research, development, and deployment today.
Let’s take one example – employment. Sociologists are really good at studying power dynamics between different institutional actors, e.g. unions and employers. Decades of research in the sociology of labor could inform discussions about if or how, for example, AI should be used to gather insights from employees’ use of technology at work by turning it into a “productivity score” for management. Sociologists would come at this not from the perspective of whether the tool will be useful to employers – a valid perspective in its own right – but from the perspective of how that tool will impact employees’ (notice the plural) position of vulnerability vis-a-vis their employer in a capitalist marketplace like that of the US, which already privileges the organizational power of corporations over people. The sociologist would not ask if the tool violates an individual employee’s rights under their employment contract. We can leave that for the lawyers. Nor would they ask if the employee understands how the AI tool works and how best to interact with it. We can leave that for the designers and user experience researchers. Rather, they would ask how it shifts the power relationship between two distinct social groups – employees and employers – and in whose favor. Sociologists would ask: whose problems is AI, in this instance, trying to solve? Employers’ or employees’, and why? And how does the cultural environment of the technology producers shape their understanding of what the problem is in the first place? Could our obsession with quantifying productivity have something to do with the Protestant work ethic that we’re all so deeply steeped in? Maybe a look back to the classical sociology of Max Weber could be productive, no pun intended.
Now, I know a memo to an executive of a productivity software company expounding on the spirit of capitalism and why we should be less, or even differently, focused on productivity might not go over well. But a sociologist in “the room where it happens” can help. Moreover, sociologists can help deepen broader AI conversations. To continue with the employment example, one of the big debates is around fears that AI-induced automation will eliminate blue-collar and even white-collar administrative jobs. The sociology of occupations and organizations would suggest that a lot of social labor goes into maintaining and reproducing effective teams, far beyond administrative tasks like scheduling meetings and making photocopies. And that kind of affective work is hard to replace with an AI-powered personal assistant. But where an activity does not require much social interaction, like driving a truck, the job is more likely to be replaced by AI technology, like self-driving vehicles. This helps us pinpoint where the problems might be most acute.
Now what about potential solutions? Many tech leaders argue that while some jobs will disappear, new ones will be created, and that this rapid change will require the workforce to be more agile, continuously reskilling and learning. That is all well and good, but an economic sociologist might have a slightly different take on the solution. They might point to structural changes that need to be made in the economy to support workers so the burden of adapting to technologically-induced social change is distributed more evenly. After all, reskilling and finding new work takes time, money, and energy. How might industries that use AI to cut jobs (e.g. in customer service centers) be held responsible for investing those cost savings back into the workforce? How can local and federal governments craft social policies that redistribute that burden so that everyone can thrive in a high-tech workplace?
We need to bridge the gap between sociology and tech. Tech needs sociological insights and lessons, and the field of sociology cannot continue to show only modest interest in a technology that now touches every aspect of our society. The difficulty lies in the fact that these two sectors have traditionally seen little overlap, and experts in each field are often not aware of each other’s work, let alone its value. Even when thrown together at the occasional AI conference, they don’t speak the same language and end up talking past each other. Sociologists need to do more to understand the tech and become fluent in AI, and technologists need to show interest in how their tech will impact society beyond the immediate “user,” bringing non-engineers into the fold. The ivory towers of both fields need to come crashing down if we’re going to solve social problems, which I believe we can only do in partnership. In an uber-connected world, we have the (productivity) tools to do this. We just need to take the initiative.