Why AI needs the Social Sciences

From obscurity to ubiquity, artificial intelligence (AI) is now used in an ever-growing range of human activities. McGill Philosophy Professor Eran Tal discusses the ethical challenges raised by AI and how the field can benefit from the social sciences.

The rapid growth of artificial intelligence technologies has raised a host of ethical questions for societies to grapple with. Though clever machines and humanoid robots have been fantasized about for decades, we are now seeing what was once strictly science fiction become reality. Computer scientists and data engineers are fueling innovations in artificial intelligence that may soon rival human decision-making in some domains. To be clear, artificial intelligence refers broadly to systems that can execute functions or perform tasks that previously required human intelligence. As AI technologies expand, so will their impacts on a range of human activities.

In our tech-driven world, many AI technologies are readily welcomed into our lives. There is a rush to keep up with the latest smartphones, apps, home assistant devices, and more. Companies such as Facebook, Alphabet (Google’s parent company), Amazon, and Apple continue to rank among the most valuable firms on the US stock market, a clear indication of our society’s appetite for tech.

In our excitement to get the latest Alexa or Google Home, we overlook the gap between how little we know about AI-powered technologies and how readily we adopt and use them. The algorithms behind these devices can extract data about the most intimate aspects of our daily lives, yet we remain unaware of how this information is used. Beyond personal use, many AI-driven technologies are now integrated into the public sector, law enforcement, banking, and medical services. Despite the obscurity surrounding its workings, AI now helps determine credit scores, inform decisions about bail releases, and render medical diagnoses.

Social scientists and humanities scholars are now starting to reflect on the potential impacts of such technological development, and McGill Philosophy Professor Eran Tal is one of them. Concern about AI does not negate its capacity to produce immense social good. As Professor Tal points out, “artificial intelligence has the potential to bring along diverse benefits for our health, safety and general well-being”. AI technologies can already detect early signs of cancer, help direct aid in disaster-relief efforts, and assist people with speech and hearing impediments through lip reading. However, these technologies, if not carefully monitored, can also adversely impact certain population groups.

Professor Tal affirms that ethical questions arise when AI-powered algorithms inform high-stakes decisions, such as a person’s medical treatment plan or criminal sentence. Some computer scientists claim that AI can deliver more sophisticated and accurate knowledge than human actors, and that its capacity to process information faster than humans leads to more cost-effective resource allocation. This rests on the assumption that machines are neutral and less prone to error than their human counterparts. While AI can facilitate the acquisition and interpretation of large volumes of information, the data on which algorithms are trained can be tinged with human biases, leading to ethically problematic results.

As Professor Tal details, machine bias can arise even when no one intends it. For example, an algorithm that aggregates data to help a judge decide whether a defendant is eligible for bail can reproduce human biases. Because of systemic racial disparities in the United States, a black defendant runs a higher chance of having their request for bail denied than a white defendant with a similar criminal history. Race itself is not directly taken into account, but proxy variables that can reflect an individual’s racial background, such as area of residence, employment status, ancestry, and socioeconomic background, are. This allows racial inequities to creep into machine learning systems and create feedback loops.
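The proxy effect Professor Tal describes can be sketched in a toy simulation. The data, group labels, and “neighborhood” variable below are entirely invented for illustration and are not drawn from any real bail system: a model that never sees race can still produce racially skewed predictions when it learns from a proxy that correlates with race.

```python
import random

random.seed(0)

# Hypothetical synthetic records: (neighborhood, group, bail_denied).
# The group label is never shown to the "model"; it only correlates
# with neighborhood, the proxy variable.
records = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # Residential segregation: group strongly predicts neighborhood.
    if group == "A":
        neighborhood = "north" if random.random() < 0.8 else "south"
    else:
        neighborhood = "south" if random.random() < 0.8 else "north"
    # Historically biased outcomes: group B is denied bail more often.
    denied = random.random() < (0.5 if group == "B" else 0.3)
    records.append((neighborhood, group, denied))

# A "model" trained only on the proxy: it predicts each neighborhood's
# historical denial rate.
def denial_rate(keep):
    subset = [r for r in records if keep(r)]
    return sum(r[2] for r in subset) / len(subset)

rate_by_hood = {h: denial_rate(lambda r, h=h: r[0] == h)
                for h in ("north", "south")}

# Average predicted denial rate per group: the model never saw the
# group label, yet its predictions differ by group.
pred_for_group = {
    g: sum(rate_by_hood[r[0]] for r in records if r[1] == g)
       / sum(1 for r in records if r[1] == g)
    for g in ("A", "B")
}
print(pred_for_group)
```

Because most of group B lives in the “south” neighborhood, whose historical denial rate is inflated by past bias, the model assigns group B a higher predicted denial rate, and acting on those predictions would feed the same disparity back into the data.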

As described by Professor Tal, “there is an in-built tendency for machine learning systems to perpetuate bias if the data set itself is biased”. For Professor Tal, this raises questions not only of fairness but also of accountability and transparency, especially in cases of false negatives. It is hard to decide whom to hold accountable if AI developers themselves cannot pinpoint where the machine went awry. One can imagine the disastrous consequences of receiving a faulty medical diagnosis based on an AI-powered algorithm. Professor Tal adds that “technical complexities, such as determining error rates for algorithms, become intertwined with ethical complexities”. Both sociologists and ethicists will have a role to play in addressing and developing solutions to machine bias and error. They will also need to take part in the wider discussion of epistemic and ethical values in algorithmic development.
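The entanglement of error rates and ethics can be made concrete with a short calculation. The confusion counts below are hypothetical, chosen only to show how a diagnostic classifier can look accurate overall while missing far more truly ill patients in one group than in another:

```python
# Hypothetical confusion counts for a diagnostic classifier,
# broken down by patient group (illustrative numbers only).
counts = {
    # group: (true_pos, false_neg, true_neg, false_pos)
    "group_1": (90, 10, 880, 20),
    "group_2": (60, 40, 880, 20),
}

def accuracy(tp, fn, tn, fp):
    """Share of all patients the classifier labels correctly."""
    return (tp + tn) / (tp + fn + tn + fp)

def false_negative_rate(tp, fn, tn, fp):
    """Share of truly ill patients the classifier misses --
    the false negatives that can deny someone treatment."""
    return fn / (tp + fn)

for group, c in counts.items():
    print(group,
          "accuracy:", round(accuracy(*c), 3),
          "false-negative rate:", round(false_negative_rate(*c), 2))
```

Here both groups see accuracy above 90%, yet the false-negative rate is four times higher for the second group. Which number counts as the system’s “error rate”, and how much disparity is tolerable, are exactly the ethical questions a single accuracy figure hides.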

Overselling the merits of machines over human capacity is a familiar trend in history. As a philosopher of science, Professor Tal warns against placing naïve trust in machines. He comments that “historians have an important role in reminding us of this tendency to view new scientific developments as objective”. Professor Tal offers the example of photography, which in the late 19th and early 20th centuries was viewed by some scientists as promising mechanical objectivity and freedom from personal judgment. Yet photographs are sensitive to choices of framing, lighting, and perspective, and reading them requires interpretation and expert judgment. As Professor Tal puts it, “all new technological developments ought to be placed in their historical context”.

For Professor Tal, a clear analogy exists between AI and measuring instruments: “AI systems, by analyzing data to score and rank individuals, essentially perform measurement functions”. But unlike other scientific procedures, we rely on machine learning systems to make predictions without always having a clear justification for thinking that trends can be extrapolated beyond the training dataset. This lack of theorizing about the results produced by AI systems can jeopardize their reliability and raise ethical concerns. In cases where AI is used to detect certain diseases, the systems’ outputs can determine whether individuals receive treatment.

Professor Tal offers the example of fibromyalgia, a chronic pain condition for which no clinically established biomarkers exist and whose diagnosis rests largely on a patient’s testimony. Computer scientists are trying to develop an AI system that diagnoses fibromyalgia by analyzing data on an individual’s neural signature. As Professor Tal explains, “Machine learning algorithms may eventually become ‘gold standards’ for diagnosing some diseases, like fibromyalgia, that are difficult to diagnose in traditional ways. Consider a patient who would have been diagnosed with fibromyalgia based on their self-report, but tests negative by the algorithm. By giving the final say to the algorithm, we may be harming people who were previously eligible for treatment and artificially restricting the category of the disease.” Medical anthropologists, ethicists, and sociologists can offer insight into the fair and safe use of AI, as well as into the regulatory structures that should govern the use of its results.

Ensuring that AI has a positive societal impact will require the expertise of social scientists from a range of fields. Those with a background in the humanities and social sciences can detect potentially nefarious uses of AI by considering the wider societal implications of these technologies. They may be especially well equipped to spot applications of AI that aggravate long-ingrained prejudices and leave vulnerable populations at risk. These conversations must include computer scientists, but their expertise must also be complemented by that of social scientists.

Making AI a positive collective force in our future is no easy task. It will require insight and collaboration from a wide range of actors, not just those in Silicon Valley. Social scientists ought to recognize their capacity to play a leading role in paving the way. Scholars like Professor Tal are already breaking down silos and reflecting on the wider ethical implications of these new technologies.

Artificial intelligence can also lead to artificial stupidity. As Yoshua Bengio, AI expert and scientific director of Mila, the Quebec Institute of Artificial Intelligence, puts it, “current machine learning systems, they are really stupid”. Bengio adds that “they don’t have an understanding of how some aspects of the world work”. Social scientists and humanities scholars may help prevent intelligent machines from making brainless decisions.
