Artificial Intelligence and Bias – UPSC Sociological Perspective

September 25, 2024

A study gauging the magnitude of bias in generative AI, which generated images for jobs at different pay levels in the US, found that the image sets produced for every high-paying job were dominated by subjects with lighter skin tones, while prompts such as “fast-food worker” and “social worker” more commonly produced subjects with darker skin tones.

How Artificial Intelligence Amplifies Bias

  • Bias in Datasets: Many datasets reflect societal inequalities and historical discrimination. For example, facial recognition systems may perform poorly on people of colour or on women because the training data disproportionately includes lighter-skinned faces and comparatively few women. This underrepresentation leads to biased outputs when the system encounters diverse users (a toy sketch after this list illustrates the mechanism).
  • Design Choices and Accessibility: Technology products, such as apps or websites, may be designed with a “default” user in mind, typically reflecting the experiences and expectations of the designers. If those designers predominantly come from one demographic (e.g., able-bodied, young, Western), the system may inadvertently cater to their preferences, making it less accessible or usable for other groups, such as people with disabilities or older adults.
  • Bias in Natural Language Processing (NLP): Language models trained on internet data can reflect and propagate harmful stereotypes. For instance, word associations in some AI language models link certain professions with men (e.g., doctors, engineers) and others with women (e.g., nurses, teachers), reflecting biases present in the data sources (e.g., online text, articles) used for training (the embedding-association sketch below shows one way such associations are measured).
  • Lack of Diverse Representation in Development: The tech industry, particularly in fields like AI and software engineering, often lacks diversity in terms of race, gender, and socio-economic backgrounds. When teams developing these technologies do not reflect the diversity of the broader population, they are more likely to overlook the needs and perspectives of marginalized groups.
  • Reinforcement of Stereotypes: Technology systems like social media algorithms or targeted advertising can create feedback loops in which individuals are repeatedly shown content that aligns with and reinforces their pre-existing beliefs, deepening biases and stereotypes. This can have social consequences, such as promoting extremism, racism, or sexism (the final sketch below simulates such a loop).
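
To make the dataset point concrete, here is a minimal sketch in Python. Everything in it is synthetic and hypothetical: two invented groups with different feature distributions stand in for demographic groups, and a simple classifier is trained on a sample dominated by one of them.

```python
# A toy illustration with synthetic data: a model trained mostly on one
# group fits that group's patterns and errs more often on the other.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift, label_noise=0.1):
    # Two features per individual; `shift` moves the group's true decision
    # boundary, standing in for distributional differences between groups.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    flip = rng.random(n) < label_noise  # sprinkle in some label noise
    y[flip] = 1 - y[flip]
    return X, y

# Training sample: 90% from group A, 10% from group B (underrepresented).
Xa, ya = make_group(900, shift=0.0)
Xb, yb = make_group(100, shift=2.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on balanced held-out samples of each group.
for name, shift in [("group A (majority)", 0.0), ("group B (minority)", 2.0)]:
    Xt, yt = make_group(500, shift=shift)
    print(f"{name} accuracy: {model.score(Xt, yt):.2f}")
```

On a typical run the majority group scores near the label-noise ceiling while the underrepresented group scores close to chance, mirroring the facial-recognition disparity described above.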
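
The profession–gender associations in NLP can be quantified with cosine similarity between word vectors, in the spirit of the WEAT test (Caliskan et al., 2017). The tiny hand-made vectors below are stand-ins for illustration only; a real audit would load pretrained embeddings such as word2vec or GloVe.

```python
# Measuring a word's gender lean as the difference between its cosine
# similarity to "he" and to "she". The vectors are invented 3-d toys.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

vec = {
    "he":       np.array([1.0, 0.1, 0.0]),
    "she":      np.array([0.1, 1.0, 0.0]),
    "doctor":   np.array([0.9, 0.3, 0.2]),  # placed nearer "he" to mimic
    "engineer": np.array([0.8, 0.2, 0.3]),  # the skew found in embeddings
    "nurse":    np.array([0.2, 0.9, 0.2]),  # trained on web text
    "teacher":  np.array([0.3, 0.8, 0.3]),
}

for word in ["doctor", "engineer", "nurse", "teacher"]:
    bias = cosine(vec[word], vec["he"]) - cosine(vec[word], vec["she"])
    print(f"{word}: association score {bias:+.2f} "
          f"(leans {'male' if bias > 0 else 'female'})")
```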
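
Finally, the feedback loop behind stereotype reinforcement can be simulated in a few lines. The recommender below is a hypothetical toy, not any real platform's algorithm: it serves two topics in proportion to past clicks, and a mild initial tilt compounds into a heavily skewed feed.

```python
# A toy feedback loop: serving content in proportion to click history
# amplifies a small initial preference over time (a Polya-urn-like process).
import numpy as np

rng = np.random.default_rng(1)
click_counts = np.array([6.0, 5.0])  # topic A vs topic B: mild initial tilt
true_pref = np.array([0.55, 0.45])   # the user only slightly prefers A

shares = []
for _ in range(2000):
    serve_prob = click_counts / click_counts.sum()  # algorithm mirrors history
    topic = rng.choice(2, p=serve_prob)             # topic shown this step
    if rng.random() < true_pref[topic]:             # user sometimes clicks
        click_counts[topic] += 1
    shares.append(serve_prob[0])

print(f"share of feed devoted to topic A: {shares[0]:.2f} -> {shares[-1]:.2f}")
```

Even though the user's underlying preference is nearly balanced, the feed drifts toward a single topic, which is exactly the loop the bullet above describes.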

Sociological Analysis of Bias in AI

  • Ruha Benjamin in her book Race After Technology argues that AI systems often reproduce historical racial biases, describing this as “the New Jim Code,” where technology reinforces racial hierarchies under the guise of neutrality. She highlights how biased algorithms in sectors like criminal justice and healthcare disproportionately harm marginalized racial groups.
  • Howard Becker’s labelling theory can also be applied in analysing AI bias: AI systems “label” people based on data inputs. For instance, AI algorithms categorize individuals into specific groups based on their behaviour, demographics, or past actions. These labels can lead to stigmatization, especially when they reinforce stereotypes or create feedback loops in which individuals are treated according to biased predictions.
  • Applying Merton’s idea of latent dysfunction, AI can inadvertently produce unintended negative outcomes, such as reinforcing social inequalities.
  • Biased AI also produces anomie in the Durkheimian sense: when AI systems generate biased outcomes, they undermine the social order by creating mistrust and disillusionment, particularly among marginalized groups who are unfairly targeted by these systems.
  • From the conflict perspective, technology, including AI, often serves the interests of the powerful, particularly corporations and states, which use it to control and manipulate populations. This aligns with the broader idea that AI systems tend to reflect and support the socio-economic status quo, favouring dominant social classes while reinforcing the marginalization of others.
    • For example, commercial AI systems are often designed to maximize profit, which may result in biased outcomes that devalue or exploit minority populations. The commodification of data that feeds AI systems often disregards the privacy and autonomy of marginalized groups, furthering their economic exploitation.

By analysing AI bias through these sociological perspectives, it becomes clear that technology does not operate in isolation from society; rather, it reflects and reinforces the social structures, values, and inequalities embedded within it.
