
From Data to Decision: Unveiling the Human Biases in AI and Algorithms

  • Writer: Policy Corner JSGP
  • Apr 9
  • 5 min read

By Vanshika Singh



Abstract

In today’s digital era, artificial intelligence (AI), algorithms and technology are widely seen as the epitome of objectivity, free from bias. Lacking consciousness and personal experience, they appear neutral, operating on logic and precise calculation. Beneath this facade, however, lies another reality. This paper explores the subtle yet significant ways in which AI and algorithms shape our collective identity and online public discourse.


Illusion of Objectivity in AI

AI operates on algorithms to process information. An algorithm is essentially a set of instructions, based on calculations, that transforms “input data into a desired output” when triggered (Gillespie, 2016). At a time when not only the mathematics of these calculations but all information is being digitised, we rely on algorithms to select the most relevant information from vast datasets. Gillespie (2016) calls these public relevance algorithms, because our discourses and experiences shape them. How an individual interacts with an algorithm in turn shapes it, and the algorithm highlights certain things over others for that individual. For example, if you spend some time viewing an Instagram post about a beach vacation, the algorithm will show you more content on island towns, beachwear, seafood and the like. However, algorithms are not personalised as such; the same algorithm produces different results depending on the databases fed into it. WhatsApp, for instance, ships one algorithm for everyone, yet the database (your contacts) shapes what you are shown.
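Gillespie’s definition — a set of instructions that turns input data into a desired output — can be made concrete with a toy sketch. This is not any platform’s actual code; the topic tags, dwell times and scoring rule are invented purely for illustration of how viewing history (input) becomes a ranked feed (output):

```python
def rank_feed(viewing_history, posts):
    """Rank posts by how much time the user has spent on their topics."""
    # Input data: accumulate seconds of attention per topic.
    interest = {}
    for topic, seconds in viewing_history:
        interest[topic] = interest.get(topic, 0) + seconds

    # A post's "relevance" is the summed interest across its tags.
    def score(post):
        return sum(interest.get(tag, 0) for tag in post["tags"])

    # Desired output: the feed, most "relevant" first.
    return sorted(posts, key=score, reverse=True)

# The user lingered on beach content, glanced at food content.
history = [("beach", 45), ("food", 10)]
posts = [
    {"id": 1, "tags": ["politics"]},
    {"id": 2, "tags": ["beach", "travel"]},
    {"id": 3, "tags": ["food", "beach"]},
]
feed = rank_feed(history, posts)
print([p["id"] for p in feed])  # beach-heavy posts rise to the top: [3, 2, 1]
```

Even this trivial version shows the point made above: the ranking feels automatic, yet a human chose to weight dwell time, chose the tags, and chose the scoring rule.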


How Algorithms Shape Information Processing

Gillespie (2016) elaborates on six aspects of public relevance algorithms that carry political significance. By examining these aspects, the paper explores how our collective understanding of the world is shaped. The first is “patterns of inclusion”: before algorithms can work on datasets, the data must be collected. In contemporary times, every interaction, page view and click leaves a digital footprint and is recorded extensively; even in Incognito mode, an individual’s activities are recorded by the sites visited. This raises concerns over what is recorded: is only public information captured? What counts as public? Who consents? The data is then ‘readied.’ Yet the categorisation of data carries immense implications, because defining the categories and their content rests on perceived standards. Who decides these standards, and what they are, are important questions to ask. Information is thus included and excluded, which in turn shapes the diversity of public discourse. So while algorithms may appear automated, their underlying patterns, set by humans, dictate what counts as acceptable and legitimate content.


Second is “cycles of anticipation.” Though platforms already collect extensive details, they want to know their users on a deeper level, so they can predict what you want before you even type it. Such predictions are made from a ‘digital profile’ built on an individual’s browsing habits across the web. These predictions are not precisely accurate, and so they can influence what an individual thinks they want to see, besides raising privacy concerns. The conclusions providers draw from these predictions can be more problematic still. Third is “evaluation of relevance,” wherein an algorithm’s hidden criteria decide what is relevant by picking what is “important.” These criteria are subjective and judgement-based, and hence the choices can be political: Twitter’s “trending,” Google’s “top stories,” and Microsoft Edge’s “what’s trending” all rest on opaque criteria. Fourth is the “promise of algorithmic objectivity,” which presents the algorithm’s technical nature as fair and objective, thereby lending it credibility. It leads us to ponder how human knowledge is being shaped: Google may show “trending news,” but when more than 85% of Google’s revenue comes from advertising, the content it pushes is called into question.
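The opacity of “evaluation of relevance” can be illustrated with a deliberately simplified sketch. The formula and the hidden multiplier below are invented, not any platform’s real ranking signal; the point is only that a single unseen parameter can silently reorder what counts as “trending”:

```python
def trending_score(mentions_now, mentions_before, hidden_weight=1.0):
    """Toy trending metric: growth rate, scaled by an undisclosed weight."""
    # Velocity: how fast a topic is growing, not how large it is.
    velocity = (mentions_now - mentions_before) / max(mentions_before, 1)
    # The hidden weight — commercial, editorial, or otherwise — is
    # invisible to users, yet it decides the final ordering.
    return velocity * hidden_weight

# Topic A genuinely grew faster than topic B...
organic = trending_score(1200, 1000)                      # velocity 0.2
boosted = trending_score(1100, 1000, hidden_weight=3.0)   # velocity 0.1, tripled

print(boosted > organic)  # True: the boosted topic "trends" higher anyway
```

A user sees only the resulting list; the judgement embedded in `hidden_weight` is exactly the kind of subjective, potentially political choice the paragraph above describes.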


Fifth is “entanglement with practice.” Algorithms are highly calculated human practices, and it becomes a question of who shapes whom and what shapes what. A kind of domestication of these technologies occurs: as we reveal ourselves to tools like algorithms, we also incorporate them into our routines, altering their meaning and design. For instance, until recently a Google Images search for “school girl” displayed pornographic material, whereas “school boy” did not. This was rectified, but it reveals the underlying biases of algorithms, which perpetuate harmful stereotypes sexualising girls. We thus feed into each other’s ways of acquiring knowledge. The sixth and last aspect is the “production of calculated publics.” When we are guided by algorithms, coupled with our tendency to seek like-minded people, we enter “filter bubbles” where we encounter only news and content that align with our beliefs. Technologies today not only evaluate but actively fabricate representations of publics.
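The filter-bubble mechanism can be sketched in a few lines. Again, this is an illustrative toy with invented tags and an arbitrary similarity threshold, not a real recommender: content survives only if it overlaps enough with what the user already liked, so dissimilar viewpoints quietly disappear from view.

```python
def jaccard(a, b):
    """Tag-overlap similarity between two sets, from 0.0 to 1.0."""
    return len(a & b) / len(a | b) if a | b else 0.0

def filtered_feed(liked_tags, candidates, threshold=0.5):
    """Keep only candidates sufficiently similar to the user's past likes."""
    return [tags for tags in candidates if jaccard(tags, liked_tags) >= threshold]

liked = {"left-politics", "economy"}
candidates = [
    {"left-politics", "economy"},   # aligned with past likes: survives
    {"right-politics", "economy"},  # shares a topic but opposed: filtered out
    {"sports"},                     # unrelated: filtered out
]
feed = filtered_feed(liked, candidates)
print(len(feed))  # only the aligned item remains
```

Feed the surviving items back in as new “likes” and the bubble never widens — the loop of evaluation and fabrication that Gillespie describes.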


Autonomy vs. Algorithmic Control

This ability of AI to evolve and adapt in response to acquired inputs and data, generating new algorithms, is termed “intelligence.” But what does intelligence mean? Is intelligence merely the interpretation of data? Algorithms only generate predictions based on probabilities. Chomsky et al. (2023) argue that humans have an inherent capacity for language and can engage in abstract reasoning beyond mere pattern recognition; true intelligence lies in the ability to grasp nuance while thinking creatively and morally, something unique to human intellect. Hayashi (2023) counters Chomsky with the argument that AI can produce descriptive and analytical knowledge, while agreeing that AI cannot produce explanatory knowledge. Yet the datasets of analytical knowledge from which AI produces its output also contain biases. Today, algorithms and AI are touted as symbols of unbiased aptitude, but the site of bias has merely shifted to the backend, and the backend of every algorithm is a product of human labour.


This discussion is not to suggest that algorithms are all terrible. They are valuable in contexts that require interpreting enormous amounts of data, for instance where AI tools are used to identify cancer cells in the human body. However, all the data collected today can be correlated and made interoperable through the judgements an algorithm makes, and those judgements are coded by humans.


About the Author

Vanshika Singh is a final year public policy student with interests in environment, technology, and governance.


References

Gillespie, Tarleton (2016) ‘Algorithms’ in Digital Keywords, ed. Benjamin Peters.

Chomsky, Noam, et al. (2023) ‘The False Promise of ChatGPT’, The New York Times.

Hayashi, Ryusei-Best (2023) ‘Can AI Create Knowledge? A Counter to Noam Chomsky et al.’, Medium. https://ryuseibesthayashi.medium.com/can-ai-create-knowledge-a-counter-to-noam-chomsky-et-al-06589c6f05d6

Ismail, Kaya (2018) ‘AI vs. Algorithms: What’s the Difference?’, CMSWire.com, 26 Oct. 2018. www.cmswire.com/information-management/ai-vs-algorithms-whats-the-difference

Scott, Mark, et al. (2024) ‘Anatomy of a Scroll: Inside TikTok’s AI-powered Algorithms’, POLITICO, 7 May 2024. www.politico.eu/article/anatomy-scroll-inside-tiktok-ai-powered-algorithm-israel-palestine-war




Liability

"Policy Corner" falls under the Jindal School of Government and Public Policy. Policy Corner's social media handles are kept free of advertisements, as it is funded in its entirety by O.P. Jindal Global University, Sonipat.

Policy Corner does not endorse the views of its writers or of any parent or associated organisation.


OP Jindal Global University, Sonipat-Narela Road, Sonipat, Haryana - 131001, India
