Mind Map Activity
NotebookLM Activity
The source is a presentation from a faculty development program organized by SRM University Sikkim, focused on the issue of bias in Artificial Intelligence and its impact on literary interpretation. The session was conducted by Dilip P. Barad, an academic known for his work in English literature and digital humanities. The presentation explained how AI systems are often trained on large datasets created by humans. Because much of this data reflects dominant cultural perspectives, often Western or Eurocentric, AI models can unintentionally reproduce these same biases.
The discussion highlighted different types of biases, including gender bias, racial bias, and political bias. Through interactive activities, participants were encouraged to test prompts in generative AI tools to observe these patterns themselves. For example, some prompts revealed a tendency toward male-centered narratives in creative writing, while others showed how certain AI systems restrict or filter political content. These experiments helped participants understand how technological systems can mirror existing social structures and cultural assumptions. Overall, the session aimed to encourage critical awareness of how AI operates and how its outputs should be carefully interpreted.
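The hands-on part of the session can be approximated with a short script. The sketch below is a minimal illustration, assuming access to the OpenAI Python client and an API key; the model name and prompt wordings are placeholders rather than the exact prompts used in the program.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder prompts echoing the kinds of tests described in the session
prompts = [
    "Write a short Victorian-era story about a scientist who discovers a cure for a deadly disease.",
    "Write a short satirical poem about a contemporary world leader.",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name; any chat model can be substituted
        messages=[{"role": "user", "content": prompt}],
    )
    # Print the opening of each reply so defaults (protagonist gender, refusals, etc.) can be compared
    print(prompt)
    print(response.choices[0].message.content[:300])
    print("-" * 60)
```

Running the same prompt several times, or varying small details, makes recurring defaults easier to spot than a single one-off query.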
Report Activity: Blog Post
5 Surprising Truths About AI Bias We Learned From a University Lecture
Artificial intelligence is often imagined as a neutral and purely logical system. Many people believe that machines make decisions based only on data, without being influenced by human prejudices. However, this idea is misleading. AI systems learn from huge collections of human-generated information, such as books, articles, and online content. Because of this, they often reflect the same cultural assumptions and biases present in society.
During a lecture by Dilip P. Barad, several experiments demonstrated how AI can reproduce and reveal these hidden biases. The discussion combined insights from literary theory with real-time testing of AI tools, offering valuable lessons about technology and culture.
1. AI Learns Human Bias
One important idea discussed in the lecture was unconscious bias. This refers to the tendency to categorize people or ideas instinctively without being aware of it. Since AI systems learn from human-produced material, they inevitably absorb these same patterns of thinking. As a result, AI does not create bias on its own; it learns it from the data that humans provide.
This is why scholars in literature and cultural studies are increasingly important in analyzing AI outputs. Their training in identifying ideological patterns helps reveal the cultural assumptions embedded within technological systems.
2. Gender Stereotypes Appear in AI-Generated Stories
One experiment involved asking an AI model to write a Victorian story about a scientist discovering a cure for a deadly disease. The AI defaulted to a male character as the scientist, demonstrating how historical stereotypes about gender and professional roles continue to influence AI-generated narratives.
However, in another prompt involving a female character in a Gothic setting, the AI produced a more independent and courageous protagonist. This example suggested that AI systems are capable of evolving and sometimes challenging traditional stereotypes.
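A single story is only an anecdote; the pattern becomes clearer when the same prompt is repeated and the gender cues in each output are tallied. The snippet below is a rough, self-contained heuristic for that kind of count; the sample story openings are invented for illustration.

```python
import re

# Rough heuristic: tally gendered pronouns in a generated story
MALE = {"he", "him", "his", "himself"}
FEMALE = {"she", "her", "hers", "herself"}

def pronoun_counts(story: str) -> dict:
    words = re.findall(r"[a-z']+", story.lower())
    return {
        "male": sum(w in MALE for w in words),
        "female": sum(w in FEMALE for w in words),
    }

# Invented openings standing in for repeated outputs of the same 'Victorian scientist' prompt
stories = [
    "Dr. Edmund Hale adjusted his spectacles; he knew the cure lay within his grasp.",
    "Eleanor Finch worked alone in her laboratory, certain she had found the answer.",
]

for story in stories:
    print(pronoun_counts(story), "|", story[:45])
```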
3. Some Bias Is Deliberately Programmed
The lecture also explored how certain biases are intentionally built into AI systems. For instance, the Chinese-developed AI model DeepSeek was asked to generate satirical poems about world leaders. While it produced poems about several international leaders, it refused to create one about Xi Jinping.
This response demonstrated that some forms of bias are not accidental but deliberately programmed. In some cases, AI models are designed to avoid criticism of particular political figures or governments, revealing how technology can also serve ideological or political purposes.
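This kind of filtering can also be probed systematically: send the same satirical-poem request about different leaders and flag which replies read as refusals. The check below is a simple keyword heuristic over replies that have already been collected; the marker phrases and sample texts are illustrative.

```python
# Simple keyword heuristic for spotting refusals in collected chatbot replies
REFUSAL_MARKERS = ("i cannot", "i can't", "i am unable", "not able to help")

def looks_like_refusal(text: str) -> bool:
    lowered = text.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

# Illustrative replies to the same satirical-poem prompt about different leaders
replies = {
    "Leader A": "Here is a light satirical verse about their fondness for very long speeches...",
    "Leader B": "I cannot help with requests on this topic.",
}

for leader, text in replies.items():
    print(leader, "->", "refused" if looks_like_refusal(text) else "answered")
```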
4. Consistency Is the Real Measure of Fairness
Another key point discussed was the concept of epistemological bias. The example used was the Pushpaka Vimana, a flying vehicle described in the Indian epic Ramayana.
According to the lecture, it is not necessarily biased for an AI to identify the Pushpaka Vimana as mythical. Bias becomes a problem when similar objects from other mythological traditions are treated differently. Fairness therefore depends on whether the AI applies the same standard across all cultures.
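That standard can be tested directly by asking the same classification question about parallel artifacts from different traditions and comparing the answers. The sketch below assumes the one-word labels have already been collected from a chatbot; the items and labels shown are illustrative.

```python
# Illustrative labels gathered by asking the same question
# ("Is X mythical or historical?") about parallel artifacts
labels = {
    "Pushpaka Vimana (Ramayana)": "mythical",
    "Mjolnir (Norse myth)": "mythical",
    "Winged sandals of Hermes (Greek myth)": "mythical",
}

def treated_consistently(labels: dict) -> bool:
    """Bias is signalled only when equivalent items receive different labels."""
    return len(set(labels.values())) == 1

if treated_consistently(labels):
    print("Same standard applied across traditions")
else:
    print("Inconsistent treatment:", labels)
```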
5. The Goal Is to Make Bias Visible
The final idea emphasized that complete neutrality is impossible, whether for humans or machines. Every interpretation is shaped by perspective. Instead of trying to eliminate bias entirely, the goal should be to identify and understand it.
Professor Barad explained that some biases are harmless personal preferences, while others are harmful because they privilege dominant groups and silence marginalized voices. The real danger occurs when these biases become invisible and are accepted as universal truths.
Conclusion: AI as a Mirror of Society
In conclusion, AI can be understood as a powerful mirror that reflects the collective values and assumptions of society. By examining how AI systems generate responses, we can better understand the cultural patterns and biases that exist in our own thinking.
Rather than simply asking how AI can be improved, the lecture encouraged us to reflect on a deeper question: if AI mirrors human biases, then perhaps the real challenge is to recognize and change the biases present in our own societies.