Introduction

In contemporary digital culture, Artificial Intelligence is frequently imagined as autonomous, objective, and self-learning. Popular discourse celebrates AI as a technological breakthrough that functions beyond human bias and limitation. However, Aranya Sahay’s Humans in the Loop (2024) unsettles this dominant narrative by exposing the hidden human infrastructure that sustains algorithmic systems. The film shifts attention away from the myth of machine autonomy and toward the invisible labour, cultural negotiations, and epistemic hierarchies embedded within digital technologies.

Set in rural Jharkhand, the film follows Nehma, an Adivasi woman who becomes involved in AI data-labelling work. Through her everyday experiences, the narrative reveals how machine learning systems depend upon repetitive human cognition and culturally situated interpretation. What appears as automated intelligence is, in reality, structured by global labour inequalities and knowledge hierarchies. The “human in the loop” is not merely a technical requirement; it is the ethical and political foundation upon which AI operates.

This blog, written in response to the worksheet designed by Prof. Dilip Barad for the film screening, critically examines how Humans in the Loop represents AI, labour, and indigenous knowledge systems. Drawing upon concepts from film theory, such as representation, ideology, apparatus theory, and Marxist cultural analysis, the discussion explores how the film renders visible the invisible architecture of digital capitalism. Ultimately, the film invites viewers to reconsider not only how machines learn, but whose knowledge is recognized, whose labour is valued, and whose realities are flattened within algorithmic frameworks.

PRE-VIEWING WORKSHEET

A. Background & Core Themes

🔹 AI Bias & Indigenous Knowledge Systems

  • AI bias refers to the way algorithmic systems reflect the assumptions, priorities, and cultural limitations embedded within their training data and design structures.

  • Although AI is often presented as neutral and mathematical, it learns from human-labelled data, which is already shaped by social hierarchies and dominant worldviews.

  • Bias, therefore, is not simply a technical error but a structural condition built into the system.

  • Indigenous ecological knowledge systems operate differently from algorithmic logic. They are relational, experiential, spiritual, and context-based rather than binary and classificatory.

  • Where AI systems depend on fixed categories and predefined labels, indigenous epistemologies emphasize interconnectedness between land, memory, ancestry, and community.

  • If such lived knowledge must be translated into rigid technological frames, reduction and misrepresentation may occur (a toy sketch of this lossy translation follows this list).

  • This suggests that AI bias may also function as epistemic exclusion—where certain ways of knowing are marginalized because they do not fit computational structures.

  • The tension between machine categorization and indigenous knowledge seems central to the film’s thematic exploration.
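
To make this idea of epistemic exclusion concrete, the sketch below models a closed labelling schema in Python. It is purely illustrative: the categories, keyword rules, and image descriptions are invented for this post, not drawn from the film or from any real annotation tool.

```python
# Hypothetical annotation schema: a fixed, closed set of labels.
# Nothing outside these four categories can be recorded at all.
SCHEMA = ("forest", "road", "building", "obstacle")

def label_image(description: str) -> str:
    """Collapse a rich description into the nearest schema label.

    A toy stand-in for a labelling interface: the annotator may know
    far more than the schema can express, but must still pick one.
    """
    text = description.lower()
    if "road" in text or "path" in text:
        return "road"
    if "house" in text or "building" in text:
        return "building"
    if "grove" in text or "forest" in text or "tree" in text:
        return "forest"
    return "obstacle"  # catch-all: whatever the schema cannot name

# A sacred grove and a timber plantation become indistinguishable:
print(label_image("sacred grove where ancestors are remembered"))
print(label_image("rows of eucalyptus trees planted for timber"))
# both print "forest"
```

Both descriptions collapse into “forest”; the relational meaning of the first never enters the dataset. The bias lies in the schema itself, not in any individual mistake.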

🔹 Labour & Digital Economies

  • Invisible labour in digital economies refers to the hidden human work that supports automated systems such as AI.

  • This includes data-labelling, content moderation, tagging, annotating, and correcting machine outputs.

  • Although AI is described as “self-learning,” it depends heavily on repetitive human input.

  • Such labour is often outsourced to economically vulnerable regions, where workers perform cognitive tasks for low wages and little recognition.

  • The invisibility of this workforce sustains the myth of technological autonomy.

  • Highlighting this labour is significant because it shifts the narrative from innovation to exploitation.

  • It reveals that automation does not eliminate labour; rather, it redistributes and obscures it.

  • In digital capitalism, human perception and attention themselves become commodities.

  • By focusing on invisible labour, the film likely challenges viewers to reconsider who actually builds the technological future and who benefits from it (a rough sketch of an annotation queue follows this list).
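
As a rough, hypothetical illustration of the scale of this hidden work, the following Python sketch simulates one worker clearing a labelling queue. The queue size and per-label wage are invented numbers, not figures from the film or any real platform.

```python
# Toy model of an annotation pipeline: every "automated" training
# example begins life as an individual human judgment, paid per item.
from dataclasses import dataclass

@dataclass
class Task:
    image_id: int
    label: str | None = None  # empty until a worker decides

def annotate(queue: list[Task], wage_per_label: float) -> float:
    """Simulate one worker labelling an entire queue; return total pay."""
    for task in queue:
        task.label = "forest"  # stand-in for a real human decision
    return len(queue) * wage_per_label

queue = [Task(image_id=i) for i in range(1_000)]
pay = annotate(queue, wage_per_label=0.02)
print(f"{len(queue)} human judgments earned ${pay:.2f}")
# 1000 human judgments earned $20.00 -- labour the finished model's
# "intelligence" will silently absorb.
```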

🔹 Politics of Representation

  • Based on publicity and reviews, the film appears to position an Adivasi woman at the center of a story about Artificial Intelligence, which already disrupts common stereotypes.

  • Mainstream representations often portray tribal communities as disconnected from modern technology or confined to exoticized cultural imagery.

  • This film seems to resist that binary by showing coexistence between indigenous life and digital labour.

  • Technology is not portrayed as purely progressive or glamorous; it is shown as dependent on marginalized workers.

  • At the same time, Adivasi culture is not romanticized but represented as grounded, lived experience.

  • The contrast between nature and technological interfaces may visually emphasize tensions between ecological knowledge and algorithmic systems.

  • Representation, therefore, operates on two levels:

    • It reframes AI as socially embedded rather than autonomous.

    • It reframes indigenous identity as intellectually engaged within global digital structures.

Points to Ponder While Watching

Narrative & Storytelling

While watching the film, it becomes clear that Nehma’s personal life is not separated from the larger algorithmic structures she works within. The narrative carefully situates her domestic space alongside the digital interface, suggesting that technological systems are not distant abstractions but embedded within everyday life. Scenes of family interaction, economic concern, and cultural practice unfold in parallel with repetitive labelling work, foregrounding how labour reshapes emotional and social rhythms. The storytelling makes visible the intrusion of global digital economies into intimate rural spaces.

When Nehma “teaches” AI through tagging and categorizing images, the process destabilizes the popular idea of machine autonomy. The human-machine loop appears less like technological advancement and more like a relationship of dependency. Although the machine is supposedly learning, it becomes evident that its intelligence relies entirely on her interpretation. At the same time, Nehma seems to gradually adapt her perception to fit the system’s rigid categories, raising the question of whether the learning process is reciprocal or asymmetrically imposed.
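
The asymmetry of that loop can be sketched in code. The toy “model” below is nothing like a real machine-learning system; it is a per-feature majority vote invented for this post, meant only to show that everything the machine “knows” arrives through human corrections, while the human must answer within a fixed category list.

```python
# A deliberately crude human-in-the-loop trainer: the "model" is a
# per-feature majority vote whose entire knowledge is human input.
from collections import Counter, defaultdict

CATEGORIES = ["forest", "road", "obstacle"]  # fixed before work begins
memory: dict[str, Counter] = defaultdict(Counter)

def predict(feature: str) -> str:
    votes = memory[feature]
    return votes.most_common(1)[0][0] if votes else CATEGORIES[0]

def correct(feature: str, human_label: str) -> None:
    assert human_label in CATEGORIES, "humans must fit the schema too"
    memory[feature][human_label] += 1

# The loop: machine guesses, human corrects, machine absorbs.
for feature, truth in [("winding dirt track", "road"),
                       ("winding dirt track", "road"),
                       ("fallen branch", "obstacle")]:
    if predict(feature) != truth:
        correct(feature, truth)

print(predict("winding dirt track"))  # "road", learned from corrections
```

The asymmetry sits in correct(): the human’s judgment must pass the schema check before it counts, while the machine never expands its label set to meet her.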

Representation & Cultural Context

The representation of Adivasi culture in the film avoids both romanticization and marginalization. Rituals, language, and ecological knowledge are presented as part of lived reality rather than exotic spectacle. The forest is framed not merely as scenic background but as a meaningful environment tied to memory, belonging, and identity. This grounded portrayal resists dominant media stereotypes that often depict tribal communities as disconnected from technological modernity.

Instead, Nehma exists at the intersection of indigenous life and digital labour. She is neither portrayed as technologically incompetent nor culturally frozen in time. By positioning her at the center of AI production, the film complicates the binary opposition between “traditional” and “modern.” Representation operates as critique, challenging the assumption that technological participation belongs exclusively to urban or elite subjects.

Cinematic Style & Meaning

The film’s mise-en-scène and cinematography visually construct a contrast between organic life and digital structure. The forest sequences are often captured in wide frames with natural lighting and fluid movement, conveying openness and continuity. In contrast, scenes of digital labour are tightly framed, dominated by the cold blue glow of the computer screen. The screen frequently occupies central space in the composition, visually enclosing Nehma within a rectangular boundary. This spatial restriction mirrors the conceptual confinement of algorithmic categories.

Sound design further intensifies this contrast. Ambient natural sounds—wind, birds, distant village conversations—create a sensory richness that underscores relational life. However, during labelling sequences, repetitive mouse clicks and keyboard taps dominate the soundscape. The editing rhythm also shifts between slower pastoral pacing and mechanical repetition. Together, these formal elements shape the viewer’s emotional experience of labour and subtly communicate the tension between analog existence and digital abstraction.

Ethical & Political Questions

As the film unfolds, ethical dilemmas emerge when culturally specific images must be reduced to simplified labels. Sacred landscapes and context-rich environments are translated into neutral technological categories, raising questions about whether AI systems can accommodate cultural depth. The issue is not merely technical accuracy but epistemic authority—who decides what constitutes valid knowledge?

The metaphor of the “human-in-the-loop” extends beyond its technical meaning. Politically, it reveals the dependency of global AI industries on marginalized labour. Socially, it highlights unequal power structures embedded in digital capitalism. Culturally, it exposes the risk of reducing complex knowledge systems into standardized data points. The film encourages viewers to consider AI not simply as innovation but as a contested terrain shaped by labour, hierarchy, and representation.

POST-VIEWING REFLECTIVE ESSAY

AI, Bias & Epistemic Representation

AI, Knowledge, and the Politics of Recognition in Humans in the Loop

Aranya Sahay’s Humans in the Loop (2024) challenges the dominant belief that Artificial Intelligence operates as a neutral, objective, and autonomous system. In popular discourse, AI is often imagined as a purely technical achievement, functioning through mathematical precision beyond human bias. However, the film disrupts this narrative by revealing the deeply human foundations of machine learning. Through the lived experience of Nehma, an Adivasi woman working as a data-labeller in Jharkhand, the film exposes how AI systems are shaped by cultural assumptions, labour hierarchies, and epistemic power structures.

Rather than portraying technology as detached from society, the film situates AI within global inequalities and competing knowledge systems. It demonstrates that algorithms do not simply “learn”; they are trained within specific ideological frameworks that determine what counts as valid knowledge.

Algorithmic Bias as Cultural Construction

One of the film’s central arguments is that algorithmic bias is not merely a technical flaw but a cultural condition. AI systems rely on predefined categories and training datasets, which are themselves products of social and historical contexts. When Nehma labels images for machine learning systems, she must choose from limited options that reflect utilitarian and infrastructural priorities. Landscapes are categorized as “forest,” “road,” or “obstacle,” but there is no space within the interface for sacred, ancestral, or spiritually significant meanings.

This absence reveals that bias is embedded within the structure of classification itself. The machine does not recognize what it has not been programmed to value. Through Nehma’s hesitation while labelling culturally complex images, the film dramatizes the friction between relational indigenous knowledge and rigid computational logic. The issue is not that the machine makes mistakes; it is that its framework excludes alternative epistemologies from the outset.
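
To underline that this exclusion is architectural rather than a matter of accuracy, consider a small sketch of a classifier’s output layer; the class names and scores below are hypothetical. A softmax can only distribute probability across the classes fixed at training time.

```python
# Even a "perfect" classifier redistributes probability only over
# the classes it was given; other meanings get zero mass by design.
import math

CLASSES = ["forest", "road", "obstacle"]

def softmax(scores: list[float]) -> list[float]:
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for an image of a sacred grove:
for cls, p in zip(CLASSES, softmax([2.1, -0.3, 0.4])):
    print(f"{cls:>8}: {p:.2f}")
```

“Sacred site” is not a low-probability outcome here; it is not an outcome at all. The exclusion happens before inference begins.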

In this way, the film reframes AI bias as epistemological limitation. Algorithms reflect the worldview of their designers and the dominant socio-economic systems within which they are developed.

Epistemic Hierarchies and the Politics of Knowledge

The film highlights a clear epistemic hierarchy between technological rationality and indigenous ecological knowledge. Computational systems operate through binary logic, segmentation, and standardization. In contrast, Nehma’s worldview is relational, contextual, and spiritually grounded. The forest, for her, is not merely a geographic entity but a living space embedded with memory and identity.

However, within the technological system, this knowledge carries no institutional authority. The algorithm privileges standardized categories over experiential depth. This hierarchy reveals whose knowledge counts within digital infrastructures and whose is reduced to data points.

The film suggests that AI does not simply process information; it organizes reality according to dominant frameworks. Those who design the categories control the structure of meaning. Those who supply the labour operate within imposed constraints. Nehma contributes essential cognitive labour, yet she does not possess the power to redefine the system’s epistemic boundaries.

Representation, Ideology, and Apparatus Theory

From a film studies perspective, the relationship between technology and knowledge in the film can be examined through the concept of representation. Representation is not a neutral reflection of reality; it is a construction shaped by ideological forces. The AI interface functions as a representational apparatus that frames the world according to its programmed logic.

Apparatus Theory, traditionally applied to cinema, argues that media technologies shape ideological perception through framing and positioning. In Humans in the Loop, this concept operates on two levels. First, the cinematic apparatus frames Nehma’s labour for the audience. Second, the algorithmic interface frames reality for Nehma. The viewer watches Nehma watching the screen, creating a layered structure of mediation.

This reflexive framing draws attention to how knowledge is filtered through technological systems. Just as cinema determines what is visible within the frame, AI determines what is legible within its grid. Both systems produce meaning through selection and exclusion.

By foregrounding screens within the filmic composition, Sahay emphasizes the ideological dimension of technological mediation. The audience becomes aware that objectivity is constructed, not inherent.

Power Relations in Digital Capitalism

The film also illustrates algorithmic bias within global power structures. AI development is largely controlled by corporations and institutions in economically dominant regions, while data-labelling labour is outsourced to marginalized communities. This uneven distribution of authority reflects broader inequalities within digital capitalism.

Nehma’s role illustrates this asymmetry. She is indispensable to the functioning of the system, yet she remains peripheral in terms of recognition and decision-making power. Her knowledge is instrumentalized but not institutionalized. This dynamic exposes how global technological systems depend on marginalized labour while simultaneously disciplining it.

The “human-in-the-loop” metaphor thus carries political significance. It reveals that AI is not autonomous but reliant on human input. However, the loop is unequal. The machine depends on her labour, yet she must adapt to its categories. The relationship is structured by power, not reciprocity.

Conclusion

Humans in the Loop ultimately reframes Artificial Intelligence as a culturally embedded system rather than a neutral technological innovation. By exposing algorithmic bias as structurally embedded and by highlighting epistemic hierarchies within digital infrastructures, the film challenges viewers to reconsider the politics of knowledge in the age of AI.

The narrative demonstrates that technology does not merely reflect reality; it organizes it. What cannot be categorized risks invisibility. Through its careful use of narrative, visual contrast, and reflexive framing, the film makes visible the ideological foundations of machine learning.

The question the film leaves us with is not simply whether machines can learn, but whether technological systems can accommodate the plurality of human knowledge without reducing it to standardized data. In revealing the architecture of the invisible, Humans in the Loop compels us to rethink not only artificial intelligence but also the hierarchies of recognition that sustain it.