
AI in higher education is a reality, but it must be deployed and used more responsibly and equitably.
by Prachi Bansal and Namesh Killemsetty, O.P. Jindal Global University
May 2, 2025
In a classroom discussion on women’s work in India, a student confidently presented the AI-generated response to her prompt “summarise women’s work in India”. The result was a well-structured essay on how women in the Indian economy are engaged in agriculture and how their work is underpaid.
While many dimensions of this response were correct, what it missed completely was the unpaid care work women do in Indian households. It was a textbook example of algorithmic bias — when women’s unpaid work isn’t measured well, it doesn’t appear in the datasets AI models are trained on.
And when students rely on these tools without question, they risk internalising those same silences. More and more teachers now face similarly unsettling classroom moments as students increasingly over-rely on AI.
In response to these experiences, four archetypes of teachers are emerging as AI’s presence in academia grows. First are those who consider AI a hindrance to authentic learning and do not think it should change anything about their teaching pedagogy. They resist integrating AI, perceiving it as incompatible with discipline-specific epistemologies.
These are the traditionalists. Their approach works better in subjects such as mathematics and physics, where foundational knowledge must be taught in depth and the teaching can be abstracted from the contemporary world.
Second are the pragmatic integrators, who adopt AI and integrate it into their pedagogy as and when they think it helps their classrooms. They maintain their agency but use AI for simple tasks such as lesson planning, generating examples for a concept, and experimenting with different kinds of assessments.
Third are the covert users, who use AI tools (primarily ChatGPT) behind the scenes but are unwilling to acknowledge that use in front of students or peers, whether out of discomfort with the ethical implications, institutional guidelines, or fear of undermining their authority.
Last are the AI collaborators, who are building learning experiences with AI and are transparent about its use. They expose students to AI tools, allow them to use AI, and prepare them to recognise the biases and problems in AI-generated content.
Critical questions
Irrespective of the teacher type – and a teacher can fall into different categories depending on the course being taught – most instructors today are grappling with two key questions: one, what does it mean to learn or teach in the presence of AI? Two, how can academic integrity and student agency be upheld?
Besides, there are questions of an ethical nature. If students use ChatGPT to answer questions or write their term papers, it is described as cheating; but what if teachers use it? A course on public policy, for instance, cannot avoid discussing AI altogether, and a teacher’s expertise in AI is critical for engaging with students.
Technology has long played a transformative role in classroom pedagogy, far preceding the advent of AI. For instance, science education has historically benefited from laboratory experiments, which offered students tangible, experiential learning opportunities that stood in stark contrast to rote memorisation.
Similarly, in the contemporary teaching of statistics and econometrics, the integration of statistical software such as R, Stata, or Python has gained traction. These tools enable students to engage with real-world data and internalise theoretical concepts through application.
Many undergraduate and postgraduate students experience econometrics as a largely theoretical subject. It was only when a noted statistician and theoretician demonstrated how statistical software could put these theories into practice that students began to fully understand the concepts. Seeing the theory in action made the abstract mathematics much more relatable and accessible.
A similar moment of clarity occurred when simulations were used to teach the law of large numbers – a foundational concept in statistics. Inspired by these experiences, many university teachers now incorporate simulations into their teaching of probability distributions, allowing students to visualise and experiment with statistics. These methods significantly enhance student engagement and conceptual understanding.
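To make the idea concrete, here is a minimal sketch of the kind of simulation such a class might use (our illustration, not a specific exercise from any particular classroom): a short Python script using NumPy that shows the average of repeated fair die rolls settling towards the expected value of 3.5 as the number of rolls grows.

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # fixed seed keeps the demo reproducible

EXPECTED_VALUE = 3.5  # expected value of a fair six-sided die

# As the number of rolls grows, the sample mean drifts towards 3.5:
# the law of large numbers made visible.
for n_rolls in (10, 100, 1_000, 10_000, 100_000):
    rolls = rng.integers(low=1, high=7, size=n_rolls)  # high is exclusive
    sample_mean = rolls.mean()
    print(f"{n_rolls:>7} rolls: mean = {sample_mean:.4f} "
          f"(gap from 3.5 = {abs(sample_mean - EXPECTED_VALUE):.4f})")
```

Plotting the running mean against the number of rolls, rather than printing it, gives students the visual, experimental feel described above.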
Making learning effective
In this broader context, AI should be viewed not as a radical rupture, but as a continuation – and evolution – of the pedagogical tradition of leveraging technology to make learning more interactive, personalised and effective. However, AI is not just a visual aid or statistical software; it has adaptive and generative abilities.
These same abilities also lead it to hallucinate: to fabricate data, misattribute sources, or produce conceptually flawed explanations with high linguistic confidence. Teachers now face the particular challenge of building the digital literacy needed to interrogate AI outputs critically.
Just as students are taught to read texts critically or interpret data with caution, they must now be equipped with AI literacy – the ability to engage with generative tools such as ChatGPT with a discerning eye. This involves understanding how AI works, recognising its limitations and developing strategies for verification and triangulation. In doing so, AI becomes not a shortcut to learning, but a site for deeper inquiry and reflection.
If it is argued that AI has the potential to exacerbate existing inequalities – whether through biased data, algorithmic opacity or differential access to technology – then it is imperative that students are equipped to identify and interrogate these biases. This means going beyond merely using AI tools to engaging with questions such as: Whose data is this model trained on? What perspectives are missing? Why does this output seem biased or skewed?
By incorporating critical data literacy and algorithmic awareness into the curriculum, students can begin to see AI not as a neutral authority, but as a product of human design, carrying the values, assumptions and limitations of its creators. Teaching students how to spot patterns of exclusion, detect stereotyping and question AI-generated narratives is a vital step toward using AI responsibly and equitably in education.
While the risks of AI in exacerbating inequality must be taken seriously, it is equally important to recognise that AI also holds tremendous potential to bridge gaps in pedagogy, particularly for differently abled learners and teachers. Specialised AI tools such as screen readers powered by natural language processing, speech-to-text and text-to-speech systems, real-time captioning, sonification tools, and AI-driven sign language recognition are already transforming accessibility in classrooms.
For students and teachers with visual, auditory or cognitive impairments, these tools can create more equitable learning environments by offering personalised, multimodal learning experiences.
Other interactive platforms such as Mentimeter, Kahoot and Quizizz are fostering more participatory and responsive classrooms by allowing students to engage anonymously, respond at their own pace and visualise collective understanding in real time. Together, these technologies represent not just a disruption, but a democratisation of learning – one that, if guided thoughtfully, can create more inclusive, engaging and learner-centric pedagogies.
Prachi Bansal is an Assistant Professor at the Jindal School of Government and Public Policy, O.P. Jindal Global University, Sonipat, Haryana.
Namesh Killemsetty is an Associate Professor at the Jindal School of Government and Public Policy, O.P. Jindal Global University, Sonipat, Haryana.
Originally published under Creative Commons by 360info.