Recognising and embracing AI in research

AI tools are especially beneficial for non-native English speakers and scholars from the Global South. Photo by Igor Omilaev/Unsplash

Artificial Intelligence and human endeavour can work together in harmony to reshape scholarly work.

By Namesh Killemsetty, O.P. Jindal Global University and Prachi Bansal, O.P. Jindal Global University

May 8, 2025

The AI revolution has transformed the world – not only by creating charming Ghibli-inspired images but also by prompting us to rethink how we conduct research. As tools like ChatGPT and Google NotebookLM redefine how information is accessed and synthesised, researchers find themselves divided. 

Some see generative AI as a transformational ally, capable of accelerating discovery and democratising knowledge. Others view it with suspicion, fearing it threatens the core values of creativity, critical thinking and academic rigour. 

This divide is particularly sharp in academic circles, where the use of AI is too often caricatured as a shortcut – outsourcing entire papers to a machine. But that oversimplifies a more nuanced reality. Like any emerging technology, the ethical and productive use of AI depends not on the tool itself, but on how we choose to wield it.

Researchers today face a clear choice: use AI to automate tasks or to augment their abilities. Automation implies full delegation – letting a tool generate a literature review, write an abstract or even draft entire sections of a paper. Augmentation, by contrast, is about assistance: refining outlines, identifying relevant works, or summarising dense material. It keeps the human firmly in the loop.

There is no question that AI can streamline workflows. It can help format references, draft a plain-language summary or provide a surface-level overview of a topic. But we must draw boundaries. AI cannot – at least not yet – grasp the subtle nuances of a specific research problem or weigh conflicting interpretations of complex data. It lacks context, judgement and the lived experience of scholarly work.

Generative AI’s shortcomings go beyond mere limitation – they can pose risks to scholarly integrity. Many AI tools, including ChatGPT, are prone to “hallucinations”, confidently fabricating or falsifying information. In one classroom example, a student using AI to locate literature on slum policies in India was presented with a fictional title whose author was a hybrid of the student’s first name and their PhD supervisor’s surname. No such book existed; the reference read less like a real source than an aspiration for work the student might one day undertake with that supervisor. Another example in the same class involved AI fabricating the title of a report supposedly published by a major global NGO. On verification, no record of any such document could be found.

Risks of misinterpretation

Recently, a generative AI tool misinterpreted a 1959 article by merging words from two different columns, resulting in the creation of a new term: “Vegetative Electron Microscopy”. This term does not exist in the scientific community, yet it has already appeared in over 20 published research papers. 

These are not harmless errors; they can undermine trust and credibility in academic writing. These issues stem in part from how large language models are trained. The datasets often include internet content with little to no scholarly oversight – Reddit threads with as few as three upvotes, blog posts and low-quality forums all feed into what is ultimately presented as authoritative knowledge.

Purpose-built academic tools such as Scite, Research Rabbit, Elicit and Inciteful represent a step in the right direction for using AI in research. They offer scholars promising avenues to accelerate literature discovery, visualise citation networks and synthesise ideas across papers, going beyond general-purpose AI by tailoring their features to academic workflows.

However, their limitations are significant. Most rely heavily on open-access databases like Semantic Scholar and PubMed, which means they exclude large volumes of literature locked behind paywalls – often home to the most critical and nuanced research.

This is especially problematic for disciplines such as the humanities and social sciences, where key work often appears in subscription-only journals. Another common shortfall is their reliance on abstracts rather than full-text articles. 

While summaries and keyword analysis offer a quick overview, they miss the nuance and rigour found deeper in a paper’s methodology, argumentation or theoretical framework. Moreover, the semantic links generated between articles can be misleading, as these tools struggle to distinguish agreement from contradiction or to account for disciplinary differences.

Wise usage

Despite certain limitations, these platforms excel when used wisely. Google’s NotebookLM provides quick summarisation and can convert podcasts to text. Elicit and SciSpace are particularly strong in conceptual synthesis. Inciteful facilitates meta-analysis by mapping relationships among authors, institutions and citations.

When used alongside traditional tools like Google Scholar – and with the occasional visit to a library – these technologies can significantly enhance the research process. For non-native English speakers and scholars from the Global South, AI tools are especially beneficial. In addition to helping with the tasks mentioned above, they can bridge linguistic gaps, clarify complex ideas and improve global access to locally relevant research.

The ethical landscape surrounding the use of AI in research is continually evolving. Scholars must create personal ethical frameworks to guide their use of these tools. Recognising bias – both in the data and within the model itself – is crucial. It’s also essential to understand when the use of AI crosses into the realm of plagiarism. 

As peer-reviewed academic journals increasingly mandate the disclosure of AI assistance, transparency is becoming essential, not optional. A growing number of academic publishers now encourage or require authors to disclose how AI tools have contributed to their work – whether in drafting text, generating summaries or conducting literature searches. This move is an important step toward maintaining academic integrity while embracing innovation.

Researchers need to be cautious about relying too heavily on AI-generated content, especially when it comes to interpretation and argumentation. Over-delegating intellectual work to machines can flatten complex ideas into generic narratives, undermining the originality essential to quality scholarship.

Additionally, ethical AI use involves educating both students and colleagues. Universities have a duty to integrate AI literacy into research training, addressing issues such as authorship, consent and proper attribution. The future of AI in academia will not only depend on the tools we choose but also on how responsibly we use them.

The future of research isn’t AI versus human – it’s AI and human. If we want to preserve the integrity of academic inquiry while embracing the power of emerging tools, we must be thoughtful and transparent in how we integrate AI into our work.


The revolution is here. Let’s not waste time resisting it. Instead, let’s shape it – wisely.

Namesh Killemsetty is an Associate Professor at the Jindal School of Government and Public Policy, O.P. Jindal Global University, Sonipat, Haryana.

Prachi Bansal is an Assistant Professor at the Jindal School of Government and Public Policy, O.P. Jindal Global University, Sonipat, Haryana.

Originally published under Creative Commons by 360info™.
