
The temptation to use GenAI for academic writing is strong, especially for academics with heavy workloads under pressure to publish. Research suggests these tools can help scholars publish more papers, and that those papers receive, on average, more citations than papers produced without AI.
In light of increased AI use in research, many publishers have introduced policies requiring researchers to declare that use. However, those declarations do not always accurately reflect the nature of the AI use, and AI use doesn’t always lead to high-quality outcomes. In this resource, we offer guidance on when and how to use it ethically and responsibly for academic writing.
GenAI tools can be helpful for checking grammar, polishing style and improving the overall flow and clarity of writing, especially for non-native speakers. However, they struggle to create genuinely new, academically rigorous and relevant content.
In academic writing, assumptions must be justified through data collection and analysis, and claims must be supported with credible evidence, often by referring to past research. Many AI tools are not good at finding rigorous academic sources because they draw on what is freely available online. This means they may lean towards blog posts and other freely available online material, rather than peer-reviewed literature that sits behind a paywall. In some cases, they can even fabricate references, producing “hallucinated” citations that look convincing but don’t exist.
Fake references are the most obvious danger, but another common issue is misleadingly plausible referencing. An AI tool might suggest a real paper that sounds relevant but doesn’t support your claim, or it might attach a weak citation to a strong statement, making the argument appear better evidenced than it really is. It may even recommend citing well-known scholars for work they never published in a journal.
The solution is straightforward: open every source you cite. Check the DOI, title, authors and publication year, read the abstract and confirm the paper truly backs up your statement. A good compromise is to ask AI for search terms or themes, then do the actual database searching and verification yourself.
GenAI can open new avenues for research and highlight different ways of working. But an individual’s AI literacy influences how effectively they can use it for idea generation.
At the same time, AI may nudge scholars toward narrower, “data-rich” topics, reducing broader scientific engagement and making it harder to speak to wider academic audiences. It may also encourage researchers to overlook areas where less data exists, favouring high-visibility, data-rich phenomena instead.
If used carefully, AI tools can help academics identify patterns and gaps in literature, especially when they are trained using high-quality, relevant academic sources.
With new AI capabilities emerging fast, marketing researchers have started exploring “silicon samples”, AI-generated consumer data that can be automatically produced to reflect the characteristics of a target population. So far, most work on silicon samples has focused on text-based synthetic data, such as AI-generated interview transcripts or survey responses.
These silicon samples can be a convenient alternative to traditional data collection, saving time and cost. But like any AI-generated content, they can reflect the biases of the data they were trained on and reproduce stereotypes.
Debates persist about how far AI can go in replacing or supplementing real-world data, and whether it can capture the richness and complexity of human behaviour and emotions. For example, AI can now generate product reviews that are highly readable, coherent, relevant, and informative, and are often difficult to distinguish from reviews written by humans.
AI can make a paragraph sound smoother, but it can’t take responsibility for its mistakes. If a tool edits “associated with” to “caused by”, or adds an overconfident claim, you will be the one answering reviewers, correcting the record or dealing with complaints. The same applies to ethics, conflicts of interest and data integrity: AI can’t sign forms, justify decisions or explain how evidence was handled.
Treat AI as an assistant, not an author: only submit sentences you would feel comfortable defending. And if you disclose AI use, a vague statement such as “we used AI” can raise more questions than it answers. Reassure editors and readers with a practical description of what you used AI for and how you managed the risks. For example: “We used ChatGPT to improve readability and shorten sentences in the introduction; all technical claims, interpretations and references were written and verified by the authors.”
If AI supported coding or analysis, say that too, and note that outputs were rerun, validated and checked. This kind of transparency makes it clear that AI was not used to invent evidence.
Remember that sensitive data must not go into your prompts.
A simple test: if you wouldn’t send it to a stranger, don’t upload it to a public AI tool. Safer options include institution-approved systems or working with heavily anonymised excerpts.
Eleonora Pantano is associate professor and Marios Kremantzis is senior lecturer, both at the University of Bristol.