Three essential elements for science-fit Generative AI
24 Sep 2025

Generative AI will become ever more embedded within scientific research; the challenge is to keep it fit for purpose. Focus on the trusted, intuitive and fast, says Cameron Ross.
Research estimates that by the end of 2025, more than 30% of new drugs and materials will be discovered using Generative AI techniques [1]. It’s clear that interest in AI tools among scientific researchers is only going to grow.
But many publicly available GenAI tools and their underlying large language models (LLMs) are not designed for the nuance or deep domain knowledge that scientific R&D requires. This lack of rigour could have serious consequences: one mistake in a chemical compound name, for example, could derail an expensive research project. Rushing into AI adoption without carefully considering which tools are suitable for R&D could cause more problems than it solves.
It’s becoming harder for organisations to know which GenAI tools to invest in as new solutions continually emerge. To drive scientific innovation safely and effectively, R&D organisations need to look for tools that are trusted, intuitive, and fast.
Trusted: R&D users in fields including pharmaceuticals, chemicals and engineering are working in heavily regulated industries. Trust in AI outputs is essential, so scientific accuracy and transparency must be baked into any GenAI tool. Answer hallucination – a well-known risk with off-the-shelf LLMs – must not be a factor in a science-fit AI tool.
The chance of hallucination can be dramatically reduced by restricting retrieval to a curated dataset of trusted, peer-reviewed content relevant to the domain a scientist or organisation is investigating. It is also vital that any answers generated are directly linked to their exact sources for full transparency. A smart GenAI tool could even allow users to select a particular phrase and see the specific passages it was drawn from. And if the tool can find no relevant information in its data, it should be designed to say that it cannot answer the query.
In short, a science-fit GenAI tool will not invent answers just to please the user – but will instead be focused on surfacing, summarising and synthesising information while still allowing users to dig deeper into the source texts.
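As a rough sketch of what this looks like in practice, the snippet below mimics a retrieval-grounded question-answering flow: answers are built only from passages retrieved from a curated corpus, every answer carries its exact sources, and the tool declines to answer when nothing relevant is found. The names, the relevance threshold and the placeholder retrieval and summarisation steps are illustrative assumptions, not any particular product’s API.

from dataclasses import dataclass

@dataclass
class Passage:
    doc_id: str   # identifier of the peer-reviewed source document
    text: str     # the exact passage retrieved
    score: float  # retrieval relevance score between 0 and 1

RELEVANCE_THRESHOLD = 0.6  # assumed cut-off below which the tool declines to answer

def retrieve_passages(query, corpus, k=5):
    # Placeholder retriever: in practice this would be a vector or hybrid
    # search over the ringfenced, peer-reviewed dataset.
    return sorted(corpus, key=lambda p: p.score, reverse=True)[:k]

def summarise(passages, query):
    # Placeholder for a grounded LLM summarisation step that only ever sees
    # the retrieved passages, never the open web.
    return f"Synthesised answer to '{query}' based on {len(passages)} cited passages."

def answer_with_citations(query, corpus):
    passages = [p for p in retrieve_passages(query, corpus) if p.score >= RELEVANCE_THRESHOLD]
    if not passages:
        # Refuse rather than hallucinate when the curated corpus has nothing relevant.
        return {"answer": "No relevant information found in the trusted dataset.", "sources": []}
    return {
        "answer": summarise(passages, query),
        # Every answer is returned with the exact source passages that support it.
        "sources": [{"doc_id": p.doc_id, "passage": p.text} for p in passages],
    }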
Intuitive: If a GenAI tool isn’t more intuitive for researchers to use than other literature search tools, then it will present little additional value to an organisation. For this reason, natural language querying and responses are extremely important for effective GenAI in R&D.
Natural language capabilities eliminate the need to construct complex keyword searches. They also allow tools to understand the context and intent behind a user query, and to handle the spelling variations, abbreviations and synonyms that are common in scientific domains, improving accessibility for users of all experience levels.
Speaking with a GenAI research tool should be like having a conversation with a well-informed colleague. You can ask a question like “Is drug X effective?” but also follow up with “Why?” – and a smart GenAI tool should be able to interpret that “Why?” as “Why does this drug work in this way?”.
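To make the idea concrete, here is a toy illustration of how a tool might resolve a terse follow-up like “Why?” into a self-contained query using the conversation history, and expand common abbreviations before searching. The synonym table and the rewrite rule are simplified assumptions, not a description of any specific product.

# Simplified assumptions: a tiny abbreviation/synonym table and a crude
# rule for folding terse follow-ups into the previous question.
SYNONYMS = {
    "asa": "acetylsalicylic acid",
    "mi": "myocardial infarction",
}

def normalise(query):
    # Expand known abbreviations so retrieval sees canonical terms.
    return " ".join(SYNONYMS.get(word.lower(), word) for word in query.split())

def resolve_follow_up(history, follow_up):
    # A very short follow-up ("Why?") is combined with the previous question
    # so the retrieval step receives a fully specified query.
    if history and len(follow_up.split()) <= 2:
        return f"{history[-1]} -- follow-up: {follow_up}"
    return follow_up

history = ["Is ASA effective for secondary prevention?"]
print(normalise(resolve_follow_up(history, "Why?")))
# -> "Is acetylsalicylic acid effective for secondary prevention? -- follow-up: Why?"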
Likewise, giving users a synthesised summary of available literature in natural language rather than a long list of articles allows them to quickly see what the most pertinent insights are for their query.
Fast: Ultimately, the most impactful way GenAI tools can support R&D organisations is by reducing the time it takes to identify mission-critical insights. Any tool researchers use needs to cut through to the specific insights required to answer a research question quickly and accurately.
GenAI excels at analysing huge volumes of data and finding connections. Coupled with natural language search and the additional functionality that advanced GenAI tools make possible, this means scientists can get to critical insights in minutes rather than hours. That additional functionality includes exporting results in a format suitable for record keeping, or constructing tables that directly compare data from different experiments.
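A brief sketch of that last point, assuming the tool has already extracted key values from a handful of studies into a structured form (the field names and figures below are invented for illustration):

import pandas as pd

# Invented example records standing in for values a GenAI tool has extracted
# from different experiments or papers.
extracted = [
    {"source": "Study A (2023)", "compound": "Compound X", "yield_pct": 72, "temp_C": 80},
    {"source": "Study B (2024)", "compound": "Compound X", "yield_pct": 85, "temp_C": 95},
    {"source": "Study C (2024)", "compound": "Compound Y", "yield_pct": 63, "temp_C": 80},
]

# Build a side-by-side comparison table, ordered by reported yield.
comparison = pd.DataFrame(extracted).sort_values("yield_pct", ascending=False)
print(comparison.to_string(index=False))

# Export in a format suitable for record keeping.
comparison.to_csv("experiment_comparison.csv", index=False)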
When researchers are confident that answers are produced from verified and ringfenced datasets, checking the veracity of AI-generated answers takes considerably less time. While retaining a human in the loop is vital to scientific R&D, the constraints built into domain-specific, purpose-built GenAI tools mean researchers can avoid spending hours trawling through dozens of texts to find the linked insights needed to verify results.
The future is in reach
The future of GenAI in science isn’t in generic tools but in purpose-built assistants that understand what scientists want and need. Sophisticated tools designed with researchers in mind are already helping fuel the massive increase in AI-designed drugs and materials.
The most important principle, though, is that AI should not be thought of as a replacement for human researchers. Rather, it should be both an assistant that removes the most time-consuming and frustrating elements of their work, and a “thought partner” that can stimulate, challenge and enhance scientists’ knowledge. This will allow researchers to focus their time and critical thinking skills on what they do best: advancing human knowledge.
[1] https://www.sciencedirect.com/science/article/pii/S0148296324000468?via%3Dihub
Cameron Ross is SVP generative AI at Elsevier