Ethical and medical challenges, combined with technological advances and improved data capabilities, have encouraged the use of computational modelling in preference to animal testing. A complete replacement, though, may take longer to achieve.
Within the engineering sector and many of the overlapping process industries, the virtual twin has been a familiar presence, recognised for its value in enhancing predictive maintenance and staff training programmes.
By contrast, computer modelling’s application within biomedical and related spheres has been more limited, not only in diagnosis and treatment but also in testing the drugs used for them. Hardly surprising, perhaps, given that constructing all or part of a virtual human remains a still more complex task than creating a ‘twin’ oil refinery.
Traditionally, drug efficacy testing has depended on the ability to source data from living systems – primarily animal-based. While this has the virtue of ascertaining a medicine’s effectiveness within another, non-human, living system, there are limitations: a drug’s performance in an animal species cannot be guaranteed to be reproduced in a human subject.
Thanks to significant developments in a variety of areas, including machine learning, cloud computing and data storage and analysis, it has become possible to combine and analyse information from existing human and animal sources more effectively. As a result, in silico modelling offers the possibility of approaches that are not only more accurate but also faster and, when scaled up, cheaper.
Work such as that carried out by Oxford University [1], employing simulation to better predict cardiotoxicity, or Rockefeller University’s research [2] on the synthetic antibiotic cilagicin, demonstrates the effectiveness of computational approaches. The European Commission-funded CompBioMed centre of excellence, meanwhile, is focused on the overarching concept of the Virtual Physiological Human (VPH), with the aim of not merely replicating the human body as a single complex system but eventually enabling what one of its videos describes as a ‘virtual doppelganger’ of any individual.
Economics and changing social mores have also had an impact: animal testing can be expensive and has become increasingly controversial. Growing political pressure over several decades has contributed to legislative restrictions, whose benefits in terms of welfare improvements have been accompanied by stricter compliance requirements.
Set against this background, the potential of in silico approaches might have been evident, but it has taken longer to realise in practice. Good-quality computational models rest on two key foundations: rigorous testing and validation, and access to high-quality, usable data that follows accepted principles for storage and access, says Professor Ilias Kyriazakis of the Institute for Global Food Security, Queen’s University Belfast.
And, he adds, while advances in mathematical techniques have enhanced the value and accuracy of models, “the major contribution towards this has been the advances in data storage and their availability”.
“We are now able to use enormous amounts of data as inputs to simulation models and the quality of data can be scrutinised, especially if stored under the principles of good laboratory practice, are associated with good data management plans [and] are accompanied by the use of metadata, etc,” continues Kyriazakis.
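To illustrate the point (the example below is ours, not drawn from Kyriazakis’s work, and every field name in it is hypothetical), machine-readable metadata accompanying a dataset can be as simple as a small ‘sidecar’ file recording what was measured, in what units and under what conditions:

```python
import json

# Hypothetical metadata 'sidecar' for a dose-response dataset;
# the field names are illustrative, not a formal standard.
metadata = {
    "dataset": "dose_response_2024.csv",
    "system": "human hepatocyte cell line",
    "units": {"dose": "mg/kg", "response": "% viability"},
    "collected_under": "good laboratory practice (GLP)",
    "data_management_plan": "DMP-2024-017",
    "contact": "data.manager@example.org",
}

# Write the metadata alongside the dataset it describes.
with open("dose_response_2024.json", "w") as f:
    json.dump(metadata, f, indent=2)
```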
While scientific communities are often “seduced” by novel methods, he cautions, they can forget that the success of simulation models relies heavily on the data employed.
Dr Roman Bauer, Lecturer in Computational Biology at the Department of Computer Science, University of Surrey, emphasises the contribution of technological advances in boosting modelling accuracy in recent years. Specifically, he cites the development of open-source libraries with active user and developer communities, such as OpenCV, PyTorch and Open MPI; the availability of familiar tools to support interactive collaboration (GitHub, Slack, Google Docs); and the benefits that accrue from container software such as Docker and Singularity, which speeds the uptake and validation of computer models and increases their flexibility and usability.
Collaboration and feedback underpin the factors that help ensure the quality of models – validation, peer review and reproducibility – continues Bauer. Models must be thoroughly compared with real-world data from the wet lab, with their agreement statistically validated, and they must be assessed by independent domain experts, whose feedback should be addressed, he advises.
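In practice, that statistical comparison can be straightforward. The sketch below is purely illustrative – the numbers are invented and it is not taken from Bauer’s own work – but it shows the kind of agreement check he describes, correlating model output with matched laboratory measurements:

```python
import numpy as np
from scipy import stats

# Hypothetical example: compare a model's predicted drug responses
# against wet-lab measurements for the same experimental conditions.
predicted = np.array([0.12, 0.35, 0.56, 0.71, 0.88])  # model output
measured = np.array([0.10, 0.38, 0.52, 0.75, 0.85])   # lab data

# Agreement statistics: Pearson correlation (with p-value) and
# root-mean-square error between prediction and measurement.
r, p_value = stats.pearsonr(predicted, measured)
rmse = np.sqrt(np.mean((predicted - measured) ** 2))

print(f"Pearson r = {r:.3f} (p = {p_value:.3g}), RMSE = {rmse:.3f}")
```

A high correlation with low error does not by itself validate a model, of course; it is a minimum requirement before the independent expert review Bauer describes.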
Sharing code and datasets is crucial to enable use and evaluation by other labs and research groups, insists Bauer, and both should employ formats and languages in common use, so that they remain usable in future.
The achievements of simulation models in assessing new drugs in place of animal testing have been notable, acknowledges Kyriazakis, generating “information that is actually more suitable to guide relevance for human drugs”. Better data acquisition, furthermore, reduces the need for incremental animal research on questions of minor impact, where existing data can be combined with simulation instead.
Both academics, however, maintain that reductions in animal experimentation are unlikely to result in its demise in the short to medium term, at least.
“Using animals as models to answer questions about animals (eg using pigs to answer questions about pig diseases) is likely to continue in the medium term,” says Kyriazakis, adding that limited experimentation may be required for testing, “for example, for emerging pathogens”.
Bauer concurs that it will likely take a decade to substantially reduce the use of animals in research: “Such testing is vital to obtain comprehensive information from changes in the central nervous system, the immune system [and] the gastrointestinal system,” he advises. “There is currently no alternative, as these extremely complex systems cannot be fully and reliably modelled/predicted with in silico models at the moment. Without significant increases in computational biology research and funding, this will likely continue to be the case for at least 10 years.”
References:
1 Frontiers in Physiology, 12 September 2017 (Passini, Britton, Lu, Rohrbacher, Hermans, Gallacher, Greig, Bueno-Orovio, Rodriguez)
2 Science, 26 May 2022, Vol 376, Issue 6596, pp 991-996 (Wang, Koirala, Hernandez, Zimmerman, Brady)