Computers have revolutionised science, but are scientists ready for the next wave of computerised innovations? According to Microsoft Research, many don’t know their HTML from their elbows…
They are more a part of a scientist’s working life than ever before, but unless scientists can get to grips with computers, the very future of science could be at risk.
In the summer of 2005, a group of international experts was brought together by Microsoft Research Cambridge to take part in a workshop considering the direction science will take over the next 15 years. In particular, the group focused on the potential for computing and computer science to revolutionise science, and how it could speed breakthroughs on some of the greatest challenges of the 21st century.
The resulting report – Towards 2020 Science – is the first to comprehensively analyse the potential computer science has to transform the way science is conducted in the future.
It not only highlights the importance of computing in science, but also the need for scientists to grasp how computing can help them in their work. The writers are clear that technology is outpacing the scientists’ ability to use it, pointing out that: “the global scientific community has not fully adjusted, nor is it well equipped, to take advantage of the new information landscape.”
It may come as no surprise that a research centre funded by the world’s largest software company would come up with a report stating that computers and science are now inextricably linked, but in reality there is no denying the importance of the silicon chip to scientific endeavour.
But it is not just the here and now that is the issue
|Tomorrow’s scientists will have to be computer literate as well as scientifically literate|
according to the group. Stephen Emmott, a director at Microsoft Research Cambridge, told computing website vnunet.com that it is the future of computing in science that needs to be secured. “Tomorrow’s scientists will need to be highly computationally literate as well as being highly scientifically literate. We need to rethink how we educate today’s children in order to ensure that we have the new kinds of scientists that we need for tomorrow’s science.”
This is a view held by many of the 34 leading scientists that made up the panel. Professor Andrew Parker of the High Energy Physics Group at Cambridge University pointed out that even at graduate level students don’t always have the computer literacy needed. He said: “When I take on a PhD student they come to me trained very well in mathematics and physics, and they’re trained to solve problems that can be expressed on two sides of A4, because that’s what the examination system presents them with.
“They might have used computers before or even an oscilloscope, but they will have no training and no experience of data handling, data analysis or many of the things we need to make a single meaningful plot of modern scientific data. So we need to completely change the way we train the next generation of scientists in order to tackle the challenges.”
So far, computer science has largely evolved in relative isolation from the scientific community. However, by pushing the limits of computing hardware to solve problems in other areas, such as the business or video-game industries, computer science has developed a body of knowledge with theoretical and practical applications that are ideal for many scientific problems.
Indeed, the application and importance of computing is set to grow dramatically across almost all the sciences towards 2020. The report predicts that the next 15 years will see progress in the ‘codification’ of scientific knowledge. Codification means, quite literally, turning knowledge into a coded representation, so that it can be searched, compared and analysed using a variety of computational techniques.
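To make the idea concrete, here is a toy sketch (not taken from the report) of what codification looks like in practice: once observations are stored as structured records rather than free text in a lab notebook, they can be searched and compared programmatically. The record fields and the values are invented for illustration.

```python
# Hypothetical example of 'codified' knowledge: structured records
# that can be queried, rather than prose that can only be read.
from dataclasses import dataclass

@dataclass
class Observation:
    organism: str
    gene: str
    expression_level: float  # arbitrary units (illustrative values)

records = [
    Observation("E. coli", "lacZ", 12.4),
    Observation("E. coli", "recA", 3.1),
    Observation("S. cerevisiae", "GAL1", 8.7),
]

# Search: every highly expressed gene, regardless of source organism.
highly_expressed = [r.gene for r in records if r.expression_level > 5.0]
print(highly_expressed)  # → ['lacZ', 'GAL1']
```

The same records could just as easily be compared across repositories or fed into a statistical analysis, which is precisely what free-text knowledge cannot support.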
Biology leads the way
Biology is one area where codification – and the associated computer expertise – is seen as crucial to scientific progress. A good example has been the human genome project, and a glimpse of the computerised innovations that are to come can already be seen in the field of bioinformatics.
Dr Matt Wood of the Institute of Computational Biomedicine at Cornell University explains: “The wealth of data flowing from experimental life science research has huge potential to unlock some long-unanswered biological questions. However, extracting meaning from the vast bodies of data that now exist is challenging. Bioinformatics attempts to use methods from statistics and computer science to analyse this data.”
For bioinformaticians, data management is vital, but the goal should always be to allow scientists to access and analyse data easily, from any number of disparate repositories, and to have sufficient computational power to process it.
But it is not just the scale of the data involved that makes computer analysis vital in biology; it is also the breadth and depth of the relationships, and potential relationships, contained within the data. To try to uncover these hidden trends and patterns, the more advanced bioinformatic approaches use artificial intelligence techniques to carry out the entire cycle of scientific experimentation: formulating hypotheses to explain observations, devising experiments to test those hypotheses, and implementing the experiments – using robots – to falsify them.
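The hypothesise–test–falsify loop described above can be sketched in a few lines. This is a deliberately minimal, hypothetical illustration: the “experiment” here is a stand-in function with invented results, whereas real systems of this kind drive actual laboratory robots.

```python
# Minimal sketch of a closed-loop experiment cycle: propose hypotheses,
# run an experiment for each, and discard any hypothesis the result
# falsifies. All gene names and outcomes are invented for illustration.

def run_experiment(gene):
    """Simulated knockout experiment: does the organism still grow?"""
    # Simulated ground truth: only geneB is essential for growth.
    return gene != "geneB"

# Each hypothesis: "knocking out <gene> stops growth."
hypotheses = {"geneA", "geneB", "geneC"}

surviving = set()
for gene in sorted(hypotheses):
    grows = run_experiment(gene)   # implement the experiment
    if not grows:                  # growth stopped: hypothesis supported
        surviving.add(gene)
    # otherwise the hypothesis is falsified and dropped

print(surviving)  # → {'geneB'}
```

In a real system the surviving hypotheses would feed back into the next round of experiment design, closing the loop the article describes.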
In this post-genomic age, however, bioinformaticians are hopeful that they can also crack what has been hailed as the ‘next frontier’ – systems biology. And it is not just biologists hoping that the complexity of systems will be the key to their next breakthrough; scientists in many other disciplines are keen to understand and predict how complex systems produce coherent behaviour. Few scientists can imagine a way to do this without the use of large computing power – and harnessing that power will of course require very specific skills and knowledge.
The info cram
So, it is inevitable that scientists will have to rely on computing to get to the answers that they want, and ever increasing data sets will mean they will undoubtedly have to use increasingly complex computing platforms. But is it really necessary that the experimental scientists themselves are advanced computing experts? Would it not be better for scientists to work with computer experts as opposed to trying to take on all their skills?
“In many areas that is how it currently works to be honest,” explains Dr Wood. “In my department I pretty much act as a hired hand for any lab that needs computational guidance or expertise. Of course there are many informaticians and computer scientists that work on their own scientifically valid research, but some do act as experts for hire.”
However, this idea has been swiftly met with indignation by scientists on the 2020 panel. “A scientist not interested in computing is an oxymoron,” suggests Ehud Shapiro, a professor at the Weizmann Institute of Science in Israel.
“A physicist cannot hire a mathematician to sit in the next office and help him do things. To be a good physicist you must also be a good mathematician. We believe that tomorrow’s biologist will not be able to be a poor computer scientist. It will not work out to hire a computer scientist to sit next door and do the computation for the research. He will not be able to be a good biologist without being a good computer scientist.”
The group’s message to the scientific community is clear then. Future generations must be taught more effective computing skills. “They don’t need IT courses on how to read their email and do word processing; they need computational science courses which are relevant to analysing large data collections, searching, making hypotheses, doing simulations,” said Andrew Parker.
He added: “We need to build this sort of computer science training into school level and right through university if we want to have scientists capable of taking advantage of the experiments we’ll be able to deliver in the next 15 years.”
However, with the number of students studying science subjects constantly falling, there is a larger problem to deal with before the specifics of computer science can be tackled.
Stephen Emmott explains: “People are leaving science because it is very badly taught and it’s very uninteresting. They’re not allowed to use chemicals that might go bang or make a bad smell in the lab. If you insist on making it very utilitarian, they will be turned off in droves.”
Computing continues to change how science is done, not only enhancing current methods but actually enabling new kinds of experiments. If the cultural shift away from science education can be halted, and students can be brought back and inspired, then the problem of a computer skills shortfall can be solved. But if the trend continues, it is not just computer science that will suffer, but the entire spectrum of scientific discovery.
by Phil Prime, assistant editor, Laboratory News