
Tick-tock, tick-tock

How long, do you think, will we be doing science? It is now impossible to ignore the fact that the clock must be ticking down.

I should stress that I think empirical study of all aspects of existence – that is to say, science – will continue. What I think is finite is our direct involvement in it. It is the ‘we’ that is limited, not the science.

The co-founder of DeepMind – the artificial intelligence company with the lofty aim of melding insights from neuroscience with new developments in computing hardware – thinks that this could be the year AI makes its first proper, full-fat scientific discovery alone. Demis Hassabis, the 41-year-old former chess master and video-games designer, also says he has found a way to “make science research efficient”. Do you think this will involve more, or less, human activity?


Don’t get me wrong – I’m not against AI. The work of science has, by its very nature, always been in deep concert with mechanical and computational advances. As instruments and equipment incrementally improve, so does the insight we can glean from experimentation – indeed, even the types of experimentation we can perform are entirely reliant on technology. What’s more, in so far as it is a partnership, this is vital – science faces problems that will, in all likelihood, only be solved in this way.

But with these advances in AI, how long until the human scientist is considered inefficient? The slowest cog in a machine under constant pressure to run ever faster. How long before the human scientist is simply the caretaker of their computerised counterpart?

Again, it’s hard not to sound like a doom-monger, but the truth is this might not be a bad thing. Science is objective and unbiased. Humans are not. We are good scientists only when we overcome this… which is trickier than it sounds. Bias and subjectivity are very much part of our makeup. But that needn’t be the case for AI.

It can’t be stressed enough that this really could be a revolution. I know editors like to say things like that, but thinking – the ability to use an intellect to draw together the strands of our findings in order to improve understanding – has so far remained within the boundary of the soft greyish matter inside our skulls. Now, though, one has to think hard about what it is that the ‘wetware’ of our brain has over the hardware of the AI systems of the not-too-distant future. All I can come up with (using my own, albeit limited, wetware) is curiosity: the ability to apply intelligence in a general way in order to wonder. Being ‘domain non-specific’ is what evolution has armed our brains so excellently to do, and it is this cognitive prowess that is proving so hard to infuse into an AI. (Interestingly, the way we have been performing science is putting even this at risk.)

So, with ever-increasing lab automation, and more and more analysis – even hypothesis formation – available to outsource to our mechanised and computerised tools, I’d like to ask you, the very people this will affect the most: what will a scientist – a human one – be in 20 years’ time?

Phil Prime

Managing Editor
