Developments in nanotechnology will be one of the areas of technology studied at CSER.
Researchers at the University of Cambridge have proposed a new centre to study the risks that technology poses to humanity.
The researchers, who include a philosopher, a scientist, and a software engineer, have come together to propose the new centre at Cambridge. The Centre for the Study of Existential Risk (CSER) would address developments in human technologies that might pose “extinction-level” risks to the human species, ranging from biotechnology and nanotechnology to extreme climate change and even artificial intelligence.
“At some point, this century or next, we may well be facing one of the major shifts in human history – perhaps even cosmic history – when intelligence escapes the constraints of biology,” said Huw Price, the Bertrand Russell Professor of Philosophy and one of CSER’s three founders.
Price was speaking about the possible impact of Irving John ‘Jack’ Good’s ultra-intelligent machine, or artificial general intelligence (AGI) as it is called today. Good wrote a paper for New Scientist in 1965 called ‘Speculations concerning the first ultra-intelligent machine’. In it, he wrote that the ultra-intelligent machine would be the “last invention” that mankind would ever make, leading to an “intelligence explosion” - an exponential increase in self-generating machine intelligence.
For Good, who went on to advise Stanley Kubrick on ‘2001: A Space Odyssey’, the “survival of man” depended on the construction of this ultra-intelligent machine.
Price said: “We need to take seriously the possibility that there might be a ‘Pandora’s box’ moment with AGI that, if missed, could be disastrous. I don’t mean that we can predict this with certainty; no one is presently in a position to do that. But that’s the point! With so much at stake, we need to do a better job of understanding the risks of potentially catastrophic technologies.”
Price said the basic philosophy was that “we should be taking seriously the fact that we are getting to the point where our technologies have the potential to threaten our own existence – in a way that they simply haven’t up to now, in human history.”
Price acknowledged that some of the ideas seemed far-fetched, the stuff of science fiction, but insisted that this was part of the point. “To the extent – presently poorly understood – that there are significant risks, it’s an additional danger if they remain for these sociological reasons outside the scope of ‘serious’ investigation.”
“What better place than Cambridge, one of the oldest of the world’s great scientific universities, to give these issues the prominence and academic respectability that they deserve?” he added. “We hope that CSER will be a place where world class minds from a variety of disciplines can collaborate in exploring technological risks in both the near and far future.
“Cambridge recently celebrated its 800th anniversary – our aim is to reduce the risk that we might not be around to celebrate its millennium.”
Price is co-founding CSER with Jaan Tallinn, the co-founder of Skype, and Martin Rees, Cambridge professor of cosmology and astrophysics.
The centre’s launch is planned for next year, the university said.