The idea of machines turning into man’s worst enemy is not a new one. Whole movies have been devoted to this theme, from 2001: A Space Odyssey, where the computer HAL “eliminates” humans because it thinks they will sabotage a mission, to The Matrix, where Neo has to rescue the human race from machines that are harvesting people like battery chickens.
Where this doomsday theory used to make for interesting conversation in the past, it is taken much more seriously today due to the huge advances made in the development of artificial intelligence. Swedish philosopher Nick Bostrom summarised the threat in 2003 with his “paperclip maximiser” thought experiment, in which a superintelligent machine goes awry after being programmed to ensure a company never runs out of paperclips.
The computer turns the command into its sole goal, “eliminating” anyone and anything that stands in its way. While machines like this do not yet exist, cracks are already showing in the form of biases in some of the artificial intelligence programmes that are already in operation. These biases can take many forms, ranging from race to gender, ethnic group, social class and age.
The problem with the paperclip maximiser theory, however, is that it rests on a very narrow view of what “super intelligence” really is.
Super intelligence in its essence suggests an understanding superior to anything we have ever seen. So wouldn’t an entity with this type of intelligence be smart enough to read between the poorly constructed commands of a human and understand that the goal is to make life easier for people by ensuring, in this case, that they have access to paperclips when they need them?
The thing about superintelligent computers is that they will be radically different from any technology we have ever developed. As a result, we are unable to look back at history, or to human and animal nature, to predict what they would be like. They might be totally different from humans, not needing any motivations but merely autonomously searching for ways to make the world a better and fairer place.
Machine bias could also become a thing of the past, with superintelligent machines being able to monitor themselves for bias and recode themselves when needed. Most of the machine bias with which we are struggling at the moment is in any case due either to humans consciously or unconsciously transferring their biases onto machines during programming, or to bias in the data from which the machines are learning.
A better tomorrow
Bostrom is probably right when he says in his paper “Ethical Issues in Advanced Artificial Intelligence” that “super intelligence may be the last invention humans ever need to make.” Need to make, because these entities will thereafter be so powerful that they will radically accelerate all other technological development, which in turn can help to solve poverty, environmental concerns, diseases and other issues with which the human race is currently grappling.
Super intelligence, as such, could allow people to break free from a society where there is a constant slog to get ahead, to one where they have the time and means to focus on the things they like most. To achieve this, we need to start working now, as Bostrom puts it, on embedding philanthropic goals into the development of this technology and preventing it from benefitting only a select group of individuals.
Superintelligence will develop sooner or later. Like many technologies before it, such as the steam engine and electricity, it will bring numerous advantages and threats. What is different about superintelligence is that it will have the capacity to eliminate these threats without human interference, resulting in a brighter tomorrow and not the doom and gloom so many people like to preach.
Paul Stemmet is the CEO of YAP, a company aimed at optimising advertising returns for publishing companies, as well as the chairperson of the Interactive Advertising Bureau’s Techlab. The views expressed here are his own.