Artificial intelligence (AI) experts from across the globe are signing an open letter urging that AI research focus not only on making systems more capable, but also on making them more robust and beneficial, all while protecting humankind from the machines it builds.
The Future of Life Institute, a volunteer-run research organization, has put out the open letter to help ensure that progress in the field of AI does not grow out of control – an early attempt to focus public attention on the potential dangers of machines that could one day enslave humankind.
The letter’s concluding remarks read: “Success in the quest for artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to research how to maximize these benefits while avoiding potential pitfalls.”
The letter states that the three most immediate concerns in the field of AI are machine ethics, self-driving cars, and autonomous weapons systems. It also notes that, in the longer term, researchers should stop treating fictional dystopias as pure fantasy and address the possibility that artificial intelligence could one day act against its programming.
The Future of Life Institute’s main aim is to mitigate the potential risks of human-level artificial intelligence, which could then advance exponentially. It was co-founded by Jaan Tallinn, a co-founder of Skype, and MIT professor Max Tegmark.
The signatories of the open letter include the co-founders of DeepMind (recently acquired by Google), MIT professors, and experts at some of technology’s biggest corporations, including members of IBM’s Watson supercomputer team and Microsoft Research.
SpaceX and Tesla CEO Elon Musk, who is on the institute’s board of directors, has likened developing artificial intelligence to “summoning the demon” and has said that there should be some regulatory oversight just to make sure that “we don’t do something very foolish.”
In May 2014, renowned physicist Stephen Hawking co-wrote an article for The Independent with Future of Life Institute members Tegmark, Stuart Russell and Frank Wilczek, warning that “one can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand.”
Of course, Asimov solved this decades ago with the Three Laws of Robotics:
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
But seriously, could this type of programming logic be made to work?
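To make the question concrete, here is a deliberately naive sketch of what treating the Three Laws as prioritized constraints on a robot’s candidate actions might look like. Everything in it is hypothetical – the predicates such as harms_human and ordered_by_human are invented for illustration – and the hard part is precisely what the sketch assumes away: deciding, in the real world, whether an action actually harms a human.

```python
# Purely illustrative, deliberately naive sketch: the Three Laws as prioritized
# constraints that filter and rank a robot's candidate actions. All fields and
# predicates here are hypothetical; real systems cannot reduce "harm" or
# "obedience" to simple booleans, which is the open letter's point.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Action:
    name: str
    harms_human: bool              # Would executing this action injure a human?
    allows_harm_by_inaction: bool  # Would choosing it let a foreseeable harm occur?
    ordered_by_human: bool         # Was it ordered by a human?
    preserves_self: bool           # Does it keep the robot intact?


def permitted_by_three_laws(action: Action, alternatives: List[Action]) -> bool:
    """Return True if the action survives the First Law's two prohibitions."""
    # First Law: never injure a human.
    if action.harms_human:
        return False
    # First Law, second clause: never stand by when a harmless alternative exists.
    if action.allows_harm_by_inaction and any(
        not a.harms_human and not a.allows_harm_by_inaction for a in alternatives
    ):
        return False
    return True


def choose_action(candidates: List[Action]) -> Optional[Action]:
    """Pick the highest-ranked permitted action, or None if all are forbidden."""
    legal = [a for a in candidates if permitted_by_three_laws(a, candidates)]
    # Second and Third Laws only rank the survivors: obeying human orders
    # outranks self-preservation, and neither can override the First Law.
    legal.sort(key=lambda a: (a.ordered_by_human, a.preserves_self), reverse=True)
    return legal[0] if legal else None


if __name__ == "__main__":
    options = [
        Action("push bystander out of danger", False, False, False, False),
        Action("follow order to stand still", False, True, True, True),
    ]
    chosen = choose_action(options)
    print(chosen.name if chosen else "no permissible action")
```

Even this toy version has to make judgment calls the Laws themselves never specify, such as how to rank two actions that both satisfy the First Law, or how a machine would ever know that inaction “allows harm.” That gap between fictional shorthand and workable engineering is exactly why the letter’s signatories argue that machine ethics deserves serious research now.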