Scientists, CEOs, and companies working on Artificial Intelligence (AI) products have come together to pledge to protect humanity from machines and to ensure that research focuses on harvesting AI's benefits for humanity while avoiding its potential pitfalls.
By signing an open letter circulated by the Future of Life Institute, AI leaders have come out in favour of research that is not limited to making “AI more capable, but also how to make it robust and beneficial.”
One of the key points of the research priorities document attached to the open letter is that “significant amounts of intelligence and autonomy leads to important legal and ethical questions whose answers impact both producers and consumers of AI technology”.
The open letter notes that these questions span “law, professional ethics, and philosophical ethics”, requiring contributions from experts in each of these fields.
The five main law and ethics issues highlighted in the research priorities document are:
1. Liability and law for autonomous vehicles
2. Machine ethics
3. Autonomous weapons
4. Privacy
5. Professional ethics
Some of the questions raised in the open letter include:
1. In what legal framework can the safety benefits of autonomous vehicles such as drone aircraft and self-driving cars best be realized?
2. Should legal questions about AI be handled by existing (software and internet-focused) “cyberlaw”, or should they be treated separately?
3. How should an autonomous vehicle trade off, say, a small probability of injury to a human against the near-certainty of a large material cost?
4. How should lawyers, ethicists, and policymakers engage the public on these issues?
5. Should such trade-offs be the subject of national standards?
6. Can lethal autonomous weapons be made to comply with humanitarian law?
7. If it is permissible or legal to use lethal autonomous weapons, how should these weapons be integrated into the existing command-and-control structure so that responsibility and liability are properly distributed? What technical realities and forecasts should inform these questions, and how should “meaningful human control” over weapons be defined?
8. How should the ability of AI systems to interpret the data obtained from surveillance cameras, phone lines, emails, etc., interact with the right to privacy?
9. How will privacy risks interact with cyberwarfare?
10. What role should computer scientists play in the law and ethics of AI development and use?