Abstract
The aim of this chapter is to explore the safety value of implementing Asimov's Laws of Robotics as a future general framework that humans should obey. Asimov formulated laws to make explicit the safeguards of the robots in his stories: (1) A robot may not injure or harm a human being or, through inaction, allow a human being to come to harm; (2) A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law; (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. In Asimov's stories, it is always assumed that the laws are built into the robots to govern their behaviour. As his stories clearly demonstrate, the Laws can be ambiguous. Moreover, the laws are not very specific. General rules as a guide for robot behaviour may not be a very good method to achieve robot safety – if we expect the robots to follow them. But would it work for humans? In this chapter, we ask whether it would make as much, or more, sense to implement the laws in human legislation with the purpose of governing the behaviour of people or companies that develop, build, market or use AI, embodied in robots or in the form of software, now and in the future.