Three Laws of Robotics


The Three Laws of Robotics, often shortened to the Three Laws, are a set of three rules written by science fiction author Isaac Asimov and later expanded upon. The rules were introduced in his 1942 short story "Runaround", although they had been foreshadowed in a few earlier stories. The Laws are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The Three Laws form an organizing principle and unifying theme for Asimov's fiction, appearing in his Robot series, the other stories linked to it, and his Lucky Starr series of young-adult fiction. The Laws are built into almost all positronic robots appearing in his fiction and cannot be bypassed. Other authors working in Asimov's fictional universe have adopted them, and references, often parodic, appear throughout science fiction as well as in other genres.

The original premise has been modified and expanded upon by both Asimov and other authors. Asimov made slight modifications to the three laws in various books and short stories to further develop how robots would interact with humans and with each other. He also added a fourth, or Zeroth, Law to precede the other three:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
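Read as a strict priority ordering, the Laws say that each rule yields to every rule above it. One way to make that ordering concrete is a lexicographic comparison of violations, as in the following sketch. This is purely illustrative and not from Asimov's fiction; the names (`Action`, `law_violations`, `choose`) are invented here for the example.

```python
# Illustrative sketch: the Laws as a lexicographic priority ordering.
# All names here are hypothetical, invented for this example.
from dataclasses import dataclass

@dataclass
class Action:
    harms_humanity: bool = False   # Zeroth Law violation
    harms_human: bool = False      # First Law violation
    disobeys_order: bool = False   # Second Law violation
    endangers_self: bool = False   # Third Law violation

def law_violations(action: Action) -> tuple:
    # Tuple order encodes the hierarchy: earlier entries outrank later ones.
    return (action.harms_humanity, action.harms_human,
            action.disobeys_order, action.endangers_self)

def choose(actions: list) -> Action:
    # Python compares tuples element by element, so min() prefers the
    # action whose highest-ranked violation is least severe.
    return min(actions, key=law_violations)
```

For example, an action that disobeys an order but harms no one would be chosen over one that obeys the order at the cost of injuring a human, mirroring the Second Law's deference to the First.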

The Three Laws, and the fourth, have pervaded science fiction and been referenced in many books, films and other media. They have often served as a starting point for discussions of artificial intelligence and of how robots and humans will interact in the future. It is generally accepted that the Three Laws are not fully adequate as constraints for real robots, but that their basic premise, preventing robots from harming humans, is what will make robots' actions acceptable to the general public.
