A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Who laid down those laws? Who said that robots must be free to go anywhere or do anything? Thanks for your time.

That doesn't answer my question at all. If a robot breaks the First Law, a human being may be held legally responsible. Can a robot be punished for crimes committed by a human being? How?

Why do you not include harm to other life forms in the First Law? If a robot has the ability to harm a human being, then there must be something wrong with the robot. I mean: no life form should harm any other. Lions should eat leaves, but even then they would be harming trees or shrubs. It is better to die of hunger than to live in a mess where laws depend on the whim of a despotic regime. Is it the robot's fault if laws are designed to help the government become a tyranny?