In 1942, Isaac Asimov introduced the Three Laws of Robotics as a way to prescribe rules for robotic moral judgement:

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov later added a Zeroth Law: a robot may not harm humanity, or, by inaction, allow humanity to come to harm.
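The interesting structure in the Laws is the conflict clause: each law yields to the ones above it. As a minimal sketch of that idea (the boolean action encoding below is an illustrative assumption, not any deployed robotics system), the ordering can be expressed as a lexicographic comparison of violations:

```python
# A toy rendering of the Laws as a strict priority ordering. An action is
# described by boolean flags (an illustrative assumption), and candidate
# actions are compared lexicographically, highest-priority law first.

def violations(action: dict) -> tuple:
    """Violation vector ordered Zeroth, First, Second, Third Law.
    Tuple comparison (False < True) makes higher laws dominate."""
    return (
        action.get("harms_humanity", False),  # Zeroth Law
        action.get("harms_human", False),     # First Law
        action.get("disobeys_order", False),  # Second Law
        action.get("endangers_self", False),  # Third Law
    )

def choose(candidates: list) -> dict:
    # Pick the candidate that violates only the lowest-priority laws.
    return min(candidates, key=violations)

# A human order whose execution would injure someone, versus refusing it:
obey   = {"name": "obey the order",   "harms_human": True}
refuse = {"name": "refuse the order", "disobeys_order": True}

print(choose([obey, refuse])["name"])  # "refuse the order"
```

The lexicographic comparison captures the "except where such orders would conflict" clauses: disobeying an order (a Second-Law violation) is preferred over injuring a human (a First-Law violation).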
Fast forward to today: roboticists are still trying to prescribe rules of moral judgement for robots. Uncovering those rules, however, is tricky and nuanced. Consider the “Trolley Problem,” a classic test of what people judge morally permissible: is it acceptable to harm one or more individuals in order to save others? There are several versions of this problem, but the gist of the first is as follows:
There is a trolley and its conductor has fainted. The trolley is headed toward five people walking on the track. The banks of the track are so steep that they will not be able to get off the track in time. Bystander Hank is standing next to a switch, which he can throw, that will turn the trolley onto a parallel side track, thereby preventing it from killing the five people. However, there is a man standing on the side track with his back turned. Hank can throw the switch, killing him; or he can refrain from doing this, letting the five die. Is it morally permissible for Hank to throw the switch?
(Source: https://www.livescience.com/5729-robots-ethical-decisions.html)
Studies across many cultures have shown that most people think it is permissible for Hank to throw the switch.
However, consider another version of the Trolley Problem, which presents a very similar situation but elicits a different human judgement:
Ian is on the footbridge over the trolley track. He is next to a heavy object, which he can shove onto the track in the path of the trolley to stop it, thereby preventing it from killing the five people. The heavy object is a man, standing next to Ian with his back turned. Ian can shove the man onto the track, resulting in death; or he can refrain from doing this, letting the five die. Is it morally permissible for Ian to shove the man?
(Source: https://www.livescience.com/5729-robots-ethical-decisions.html)
According to studies, most people say this is not morally permissible, even though the arithmetic is identical in both cases: one life traded for five.
As roboticists try to uncover the rules of moral logic, they are attempting to replicate human judgement in robotic behavior. However, the intricacies of moral judgement may not follow conventional logic: the switch and footbridge cases have the same costs and benefits yet draw opposite verdicts. The asymmetry is often explained by the philosophical “doctrine of double effect”: harm brought about as a side effect of saving others is judged more permissible than harm used as the means of saving them. Nevertheless, roboticists claim success in modeling these difficult moral problems in computer logic by making such hidden rules of judgement explicit.
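To make this concrete, here is a minimal sketch of how that one hidden rule could be written down in code. The scenario encoding and function names are illustrative assumptions, not any research group's actual model:

```python
# A toy encoding of the two Trolley Problem variants and a simplified
# "doctrine of double effect" rule. All names here are illustrative
# assumptions, not a real ethics engine.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    lives_saved: int     # people spared if the action is taken
    lives_lost: int      # people killed if the action is taken
    harm_is_means: bool  # True if the victim's death is the mechanism that
                         # saves the others (footbridge); False if it is a
                         # side effect of the rescue (switch)

def permissible(action: Action) -> bool:
    """Approximate the observed pattern of human judgement."""
    if action.lives_lost == 0:
        return True
    if action.harm_is_means:
        return False  # harm used as the means of rescue is rejected
    # Harm that is a side effect may pass if it saves more than it costs.
    return action.lives_saved > action.lives_lost

hank = Action("throw the switch", lives_saved=5, lives_lost=1, harm_is_means=False)
ian  = Action("shove the man",    lives_saved=5, lives_lost=1, harm_is_means=True)

print(permissible(hank))  # True  -- matches the majority verdict for Hank
print(permissible(ian))   # False -- matches the majority verdict for Ian
```

The point of the sketch is that the two cases differ in only one field, harm_is_means, which is exactly the kind of distinction a purely outcome-counting rule would miss.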
Sources: https://www.livescience.com/5729-robots-ethical-decisions.html, https://en.wikipedia.org/wiki/Three_Laws_of_Robotics