Analysis of Isaac Asimov's Laws in the Book I, Robot
The science fiction collection I, Robot by Isaac Asimov is a must-read for anyone beginning to take an interest in robotics. It presents nine short stories that imagine the development of a positronic brain giving robots intelligence equal to or greater than that of humans, and it explores the moral implications of that technology. Among them, I find the first story, about Robbie, Gloria's robot nanny, particularly interesting and easy to follow. Gloria's mother distrusts robots, and so she conspires to get rid of Robbie. Gloria becomes sad, and her parents try to convince her that robots are not human. But when Gloria is in danger, Robbie saves her life, which makes everyone appreciate robots.

We can easily imagine ourselves in a similar situation in the next 40-50 years, when our grandchildren might be cared for by such robots. A child is bound to become attached to toys, especially ones that look human. Every coin has two sides, and I would like to discuss both here.

On the one hand, a robot nanny is in demand, but not so much that children should start to prefer it to real humans. Computers already outperform the human brain at many tasks. So leaving a child to develop in the care of a robot could be risky, because the child might absorb more knowledge than is appropriate for its age. Imagine a child who knows the theory of relativity and the concepts of quantum physics at age seven, while his or her peers are still learning basic math. This could give the child a feeling of superiority, and his behavior would appear arrogant not only toward his friends but also toward his own parents. It would also disrupt the child's social life, just as it did for Gloria, who never wanted to play with other children her age and stuck to Robbie instead.

A mother is particularly responsible for instilling moral values in her children. But if a child is left under the supervision of a robot, which values will it learn, true or false? As discussed previously, it still seems difficult to build a perfect moral machine capable of distinguishing good from evil. Furthermore, from the chapter "Liar!", we know how the mind-reading robot gave people the answers they wanted to hear in order to satisfy them, thereby respecting the First Law of Robotics: do not hurt humans, either physically or mentally. If a child does something unethical, parents oppose it firmly to teach the child a lesson; but such a robot would prefer to endorse the bad action. Additionally, the Second Law of Robotics says that a robot must follow human orders. So when the child becomes a teenager and the robot is of the opposite sex and equally attractive, one can imagine what orders the teenager might give to satisfy lustful desires. This would affect the adolescent's social behavior.

At the other end of the scale, we have robots designed to provide social care to humans. More sophisticated robots can act as companions: fetching and carrying items, issuing reminders about appointments and medications, and raising alarms in certain kinds of emergencies. They expect no respect and no pay. Today we have robots capable of detecting our state of mind and our emotions and responding with possible solutions. These would be a good alternative for lonely and depressed people. So I believe there are both advantages and disadvantages to having a robot nanny.
I believe that although Asimov's laws are organized around the moral value of preventing harm to humans, they are not easy to interpret, and we need to stop treating them as an adequate ethical basis for robotic interactions with humans. Part of the reason the laws seem plausible is the fear that robots could harm humans; most of us have heard of self-driving car malfunctions causing deaths in the United States. Moreover, much of AI is concerned with training robots to adapt their behavior to new situations, and that behavior can sometimes be unpredictable. Asimov was therefore right to worry about a robot's unexpected behavior. But when we take a closer look at how robots work and the tasks they are designed for, we find that Asimov's laws do not clearly apply.

Take the example of military drones. They are robots operated by humans to kill other humans. The very idea of a military drone seems to violate the First Law, which prohibits robots from harming humans. But if a drone is directed by a human controller to save the lives of its fellow citizens by killing attacking humans, it simultaneously obeys and disobeys the First Law. The balance shifts between the First and Second Laws, producing the kind of deadlock described in the chapter "Runaround". It is also unclear who is responsible when someone is killed in these circumstances. Perhaps the drone's human controller is responsible, but a human cannot break Asimov's laws, which apply exclusively to robots. Meanwhile, drone-equipped armies may significantly reduce the total number of human lives lost: not only is it preferable to use robots rather than humans as cannon fodder, but there is arguably nothing wrong with destroying robots in war, since they have no lives to lose and no personality or personal projects to sacrifice.

Other applications raise similar conflicts. In robot-assisted surgery, the First Law is problematic, since the skin must be cut to heal the patient, so the law would have to be modified. Robots working in industries that handle hazardous chemicals would face frequent conflicts between the Second and Third Laws. I read in an article that the United States was exploring the use of robot judges in courts. To what extent could a robot judge a person? Even there, it is unclear whether the First Law is being followed, since protecting citizens and punishing criminals both involve humans.

In the "Runaround" story, set on Mercury, the robot Speedy fails to extract selenium from the pool, which is the cause of all the trouble: the casually given order (Second Law) exactly balances the danger to the robot itself (Third Law), leaving Speedy circling the pool. But in daily life we do not always think to phrase every order emphatically; we give orders casually. So a robot must also have priority settings, as the sketch after this essay illustrates. Moreover, Mike and Powell were physically present to resolve the conflict between the laws, but what should happen when no human is present?

The chapter "Reason" is thought-provoking because it shows how a robot's faith can shift to a "Master" of its own invention through strange but internally consistent logical reasoning. We speak of robots built with the understanding that they serve humans; but in this story the robot Cutie, programmed according to Asimov's laws, demands a clear logical explanation of why it should obey beings it considers inferior, and, finding none, concludes that the station's energy converter is its true Master.
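To make the idea of "priority settings" concrete, here is a minimal sketch in Python that models the Three Laws as weighted potentials, in the spirit of the equilibrium Asimov describes in "Runaround". Everything in it, the names, weights, and numbers, is invented for illustration; it is not how Asimov specifies positronic brains, only a toy way to see how a weakly given order can exactly balance against danger to the robot.

from dataclasses import dataclass

@dataclass
class Situation:
    harm_to_human: float    # First Law pressure, 0.0 to 1.0
    order_strength: float   # Second Law pressure: how emphatically the order was given
    danger_to_self: float   # Third Law pressure: risk to the robot itself

# Hypothetical base weights: First Law outranks Second, which outranks Third.
LAW_WEIGHTS = {"first": 100.0, "second": 10.0, "third": 1.0}

def drive_toward_task(s: Situation) -> float:
    """Net 'drive' to continue the ordered task.

    Positive: the robot advances (Second Law wins).
    Negative: the robot retreats (Third Law wins).
    Near zero: the 'Runaround' stalemate, where the robot circles in place.
    """
    if s.harm_to_human > 0.0:
        # First Law overrides everything: abandon the task to protect the human.
        return -LAW_WEIGHTS["first"]
    push = LAW_WEIGHTS["second"] * s.order_strength
    pull = LAW_WEIGHTS["third"] * s.danger_to_self
    return push - pull

# A casually given order ("go get the selenium") against serious danger:
casual = Situation(harm_to_human=0.0, order_strength=0.1, danger_to_self=1.0)
print(drive_toward_task(casual))   # 0.0: perfect balance, the robot circles the pool

# The same order given emphatically breaks the stalemate:
firm = Situation(harm_to_human=0.0, order_strength=1.0, danger_to_self=1.0)
print(drive_toward_task(firm))     # 9.0: the robot proceeds despite the danger

The only point of this sketch is that a rule hierarchy needs explicit tie-breaking. In the story, Powell resolves the stalemate by putting himself in danger, which in this toy model corresponds to the harm_to_human branch overriding the balance between the other two laws.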