Artificial Intelligence has been defined as “the implementation and study of systems that exhibit autonomous intelligence or behaviour of their own” (Chitra & Subashini, 2013, p. 220). This definition highlights the capacity of Artificial Intelligence systems for autonomous intelligence and autonomous behaviour, which can in turn be related to their capacity for individuality. The 2001 film Artificial Intelligence tells a story set in the 22nd century, where humanoid robots called Mecha have been created with the capacity for complex thought but not for emotions. The story involves several themes relevant to Artificial Intelligence theory and to the difficulties associated with creating Artificial Intelligence. This essay discusses the themes involved in the film by relating them to the theory of Artificial Intelligence, particularly psychological theory, and to media representations of Artificial Intelligence. It may be mentioned at the outset that the two sources being discussed here are the theory of Artificial Intelligence and the media representations of Artificial Intelligence.
First, the essay will identify and list the themes involved in the film Artificial Intelligence. The most predominant theme is whether the Mecha, or humanoid robot, is capable of experiencing love in the same way as human beings experience it. In the film, David, the Mecha child, is not only capable of feeling such love and affection; his entire existence is rooted in the love he feels for his human mother, Monica. David seeks Monica’s love until the very end, and his search for the Blue Fairy is driven by his desire to become a real human boy and win Monica’s love again. On the other hand, David is shown not to be capable of jealousy in the same way as the human child, Martin. The second important theme in the film is the desire for individuality and whether a humanoid is capable of such a desire. In the film, David is disheartened by his lost sense of individuality when he finds copies of humanoid robots identical to himself, and this leads him to attempt suicide by jumping from a skyscraper. The theme involved here is therefore the capacity of Artificial Intelligence to develop a sense of individuality. The third theme is the risk posed by Artificial Intelligence. This is reflected in the events surrounding David’s jumping into the pool with Martin, which almost kills Martin and leads Monica to abandon David in a forest. These three themes can be said to be the principal representations of Artificial Intelligence in the film. The essay will now analyse how these themes coincide with the literature on Artificial Intelligence.
One of the problems with creating and developing Artificial Intelligence is that there is confusion within the field of theory about what kind of attributes must be embedded in Artificial Intelligence agents (Wang, 2019). Within the field, there is little agreement on evaluation criteria, benchmark tests, and milestones, which gets in the way of research and development (Wang, 2019). The conceptualisation of Artificial Intelligence is that of computer systems similar to the human mind, although not identical to it (Wang, 2019). However, the key issue is whether Artificial Intelligence can be the same as the human mind, because from a cognitive point of view the working definition of Artificial Intelligence corresponds to an abstraction of the human mind, but one that rests on a particular level of abstraction and a particular belief about what intelligence is (Wang, 2019). It has been argued that the abstraction which guides the construction of a computer system of Artificial Intelligence neglects certain aspects of the human mind as irrelevant or secondary, and this also means that in no sense can Artificial Intelligence be the same as the human mind, because the latter is very complex and varied in nature (Wang, 2019, p. 8).
With regard to the first theme, that is, the capacity of an Artificial Intelligence system to experience emotions and love in the same way as humans do, theory on Artificial Intelligence at this point does not consider it possible or appropriate for robots with Artificial Intelligence to experience feelings and emotions, because such emotions come with experience (Gray & Wegner, 2012). This viewpoint is also linked to ethical theory, such as Aristotelian virtue ethics, which emphasises the lived experience that Artificial Intelligence lacks (Aristotle, 2009). It is also argued that abstract notions or theories cannot be used to prescribe emotions and feelings to humanoid robots, because these have to come with training and experience (Gray & Wegner, 2012). What is also argued is that while humanoid robots may be programmed to think and thus become moral agents capable of making moral decisions, emotions and feelings cannot be programmed into such Artificial Intelligence agents in the absence of experience (Aristotle, 2009). Therefore, there seems to be some inconsistency between theory on Artificial Intelligence and media representations of it, as shown in this film. In the film, not only is the humanoid robot capable of making moral decisions, he is also capable of human emotions. This does not coincide with the literature, which argues that it is not possible to use abstract theories to prescribe emotions to Artificial Intelligence.
With regard to the second theme, that is, the desire for individuality and whether a humanoid is capable of such a desire, the literature on Artificial Intelligence distinguishes between moral producers and moral consumers in the context of robots as moral agents (Torrance, 2009). A moral producer creates moral actions and makes moral decisions, while a moral consumer has the capacity to receive moral actions (Torrance, 2009). If a robot is treated as a moral consumer, then it ought to have certain rights and needs recognised and respected by other members of the community (Torrance, 2009). Robots do not have experience of feelings and emotions, and there is therefore an argument against their being moral producers (Gray & Wegner, 2012). On the other hand, the literature does accept their capacity for being moral consumers, in that they can be bearers of certain rights (Torrance, 2009). However, is this sufficient to consider robots as having individuality? In the film, David’s realisation that he is not unique and that he is just a prototype of hundreds of other robots disheartens him to the extent that he wishes to destroy himself. He therefore considers this absence of individuality a problem serious enough for him not to want to continue his existence. However, from an Artificial Intelligence theory point of view, it is not possible for a robot to be individualistic, because the very moral agency given to him by the programmer is not individualistic in nature but part of the programme created by the programmer. To return to the definition of Artificial Intelligence adopted at the beginning, “the implementation and study of systems that exhibit
autonomous intelligence or behaviour of their own” (Chitra & Subashini, 2013, p. 220), what is implemented is a system designed to bestow autonomous intelligence on artificial moral agents. As this intelligence is systematised, it is difficult to see how it can be individualised. In order to be individualised, a robot would have to have human experiences, which is not possible because a robot does not have human relationships. This is an area of marked contrast between the literature on Artificial Intelligence, which presents the difficulties associated with creating moral agents as individualistic, and the film’s representation of the artificial agent. In the film, David has an individualistic nature, and even the realisation that he is one of many humanoid robots who look like him is enough to make him want to destroy himself. This does not seem to be in line with the theory of Artificial Intelligence, and there is a clear divergence from what is possible with regard to the creation of Artificial Intelligence.
With regard to the third theme, that is, the risks posed by Artificial Intelligence, the literature discusses many of these risks, and there are also some famous examples of such risks having materialised in reality (Scherer, 2015). Risks associated with Artificial Intelligence agents are related to their capacity to function autonomously and with foresight (Scherer, 2015). In this, there is a similarity between artificial moral agents and human beings, because it is possible for both to make bad decisions; however, the risks associated with artificial agents are compounded by their inability to have emotions (Coeckelbergh, 2010). For this reason, it has been argued that it is more appropriate for human beings, and not robots, to exercise moral agency, because humans have natural moral agency whereas robots are programmed to make moral decisions (Harris Jr, et al., 2013).
The Therac-25 case is an example of how the moral autonomy of an artificial agent can lead to undesirable outcomes, and of the problems associated with creating computer programs with moral autonomy (Harris Jr, et al., 2013). Therac-25 was a radiation therapy machine with the capacity to administer doses to patients, but the machine administered doses beyond permissible limits, leading to a number of patient deaths (Harris Jr, et al., 2013). Another ethical issue associated with the risks of Artificial Intelligence is that when artificial agents are programmed with moral agency, they may have the ability to make decisions in response to human emotions, but they may not have sufficient experience to interpret those emotions accurately; this may lead to complications or adverse situations in which the artificial agent misinterprets human actions. In the film, this possibility arises when David feels threatened by Martin’s friend and jumps into the pool with Martin, leading to the latter almost dying.
Literature and real-life research show that it is possible to programme robots to respond to human emotions in order to inform their own decision-making, but also that it remains possible for intelligent moral agents to make bad decisions due to programming defects or an inability to interpret human emotions accurately (Harris Jr, et al., 2013). It is also possible for moral agents to make evil decisions in response to human behaviour (Harris Jr, et al., 2013). These are some of the risks that are expressly noted in the literature on robots with moral agency.
There is an argument in the literature that it is not appropriate to programme robots with the ability to be moral agents to the extent of programming them with emotions and feelings, because this is opposed to the law of nature (Petersen, 2007). From an ethical perspective, Aristotelian ethics is opposed to robots with moral agency because this amounts to engineering humans, which is contrary to the laws of nature (Petersen, 2007). In the film, these ethical notions are evidently not used to inform the story, because the humanoid robot David and the other robots whom he later finds are all capable of love and affection. This amounts to creating human-like robots, which does not seem to be in accord with the literature on the ethics of robots.
To conclude, the film Artificial Intelligence represents Artificial Intelligence in a way that, according to the literature, would present serious difficulties if such moral agents were actually to be created. The literature identifies a number of ethical problems associated with the creation of moral agents. It does not support the creation of moral agents capable of feeling and emotion, because such capacities would be programmed rather than developed through experience. Moral agents are not capable of being individualised, because they are created through a system of programming. Moral agents also present risks if they are given too much autonomy. In the film’s representation of Artificial Intelligence, these notions are not considered, and something is created which may be neither possible nor appropriate.
Bibliography
Aristotle, 2009. Nicomachean ethics. New York: World Library Classics.
Chitra, K. & Subashini, B., 2013. Data mining techniques and its applications in banking sector. International Journal of Emerging Technology and Advanced Engineering, 3(8), pp. 219-226.
Coeckelbergh, M., 2010. Moral appearances: emotions, robots, and human morality. Ethics and Information Technology, 12(3), pp. 235-241.
Gray, K. & Wegner, D. M., 2012. Feeling robots and human zombies: Mind perception and the uncanny valley. Cognition, 125(1), pp. 125-130.
Harris Jr, C. E. et al., 2013. Engineering ethics: Concepts and cases. 5th ed. Boston, MA: Cengage Learning.
Petersen, S., 2007. The ethics of robot servitude. Journal of Experimental & Theoretical Artificial Intelligence, 19(1), pp. 43-54.
Scherer, M. U., 2015. Regulating artificial intelligence systems: Risks, challenges, competencies, and strategies. Harvard Journal of Law & Technology, Volume 29, pp. 353-400.
Torrance, S., 2009. Will Robots Need Their Own Ethics? Philosophy Now, Volume 72, pp. 10-11.
Wang, P., 2019. On defining artificial intelligence. Journal of Artificial General Intelligence, 10(2), pp. 1-37.