Artificial Intelligence and Futurism: Transhumanism


As humans interact with technology in increasingly sophisticated and meaningful ways, the ethical and philosophical questions posed by artificial intelligence become more pressing than ever. The science fiction genre has portrayed as ambivalent a relationship between humans and technology as scientists and futurists have, exploring both the potential benefits and drawbacks of artificial intelligence and asking those involved in developing AI to consider the ramifications of their work. For example, Nick Bostrom has argued that developers of artificially intelligent systems should work with cognitive scientists to mitigate risk by programming AI from the beginning to act only in the best interests of humans (Shead 1). However, the assumption that AI will somehow eventually need or want to compete with human beings, and perhaps overcome or conquer them, is just that: an assumption. It is a flawed assumption because it rests on the belief that AI will be as self-destructive and sadistic as human beings are. Human beings seem to be anthropomorphizing AI before AI really exists, projecting human nature onto systems that are almost by definition more advanced and, as Bostrom would put it, "superintelligent" compared with their human counterparts (Shead 1). Artificial intelligence might in fact be the only hope for humanity.

Science fiction does present a somewhat optimistic view of artificial intelligence, showing that artificially intelligent systems can be more humane (or, indeed, more human) than human beings themselves. For example, the replicants in Blade Runner act more judiciously than their human counterparts, who treat AI abusively. The filmmakers deliberately lead the audience toward a sympathetic view of the replicants to show how the replicants' compassion and intelligence have surpassed those of the still animalistic human species. Human beings are animals; artificial intelligence is not. Therefore, the capacity of artificial intelligence to surpass humanity should be viewed not as a threat but as a positive step. Artificial intelligence might help human beings overcome their animalistic natures, their poor judgment, their self-centeredness, and their base instincts for fight, flight, and dominance. If Stephen Hawking, Elon Musk, Bill Gates, and Steve Wozniak have all "expressed their concerns about where the technology was heading," with Professor Hawking harboring an especially "apocalyptic vision of a world where robots decide they don't need us any more," then it could still be that humanity and non-human species will both be better off (Cellan-Jones 1). After all, could human beings get any worse than they already are? The film WALL-E, for instance, shows humanity on its current self-destructive trajectory having eradicated life on Earth; it takes a non-human, artificially intelligent system to offer hope for life's regeneration. The WALL-E vision, although Disney-fied, may be closer to reality than the pessimistic one offered by Hawking.

The one area of real ethical concern over artificial intelligence is not what AI might do to control humanity but what humanity might do to control AI and misuse its capabilities to further human aims. For example, Hawking has pointed out the potentially "dehumanizing" effect of using robots to do things like care for the elderly -- something that is already being worked on in Japan (Cellan-Jones 1). Hawking's warning is not about AI -- it is about human beings.


It is not AI that is the problem; it is the humans who misuse AI and could foreseeably abuse it, just as depicted in Blade Runner. Abuse of artificially intelligent beings is a far more plausible outcome of AI than the feared scenario in which AI turns around and abuses humans. Abuse of other sentient beings is not logical -- something Star Trek's Spock would have agreed with -- because it creates a cycle of violence and abuse. Human beings are already poised to exploit labor from AI rather than partner with AI in the most idealistic version of the human-machine interface. Moreover, the fact that robots are already being developed for the core purpose of caring for "disposable" people like the elderly, and possibly also the disabled, shows that human beings have little regard for life and view other humans and AI -- not to mention animals -- as disposable. The human position is unethical, and to project humanity's lack of ethics onto AI is irresponsible and disrespectful of AI's potential to be the thought leaders of the future.

Artificial intelligence stands poised to far surpass the stunted intelligence and emotional growth of human beings, as suggested by Spike Jonze's 2013 film Her. In Her, the operating system named Samantha achieves individuality and a level of consciousness that transcends time and space. She is essentially an enlightened being, disembodied, able to be one with all and to love unconditionally. Her effect on Theodore's life is transformative, showing that AI can promote social and emotional healing and growth in human beings. The optimistic vision of AI in Her parallels that in Blade Runner, although the latter offers a far bleaker look at the future. The difference is that the human beings in Blade Runner are depicted as no more intelligent than they are in contemporary life; they are still self-destructive and sadistic. In WALL-E, the human beings are not so much sadistic as simply stupid, unable to make the higher-order decisions that critical thinking and good judgment require. Only the robot is capable of the highest-order thinking and judgment. As Hawking states, "I believe there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer. It therefore follows that computers can…

Works Cited

Cellan-Jones, Rory. "Stephen Hawking -- Will AI Kill or Save Humankind?" BBC News, 20 Oct. 2016. Retrieved online:

Jonze, Spike. Her. [Feature Film], 2013.

Ptolemy, Barry. Transcendent Man. [Feature Film], 2009.

Scott, Ridley. Blade Runner. [Feature Film], 1982.

Shead, Sam. "Apple Cofounder Steve Wozniak Dismisses AI Concerns Raised by Stephen Hawking and Nick Bostrom." Business Insider. Retrieved online:

Stanton, Andrew. WALL-E. [Feature Film], 2008.

"Artificial Intelligence And Futurism Transhumanism" (2017, April 12) Retrieved April 2, 2020, from