
Artificial Intelligence

What if these theories are really true, and we were magically shrunk and put into someone's brain while he was thinking? We would see all the pumps, pistons, gears and levers working away, and we would be able to describe their workings completely, in mechanical terms, thereby completely describing the thought processes of the brain. But that description would nowhere contain any mention of thought! It would contain nothing but descriptions of pumps, pistons, levers!

-- Gottfried Wilhelm Leibniz (1679)

Not even a century ago -- in fact, not even a half-century ago -- few people could have imagined the present-day world, with computers running most government and business processes and the Internet reaching millions of homes. It would have been nearly impossible, then, to imagine artificial intelligence (AI): the attempt to create a machine that can learn, adapt, reason, and correct or improve itself. Whether this will ever become a reality is still unknown. Artificial-life pioneer Chris Langton predicts that such an "intelligent entity" will never be accepted as one. He believes that "when scientists are faced with the choice of either admitting that the computer process is alive, or moving the goalposts to exclude the computer from the exclusive club of living organisms, they will choose the latter." Is this true? Will humans never admit that a computer can actually function as real life? Or will they instead decide there is nothing special about life, and that life can therefore be designed, built and replicated? At least for the time being, there is no answer to this dilemma.

According to the American Association for Artificial Intelligence, AI is "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines." The roots of this science can be traced to 1821, when Charles Babbage, staring at a table of logarithms, remarked, "I think that all these tables might be calculated by machinery." From then on, Babbage devoted his life to developing the first programmable computer.

Much later, in 1943, Babbage's idea finally took hold when Warren McCulloch (a psychiatrist, cybernetician, philosopher, and poet) and Walter Pitts (a research student in mathematics) published an innovative paper combining early twentieth-century ideas on computation, logic, and the nervous system -- a report that promised to revolutionize psychology and philosophy. The next year, Harvard University unveiled the first American programmable computer, the Mark I.

It did not take long for British scientist Alan Turing to see the similarity between the computational process and human thinking. In his paper "Computing Machinery and Intelligence," he set out the direction for the remainder of the century -- developing computers for game playing, decision-making, natural language understanding, translation, theorem proving and encryption code cracking.

To help recognize if and when a computer had actually become intelligent, Turing proposed the "imitation game," in which an interrogator interviews a human being and a computer, communicating entirely by textual messages, without knowing which is which. Turing argued that if the interrogator could not distinguish the two by questioning, then it would be unreasonable not to call the computer intelligent.

Turing's game is now usually called "the Turing test for intelligence."

In the 1950s, Newell, Shaw and Simon created the Logic Theorist and its successor, the General Problem Solver -- programs built on recursive search techniques, in which the solution to a problem is defined in terms of solutions to its own subproblems. IBM developed the first program that could play a full game of chess in 1957. The following year, Newell, Shaw and Simon noted, "There are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until -- in a visible future -- the range of problems they can handle will be co-extensive with the range to which the human mind has been applied" (Simon, p. 3).
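To make "recursive search" concrete, the sketch below shows the idea in Python. (The Logic Theorist itself was written in the IPL language; the toy state space here -- integers reachable by adding one or doubling -- is purely illustrative.) The solver is defined in terms of calls to itself on the states that each move produces.

```python
# A minimal sketch of recursive search: solve() is defined in terms of
# itself, applied to the subproblems that each possible move creates.

def solve(state, goal, successors, visited=None):
    """Return a list of states leading from state to goal, or None."""
    if visited is None:
        visited = set()
    if state == goal:
        return [state]            # base case: the problem is solved
    visited.add(state)
    for nxt in successors(state):
        if nxt not in visited:    # avoid revisiting states
            path = solve(nxt, goal, successors, visited)  # recursion
            if path is not None:
                return [state] + path
    return None                   # dead end: no move leads to the goal

# Toy state space: from any number below 10, we may add 1 or double.
print(solve(1, 10, lambda s: [s + 1, s * 2] if s < 10 else []))
```

This is the same shape of computation -- try a move, then hand the reduced problem back to the same procedure -- that Newell, Shaw and Simon's programs applied to logic theorems and general puzzles.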

In 1967, an MIT computer won the first tournament match against a human player. In 1988, world chess champion Garry Kasparov said there was "no way" a grandmaster would be defeated by a computer in a tournament before 2000; ten months later he lost that bet. However, many people then changed their tune, claiming that winning a championship game did not require "real" intelligence. For many, the connection between human and machine was getting a little too close for comfort.

This was exactly why Turing had developed his test -- any other attempt to define intelligence seemed to run into problems.
The AI specialists did not even get excited about the win, because the computer, built around custom chess-playing chips, was seen as a type of idiot savant, able to play a good game of chess without any understanding of why it did what it did (Gershenfeld, 1999, p. 130).

This is a curious argument. It retroactively adds a clause to the Turing test, demanding not only that a machine match the performance of humans at quintessentially intelligent tasks such as chess or conversation, but also that the way it does so be deemed satisfactory (ibid).

Since then, the basic question has not been whether an advanced computer can be built, but rather what intelligence is. Fatmi and Young define intelligence as "that faculty of mind by which order is perceived in a situation previously considered disordered." Another definition holds that something intelligent should be able to improve its own processes (Martin, 2000, p. 46).

By the mid-1960s, AI was being pursued by researchers worldwide, yet computer memory remained very limited, and perception and knowledge representation became the theme of many AI studies. For example, in the Blocks Micro World project at MIT, the program SHRDLU observed a collection of simple geometric shapes and interpreted what it saw. It then manipulated the blocks and expressed its perceptions, activities, and motivations. SHRDLU could respond to commands typed in natural English, such as, "Will you please stack up both of the red blocks and either a green cube or a pyramid." The program would plan out a sequence of actions, and its robot arm would arrange the blocks appropriately. SHRDLU could correctly answer questions about its world of blocks -- for example, "Can a pyramid be supported by a pyramid?" (SHRDLU attempts to stack up two pyramids and fails) and "Is there anything which is bigger than every pyramid but is not as wide as the thing that supports it?" (to which SHRDLU answered "Yes, the blue block").
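The sketch below suggests, in miniature, how such a system can carry out a stacking command: it keeps an internal model of which block rests on which, and "plans" by updating that model. (This is an illustrative toy in Python, not Winograd's actual code, which was written in Lisp and Micro-Planner.)

```python
# A toy blocks world: the model maps each block to whatever supports it.
world = {"red1": "table", "red2": "table", "green_cube": "red1"}

def clear(block):
    """A block is clear if nothing in the world rests on top of it."""
    return block not in world.values()

def stack(x, y):
    """Put block x on block y, moving any obstructions onto the table."""
    for block, support in list(world.items()):
        if support == x:
            world[block] = "table"   # clear off the block we want to move
    if clear(y):
        world[x] = y
        return True
    return False                     # y is occupied; the plan fails

stack("red2", "green_cube")
print(world)  # {'red1': 'table', 'red2': 'green_cube', 'green_cube': 'red1'}
```

SHRDLU's real planner chained many such primitive actions together and could explain, in English, why it had performed each one.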

Although SHRDLU was initially seen as a major breakthrough, the program's developer, Terry Winograd, soon concluded the work was a dead end: the techniques pioneered in the program proved unsuitable for wider applications. Moreover, the appearance SHRDLU gave of understanding the blocks and the English statements concerning them was in fact false. SHRDLU had no idea what a red block was.

Since then, computers have been created that are more "intelligent" within narrow domains such as diagnosing medical conditions, trading stock and guiding missiles. Copeland says that the so-called "expert system" has come much closer to the idea of AI: a computer program dedicated to solving problems and giving advice within a specialized area of knowledge. A good system can match the performance of a human specialist.

The basic components of an expert system are a "knowledge base" or KB and an "inference engine." The information in the KB is obtained by interviewing experts on a particular topic. The interviewer, or "knowledge engineer," organizes the gathered information into a collection of "production rules" typically of "if-then" structure. The inference engine enables the expert system to draw deductions from the rules in the KB. For example, if the KB contains production rules "if x then y" and "if y then z," the inference engine is able to deduce "if x then z." The expert system might then query its user "is x true in the situation that we are considering?" or, for example, "does the patient have a rash?" If the answer is affirmative, the system will proceed to infer z (ibid).
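A minimal sketch of these two components in Python appears below. (The rash example echoes the text above; the specific rules and facts are invented for illustration and come from no real diagnostic system.) The inference engine here uses simple forward chaining: it fires any rule whose if-part is satisfied by the known facts, and repeats until nothing new can be deduced.

```python
# Knowledge base: production rules of the form (if-conditions, then-conclusion).
RULES = [
    ({"has_rash", "has_fever"}, "possible_infection"),   # if x then y
    ({"possible_infection"}, "recommend_blood_test"),    # if y then z
]

def infer(facts, rules):
    """Forward-chaining inference engine: deduce everything the rules allow."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)    # the rule fires
                changed = True
    return facts

# The user answers "yes" to "does the patient have a rash?" (and to fever),
# so the engine chains from x through y to z.
print(infer({"has_rash", "has_fever"}, RULES))
```

Real expert systems add much more -- certainty factors, explanations of their reasoning, backward chaining from a hypothesis -- but the if-then core is the same.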

However, expert systems have no common sense: no understanding of what they are for, of the limits of their applicability, or of how their recommendations fit into a larger context. If a medical diagnosis program is told that a patient with a gunshot wound is bleeding to death, it will still attempt to diagnose a bacterial cause for the patient's symptoms. Expert systems can also make absurd errors, such as prescribing an obviously incorrect dosage of a drug for a patient whose weight and age are accidentally swapped by the clerk.

One of the most advanced such programs is being developed by Douglas Lenat (Martin, 2000, p. 439). In the 1980s, he set out to understand what it would take to give a computer common sense, "or a vast number of ordinary-sounding pieces of knowledge that, when used collectively, enable…
