Monday 3 November 2014

What is artificial intelligence in cognitive psychology?


Introduction

Ideas proposed in cybernetics, developments in psychology in the study of internal mental processes, and the invention of the computer were important precursors of the field of artificial intelligence (AI). Cybernetics, a term coined by Norbert Wiener in 1948, is a field of study concerned with feedback in artificial and natural systems. Its central idea is that a system can modify its behavior based on feedback generated by the system itself or received from the environment. Information, and in particular feedback, is necessary for a system to make intelligent decisions. During the 1940s and 1950s, the dominant school in American psychology was behaviorism, and research focused on behaviors that were observable and measurable. During this time, researchers such as George Miller were devising experiments that continued to study behavior but also provided some indication of internal mental processes. The resulting cognitive revolution in the United States led to research programs on decision making, language development, consciousness, and memory, issues directly relevant to the development of an intelligent machine. The main tool for implementing AI, the computer, was itself an important development that came out of World War II.

The culmination of many of these events was a conference held at Dartmouth College in 1956, which explored the idea of developing computer programs that behave in an intelligent manner. This conference is often viewed as the beginning of the field of artificial intelligence. Researchers involved in the conference included John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon. Before this conference, Newell, Simon, and Shaw’s Logic Theorist was the only AI program. Subsequent projects focused on developing programs in the domain of game playing. Games of strategy, such as checkers and chess, were selected because they seemed to require intelligence. The development of programs capable of “playing” these games supported the idea that AI is possible.



Cognitive science, an interdisciplinary approach to the study of the mind, was influenced by many of the same factors that had an impact on the field of AI. Some of the traditional disciplines that contribute to cognitive science are AI, cognitive psychology, linguistics, neuroscience, and philosophy. Each discipline brings its own set of questions and techniques to the shared goal of understanding intelligence and the mind.




Traditional AI Versus Computer Simulations

“Artificial intelligence” is a general term that covers a number of different approaches to developing intelligent machines; it can also refer to the development of hardware (equipment) or software (programs) for an AI project. Two different philosophical approaches to the development of intelligent systems are traditional AI and computer simulations. The goal is the same for both: the development of a system capable of performing a particular task that, if done by a human, would be considered intelligent.


The goal of traditional AI (sometimes called pure AI) is to develop systems that accomplish various tasks intelligently and efficiently. This approach makes no claims or assumptions about the manner in which humans process and perform a task, nor does it try to model human cognitive processes. A traditional AI project is unrestricted by the limitations of human information processing. One example of a traditional AI program is the earlier versions of Deep Blue, the chess program of International Business Machines (IBM). The ability of this program to successfully “play” chess depended on its ability to compute a large number of possible board positions arising from the current position and then select the best move. This computational approach, while effective, lacked strategy and the ability to learn from previous games. A modified version of Deep Blue eventually won a 1997 match against Garry Kasparov, the reigning world chess champion at that time. In addition to the traditional AI approach, this version incorporated strategic advice from Joel Benjamin, a former US chess champion.
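
The brute-force search described above can be sketched as a minimax procedure that scores every line of play to a fixed depth and picks the best move. The Python sketch below is a generic illustration, not Deep Blue's actual algorithm; the evaluate, moves, and apply_move functions are hypothetical hooks supplied by the caller.

def minimax(state, depth, maximizing, evaluate, moves, apply_move):
    """Exhaustively score a game tree down to a fixed depth."""
    if depth == 0 or not moves(state):
        return evaluate(state)  # static evaluation of the position
    scores = [minimax(apply_move(state, m), depth - 1, not maximizing,
                      evaluate, moves, apply_move)
              for m in moves(state)]
    return max(scores) if maximizing else min(scores)

def best_move(state, depth, evaluate, moves, apply_move):
    """Pick the move whose subtree receives the highest minimax score."""
    return max(moves(state),
               key=lambda m: minimax(apply_move(state, m), depth - 1, False,
                                     evaluate, moves, apply_move))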


The goal of computer simulations is to develop programs that take into consideration the constraints of how humans perform various cognitive tasks and incorporate these constraints into the program (for example, the amount of information that humans can think about at any given time is limited). This approach can take into account how human information processing is shaped by the mechanisms for processing, storing, and retrieving information. Computer simulations vary in scope, from models of a single cognitive process to models of the mind as a whole.




Theoretical Issues

A number of important theoretical issues influence the assumptions made in developing intelligent systems. Stan Franklin, in his book Artificial Minds (1995), presents these issues in what he labels the three debates for AI: Can computing machines be intelligent? Does the connectionist approach offer something that the symbolic approach does not? Are internal representations necessary?



Thinking Machines

The issue of whether computing machines can be intelligent is typically framed as “Can computers think in the sense that humans do?” There are two positions regarding this question: weak AI and strong AI. Weak AI holds that the utility of artificial intelligence lies in exploring human cognition through the development of computer models, which help test the feasibility and completeness of a theory from a computational standpoint. Weak AI is considered by many experts in the field to be a viable approach. Strong AI takes the stance that it is possible to develop a machine that can manipulate symbols to accomplish many of the tasks that humans can accomplish, and some would ascribe thought or intelligence to such a machine because of its capacity for symbol manipulation. Alan Turing proposed a test, the imitation game, later called the Turing test, as a possible criterion for determining whether strong AI has been accomplished. Strong AI also has opponents, who argue that it is not possible for a program to be intelligent or to think. John Searle, a philosopher, presents an argument against the possibility of strong AI.


The imitation game, as Turing described it, is a parlor game involving three people: an examiner, a man, and a woman. The examiner can ask the man or woman questions on any topic. Responses from the man and woman are written rather than spoken, so they offer no giveaway cues. The man’s task is to convince the examiner that he is the woman; the woman’s task is to convince the examiner that she is the woman. Turing then proposed replacing either the man or the woman with a computer. The examiner’s task becomes deciding which respondent is the human and which is the computer. This version of the imitation game is called the Turing test. The program (computer) passes the test if the examiner cannot determine which responses come from the computer and which come from the human, making the test a potential criterion for deciding whether a program is intelligent. Philosopher Daniel Dennett, in his book Brainchildren: Essays on Designing Minds (1998), discusses the appropriateness and power of the Turing test. The Loebner Prize competition, an annual contest, uses a modified version of the Turing test to evaluate real AI programs.
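
The test's structure can be sketched as a simple protocol. The Python below is an illustrative toy, not a real competition harness; the ask, respond, and identify_machine methods are hypothetical interfaces assumed for the example.

import random

def turing_test(examiner, human, machine, n_questions=5):
    """Run one toy trial of the Turing test protocol."""
    # Hide the participants behind anonymous labels A and B.
    participants = {"A": human, "B": machine}
    if random.random() < 0.5:
        participants = {"A": machine, "B": human}
    transcript = []
    for _ in range(n_questions):
        question = examiner.ask(transcript)
        # Both participants answer in writing; no voices are heard.
        answers = {label: p.respond(question)
                   for label, p in participants.items()}
        transcript.append((question, answers))
    guess = examiner.identify_machine(transcript)  # "A" or "B"
    machine_label = "A" if participants["A"] is machine else "B"
    # The machine "passes" this trial if the examiner guesses wrong.
    return guess != machine_label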


Searle, for his part, proposed a thought experiment he called the Chinese room, which provides an argument against the notion that computers can be intelligent. Searle imagines a room into which information can be passed and from which responses are returned. The information coming into the room is in Chinese. Inside the room is a person who does not understand Chinese but who has access to a set of instructions for changing one symbol into another. Searle argues that this person, while fully capable of manipulating the various symbols, has no understanding of the questions or responses. The person lacks true understanding even though, over time, the person may become proficient at the task. The end result looks intelligent even though the symbols carry no meaning for the person manipulating them. Searle then argues that the same is true for computers: a computer will not be capable of intelligence, since the symbols carry no meaning for it, and yet its output will look intelligent.
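
The rule-following Searle describes can be caricatured as pure table lookup. The mapping below is invented for illustration; the point is that the program produces plausible-looking replies with no access to their meaning.

# Purely formal rules: match an input string, emit an output string.
RULES = {
    "你好吗": "我很好",          # "How are you?" -> "I am fine"
    "你叫什么名字": "我叫小明",  # "What is your name?" -> "My name is Xiaoming"
}

def chinese_room(input_symbols):
    """Answer by symbol matching alone, with no understanding."""
    return RULES.get(input_symbols, "请再说一遍")  # "Please say that again"

print(chinese_room("你好吗"))  # looks intelligent; understands nothing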




Connectionism Versus Symbolism

The second debate deals with approaches to the cognitive architecture, the built-in constraints that specify the capabilities, components, and structures involved in cognition. The classic approach, or symbol system hypothesis, and the connectionist approach are two different cognitive architectures. A cognitive architecture can be thought of as analogous to the hardware of a computer: it can run a number of different programs, but by its nature it places constraints on how things are done. The question here is, does the contribution of connectionism differ from that of traditional AI?


The physical symbol system hypothesis describes a class of systems that use symbols, or internal representations (mental events), to stand for items or events in the environment. These internal representations can be manipulated, used in computations, and transformed. Traditionally, this approach involves serial processing of symbols, implementing one command at a time. Two examples of this approach are John R. Anderson’s adaptive control of thought (ACT) model (1983) and Allen Newell’s Soar (1989). Both are architectures of cognition whose goal is to account for all of cognition.
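
Serial symbol manipulation of this kind is often implemented as a production system: condition-action rules fire one at a time over a working memory of symbols. The sketch below is a generic illustration; the rules and memory contents are invented, not taken from ACT or Soar.

# Working memory holds symbols; productions transform them one per cycle.
working_memory = {"goal: add(3, 4)"}

productions = [
    # (condition, action): if the condition matches working memory,
    # the action replaces the goal symbol with a result symbol.
    (lambda wm: "goal: add(3, 4)" in wm,
     lambda wm: (wm - {"goal: add(3, 4)"}) | {"result: 7"}),
]

def cycle(wm, rules):
    """Fire the first matching production -- one serial step."""
    for condition, action in rules:
        if condition(wm):
            return action(wm)
    return wm  # no rule matched; the system halts

print(cycle(working_memory, productions))  # {'result: 7'}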


The connectionist architecture is a class of systems that differs from the symbolic in being modeled loosely on the brain and involving parallel processing, the ability to carry out a number of processes simultaneously. Other terms used for this approach include parallel distributed processing (PDP), artificial neural networks (ANN), and the subsymbolic approach. A connectionist system consists of a network of nodes, typically organized into levels, that loosely resemble neurons in the brain. These nodes have connections with other nodes, and like neurons, a node can have an excitatory or inhibitory effect on the nodes it connects to, determined by the strength of the connection (commonly called the weight). Information resides in these connections rather than at the nodes, so the information is distributed across the network. Learning takes place during a training phase in which the weights are adjusted. One advantage the connectionist approach has over the symbolic approach is graceful degradation: because information is distributed across the network, the system can still retrieve (partial) information even when part of the system fails. This tends to be a problem for symbolic systems.
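
A minimal sketch of such a network appears below: two layers of nodes joined by weighted connections, with each output node summing its weighted inputs. The weights and inputs are invented; a real network would learn its weights during training rather than having them written by hand.

import math

def sigmoid(x):
    """Squash a node's summed input into an activation between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-x))

# Weights from 3 input nodes to 2 output nodes. Positive weights are
# excitatory, negative weights inhibitory; the network's "knowledge"
# lives in these numbers, distributed across all the connections.
weights = [[0.8, -0.3, 0.5],
           [-0.6, 0.9, 0.2]]

def forward(inputs):
    """Each output node sums its weighted inputs (conceptually in parallel)."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

print(forward([1.0, 0.0, 1.0]))  # activations of the two output nodes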




Internal Representation

Rodney Brooks, working at the Massachusetts Institute of Technology (MIT), proposed in 1986 an alternative to traditional AI’s reliance on a central intelligence responsible for cognition. Brooks’s approach, the subsumption architecture, relies on the interaction between perception and actuation systems as the basis for intelligence. The subsumption architecture starts with a level of basic behaviors (modules) and builds on this level with additional levels. Each new level can subsume the functions of lower levels and suppress the output of those modules. If a higher level is unable to respond or is delayed, then a lower level, which continues to function, can produce a result. The resulting action may not always be the most “intelligent,” but the system is always capable of doing something. For Brooks, intelligent behavior emerges from the combination of these simple behaviors; intelligence (or cognition) is in the eye of the beholder. Cog, one of Brooks’s robot projects, is based on the subsumption architecture. Cog’s movements and processing of visual information are not preprogrammed into the system; experience with the environment plays an important role. Kismet, another MIT project, is designed to show various emotional states in response to social interaction with others.
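
The layering can be sketched as a priority stack of simple behaviors, each of which either produces a command or defers to the level below. The sensors and behaviors here are invented for illustration, not taken from Cog or Kismet.

def avoid_obstacle(sensors):
    """Level 0: a basic competence that always has something to do."""
    return "turn-away" if sensors.get("obstacle") else "go-forward"

def seek_light(sensors):
    """Level 1: responds only when it detects its goal, else defers."""
    return "head-to-light" if sensors.get("light") else None

LAYERS = [seek_light, avoid_obstacle]  # highest level first

def act(sensors):
    """A higher level that produces output subsumes the levels below."""
    for behavior in LAYERS:
        command = behavior(sensors)
        if command is not None:
            return command
    return "idle"

print(act({"obstacle": True}))  # lower level still functions: 'turn-away'
print(act({"light": True}))     # higher level takes over: 'head-to-light'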





Approaches to Modeling Intelligence

Intelligent tutoring systems (ITSs) are systems in which individual instruction can be tailored to the needs of a particular student. This is different from computer-aided instruction (CAI), in which everyone receives the same lessons. Key components typical of ITSs are the expert knowledge base (or teacher), the student model, instructional goals, and the interface. The student model contains the knowledge that the student has mastered as well as the areas in which he or she may have conceptual errors. Instruction can then be tailored to help elucidate the concepts with which the student is having difficulty.
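
One way to picture the student model's role is as a record of mastered and unmastered skills that drives lesson selection. The sketch below is an invented illustration of that idea, not a model of any particular ITS.

# A toy student model: which skills are mastered, plus any known misconception.
student_model = {
    "fractions": {"mastered": True},
    "decimals": {"mastered": False, "misconception": "place value"},
}

lessons = {"decimals": "Review place value before decimal arithmetic."}

def next_lesson(model, lessons):
    """Tailor instruction to the first skill the student has not mastered."""
    for skill, state in model.items():
        if not state["mastered"]:
            return lessons.get(skill, "Practice problems on " + skill + ".")
    return "All skills mastered; advance to the next unit."

print(next_lesson(student_model, lessons))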


An expert system attempts to capture an individual’s expertise so that the program can perform like an expert in that particular area. An expert system consists of two components: a knowledge base and an inference engine. The inference engine is the program of the expert system; it operates over the knowledge base, which “captures the knowledge” of an expert. Developing the knowledge base is often the most time-consuming part of the project. Typically, the expert’s knowledge is represented as if-then statements (also called condition-action rules): if a particular condition is met, the action part of the statement is executed. Testing the system often leads to repeating the knowledge-acquisition phase and modifying the condition-action rules. An example of an expert system is MYCIN, which diagnoses bacterial infections based on lab results. MYCIN’s performance was compared with that of physicians and interns, and it proved comparable to that of a physician.
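
The condition-action machinery can be sketched as a small forward-chaining loop: rules fire whenever their conditions are present among the known facts, adding new facts until nothing more can be concluded. The medical rules below are invented for illustration and are not MYCIN's actual knowledge base.

rules = [
    # (set of conditions that must all hold, fact to conclude)
    ({"gram_negative", "rod_shaped"}, "likely_enterobacteriaceae"),
    ({"likely_enterobacteriaceae", "urinary_infection"}, "suspect_e_coli"),
]

def infer(facts, rules):
    """Fire rules whose conditions are met until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # execute the 'then' part
                changed = True
    return facts

print(infer({"gram_negative", "rod_shaped", "urinary_infection"}, rules))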


Case-based reasoning systems use previous cases to analyze a new case. This type of reasoning resembles legal reasoning, in which a current situation is interpreted in light of precedent cases. Case-based reasoning is designed around the so-called four R’s: retrieve cases relevant to the case at hand, reuse a previous case where applicable, revise the strategy if no previous case is appropriate, and retain the new solution, making the case available for future use.
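
The four R's can be sketched as a loop over a small case library. The troubleshooting cases and similarity measure below are invented for illustration.

# A toy case library: past problems and the solutions that worked.
case_library = [
    {"problem": {"symptom": "no_boot", "beeps": 3}, "solution": "reseat RAM"},
    {"problem": {"symptom": "no_boot", "beeps": 0}, "solution": "check power"},
]

def similarity(a, b):
    """Count matching features between two problem descriptions."""
    return sum(1 for k in a if a.get(k) == b.get(k))

def solve(new_problem):
    # Retrieve: find the most similar previous case.
    best = max(case_library, key=lambda c: similarity(c["problem"], new_problem))
    # Reuse the old solution (a fuller system would Revise it if it failed).
    solution = best["solution"]
    # Retain: store the new case so it can be retrieved in the future.
    case_library.append({"problem": new_problem, "solution": solution})
    return solution

print(solve({"symptom": "no_boot", "beeps": 3}))  # 'reseat RAM'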


Other approaches to modeling intelligence have included trying to model the intelligence of animals. Artificial life (Alife) is an approach that involves developing computer simulations of the features necessary for intelligent behavior. The animats approach constructs robots based on animal models. The idea in both cases is to implement intelligence on a smaller scale rather than trying to model all of human intelligence. This approach may be invaluable for developing systems with the capabilities that humans share with animals.




Bibliography


Bechtel, William, and George Graham, eds. A Companion to Cognitive Science. Malden: Blackwell, 1998. Print.



Clark, Andy, and Josefa Toribio, eds. Cognitive Architectures in Artificial Intelligence. New York: Garland, 1998. Print.



Cristianini, Nello. "On the Current Paradigm in Artificial Intelligence." AI Communications 27.1 (2014): 37–43. Print.



Dennett, Daniel C. Brainchildren: Essays on Designing Minds. Cambridge: MIT P, 1998. Print.



Franklin, Stan. Artificial Minds. Cambridge: MIT P, 1995. Print.



Gardner, Howard. The Mind’s New Science: A History of the Cognitive Revolution. New York: Basic, 1998. Print.



Johnston, John. The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI. Cambridge: MIT P, 2008. Print.



Muggleton, Stephen. "Alan Turing and the Development of Artificial Intelligence." AI Communications 27.1 (2014): 3–10. Print.



Vardi, Moshe Y. "Artificial Intelligence: Past and Future." Communications of the ACM 55.1 (2012): 5. Print.



Von Foerster, Heinz. Understanding Understanding: Essays on Cybernetics and Cognition. New York: Springer, 2003. Print.
