Sunday 25 December 2016

What are computer models of cognition?


Introduction

Human cognition depends on the operation of the neural anatomy that forms the nervous system. Essentially, the brain is composed of some 100 billion neurons. Roger Penrose has divided the brain into three areas: primary, secondary, and tertiary. Each of these three areas has a sensory and motor component. The primary areas are the visual, olfactory, somatosensory, and motor areas. These areas handle the input and output functions of the brain. The secondary areas lie near the primary regions and process the input received by the primary areas. Plans of action are developed in the secondary areas, and these actions are translated into movements of the muscular system by the primary cortex. The tertiary area makes up the rest of the brain. The most complex, abstract, sophisticated, and subtle activity of the brain occurs here. Information from the different sensory areas is received, collected, integrated, and analyzed. As Penrose says, “memories are laid down, pictures of the outside world are constructed, general plans are conceived and executed, and speech is understood or formulated.” Thus, information or stimulation from the environment is received or input at the primary sensory areas. This information is then processed in increasingly complex and sophisticated ways in the secondary and tertiary sensory areas. The processed sensory information is sent to the tertiary motor area in the form of a grand plan of action, and it is then refined into plans for specific actions at the secondary and primary motor regions.








Models of Information Processing

The question for psychologists is how to represent or model this complex activity in the three regions of the brain, the activity that forms the basis of human thought and action. The theory of information processing contends that human cognition can be successfully modeled by viewing the operation of the brain as analogous to the operation of a computer. Penrose observed that the brain presents itself as “a superb computing device.” More specifically, Robert J. Baron stated:
The fundamental assumption is that the brain is a computer. It is comprised of some 100 billion computational cells called neurons, which interact in a variety of ways. Neurons are organized into well defined and highly structured computational networks called neural networks. Neural networks are the principal computational systems of the brain.


A field known as neurocomputing, or computational neuroscience, holds great promise for providing such a computer-based model. The particular kind of computer to be used is a neurocomputer, which is modeled on the actual structure, or architecture, of the brain. The unit of the neurocomputer is the processing element, or neurode, which corresponds to a biological neuron. The neurocomputer is constructed of many neurodes interconnected to form a neural network. Each neurode can receive a number of inputs, either externally or from other neurodes, and each input is given a weight or strength. The weighted inputs are summed, and a single output results. This output can then act as an input to other neurodes to which it is connected. If the output is excitatory, it encourages the interconnected neurodes to fire; if it is inhibitory, it discourages them from firing. The neurocomputer processes all the inputs and outputs in parallel (that is, all of the neurodes can potentially operate simultaneously). The software that runs the neurocomputer is called netware. The netware specifies the interconnections between neurodes, how the network adjusts in response to the input it receives (the training law), and how input and output are related (the transfer function). Neurocomputers are drastically different from other kinds of computers because their architecture and operation are modeled on the human brain. As a result, neurocomputers can perform humanlike functions, such as being taught to learn new behaviors. Maureen Caudill refers to these computers as being “naturally intelligent,” as opposed to the serial computer used with “artificial intelligence.”
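The weighted-sum behavior of a single neurode can be illustrated with a short sketch in Python. This is only a minimal illustration of the idea, not the netware of any actual neurocomputer; the names (Neurode, step_transfer) and the simple threshold transfer function are assumptions chosen for clarity.

```python
# Minimal sketch of a neurode: weighted inputs are summed and passed
# through a transfer function to produce a single output. The step
# transfer function and all names here are illustrative assumptions.

def step_transfer(net_input, threshold=0.0):
    """Fire (output 1.0) when the summed input exceeds the threshold."""
    return 1.0 if net_input > threshold else 0.0

class Neurode:
    def __init__(self, weights):
        # Positive weights model excitatory connections;
        # negative weights model inhibitory connections.
        self.weights = weights

    def output(self, inputs):
        # Sum each input multiplied by the weight of its connection.
        net_input = sum(w * x for w, x in zip(self.weights, inputs))
        return step_transfer(net_input)

# Two neurodes receiving the same inputs "in parallel"; each output
# could in turn serve as an input to other neurodes in the network.
layer = [Neurode([0.8, -0.4, 0.3]), Neurode([-0.2, 0.9, 0.5])]
inputs = [1.0, 0.0, 1.0]
print([neurode.output(inputs) for neurode in layer])  # [1.0, 1.0]
```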


Because neurocomputers are constructed as analogues of the human nervous system, they are particularly well suited to solving the kinds of problems that the human brain solves well. Conventional computers have great difficulty with such problems because they are built to perform certain kinds of tasks very quickly and efficiently (for example, processing large quantities of numbers very rapidly), tasks that the human brain cannot do nearly as well.




Uses of Artificial Intelligence

In Naturally Intelligent Systems (1990), Caudill and Charles Butler discuss two applications of neurocomputers and neural networks, one in medicine and the other in finance.


A machine called the vectorcardiograph was found, in tests, to be able to detect heart problems better than cardiologists could. The usual electrocardiograph records signals received from up to twelve leads placed on different parts of the body, and each recording is made separately, in a particular sequential order. In contrast, the vectorcardiograph records signals along only three axes (front-back, head-foot, right-left), and it records all three sources of data simultaneously. This parallel processing of the information makes the vectorcardiograph well suited to neural networks.


Essentially, the vectorcardiograph was trained in three stages to differentiate between normal and abnormal electrocardiograms, much as a human is trained to discriminate between two stimuli. In the first stage, the system was trained to recognize all the normal cases presented to it and a portion of the abnormal cases. The input weights were then set at the appropriate values and training continued. In the second stage, the neural network was again trained to recognize all normal cases and a portion of the remaining abnormal cases, and the input weights were again set at their appropriate values. In the third stage, training continued until the system could recognize the remaining abnormal cases. The training set consisted of vectorcardiograms from 107 people, half of whom were judged to be normal and half abnormal. When the system was then presented with sixty-three new cases it had never seen, it correctly diagnosed 97 percent of the normal and 90 percent of the abnormal cases; trained clinicians identified, respectively, 95 percent and 53 percent of those cases. The diagnostic performance of the vectorcardiograph demonstrates the potential of neural networks.
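The staged training regimen described above can be outlined roughly in code. The sketch below is a guess at the general shape of the procedure, not Caudill and Butler's actual system: it uses a simple perceptron-style weight update as a stand-in for the unspecified training law, and it assumes the "portions" of abnormal cases are cumulative thirds.

```python
# Rough outline of staged training: every stage presents all the normal
# cases plus a growing share of the abnormal cases, and the weights
# reached at the end of one stage become the starting weights of the
# next. The perceptron-style update is an assumed stand-in for the
# system's actual (unspecified) training law.

def train_stage(weights, cases, learning_rate=0.1, epochs=50):
    """cases: list of (features, label) pairs, label 0 = normal, 1 = abnormal."""
    for _ in range(epochs):
        for features, label in cases:
            net = sum(w * x for w, x in zip(weights, features))
            prediction = 1 if net > 0 else 0
            error = label - prediction
            weights = [w + learning_rate * error * x
                       for w, x in zip(weights, features)]
    return weights

def staged_training(normal_cases, abnormal_cases, n_features):
    weights = [0.0] * n_features
    n_abnormal = len(abnormal_cases)
    # Stage cutoffs: one third, two thirds, then all abnormal cases
    # (the exact portions used in the original study are not given).
    for cutoff in (n_abnormal // 3, 2 * n_abnormal // 3, n_abnormal):
        weights = train_stage(weights, normal_cases + abnormal_cases[:cutoff])
    return weights

# Toy usage with two-feature data:
normal = [([0.2, 0.1], 0), ([0.1, 0.3], 0)]
abnormal = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.7, 0.95], 1)]
final_weights = staged_training(normal, abnormal, n_features=2)
```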


A neural network known as the Multiple Neural Network Learning System (MNNLS) can be trained to decide whether to accept or reject a mortgage application. The system uses twenty-five areas of information that are divided into four categories: cultural (credit rating, number of children, employment history); financial (income, debts); mortgage (amount, interest rate, duration); and property (age, appraised value, type).
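For illustration, a single application could be represented as a record grouped by those four categories. The field names below are hypothetical examples, not the actual twenty-five inputs used by the MNNLS.

```python
# Hypothetical grouping of a mortgage application into the four input
# categories; the specific fields are illustrative, not the system's.
application = {
    "cultural": {"credit_rating": "A", "number_of_children": 2,
                 "years_employed": 7},
    "financial": {"annual_income": 62000, "total_debts": 14000},
    "mortgage": {"amount": 150000, "interest_rate": 0.045,
                 "duration_years": 30},
    "property": {"age_years": 12, "appraised_value": 175000,
                 "type": "single-family"},
}
```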


The MNNLS is a system of nine separate neural networks divided into three layers, with three networks in each layer. Each layer is analogous to a panel of three experts. One expert in each layer concerns itself only with financial information, the second only with cultural and mortgage information, and the third with all four categories. When presented with a mortgage application, the first layer attempts to arrive at a decision. If the three “experts” all agree, the mortgage is accepted or rejected; if any expert disagrees with another, the application goes to the second layer of experts, and the same process is repeated. The MNNLS is both efficient and accurate. It is efficient because its separate networks, corresponding to different experts, allow it to process a wide variety of problems: the first layer handles simple decisions, whereas the second and third layers handle increasingly difficult ones. The MNNLS agreed with decisions made by humans about 82 percent of the time, and in the cases where it disagreed, the MNNLS was in fact nearly always correct. This accuracy arises because the MNNLS insists on consensus among a panel of experts (that is, among separate neural networks). It would be economically infeasible to have a panel of humans evaluate every mortgage; a single person evaluating applications, however, is more likely to make a mistake than a panel of evaluators.
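The layered consensus procedure itself can be sketched as a short control-flow example. The "experts" below are placeholder rules rather than trained neural networks, and all names and thresholds are assumptions; only the escalation logic reflects the description above.

```python
# Sketch of the MNNLS control flow: three layers of three "experts" each.
# A layer's verdict stands only when its experts are unanimous; otherwise
# the application escalates to the next layer. The experts here are
# placeholder rules, not trained neural networks.

def unanimous(decisions):
    """Return the shared decision if all experts agree, else None."""
    return decisions[0] if len(set(decisions)) == 1 else None

def evaluate(application, layers):
    for experts in layers:
        verdict = unanimous([expert(application) for expert in experts])
        if verdict is not None:
            return verdict
    # If even the final layer cannot reach consensus, defer to a person.
    return "refer to human underwriter"

# Placeholder experts, each attending to a different slice of the data
# (thresholds are invented for the example).
def financial_expert(app):
    return "accept" if app["annual_income"] > 3 * app["total_debts"] else "reject"

def cultural_mortgage_expert(app):
    return "accept" if app["credit_rating"] == "A" else "reject"

def all_categories_expert(app):
    return "accept" if app["appraised_value"] >= app["amount"] else "reject"

# In the real system each layer holds different trained networks;
# repeating the same placeholders keeps this sketch short.
layers = [[financial_expert, cultural_mortgage_expert, all_categories_expert]] * 3

application = {"annual_income": 62000, "total_debts": 14000,
               "credit_rating": "A", "amount": 150000,
               "appraised_value": 175000}
print(evaluate(application, layers))  # "accept" when the first layer agrees
```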




Metaphors of Modeling

Stephen J. Hanson and David J. Burr astutely observed that “the computer metaphor has had a profound and lasting effect on psychological modeling.” The influence of the computer can be seen especially in its use in artificial intelligence and in computer metaphors of learning and memory, in which information is processed, encoded, stored, and retrieved from three distinct memory stores (sensory, short-term, and long-term memory). The particular computer that has been used as the metaphor of the human mind and cognition has been the digital or serial computer.


It eventually became apparent to cognitive scientists, however, that the digital computer is a poor analogue for the human mind, because it operates in a decidedly nonhuman way. The digital computer operates much too fast, far faster than the human mind processes information, and it handles far more data than the human mind can. If its software is sound, the digital computer runs without error, whereas human problem solving is characterized by mistakes. The digital computer is not capable of autonomous learning: it does only what its program tells it to do and cannot teach itself new things, as a human can. It is very poor at pattern-recognition tasks, such as identifying a human face, something an infant can do very rapidly. Finally, the digital computer provides no information about the underlying structure (the nervous system) that makes human cognition and information processing possible.


A number of cognitive scientists have argued that the fields of artificial intelligence and traditional cognitive science have reached dead ends because of their reliance on the digital computer analogy of the mind, which is limited and largely inaccurate. Cognitive science and neurophysiology are now striking out in a promising new direction by using neural networks and neurocomputers as the analogue of the human mind. The human mind is closely related to the human brain; many would argue that the mind is equivalent to the brain. Therefore, to study the mind and cognition, the researcher must build a computer that is modeled on the architecture of the brain. The neurocomputer is modeled on the human brain, and the digital computer is not.


Unlike digital computers, neurocomputers operate in a manner consistent with the operation of the human nervous system and human cognition. They offer a promising way to understand cognition, as well as a productive link with neurophysiology.




Bibliography


Addyman, Caspar, and Robert M. French. "Computational Modeling in Cognitive Science: A Manifesto for Change." Topics in Cognitive Science 4.3 (2012): 332–41. Print.



Allman, William F. Apprentices of Wonder: Inside the Neural Network Revolution. New York: Bantam, 1990. Print.



Caudill, Maureen, and Charles Butler. Naturally Intelligent Systems. 3d ed. Cambridge, Mass.: MIT P, 1993. Print.



Coward, L. Andrew. A System Architecture Approach to the Brain: From Neurons to Consciousness. New York: Nova Biomedical, 2005. Print.



Frankish, Keith, and William M. Ramsey. The Cambridge Handbook of Cognitive Science. Cambridge: Cambridge UP, 2012. Print.



Friedenberg, Jay, and Gordon Silverman. Cognitive Science: An Introduction to the Study of Mind. Thousand Oaks: Sage, 2006. Print.



Gill, Satinder, ed. Cognition, Communication and Interaction: Transdisciplinary Perspectives on Interactive Technology. New York: Springer, 2007. Print.



Hanson, Stephen J., and David J. Burr. “What Connectionist Models Learn: Learning and Representation in Connectionist Networks.” Behavioral and Brain Sciences 13.3 (1990): 471–518. Print.



Harnish, Robert. Minds, Brains, Computers: An Historical Introduction to Cognitive Science. New York: Blackwell, 2002. Print.



Marsa, Linda. "Computer Model Mimics Infant Cognition." Discover Jan./Feb. 2012: 53. Print.



Penrose, Roger. The Emperor’s New Mind: Concerning Computers, Minds, and the Laws of Physics. New York: Penguin, 1991. Print.



Stillings, Neil A., et al. Cognitive Science: An Introduction. Cambridge: MIT P, 1998. Print.
