Tuesday, 4 November 2014

What is scurvy?


Causes and Symptoms


Scurvy is a disease characterized by hemorrhages in body tissue, muscular pain, tender gums, physical exhaustion, and vision disorders, especially night blindness. In advanced cases, teeth fall out, and complications with kidney or intestinal functions may lead to death. The disease was at one time common among sailors on long ocean voyages, whose diets lacked the fruits and vegetables that contain vitamin C.

Also, the populations of cities under siege and prisoners with very restricted diets often suffered from scurvy. During the American Civil War in the 1860s, scurvy was reported as a problem among the troops.





Treatment and Therapy

A causal connection between scurvy and diet had long been suspected, but the particular missing nutrient was not identified until the work of the Scottish physician James Lind in the 1750s. Lind experimented on six pairs of patients who had scurvy symptoms, giving each pair one of six different dietary supplements: vinegar, seawater, dilute sulfuric acid, apple cider, a mixture of garlic and mustard seed, or two oranges and a lemon. He found that the men who ate the citrus fruit improved rapidly, the ones who drank the cider recovered slowly, and the others showed no improvement. The British navy adopted a requirement for lemon juice aboard its ships in 1795, which virtually eliminated scurvy. Lemons were subsequently replaced by limes, which led to the nickname "Limeys" for British sailors.


The essential nutrient in citrus fruits, now known as vitamin C, was first identified in 1932 by C. G. King and W. A. Waugh at the University of Pittsburgh. Its scientific name is ascorbic acid, which means "without scurvy." Synthetic vitamin C is identical to the naturally occurring variety, in both its composition and its physiological effect. Vitamin C is essential for the formation and repair of collagen, a protein that is a primary component of connective tissue, including the walls of blood vessels. It is also necessary for the synthesis of hormones that control the rate of metabolism in the body.




Perspective and Prospects

During the nineteenth century, medical research by the so-called microbe hunters firmly established that bacteria cause numerous diseases transmitted from person to person. Some illnesses, however, were shown to be unrelated to bacteria and attributable instead to dietary deficiencies. Among these disorders are beriberi, rickets, anemia, and scurvy. They have been almost totally eradicated as people have learned that a healthy diet must include fruits, vegetables, and whole-grain foods, supplemented with vitamins when diet alone is insufficient.




Bibliography



Editors of Consumer Guide. Complete Book of Vitamins and Minerals. Lincolnwood, Ill.: Publications International, 1996.



Kasper, Dennis L., et al., eds. Harrison’s Principles of Internal Medicine. 18th ed. New York: McGraw-Hill, 2012.



Kohnle, Diana, and Daus Mahnke. "Scurvy." Health Library, Oct. 31, 2012.



Johnson, Larry E. "Vitamin C." Merck Manual Home Health Handbook, Feb. 2013.



“Medicine and Surgery, History: Nutrition.” In The New Encyclopaedia Britannica. 15th ed. Chicago: Encyclopædia Britannica, 2002.



Vorvick, Linda J., et al. "Scurvy." MedlinePlus, Jan. 22, 2013.

Monday, 3 November 2014

What is animal experimentation in the field of psychology?


Introduction

Before the general acceptance of Charles Darwin’s theory of evolution in the late nineteenth century, in much of the Western world, animals were considered to be soulless machines with no thoughts or emotions. Humans, on the other hand, were assumed to be qualitatively different from other animals because of their abilities to speak, reason, and exercise free will. Therefore, it was thought that nothing could be learned about the mind by studying animals.







After Darwin, however, people began to recognize that although each species is unique, the chain of life is continuous, and species have similarities as well as differences. Because animal brains and human brains are made of the same kinds of cells and have similar structures and connections, it was reasoned, the mental processes of animals must be similar to the mental processes of humans. This new insight led to the introduction of animals as psychological research subjects around 1900. Since then, animal experimentation has yielded much new knowledge about the brain and the mind, especially in the fields of learning, memory, motivation, and sensation.


Psychologists who study animals can be roughly categorized into three groups: biopsychologists (psychobiologists), learning theorists, and ethologists and sociobiologists. Biopsychologists, or physiological psychologists, study the genetic, neural, and hormonal controls of behavior, for example, eating behavior, sleep, sexual behavior, perception, emotion, memory, and the effects of drugs. Learning theorists study the learned and environmental controls of behavior, for example, stress, stimulus-response patterns, motivation, and the effects of reward and punishment. Ethologists and sociobiologists concentrate on animal behavior in nature, for example, predator-prey interactions, mating and parenting, migration, communication, aggression, and territoriality.




Reasons for Using Animal Subjects

Psychologists study animals for a variety of reasons. Sometimes they study the behavior of a particular animal to solve a specific problem. They may study dogs, for example, to learn how best to train them as police dogs; chickens to learn how to prevent them from fighting one another in coops; and wildlife to learn how to regulate populations in parks, refuges, or urban areas. These are all examples of what is called applied research.


Most psychologists, though, are more interested in human behavior but study animals for practical reasons. A developmental psychologist, for example, may study an animal with a much shorter life span than humans so that each study takes far less time and more studies can be done. Animals may also be studied when an experiment requires strict controls; researchers can control the food, housing, and even social environment of laboratory animals but cannot control such variables in the lives of human subjects. Experimenters can even control the genetics of animals by breeding them in the laboratory; rats and mice have been bred for so many generations that researchers can order animals from hundreds of specialized strains and can even obtain animals that are essentially genetically identical to one another.


Another reason psychologists sometimes study animals is that there are fewer ethical considerations than in research with human subjects. Physiological psychologists and neuropsychologists, in particular, may use invasive procedures (such as brain surgery, hormone manipulation, or drug administration) that would be unethical to perform on humans. Without animal experimentation, much of this research simply could not be conducted. Comparable research on human victims of accident or disease would have less scientific validity and would raise additional ethical concerns.


A number of factors make animal research applicable to the study of human psychology. The first factor is homology. Animals that are closely related to humans are likely to have similar physiology and behavior because they share much of the same genetic blueprint. Monkeys and chimpanzees, the animals most closely related to humans, are thus the most homologous and make the best subjects for psychological studies of complex behaviors and emotions. However, they are expensive and difficult to keep, and there are serious ethical considerations in using them, so they are not used when another animal would be equally suitable.


The second factor is analogy. Animals that have a lifestyle similar to that of humans are likely to have some of the same behaviors. Rats, for example, are social animals, as are humans; cats are not. Rats also resemble humans in their eating behavior (one reason rats commonly live around human habitation and garbage dumps); thus, they can be a good model for studies of hunger, food preference, and obesity. Rats, however, do not have a stress response similar to that of humans; for studies of exercise and stress, the pig is a better animal to study.


The third factor is situational similarity. Some animals, particularly dogs, cats, domesticated rabbits, and some domesticated birds, adapt easily to experimental situations such as living in a cage and being handled by humans. Wild animals, even if reared by humans from infancy, may not behave normally in experimental situations. The behavior of a chimpanzee that has been kept alone in a cage, for example, may tell something about the behavior of a human kept in solitary confinement, but it will not necessarily be relevant to understanding the behavior of most people in typical situations.


By far the most common laboratory animal used in psychology is Rattus norvegicus, the Norway rat. Originally, the choice of the rat was something of a historical accident. Because the rat has been studied so thoroughly, it is often the animal of choice so that comparisons can be made from study to study. Fortunately, the rat shares many features analogous with humans. Other animals frequently used in psychological research include pigeons, mice, hamsters, gerbils, cats, monkeys, and chimpanzees.




Scientific Value

One of the most important topics for which psychologists use animal experimentation is the study of interactive effects of genes and the environment on the development of the brain and subsequent behavior. These studies can be done only if animals are used as subjects, because they require subjects with a relatively short lifespan that develop quickly, they may involve invasive procedures to measure cell and brain activity, or they may require the manipulation of major social and environmental variables in the life of the subject.


In the 1920s, Edward C. Tolman and Robert Tryon began a study of the inheritance of intelligence using rats. They trained rats to run a complex maze and then, over many generations, bred the fastest learners with one another and the slowest learners with one another. From the beginning, offspring of the bright rats were substantially faster than offspring of the dull rats. After only seven generations, there was no overlap between the two sets, showing that intelligence is at least partly genetic and can be bred into or out of animals just as size, coat color, or milk yield can be.
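The logic of Tolman and Tryon's selective-breeding design can be sketched as a toy simulation. Everything below is a hypothetical model with arbitrary numbers, not their data: each animal carries a heritable "maze score" passed to offspring with a little noise, and the fastest and slowest learners are bred within their own lines for seven generations.

```python
import random

random.seed(42)  # reproducible toy run

def breed(parents, litter=40, noise=0.2):
    """Each offspring's score is the midparent score plus random noise."""
    offspring = []
    for _ in range(litter):
        mom, dad = random.sample(parents, 2)
        offspring.append((mom + dad) / 2 + random.gauss(0, noise))
    return offspring

def select(population, k=10, top=True):
    """Keep the k fastest (top=True) or slowest (top=False) learners."""
    return sorted(population, reverse=top)[:k]

def mean(scores):
    return sum(scores) / len(scores)

# one founding population of hypothetical "maze-learning" scores
founders = [random.gauss(0, 1) for _ in range(100)]
bright = select(founders, top=True)
dull = select(founders, top=False)

for _ in range(7):  # seven generations of divergent selection
    bright = select(breed(bright), top=True)
    dull = select(breed(dull), top=False)

# the two lines diverge: mean(bright) ends well above mean(dull)
```

Because the simulated trait is partly heritable, selection pulls the two lines apart generation by generation, mirroring the divergence Tryon observed.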


Subsequent work with selectively bred rats, however, found that high-performing rats would outperform the slower rats only when tested on the original maze used with their parents and grandparents; if given a different task to measure their intelligence, the bright rats were in some cases no brighter than the dull rats. These studies were the first to suggest that intelligence may not be a single attribute that one either has much or little of; there may instead be many kinds of intelligence.


Over the years researchers have developed selectively bred rats as models of a variety of interesting human characteristics. Of particular value are animal models of human psychopathology. For example, genetic lines of rats have been developed that serve as models for susceptibility to depression, anxiety, alcoholism, and attention-deficit hyperactivity disorder (ADHD). These models are important not only in understanding genetic, environmental, and physiological factors associated with these disorders, but also in serving as early tests for possible drug treatments for them. Indeed, the area of behavioral pharmacology, where drug effects on behavior are studied in animal models, is an important and growing area of research.




Brain Studies

Another series of experiments that illustrate the role of animal models in the study of brain and behavior is that developed by David Hubel and Torsten Wiesel, who studied visual perception (mostly using cats). Hubel and Wiesel were able to study the activity of individual cells in the living brain. By inserting a microelectrode into a brain cell of an immobilized animal and flashing visual stimuli in the animal’s visual field, they could record when the cell responded to a stimulus and when it did not.


Over the years, scientists have used this method to map the activities of cells in several layers of the visual cortex, the part of the brain that processes visual information. They have also studied the development of cells and the cell connections, showing how early experience can have a permanent effect on the development of the visual cortex. Subsequent research has demonstrated that the environment has major effects on the development of other areas of the brain as well. The phrase “use it or lose it” has some accuracy when it comes to development and maintenance of brain connections and mental abilities.




Harlow’s Experiments

Perhaps the most famous psychological experiments on animals were those done by Harry Harlow in the 1950s.
Harlow was studying rhesus monkeys and breeding them in his own laboratory. Initially, he would separate infant monkeys from their mothers. Later, he discovered that, in spite of receiving adequate medical care and nutrition, these infants exhibited severe behavioral symptoms: They would sit in a corner and rock, mutilate themselves, and scream in fright at the approach of an experimenter, a mechanical toy, or another monkey. As adolescents, they were antisocial. As adults, they were psychologically ill-equipped to deal with social interactions: Male monkeys were sexually aggressive, and female monkeys appeared to have no emotional attachment to their own babies. Harlow decided to study this phenomenon (labeled “maternal deprivation syndrome”) because he thought it might help to explain the stunted growth, low life expectancy, and behavioral symptoms of institutionalized infants which had been documented earlier by René Spitz.


Results of the Harlow experiments profoundly changed the way psychologists think about love, parenting, and mental health. Harlow and his colleagues found that the so-called mothering instinct is not very instinctive at all but rather is learned through social interactions during infancy and adolescence. They also found that an infant’s attachment to its mother is based not on its dependency on food but rather on its need for “contact comfort.” Babies raised with both a mechanical “mother” that provided milk and a soft, cloth “mother” that gave no milk preferred the cloth mother for clinging and comfort in times of stress.


Through these experiments, psychologists came to learn how important social stimulation is, even for infants, and how profoundly the lack of such stimulation can affect mental health development. These findings played an important role in the development of staffing and activity requirements for foundling homes, foster care, day care, and institutions for the aged, physically and mentally disabled, and mentally ill. They have also influenced social policies that promote parent education and early intervention for children at risk.




Limitations and Ethical Concerns

However, there are drawbacks to using animals as experimental subjects. Most important are the clear biological and psychological differences between humans and nonhuman animals; results from a study using nonhuman animals simply may not apply to humans. In addition, animal subjects cannot communicate directly with researchers; they are unable to express their feelings, motivations, thoughts, and reasons for their behavior. If a psychologist must use an animal instead of a human subject for ethical or practical reasons, the scientist will want to choose an animal that is similar to humans in the particular behavior being studied.


For the same reasons that animals are useful in studying psychological processes, however, people have questioned the moral justification for such use. Because it is now realized that vertebrate animals can feel physical pain and that many of them have thoughts and emotions as well, animal experimentation has become politically controversial.


Psychologists generally support the use of animals in research. The American Psychological Association (APA) identifies animal research as an important contributor to psychological knowledge, and the majority of individual psychologists tend to agree. In 1996, S. Plous surveyed nearly four thousand psychologists and found that fully 80 percent either approved of or strongly approved of the use of animals in psychological research. Nearly 70 percent believed that animal research was necessary for progress in the field of psychology. However, support dropped dramatically for invasive procedures involving pain or death. A parallel survey of undergraduate psychology majors produced largely similar findings. Support was weaker among newer than among more established psychologists, and weaker among women than among men.


Some psychologists would like to see animal experimentation in psychology discontinued altogether. In 1981, psychologists formed an animal rights organization called Psychologists for the Ethical Treatment of Animals (PsyETA), which was later renamed the Society and Animals Forum. It is highly critical of the use of animals as subjects in psychological research and has strongly advocated improving the well-being of those animals that are used through publication (with the American Society for the Prevention of Cruelty to Animals) of the Journal of Applied Animal Welfare Science. The organization is also a strong advocate for the developing field of human-animal studies, in which the relationship between humans and animals is explored. Companion animals (pets) can have a significant impact on psychological and physical health, and they can be used as a therapeutic tool with, for example, elderly people in nursing homes and emotionally disturbed youth. In this field of study, animals themselves are not the subjects of the experiment; rather, it is the relationship between humans and animals that is the topic of interest.




Regulations

In response to such concerns regarding the use of animals in experiments, the US Congress amended the Animal Welfare Act in 1985 so that it would cover laboratory animals as well as pets. (Rats, mice, birds, and farm animals are specifically excluded.) Although these regulations do not state specifically what experimental procedures may or may not be performed on laboratory animals, they do set standards for humane housing, feeding, and transportation. Later amendments were added in 1991 in an effort to protect the psychological well-being of nonhuman primates.


In addition, the Animal Welfare Act requires that all research on warm-blooded animals (except those specifically excluded) be approved by a committee before it can be carried out. Each committee (known as Institutional Animal Care and Use Committees, or IACUCs) is composed of at least five members and must include an animal researcher; a veterinarian; someone with an area of expertise in a nonresearch area, such as a teacher, lawyer, or member of the clergy; and someone who is unaffiliated with the institution where the experimentation is being done and who can speak for the local community. In this way, those scientists who do animal experiments must justify the appropriateness of their use of animals as research subjects.


The APA has its own set of ethical guidelines for psychologists conducting experiments with animals. The APA guidelines are intended to be used in addition to all pertinent local, state, and federal laws, including the Animal Welfare Act. Besides being somewhat more explicit in describing experimental procedures that require special justification, the APA guidelines require psychologists to have their experiments reviewed by local IACUCs and do not explicitly exclude any animals. About 95 percent of the animals used in psychology are rodents and birds (typically rats, mice, and pigeons), which are not covered by the Animal Welfare Act. It seems likely that federal regulations will eventually change to include these animals, and according to surveys, the majority of psychologists believe that they should be included. Finally, psychologists are encouraged to improve the living environments of their animals and to consider nonanimal alternatives for their experiments whenever possible.


Alternatives to animal experimentation are becoming more widespread as technology progresses. Computer modeling and bioassays (tests using biological materials such as cell cultures) cannot replace animal experimentation in the field of psychology, however, because computers and cell cultures will never exhibit all the properties of mind that psychologists want to study. At the same time, the use of animals as psychological research subjects will never end the need for study of human subjects. Although other animals may age, mate, fight, and learn much as humans do, they will never speak, compose symphonies, or run for office. Animal experimentation will thus always have an important, though limited, role in psychological research.




Bibliography


American Psychological Association. Committee on Animal Research and Ethics. http://www.apa.org/science/animal2.html.



Cuthill, I. C. “Ethical Regulation and Animal Science: Why Animal Behavior Is Not So Special.” Animal Behaviour 72 (2007): 15–22. Print.



Fox, Michael Allen. The Case for Animal Experimentation. Berkeley: U of California P, 1986. Print.



Gross, Charles G., and H. Philip Zeigler, eds. Readings in Physiological Psychology: Motivation. New York: Harper, 1969. Print.



Miller, Neal E. “The Value of Behavioral Research on Animals.” American Psychologist 40 (April, 1985): 423–40. Print.



National Academy of Sciences and the Institute of Medicine. Committee on the Use of Animals in Research. Science, Medicine, and Animals. Washington, DC: National Academy, 1991. Print.



National Research Council. Guide for the Care and Use of Laboratory Animals. Washington, DC: National Academy, 1996. Print.



Rose, Anne C. "Animal Tales: Observations of the Emotions in American Experimental Psychology, 1890–1940." Journal of the History of the Behavioral Sciences 48.4 (2012): 301–17. Print.



Saucier, D. A., and M. E. Cain. “The Foundations of Attitudes about Animal Research.” Ethics & Behavior 16 (2006): 117–33. Print.



Society and Animals Forum (formerly PsyETA). http://www.psyeta.org.



Vicedo, Marga. "The Evolution of Harry Harlow: From the Nature to the Nurture of Love." History of Psychiatry 21.2 (2010): 190–205. Print.

What is artificial intelligence in cognitive psychology?


Introduction

Ideas proposed in cybernetics, developments in psychology in terms of studying internal mental processes, and the development of the computer were important precursors for the area of artificial intelligence (AI). Cybernetics, a term coined by Norbert Wiener in 1948, is a field of study interested in the issue of feedback for artificial and natural systems. The main idea is that a system could modify its behavior based on feedback generated by the system or from the environment. Information, and in particular feedback, is necessary for a system to make intelligent decisions. During the 1940s and 1950s, the dominant school in American psychology was behaviorism. The focus of research was on topics in which the behaviors were observable and measurable. During this time, researchers such as George Miller were devising experiments that continued to study behavior but also provided some indication of internal mental processes. This cognitive revolution in the United States led to research programs interested in issues such as decision making, language development, consciousness, and memory, issues relevant to the development of an intelligent machine. The main tool for implementing AI, the computer, was an important development that came out of World War II.
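The cybernetic idea of behavior modified by feedback can be illustrated with a minimal sketch. The thermostat framing and all numbers below are invented for illustration: a system repeatedly compares its state to a goal and adjusts itself using only the error fed back from the environment.

```python
# A toy "thermostat" showing the cybernetic feedback loop: the system
# senses the error between its state and a goal and uses that feedback
# to modify its own behavior. All values are arbitrary illustrations.

def regulate(temp, setpoint=20.0, gain=0.5, steps=30):
    """Repeatedly nudge temp toward setpoint using only the fed-back error."""
    history = [temp]
    for _ in range(steps):
        error = setpoint - temp   # feedback from the "environment"
        temp += gain * error      # behavior adjusted by the feedback
        history.append(temp)
    return history

history = regulate(10.0)  # starts far from the goal, settles near 20.0
```

Each pass through the loop shrinks the remaining error, so the system converges on its goal without ever being told the answer directly, which is the core of Wiener's feedback idea.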















The culmination of many of these events was a conference held at Dartmouth College in 1956, which explored the idea of developing computer programs that behaved in an intelligent manner. This conference is often viewed as the beginning of the area of artificial intelligence. Some of the researchers involved in the conference included John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon. Before this conference, Newell, Simon, and Shaw’s Logic Theorist was the only AI program. Subsequent projects focused on the development of programs in the domain of game playing. Games of strategy, such as checkers and chess, were selected because they seem to require intelligence. The development of programs capable of “playing” these games supported the idea that AI is possible.



Cognitive science, an interdisciplinary approach to the study of the mind, was influenced by many of the same factors that had an impact on the field of AI. Some of the traditional disciplines that contribute to cognitive science are AI, cognitive psychology, linguistics, neuroscience, and philosophy. Each discipline brings its own set of questions and techniques to the shared goal of understanding intelligence and the mind.




Traditional AI Versus Computer Simulations

“Artificial intelligence” is a general term that covers a number of different approaches to developing intelligent machines; it can refer to the development of hardware (equipment) or of software (programs) for an AI project. Two different philosophical approaches to developing intelligent systems are traditional AI and computer simulations. The goal is the same for both: a system capable of performing a particular task that, if done by a human, would be considered intelligent.


The goal of traditional AI (sometimes called pure AI) is to develop systems that accomplish various tasks intelligently and efficiently. This approach makes no claims or assumptions about the manner in which humans process and perform a task, nor does it try to model human cognitive processes; a traditional AI project is unrestricted by the limitations of human information processing. One example of a traditional AI program is the earlier versions of Deep Blue, the chess program of International Business Machines (IBM). The program's ability to successfully “play” chess depended on computing a large number of possible board positions based on the current position and then selecting the best move. This computational approach, while effective, lacks strategy and the ability to learn from previous games. A modified version of Deep Blue eventually won a 1997 match against Garry Kasparov, the reigning world chess champion at the time. In addition to the traditional AI approach, this version incorporated strategic advice from Joel Benjamin, a former US chess champion.
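The brute-force search idea described above can be sketched with a generic minimax routine. The tiny game tree is a hypothetical toy, not chess, and the code illustrates the computational approach in general rather than Deep Blue's actual algorithm.

```python
# Generic minimax over a toy game tree: a leaf is a numeric evaluation
# of a final position; an inner node is a list of possible continuations.
# The program's side picks the maximum score, the opponent the minimum.

def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):
        return node  # leaf: an evaluated board position
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# two moves for the program, each answered by two opponent replies
tree = [[3, 5], [2, 9]]
best = minimax(tree)  # → 3: the opponent minimizes, so the first move is safest
```

Scaling this exhaustive enumeration to chess is what demanded Deep Blue's specialized hardware; nothing in the search itself involves strategy or learning, which is exactly the limitation the text notes.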


The goal of computer simulations is to develop programs that take into consideration the constraints of how humans perform various cognitive tasks and incorporate these constraints into a program (for example, the amount of information that humans can think about at any given time is limited). This approach can take into account how human information processing is affected by a number of mechanisms such as processing, storing, and retrieving information. Computer simulations vary in the extent to which the program models processes that can range from a single process to a model of the mind.




Theoretical Issues

A number of important theoretical issues influence the assumptions made in developing intelligent systems. Stan Franklin, in his book Artificial Minds (1995), presents these issues in what he labels the three debates for AI: Can computing machines be intelligent? Does the connectionist approach offer something that the symbolic approach does not? and Are internal representations necessary?



Thinking Machines

The issue of whether computing machines can be intelligent is typically framed as “Can computers think in the sense that humans do?” There are two positions regarding this question: weak AI and strong AI. Weak AI holds that the utility of artificial intelligence is to aid in exploring human cognition through the development of computer models; such models help test the feasibility and completeness of a theory from a computational standpoint. Weak AI is considered by many experts in the field to be a viable approach. Strong AI takes the stance that it is possible to develop a machine that can manipulate symbols to accomplish many of the tasks that humans can accomplish, and some would ascribe thought or intelligence to such a machine because of its capacity for symbol manipulation. Alan Turing proposed a test, the imitation game, later called the Turing test, as a possible criterion for determining whether strong AI has been accomplished. Strong AI also has opponents, who hold that it is not possible for a program to be intelligent or to think; the philosopher John Searle presents such an argument against the possibility of strong AI.


Turing proposed the imitation game as a practical criterion for machine intelligence. It is a parlor game involving three people: an examiner, a man, and a woman. The examiner may ask the man or the woman questions on any topic; their responses are written, so voices give nothing away. The man's task is to convince the examiner that he is the woman; the woman's task is to convince the examiner that she is the woman. Turing then proposed replacing either the man or the woman with a computer, making the examiner's task to decide which respondent is human and which is the computer. This version of the imitation game is called the Turing test. The program passes the test if the examiner cannot determine which responses come from the computer and which come from the human. Philosopher Daniel Dennett, in his book Brainchildren: Essays on Designing Minds (1998), discusses the appropriateness and power of the Turing test. The Loebner Prize competition, an annual contest, uses a modified version of the Turing test to evaluate real AI programs.


Searle, for his part, proposed a thought experiment he called the Chinese room, which provides an argument against the notion that computers can be intelligent. Searle imagines a room into and out of which written information can be passed. The information coming into the room is in Chinese. Inside the room is a person who does not understand Chinese but who has access to a set of instructions for converting one symbol into another. Searle argues that this person, while fully capable of manipulating the symbols, and perhaps even becoming proficient at the task over time, has no understanding of the questions or the responses. The end result looks intelligent even though the symbols carry no meaning for the person manipulating them. Searle argues that the same is true for computers: a computer cannot be intelligent, because the symbols carry no meaning for it, even though its output looks intelligent.
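Searle's scenario can be rendered in a few lines of code, which makes the point vivid: the sketch below follows a rulebook of invented placeholder symbols (not real Chinese) and produces sensible-looking output while "understanding" nothing.

```python
# The "person" in the room is a lookup table applied by rote. The
# symbols below are invented placeholders standing in for Chinese
# characters; no real language is involved.

RULEBOOK = {
    "SYMBOL-A": "SYMBOL-X",  # "if you see this shape, pass out that one"
    "SYMBOL-B": "SYMBOL-Y",
}

def chinese_room(incoming):
    """Transform symbols purely by shape; nothing here 'understands' them."""
    return RULEBOOK.get(incoming, "SYMBOL-?")

reply = chinese_room("SYMBOL-A")  # a competent-looking answer: "SYMBOL-X"
```

The program answers correctly by pure symbol manipulation, which is precisely Searle's point: syntactic competence alone carries no semantics.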




Connectionism Versus Symbolism

The second debate concerns the cognitive architecture: the built-in constraints that specify the capabilities, components, and structures involved in cognition. The classic approach, or physical symbol system hypothesis, and the connectionist approach are two different cognitive architectures. A cognitive architecture can be thought of as the hardware of a computer: it can run a number of different programs, but by its nature it places constraints on how things are done. The question here is: Does the contribution of connectionism differ from that of traditional AI?


The physical symbol system hypothesis describes a class of systems that use symbols, or internal representations (mental events), to stand for items or events in the environment. These internal representations can be manipulated, used in computations, and transformed. Traditionally, this approach relies on serial processing, implementing one command at a time. Two examples of this approach are John R. Anderson’s adaptive control of thought (ACT) model (1983) and Allen Newell’s Soar (1989). Both are architectures of cognition whose goal is to account for all cognition.
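The serial, rule-by-rule manipulation of symbols described above can be sketched in a few lines of Python. This is a purely illustrative toy, not Anderson’s ACT or Newell’s Soar; the symbols and rewrite rules are invented:

```python
# Toy illustration of serial symbolic processing: internal symbols that
# stand for items or events are transformed one rule at a time.
# Symbols and rules are invented; this is not ACT or Soar.

def apply_rule(state, rule):
    """Serially rewrite the state: replace one symbol with another."""
    old, new = rule
    return [new if symbol == old else symbol for symbol in state]

state = ["see(apple)", "hungry"]
for rule in [("hungry", "goal(eat)"), ("goal(eat)", "action(grasp-apple)")]:
    state = apply_rule(state, rule)  # one command at a time

print(state)  # ['see(apple)', 'action(grasp-apple)']
```

The point of the sketch is only that each internal token stands for something in the world and is transformed by explicit, discrete operations applied in sequence.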


The connectionist architecture is a class of systems that differs from the symbolic approach in being modeled loosely on the brain and in using parallel processing, the ability to carry out a number of processes simultaneously. Other terms that have been used for this approach include parallel distributed processing (PDP), artificial neural networks (ANN), and the subsymbolic approach. A connectionist system consists of a network of nodes, typically organized into levels, that loosely resemble neurons in the brain. These nodes have connections with other nodes. Like neurons in the brain, the nodes can have an excitatory or inhibitory effect on other nodes in the system, determined by the strength of the connection (commonly called the weight). Information resides in these connections, not at the nodes, so the information is distributed across the network. Learning takes place during a training session in which adjustments are made to the weights. An advantage that the connectionist approach has over the symbolic approach is the ability to retrieve partial information. This graceful degradation is a result of the information being distributed across the network: the system can still retrieve (partial) information even when part of the system does not work. This tends to be a problem for symbolic systems.
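The node-and-weight arrangement described above can be sketched with a single toy unit. The weights here are invented for illustration (real networks learn them during training), and this is not any particular published model:

```python
# Minimal sketch of one connectionist node. Positive weights are
# excitatory, negative weights inhibitory; the node fires when the
# weighted sum of its inputs exceeds a threshold. Weights are invented
# for illustration; in a real network they are learned during training.

def activate(inputs, weights, threshold=0.5):
    """Return 1 (fire) if the weighted input sum exceeds the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total > threshold else 0

weights = [0.8, -0.9, 0.4]  # the middle connection is inhibitory

print(activate([1, 0, 1], weights))  # excitation alone: 1.2 > 0.5, fires (1)
print(activate([1, 1, 1], weights))  # inhibition added: 0.3, stays silent (0)
```

Training amounts to nudging these weights up or down after each example, which is why the system’s “knowledge” ends up distributed across the connections rather than stored at any one node.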




Internal Representation

Rodney Brooks, working at the Massachusetts Institute of Technology (MIT), proposed in 1986 an alternative to traditional AI, which relies on a central intelligence responsible for cognition. Brooks’s approach, the subsumption architecture, relies instead on the interaction between perception and actuation systems as the basis for intelligence. The subsumption architecture starts with a level of basic behaviors (modules) and builds on this level with additional levels. Each new level can subsume the functions of lower levels and suppress the output of those modules. If a higher level is unable to respond or is delayed, then a lower level, which continues to function, can produce a result. The resulting action may not always be the most “intelligent,” but the system is capable of doing something. For Brooks, intelligent behavior emerges from the combination of these simple behaviors. Furthermore, intelligence (or cognition) is in the eye of the beholder. Cog, one of Brooks’s robot projects, is based on the subsumption architecture. Cog’s movements and processing of visual information are not preprogrammed into the system; experience with the environment plays an important role. Kismet, another project at MIT, is designed to show various emotional states in response to social interaction with others.
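The layering Brooks describes can be sketched roughly as follows. This is a schematic toy under invented behaviors, not Cog’s actual code:

```python
# Schematic sketch of subsumption layering: each level proposes an
# action, and a higher level, when it produces output, suppresses the
# levels beneath it. If the higher level is silent, the lower level's
# behavior still gets through. The behaviors are invented examples.

def wander(_sensors):
    # Level 0: a basic behavior that is always available.
    return "move forward"

def avoid(sensors):
    # Level 1: responds only when an obstacle is sensed.
    return "turn away" if sensors.get("obstacle") else None

def act(sensors, levels=(avoid, wander)):
    """Consult levels from highest to lowest; the first output wins."""
    for behavior in levels:
        action = behavior(sensors)
        if action is not None:
            return action

print(act({"obstacle": True}))   # higher level subsumes: "turn away"
print(act({"obstacle": False}))  # lower level still acts: "move forward"
```

Note that no central module decides what to do; the “intelligent” choice emerges from which layer happens to produce output, which is the heart of Brooks’s argument.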





Approaches to Modeling Intelligence

Intelligent tutoring systems (ITSs) are systems in which individual instruction can be tailored to the needs of a particular student. This is different from computer-aided instruction (CAI), in which everyone receives the same lessons. Key components typical of ITSs are the expert knowledge base (or teacher), the student model, instructional goals, and the interface. The student model contains the knowledge that the student has mastered as well as the areas in which he or she may have conceptual errors. Instruction can then be tailored to help elucidate the concepts with which the student is having difficulty.


An expert system attempts to capture an individual’s expertise so that the program performs like an expert in that particular area. An expert system consists of two components: a knowledge base and an inference engine. The inference engine is the program of the expert system. It relies on the knowledge base, which captures the knowledge of an expert; developing this component is often time-consuming. Typically, the expert’s knowledge is represented in if-then statements (also called condition-action rules): if a particular condition is met, the action part of the statement is executed. Testing of the system often leads to repeating the knowledge-acquisition phase and modifying the condition-action rules. An example of an expert system is MYCIN, which diagnoses bacterial infections based on lab results. The performance of MYCIN was compared with that of physicians as well as interns and found to be comparable to a physician’s.
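The condition-action style can be sketched with a toy forward-chaining inference engine. The rules and facts below are invented for illustration and are not MYCIN’s actual knowledge base:

```python
# Toy inference engine: fire any if-then rule whose conditions are all
# present in the fact set, and repeat until nothing new can be added.
# The rules and fact names are invented; this is not MYCIN's rule base.

rules = [
    ({"gram_negative", "rod_shaped"}, "likely_enterobacteria"),
    ({"likely_enterobacteria", "from_urine"}, "suspect_e_coli"),
]

def infer(facts, rules):
    """Forward-chain over condition-action rules until a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # the "action" part executes
                changed = True
    return facts

result = infer({"gram_negative", "rod_shaped", "from_urine"}, rules)
print("suspect_e_coli" in result)  # both rules chain together: True
```

The separation matters in practice: the `infer` loop (the inference engine) never changes, while the `rules` list (the knowledge base) is what the time-consuming knowledge-acquisition phase builds and revises.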


Case-based reasoning systems use previous cases to analyze a new case. This type of reasoning is similar to reasoning in law, in which a current situation is interpreted in light of previous, similar problems. Case-based reasoning is designed around the so-called four R’s: retrieve cases relevant to the case at hand, reuse a previous case where applicable, revise the strategy if no previous case is appropriate, and retain the new solution so the case can be used in the future.
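The four R’s can be sketched as a small loop. The cases and the naive feature-overlap similarity measure below are invented, purely for illustration:

```python
# Sketch of the four R's of case-based reasoning under invented data:
# retrieve the most similar stored case, reuse its solution if it is
# close enough, revise (fall back) otherwise, and retain the outcome.

def similarity(a, b):
    """Naive similarity: fraction of features the two cases share."""
    return len(a & b) / len(a | b)

def solve(problem, case_base, threshold=0.5):
    # Retrieve: the stored case most similar to the new problem.
    features, solution = max(case_base,
                             key=lambda case: similarity(problem, case[0]))
    if similarity(problem, features) >= threshold:
        result = solution               # Reuse the precedent's solution.
    else:
        result = "derive new solution"  # Revise: no close precedent.
    case_base.append((problem, result))  # Retain for future problems.
    return result

cases = [({"leak", "kitchen"}, "call plumber")]
print(solve({"leak", "kitchen", "sink"}, cases))  # close enough: reuses it
```

Because every solved problem is retained, the case base grows with experience, which is the sense in which such a system learns from precedent rather than from explicit rules.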


Other approaches to modeling intelligence have included attempts to model the intelligence of animals. Artificial life (Alife) involves developing computer simulations of the features necessary for intelligent behavior. The animat approach constructs robots based on animal models. The idea in both is to implement intelligence on a smaller scale rather than trying to model all of human intelligence. This approach may prove invaluable for developing capacities that humans share with animals.




Bibliography


Bechtel, William, and George Graham, eds. A Companion to Cognitive Science. Malden: Blackwell, 1998. Print.



Clark, Andy, and Josefa Toribio, eds. Cognitive Architectures in Artificial Intelligence. New York: Garland, 1998. Print.



Cristianini, Nello. "On the Current Paradigm in Artificial Intelligence." AI Communications 27.1 (2014): 37–43. Print.



Dennett, Daniel C. Brainchildren: Essays on Designing Minds. Cambridge: MIT P, 1998. Print.



Franklin, Stan. Artificial Minds. Cambridge: MIT P, 1995. Print.



Gardner, Howard. The Mind’s New Science: A History of the Cognitive Revolution. New York: Basic, 1998. Print.



Johnston, John. The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI. Cambridge: MIT P, 2008. Print.



Muggleton, Stephen. "Alan Turing and the Development of Artificial Intelligence." AI Communications 27.1 (2014): 3–10. Print.



Vardi, Moshe Y. "Artificial Intelligence: Past and Future." Communications of the ACM 55.1 (2012): 5. Print.



Von Foerster, Heinz. Understanding Understanding: Essays on Cybernetics and Cognition. New York: Springer, 2003. Print.

Sunday, 2 November 2014

What is behavioral family therapy?


Introduction

Behavioral family therapy is a type of psychotherapy used to treat families in which one or more members are exhibiting behavior problems. Behavioral therapy was employed originally in the treatment of individual disorders such as phobias (irrational fears). Behavioral family therapy represents an extension of the use of behavioral techniques from the treatment of individual problems to the treatment of family problems. The most common problems treated by behavioral family therapy are parent-child conflicts; however, the principles of this type of therapy have been used to treat other familial difficulties, including marital and sexual problems.







Role of Learning Theory

The principles of learning theory underlie the theory and practice of behavioral family therapy. Learning theory was developed through laboratory experimentation begun largely by Ivan Petrovich Pavlov and Edward L. Thorndike during the early 1900s. Pavlov was a Russian physiologist interested in the digestive processes of dogs. In the course of his experimentation, he discovered several properties governing behavior that have become embodied in the theory of classical conditioning. Pavlov observed that his dogs began to salivate when he entered their pens because they associated his presence (a new, initially neutral stimulus) with being fed (an established, rewarding event). From this observation and additional experimentation, Pavlov concluded that a new stimulus that is regularly paired with an old one acquires the same rewarding or punishing qualities. New stimuli become conditioned to produce the same responses as the previously reinforced or punished ones.


Another component of learning theory was discovered by Thorndike, an American psychologist. Thorndike observed that actions followed closely by rewards were more likely to recur than those not followed by rewards. Similarly, he observed that actions followed closely by punishment were less likely to recur. Thorndike explained these observations on the basis of the law of effect. The law of effect holds that behavior closely followed by a response will be more or less likely to recur depending on whether the response is reinforcing (rewarding) or punishing.


Building on the observations of Thorndike, American behaviorist B. F. Skinner developed the theory of operant conditioning in the 1930s. In operant conditioning, a behavior comes to occur at a faster rate when it is followed by positive reinforcement, that is, by rewarding consequences that increase the rate at which the behavior will recur. An example that Skinner used in demonstrating operant conditioning involved placing a rat in a box with different levers. When the rat accidentally pushed a predesignated lever, it was given a food pellet. As predicted by operant conditioning, the rat subsequently increased its pushing of the lever that provided it with food.


Gerald Patterson and Richard Stuart, beginning in the late 1960s, were among the first clinicians to apply behavioral techniques, previously used with individuals, to the treatment of family problems. Although Patterson worked primarily with parent-child problems, Stuart extended behavioral family therapy to the treatment of marital problems.


Given the increasing prevalence of family problems, as seen by the rise in the number of divorces and cases of child abuse, the advent of behavioral family therapy has been welcomed by many therapists who treat families. The findings of a 1984 study by William Quinn and Bernard Davidson revealed the increasing use of this therapy, with more than half of all family therapists reporting the use of behavioral techniques in their family therapy.




Conditioning and Desensitization

The principles of classical and operant conditioning serve to form the foundation of learning theory. Although initially derived from animal experiments, learning theory also was applied to humans. Psychologists who advocated learning theory began to demonstrate that all behavior, whether socially appropriate or inappropriate, occurs because it is either classically or operantly conditioned. John B. Watson, an American psychologist of the early twentieth century, illustrated this relationship by producing a fear of rats in an infant known as Little Albert by repeatedly making a loud noise when a rat was presented to Albert. After a number of pairings of the loud noise with the rat, Albert began to show fear when the rat was presented.


In addition to demonstrating how inappropriate behavior was caused, behavioral psychologists began to show how learning theory could be used to treat people with psychological disorders. Joseph Wolpe, a pioneer in the use of behavioral treatment during the 1950s, showed how phobias could be alleviated by using learning principles in a procedure termed systematic desensitization. Systematic desensitization involves three basic steps: teaching the phobic individual how to relax; having the client create a list of images of the feared object (for example, snakes), from least to most feared; and repeatedly exposing the client to the feared object in graduated degrees, from least to most feared images, while the individual is in a relaxed state. This procedure has been shown to be very effective in the treatment of phobias.


Behavioral family therapy makes the same assumptions regarding the causes of both individual and family problems. For example, in a fictional case, the Williams family came to treatment because their seven-year-old son, John, refused to sleep in his own bed at night. In attempting to explain John’s behavior, a behaviorally oriented psychologist would seek to find out what positive reinforcement John was receiving in response to his refusal to stay in his own bed. It may be that when John was younger his parents allowed him to sleep with them, thus reinforcing his behavior by giving him the attention he desired. Now that John is seven, however, his parents believe that he needs to sleep in his own bed, but John continues to want to sleep with his parents because he has been reinforced by being allowed to sleep with them for many years. This case provides a clinical example of operant conditioning in that John’s behavior, because it was repeatedly followed by positive reinforcement, was resistant to change.




Treatment Process

Behavioral family therapy is a treatment approach that includes the following four steps: problem assessment, family (parent) education, specific treatment design, and treatment goal evaluation. It begins with a thorough assessment of the presenting family problem. This assessment process involves gathering the following information from the family: what circumstances immediately precede the problem behavior; how family members react to the exhibition of the client’s problem behavior; how frequently the misbehavior occurs; and how intense the misbehavior is. Behavioral family therapy differs from individual behavior therapy in that all family members are typically involved in the assessment process. As a part of the assessment process, the behavioral family therapist often observes the way in which the family handles the presenting problem. This observation is conducted to obtain firsthand information regarding ways the family may be unknowingly reinforcing the problem or otherwise poorly handling the client’s misbehavior.


Following the assessment, the behavioral family therapist, with input from family members, establishes treatment goals. These treatment goals should be operationalized; that is, they should be specifically stated so that they may be easily observed and measured. In the example of John, the boy who refused to sleep in his own bed, an operationalized treatment goal would be as follows: “John will be able to sleep from 9:00 p.m. to 6:00 a.m. in his own bed without interrupting his parents during the night.”




Applying Learning Theory Principles

Once treatment goals have been operationalized, the next stage involves designing an intervention to correct the behavioral problem. The treatment procedure follows from the basic learning principles previously discussed. In cases involving parent-child problems, the behavioral family therapist educates the parents in learning theory principles as they apply to the treatment of behavioral problems. Three basic learning principles are explained to the child’s parents. First, positive reinforcement should be withdrawn from the unwanted behavior. For example, a parent who meets the demands of a screaming preschooler who throws a temper tantrum in the checkout line of the grocery store because he or she wants a piece of candy is unwittingly reinforcing the child’s screaming behavior. Time-out is one procedure used to remove the undesired reinforcement from a child’s misbehavior. Using time-out involves making a child sit in a corner or other nonreinforcing place for a specified period of time (typically, one minute for each year of the child’s age).


Second, appropriate behavior that is incompatible with the undesired behavior should be positively reinforced. In the case of the screaming preschooler, this would involve rewarding him or her for acting correctly. An appropriate reinforcer in this case would be giving the child the choice of a candy bar if the child were quiet and cooperative during grocery shopping, behavior inconsistent with a temper tantrum. For positive reinforcement to have its maximum benefit, before the specific activity (for example, grocery shopping) the child should be informed about what is expected and what reward will be received for fulfilling these responsibilities. This process is called contingency management because the promised reward is made contingent on the child’s acting in a prescribed manner. In addition, the positive reinforcement should be given as close to the completion of the appropriate behavior as possible.


Third, aversive consequences should be applied when the problem behavior recurs. When the child engages in the misbehavior, he or she should consistently experience negative costs. In this regard, response cost is a useful technique because it involves taking something away or making the child do something unrewarding as a way of making misbehavior have a cost. For example, the preschooler who has a temper tantrum in the checkout line may have a favorite dessert, which he or she had previously selected while in the store, taken away as the cost for throwing a temper tantrum. As with positive reinforcement, response cost should be applied as quickly as possible following the misbehavior in order for it to produce its maximum effect.




Designing Treatment Intervention

Once parents receive instruction regarding the principles of behavior therapy, they are actively involved in the process of designing a specific intervention to address their child’s behavior problems. The behavioral family therapist relates to the parents as cotherapists with the hope that this approach will increase the parents’ involvement in the treatment process. In relating to Mr. and Mrs. Williams as cotherapists, for example, the behavioral family therapist would have the couple design a treatment intervention to correct John’s misbehavior. Following the previously described principles, the couple might arrive at the following approach: They would refuse to give in to John’s demands to sleep with them; John would receive a token for each night he slept in his own bed (after earning a certain number of tokens, he could exchange them for toys); and John would be required to go to bed fifteen minutes earlier the following night for each time he asked to sleep with his parents.


Once the intervention has been implemented, the therapist, together with the parents, monitors the results of the treatment. This monitoring process involves assessing the degree to which the established treatment goals are being met. For example, in the case of the Williams family, the treatment goal was to reduce the number of times that John attempted to get into bed with his parents. Therapy progress, therefore, would be measured by counting the number of times that John attempted to get into bed with his parents. Careful assessment of an intervention’s results is essential to determine whether the intervention is accomplishing its goal.




Detractions

In spite of its popularity, this type of therapy has not been without its critics. For example, behavioral family therapists’ explanations regarding the causes of family problems differ from those given by the advocates of other family therapies. One major difference is that behavioral family therapists are accused of taking a linear (as compared to a circular) view of causality. From a linear perspective, misbehavior occurs because A causes B and B causes C. Those who endorse a circular view of causality, however, assert that this simplistic perspective is inadequate in explaining why misbehavior occurs. Taking a circular perspective involves identifying multiple factors that may be operating at the same time to determine the reason for a particular misbehavior. For example, from a linear view of causality, John’s misbehavior is seen as the result of being reinforced for sleeping with his parents. According to a circular perspective, however, John’s behavior may be the result of many factors, all possibly occurring together, such as his parents’ marital problems or his genetic predisposition toward insecurity.




Integration with Other Therapies

Partially in response to this criticism, attempts have been made to integrate behavioral family therapy with other types of family therapy. Another major purpose of integrative efforts is to address the resistance often encountered from families during treatment. Therapeutic resistance is a family’s continued attempt to handle the presenting problem in a maladaptive manner in spite of having learned better ways. In the past, behavioral family therapists gave limited attention to dealing with family resistance; however, behavioral family therapy has attempted to improve its ability to handle resistance by incorporating some of the techniques used by other types of family therapy.


In conclusion, numerous research studies have demonstrated that behavioral family therapy is an effective treatment of family problems. One of the major strengths of this type of therapy is its willingness to assess objectively its effectiveness in treating family problems. Because of its emphasis on experimentation, behavioral family therapy continues to adapt by modifying its techniques to address the problems of the modern family.




Bibliography


Atwood, Joan, ed. Family Therapy: A Systemic Behavioral Approach. Chicago: Nelson, 1999. Print.



Clark, Lynn. The Time-Out Solution. Chicago: Contemporary, 1989. Print.



Falloon, Ian R. H., ed. Handbook of Behavioral Family Therapy. New York: Guilford, 1988. Print.



Gladding, Samuel T. Family Therapy: History, Theory, and Practice. Boston: Prentice, 2011. Print.



Goldenberg, Herbert, and Irene Goldenberg. Family Therapy: An Overview. 7th ed. Belmont: Brooks, 2008. Print.



Gordon, Thomas. Parent Effectiveness Training: The Proven Program for Raising Responsible Children. Rev. ed. New York: Three Rivers, 2000. Print.



Nichols, Michael P. “Cognitive-Behavioral Family Therapy.” Family Therapy: Concepts and Methods. Ed. Michael P. Nichols and Richard C. Schwartz. 8th ed. Boston: Allyn, 2008. Print.



Podell, Jennifer, and Philip Kendall. "Mothers and Fathers in Family Cognitive-Behavioral Therapy for Anxious Youth." Journal of Child and Family Studies 20.2 (2011): 182–95. Print.



Rasheed, Janice M., Mikal N. Rasheed, and James A. Marley. Family Therapy: Models and Techniques. Los Angeles: SAGE, 2011. Print.



Robin, Arthur L., and Sharon L. Foster. Negotiating Parent-Adolescent Conflict: A Behavioral Family Systems Approach. New York: Guilford, 2003. Print.

What are muscle sprains, spasms, and disorders?


Causes and Symptoms

There are three kinds of muscle tissue in the human body: smooth muscle, cardiac muscle, and striated muscle. Smooth muscle tissue is found around the intestines, blood vessels, and bronchioles in the lung, among other areas. These muscles are controlled by the autonomic nervous system, which means that their movement is not subject to voluntary action. They have many functions: They maintain the airway in the lungs, regulate the tone of blood vessels, and move foods and other substances through the digestive tract. Cardiac muscle is found only in the heart. Striated muscles are those that move body parts. They are also called voluntary muscles because they must receive a conscious command from the brain in order to work. They supply the force for physical activity, and they also prevent movement and stabilize body parts.



Muscles are subject to many disorders: Muscle sprains, strains, and spasms are common events in everyone’s life and, for the most part, they are harmless, if painful, results of overexercise, accidents, falls, bumps, or countless other events. Yet these symptoms can also signal serious myopathies, or disorders within muscle tissue.


Myopathies constitute a wide range of diseases. They are classified as inflammatory myopathies or metabolic myopathies. Inflammatory myopathies include infections by bacteria, viruses, or other microorganisms, as well as other diseases that are possibly autoimmune in origin (that is, resulting from and directed against the body’s own tissues). In metabolic myopathies, there is some failure or disturbance in the body’s ability to maintain a proper metabolic balance or electrolyte distribution. These conditions include glycogen storage diseases, in which there are errors in glucose processing; disorders of fatty acid metabolism, in which there are derangements in fatty acid oxidation; mitochondrial myopathies, in which there are biochemical and other abnormalities in the mitochondria of muscle cells; endocrine myopathies, in which an endocrine disorder underlies muscular symptoms; and the periodic paralyses, which can be the result of inherited or acquired illnesses. This is only a partial list of the myopathies, the symptoms of which include weakness and pain.


Muscular dystrophies are a group of inherited disorders in which muscle tissue fails to receive nourishment. The results are progressive muscular weakness and the degeneration and destruction of muscle fibers. The symptoms include weakness, loss of coordination, impaired gait, and impaired muscle extensibility. Over the years, muscle mass decreases and the arms, legs, and spine become deformed.


Neuromuscular disorders include a wide variety of conditions in which muscle function is impaired by faulty transmission of nerve impulses to muscle tissue. These conditions may be inherited; they may be attributable to toxins, such as in food poisoning (for example, botulism) or pesticide poisoning; or they may be side effects of certain drugs. The most commonly seen neuromuscular disorder is myasthenia gravis.


The muscular disorders most often seen are those that result from overexertion, exercise, athletics, accidents, and trauma. Injuries sustained during sports and games have become so significant that sports medicine has become a recognized medical subspecialty. Besides the muscles, the parts of the body involved in these disorders include tendons (tough, stringy tissue that attaches muscles to bones), ligaments (tissue that attaches bone to bone), synovia (membranes enclosing a joint or other bony structure), and cartilage (soft, resilient tissue between bones). A sprain is an injury in which ligaments are stretched or torn. In a strain, muscles or tendons are stretched or torn. A contusion is a bruise that occurs when the body is subjected to trauma; the skin is not broken, but the capillaries underneath are, causing discoloration. A spasm is a short, abnormal contraction in a muscle or group of muscles. A cramp is a prolonged, painful contraction of one or more muscles.


Sprains can be caused by twisting the joint violently or by forcing it beyond its range of movement. The ligaments that connect the bones of the joint stretch or tear. Sprains occur most often in the knees, ankles, and arches of the feet. There is pain and swelling, and at least some immobilization of the joint.


A strain is also called a pulled muscle. When too great a demand is placed on a muscle, it and the surrounding tendons can stretch and/or tear. The main symptom is pain; swelling and muscle spasm may also occur.


Muscle spasms and cramps are common. Sometimes they occur spontaneously, such as the calf muscle cramps that occur at night. Sometimes they are attributable to muscle strain (the charley horse that tightens thigh muscles in runners and other athletes). Muscles that are used often will go into spasm, such as those in the thumb and fingers of writers (writer’s cramp), as can muscles that have remained in one position for too long. Muscle spasms and cramps can also occur as direct consequences of dehydration; they are common in athletes who perspire excessively during hot weather.


Some injuries to muscles and joints occur so regularly that they are named for the activities associated with them. A good example is tennis elbow, a condition that results from repeated, vigorous movement of the arm, such as swinging a tennis racket, using a paintbrush, or pitching a baseball. Runners’ knee can afflict joggers and other athletes. It is usually caused by sprains in the knee ligaments; there is pain and there may be partial or total immobilization of the knee. Achilles tendinitis, as the name suggests, is inflammation of the Achilles tendon in the heel. It is usually the result of excessive physical activity that causes small tears in the tendon. Pain and immobility are symptoms. Tendinitis can occur in other joints as well; elbows and shoulders are common sites. Tenosynovitis is inflammation of the synovial membrane that sheathes the tendons in the hand. It may be caused by bacterial infection or may be attributable to overexertion.



Tumors and cancerous growths in muscle tissue are rare. If a lump appears in muscle, it is usually a lipoma, a fatty deposit that is benign. One tumor, called rhabdomyosarcoma, however, is malignant and can be fatal.




Treatment and Therapy

The myopathies are a wide group of diseases, and treatment varies considerably among them. The muscular dystrophies also vary in their treatment methods. Physical therapy is recommended to prevent contractures, the permanent, disfiguring muscular contractions that are a feature of the disease. Orthopedic appliances and surgery are also used. Because these diseases are genetic, it is sometimes recommended that people with a familial history of muscular dystrophy be tested for certain genetic markers that would suggest the possibility of disease in their children.


Myasthenia gravis is treated with drugs that increase the number of neurotransmitters available where nerves and muscles come together. The drugs help improve the transmission of information from the brain to the muscle tissue. In some cases, a procedure called plasmapheresis is used to eliminate blood-borne substances that may contribute to the disease. Surgical removal of the thymus gland is helpful in alleviating symptoms in some patients.


In treating the many muscle disorders that are caused by athletic activity and excessive wear and tear on the muscle, the R-I-C-E formula is recommended. The acronym stands for rest-ice-compression-elevation: The patient must rest and not use or exercise the limb or muscle involved; an ice pack is applied to the injury; compression is supplied by wrapping a moist bandage snugly over the ice, reducing the flow of fluids to the injured area; and the injured limb is elevated. If there is a fracture involved, the limb must be properly splinted or otherwise immobilized before elevation. The ice pack is removed after twenty minutes, but the bandage is left in place. Ice therapy can be resumed every twenty minutes.


Heat is also part of the therapy for strains and sprains, but it is not applied until after the initial swelling has gone down, usually after forty-eight to seventy-two hours. Heat raises the metabolic rate in the affected tissue. This brings more blood to the area, carrying nutrients that are needed for tissue repair. Moist heat is preferred, and it can be supplied by an electrical heating pad, a chemical gel in a plastic bag, or hot baths and whirlpools. In using pads and chemical gels, there should be a layer of toweling or other material between the heat source and the body. The temperature for a whirlpool or hot bath should be about 106 degrees Fahrenheit. Only the injured part should be immersed, if possible. As in the ice treatments, heat should be applied for twenty minutes and can be repeated after twenty minutes of rest.


Analgesics are given for pain. Over-the-counter preparations such as aspirin, acetaminophen, or ibuprofen are used most often. Sometimes, when pain is severe, more potent medications are required. Steroids are sometimes prescribed to reduce inflammation, and nonsteroidal anti-inflammatory drugs (NSAIDs) can alleviate both pain and inflammation. If a strained muscle or tendon is seriously torn or otherwise damaged, surgery may be required. Similarly, if a sprain involves torn or detached ligaments, they may have to be surgically repaired.


Muscle spasms and cramps may require both manipulation and the application of heat or cold. The affected limb is gently extended to stretch the contracted muscle. Massage and immersion in a hot bath are useful, as are cold packs.


Tennis elbow, runner's knee, and tendinitis respond to R-I-C-E therapy. Ice is applied to the injured site, and the limb is elevated and allowed to rest. When tenosynovitis is caused by bacterial infection, prompt antibiotic therapy may be necessary to avoid permanent damage. When it is attributable to overexertion, analgesics may help relieve pain and inflammation. Rarely, a corticosteroid is used when other drugs fail.


Often, the injured site requires physical therapy for the full range of motion to be restored. The physical therapist analyzes the patient’s capability and develops a regimen to restore strength and mobility to the affected muscles and joints. Physical therapy may involve massage, hot baths, whirlpools, weight training, and/or isometric exercise. Orthotic devices may be required to help the injured area heal.


An important aspect of sports medicine and the treatment of sports-related muscle disorders is prevention. Many painful, debilitating, and immobilizing episodes can be avoided by proper training and conditioning, intelligent exercise practice, and restriction of exertion. Before undertaking any sport or strenuous physical activity, the individual is advised to warm up by gentle stretching, jogging, jumping, and other mild muscular activities. Arms can be rotated in front of the body, over the head, and in circles perpendicular to the ground. Knees can be lifted and pulled up to the chest. Shoulders should be gently rotated to relax upper-back muscles. Neck muscles are toned by gently and slowly moving the head from side to side and in circles. Back muscles are loosened by bending forward and continuing around in slow circles.


If a joint has been injured, it is important to protect it from further damage. Physicians and physical therapists often recommend that athletes tape, brace, or wrap susceptible joints, such as knees, ankles, elbows, or wrists. Sometimes a simple commercial elastic bandage, available in various configurations specific to parts of the body, is all that is required. Neck braces and back braces are used to support these structures.


Benign muscle tumors may require no treatment, or they may be surgically removed. Malignant tumors may require surgery, radiation, and chemotherapy.




Perspective and Prospects

With the increased interest in physical exercise in the United States has come increasing awareness of the dangers of muscular damage that can arise from improper exercise, as well as of the cardiovascular risks that lie in wait for weekend athletes. Warm-up procedures are universally recommended. Individual exercisers, those in gym classes, professional athletes, and schoolchildren are routinely taken through procedures to stretch and loosen muscles before they start strenuous activity.


Greater attention is being paid to the special needs of young athletes, such as gymnasts. Over the years, new athletic toys and devices have constantly been developed for the young: Skateboards, skates, scooters, and bicycles expose children to a wide range of bumps, falls, bruises, strains, and sprains. Protective equipment and devices have been designed especially for them: Helmets, padding, and special uniforms give children more security in accidents. Similarly, adults should take the time and trouble to outfit themselves correctly for the sports and athletics in which they engage: Joggers should tape, wrap, and brace their joints; and cyclists should wear helmets.


Nevertheless, the incidence of sports- and athletics-related muscular damage is relatively high, pointing to the necessity for increased attention to prevention. The growth of sports medicine as a medical specialty helps considerably in this endeavor. Physicians and nurses in this area are trained to deal with the various problems that arise, and they are often expert commentators on the best means to prevent problems.




Bibliography


Brukner, Peter, and Karim Khan. Brukner & Khan's Clinical Sports Medicine. 4th ed. New York: McGraw-Hill, 2010. Print.



Kirkaldy-Willis, William H., and Thomas N. Bernard, Jr., eds. Managing Low Back Pain. 4th ed. New York: Churchill Livingstone, 1999. Print.



Litin, Scott C., ed. Mayo Clinic Family Health Book. 4th ed. New York: HarperResource, 2009. Print.



McArdle, William, Frank I. Katch, and Victor L. Katch. Exercise Physiology: Energy, Nutrition, and Human Performance. 7th ed. Boston: Lippincott Williams & Wilkins, 2010. Print.



Marieb, Elaine N., and Katja Hoehn. Human Anatomy and Physiology. 9th ed. San Francisco: Pearson/Benjamin Cummings, 2010. Print.



MacAuley, Domhnall. Oxford Handbook of Sport and Exercise Medicine. 2nd ed. Oxford: Oxford UP, 2012. Print.



Rouzier, Pierre A. The Sports Medicine Patient Advisor. 3rd ed. Valley Stream, N.Y.: SportsMed Press. Print.



Salter, Robert Bruce. Textbook of Disorders and Injuries of the Musculoskeletal System. 3rd ed. Baltimore: Williams & Wilkins, 1999. Print.

What is intelligence? |


Introduction

The idea that human beings differ in their capacity to adapt to their environments, to learn from experience, to exercise various skills, and in general to succeed at various endeavors has existed since ancient times. Intelligence is the attribute most often singled out as responsible for successful adaptations. Up to the end of the nineteenth century, notions about what constitutes intelligence and how differences in intelligence arise were mostly speculative. In the late nineteenth century, several trends converged to bring about an event that would change the way in which intelligence was seen and dramatically influence the way it would be studied. That event, which occurred in 1905, was the publication of the first useful instrument for measuring intelligence, the Binet-Simon scale, which was developed in France by Alfred Binet and Théodore Simon.





Although the development of
intelligence tests was a great technological accomplishment, it occurred, in a sense, somewhat prematurely, before much scientific attention had been paid to the concept of intelligence. This circumstance tied the issue of defining intelligence and a large part of the research into its nature and origins to the limitations of the tests that had been devised. In fact, the working definition of intelligence that many psychologists have used either explicitly or implicitly in their scientific and applied pursuits is the one expressed by Edwin Boring
in 1923, which holds that intelligence is whatever intelligence tests measure. Most psychologists realize that this definition is circular and inadequate in that it erroneously implies that the tests are perfectly accurate and able to capture all that is meant by the concept. Nevertheless, psychologists and others have proceeded to use the tests as if the definition were true, mainly because of a scarcity of viable alternatives. The general public has also been led astray by the existence of “intelligence” tests and the frequent misuse of their results. Many people have come to think of the intelligence quotient, or IQ, not as a simple score achieved on a particular test, which it is, but as a complete and stable measure of intellectual capacity, which it most definitely is not. Such misconceptions have led to an understandable resistance toward and resentment of intelligence tests.




Changing Definitions

Boring’s semifacetious definition of intelligence may be the best known and most criticized one, but it is only one among many that have been offered. Most experts in the field have defined the concept at least once in their careers. Two of the most frequently cited and influential definitions are the ones provided by Binet himself and by David Wechsler, author of a series of “second-generation” individual intelligence tests that overtook the Binet scales in terms of the frequency with which they are used. Binet believed that the essential activities of intelligence are to judge well, to comprehend well, and to reason well. He stated that intelligent thought is characterized by direction, knowing what to do and how to do it; by adaptation, the capacity to monitor one’s strategies for attaining a desired end; and by criticism, the power to evaluate and control one’s behavior. In 1975, almost sixty-five years after Binet’s death, Wechsler defined intelligence, not dissimilarly, as the global capacity of the individual to act purposefully, to think rationally, and to deal effectively with the environment.


In addition to the testing experts (psychometricians), developmental, learning, and cognitive psychologists, among others, are also vitally interested in the concept of intelligence. Specialists in each of these subfields emphasize different aspects of it in their definitions and research.


Representative definitions were sampled in 1921, when the Journal of Educational Psychology published the views of fourteen leading investigators, and again in 1986, when Robert Sternberg and Douglas Detterman collected the opinions of twenty-four experts in a book entitled What Is Intelligence? Contemporary Viewpoints on Its Nature and Definition. Most of the experts sampled in 1921 offered definitions that equated intelligence with one or more specific abilities. For example, Lewis Terman equated it with abstract thinking, which is the ability to elaborate concepts and to use language and other symbols. Others proposed definitions that emphasized the ability to adapt or learn. Some definitions centered on knowledge and cognitive components only, whereas others included nonintellectual qualities, such as perseverance.


In comparison, Sternberg and Detterman’s 1986 survey of definitions, which is even more wide-ranging, is accompanied by an organizational framework consisting of fifty-five categories or combinations of categories under which the twenty-four definitions can be classified. Some theorists view intelligence from a biological perspective and emphasize differences across species or the role of the central nervous system. Some stress cognitive aspects of mental functioning, while others focus on the role of motivation and goals. Still others, such as Anne Anastasi, choose to look on intelligence as a quality that is inherent in behavior rather than in the individual. Another major perspective highlights the role of the environment, in terms of demands and values, in defining what constitutes intelligent behavior. Throughout the 1986 survey, one can find definitions that straddle two or more categories.


A review of the 1921 and 1986 surveys shows that the definitions proposed have become considerably more sophisticated and suggests that, as the field of psychology has expanded, the views of experts on intelligence may have grown farther apart. The reader of the 1986 work is left with the clear impression that intelligence is such a multifaceted concept that no single quality can define it and no single task or series of tasks can capture it completely. Moreover, it is clear that to unravel the qualities that produce intelligent behavior, one must look not only at individuals and their skills but also at the requirements of the systems in which people find themselves. In other words, intelligence cannot be defined in a vacuum.


New intelligence research focuses on different ways to measure intelligence and on paradigms for improving or training intellectual abilities and skills. Measurement paradigms allow researchers to understand ongoing processing abilities. Some intelligence researchers include measures of intellectual style and motivation in their models.




Factor Analysis

The lack of a universally accepted definition has not deterred continuous theorizing and research on the concept of intelligence. The central issue that has dominated theoretical models of intelligence is the question of whether it is a single, global ability or a collection of specialized abilities. This debate, started in England by Charles Spearman, is based on research that uses the correlations among various measures of abilities and, in particular, the method of
factor analysis, which was also pioneered by Spearman. As early as 1904, Spearman, having examined the patterns of correlation coefficients among tests of sensory discrimination and estimates of intelligence, proposed that all mental functions are the result of a single general factor, which he later designated g. Spearman equated g with the ability to grasp and apply relations. He also allowed for the fact that most tasks require unique abilities, and he named those s, or specific, factors. According to Spearman, to the extent that performance on tasks was positively correlated, the correlation was attributable to the presence of g, whereas the presence of specific factors tended to lower the correlation between measures of performance on different tasks.
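Spearman's argument can be illustrated numerically. The sketch below is a simplified simulation, not a reconstruction of his 1904 data: it generates scores on four hypothetical tests that each draw on a shared general factor plus a specific factor, then checks the two signatures his theory predicts, namely uniformly positive correlations among the tests and one dominant factor in their correlation matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate scores under Spearman's single-factor model: each of four
# tests draws on a shared general ability g plus a specific factor s.
n = 1000
g = rng.normal(size=n)                    # one general-ability value per person
loads = np.array([0.9, 0.8, 0.7, 0.6])    # how strongly each test taps g
scores = np.outer(g, loads) + 0.5 * rng.normal(size=(n, 4))  # g part + s part

# Because every test shares g, all pairwise correlations are positive ...
corr = np.corrcoef(scores, rowvar=False)
print((corr[np.triu_indices(4, k=1)] > 0).all())  # True

# ... and one dominant factor accounts for most of the common variance:
# the largest eigenvalue of the correlation matrix far exceeds the others.
eigvals = np.linalg.eigvalsh(corr)        # eigenvalues in ascending order
print(eigvals[-1] > 2.0)                  # True: a strong general factor
```

Lowering the loadings (strengthening the specific factors) shrinks the correlations and the dominant eigenvalue, mirroring Spearman's observation that s factors lower the correlation between tasks.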


By 1927, Spearman had modified his theory to allow for the existence of an intermediate class of factors, known as group factors, which were neither as universal as g nor as narrow as the s factors. Group factors were seen as accounting for the fact that certain types of activities, such as tasks involving the use of numbers or the element of speed, correlate more highly with one another than they do with tasks that do not have such elements in common.


Factor-analytic research has undergone explosive growth and extensive variations and refinements in both England and the United States since the 1920s. In the United States, work in this field was influenced greatly by Truman Kelley, whose 1928 book Crossroads in the Mind of Man presented a method for isolating group factors, and L. L. Thurstone, who by further elaboration of factor-analytic procedures identified a set of about twelve factors that he designated as the “primary mental abilities.” Seven of these were repeatedly found in a number of investigations, using samples of people at different age levels, that were carried out by both Thurstone and others. These group factors or primary mental abilities are verbal comprehension, word fluency, speed and accuracy of arithmetic computation, spatial visualization, associative memory, perceptual speed, and general reasoning.




Organizational Models

As the search for distinct intellectual factors progressed, their number multiplied, and so did the number of models devised to organize them. One type of scheme, used by Cyril Burt, Philip E. Vernon, and others, is a hierarchical arrangement of factors. In these models, Spearman’s g factor is placed at the top of a pyramid and the specific factors are placed at the bottom; in between, there are one or more levels of group factors selected in terms of their breadth and arranged according to their interrelationships with the more general factors above them and the more specific factors below them.


In Vernon’s scheme, for example, the ability to change a tire might be classified as a specific factor at the base of the pyramid, located underneath an intermediate group factor labeled mechanical information, which in turn would be under one of the two major group factors identified by Vernon as the main subdivisions under g—namely, the practical-mechanical factor. The hierarchical scheme for organizing mental abilities is a useful device that is endorsed by many psychologists on both sides of the Atlantic. It recognizes that very few tasks are so simple as to require a single skill for successful performance, that many intellectual functions share some common elements, and that some abilities play a more pivotal role than others in the performance of culturally valued activities.


Another well-known scheme for organizing intellectual traits is the structure-of-intellect (SOI) model developed by J. P. Guilford. Although the SOI is grounded in extensive factor-analytic research conducted by Guilford throughout the 1940s and 1950s, the model goes beyond factor analysis and is perhaps the most ambitious attempt to classify systematically all the possible functions of the human intellect. The SOI classifies intellectual traits along three dimensions—namely, five types of operations, four types of contents, and six types of products, for a total of 120 categories (5 × 4 × 6). Intellectual operations consist of what a person actually does (for example, evaluating or remembering something), the contents are the types of materials or information on which the operations are performed (for example, symbols, such as letters or numbers), and the products are the form in which the contents are processed (for example, units or relations). Not all the 120 categories in Guilford’s complex model have been used, but enough factors have been identified to account for about one hundred of them, and some have proved very useful in labeling and understanding the skills that tests measure. Furthermore, Guilford’s model has served to call attention to some dimensions of intellectual activity, such as creativity and interpersonal skills, that had been neglected previously.
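Because the SOI model is a simple cross-classification, its 120 categories can be enumerated mechanically. The sketch below only illustrates the 5 × 4 × 6 structure; the dimension labels follow the ones commonly given for Guilford's 1967 formulation.

```python
from itertools import product

# The three SOI dimensions (labels as commonly given for Guilford's model).
operations = ["cognition", "memory", "divergent production",
              "convergent production", "evaluation"]          # 5 operations
contents = ["figural", "symbolic", "semantic", "behavioral"]  # 4 contents
products = ["units", "classes", "relations",
            "systems", "transformations", "implications"]     # 6 products

# Every intellectual trait in the model is one cell of the 5 x 4 x 6 grid.
cells = list(product(operations, contents, products))
print(len(cells))  # 120

# For example, remembering a list of digits would fall in the cell
# (memory, symbolic, units).
print(("memory", "symbolic", "units") in cells)  # True
```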




Competence and Self-Management

Contemporary theorists in the area of intelligence have tried to avoid the reliance on factor analysis and existing tests that have limited traditional research and have tried different approaches to the subject. For example, Howard Gardner, in his 1983 book Frames of Mind: The Theory of Multiple Intelligences, starts with the premises that the essence of intelligence is competence and that there are several distinct areas in which human beings can demonstrate competence. Based on a wide-ranging review of evidence from many scientific fields and sources, Gardner designated seven areas of competence as separate and relatively independent “intelligences.” In his 1993 work Multiple Intelligences, Gardner revised his theory to include an eighth type of intelligence. This set of attributes comprises verbal, mathematical, spatial, bodily/kinesthetic, musical, interpersonal, intrapersonal, and naturalist skills.


Another theory is the one proposed by Robert Sternberg in his 1985 book Beyond IQ: A Triarchic Theory of Human Intelligence. Sternberg defines intelligence, broadly, as mental self-management and stresses the “real-world,” in addition to the academic, aspects of the concept. He believes that intelligent behavior consists of purposively adapting to, selecting, and shaping one’s environment and that both culture and personality play significant roles in such behavior. Sternberg posits that differences in IQ scores reflect differences in individuals’ stages of developing the expertise measured by the particular IQ test, rather than attributing these scores to differences in intelligence, ability, or aptitude. Sternberg’s model has five key elements: metacognitive skills, learning skills, thinking skills, knowledge, and motivation. The elements all influence one another. In this work, Sternberg claims that measurements derived from ability and achievement tests are not different in kind but only in the point at which the measurements are made.




Intelligence and Environment

Theories of intelligence are still grappling with the issues of defining its nature and composition. Generally, newer theories do not represent radical departures from the past. They do, however, emphasize examining intelligence in relation to the variety of environments in which people actually live rather than to only academic or laboratory environments. Moreover, many investigators, especially those in cognitive psychology, are more interested in breaking down and replicating the steps involved in information processing and problem solving than they are in enumerating factors or settling on a single definition of intelligence. These trends hold the promise of moving the work in the field in the direction of devising new ways to teach people to understand, evaluate, and deal with their environments more intelligently instead of simply measuring how well they do on intelligence tests. In their 1998 article “Teaching Triarchically Improves School Achievement,” Sternberg and his colleagues note that teaching or training interventions can be linked directly to components of intelligence. Motivation also plays a role. In their 2000 article “Intrinsic and Extrinsic Motivation,” Richard Ryan and Edward Deci provide a review of contemporary thinking about intrinsic and extrinsic motivation. The authors suggest that the use of motivational strategies should promote student self-determination.


The most heated of all the debates about intelligence is the one regarding its determinants, often described as the nature-nurture controversy. The nature side of the debate was spearheaded by Francis Galton, a nineteenth-century English scientist who had become convinced that intelligence was a hereditary trait. Galton’s followers tried to show, through studies comparing identical and nonidentical twins reared together and reared apart and by comparisons of people related to each other in varying degrees, that genetic endowment plays a far larger role than the environment in determining intelligence. Attempts to quantify an index of heritability for intelligence through such studies abound, and the estimates derived from them vary widely. On the nurture side of the debate, massive quantities of data have been gathered in an effort to show that the environment, including factors such as prenatal care, social-class membership, exposure to certain facilitative experiences, and educational opportunities of all sorts, has the more crucial role in determining a person’s level of intellectual functioning.


Many critics, such as Anastasi (in a widely cited 1958 article entitled “Heredity, Environment, and the Question ’How?’”) have pointed out the futility of debating how much each factor contributes to intelligence. Anastasi and others argue that behavior is a function of the interaction between heredity and the total experiential history of individuals and that, from the moment of conception, the two are inextricably tied. Moreover, they point out that, even if intelligence were shown to be primarily determined by heredity, environmental influences could still modify its expression at any point. Most psychologists now accept this “interactionist” position and have moved on to explore how intelligence develops and how specific genetic and environmental factors affect it.




Bibliography


Alloway, Tracy Packiam, and Ross Alloway. Working Memory: The Connected Intelligence. New York: Psychology Press, 2013. Print.



Fancher, Raymond E. The Intelligence Men: Makers of the IQ Controversy. New York: W. W. Norton, 1987. Print.



Flynn, James R. What Is Intelligence? Beyond the Flynn Effect. New York: Cambridge University Press, 2009. Print.



Gardner, Howard. Frames of Mind: The Theory of Multiple Intelligences. New York: Basic Books, 2004. Print.



Gardner, Howard. Multiple Intelligences: The Theory in Practice. New York: Basic Books, 2006. Print.



Guilford, Joy Paul. The Nature of Human Intelligence. New York: McGraw-Hill, 1967. Print.



Kaufman, Scott Barry. Ungifted: Intelligence Redefined. New York: Basic, 2013. Print.



Martinez, Michael E. Future Bright: A Transforming Vision of Human Intelligence. New York: Oxford UP, 2013. Print.



Murdoch, Stephen. IQ: A Smart History of a Failed Idea. Hoboken, N.J.: John Wiley & Sons, 2007. Print.



Ryan, R. M., and E. L. Deci. “Intrinsic and Extrinsic Motivation.” Contemporary Educational Psychology 25 (2000): 54–67. Print.



Sternberg, Robert J. Successful Intelligence. New York: Plume, 1997. Print.



Sternberg, Robert J. The Triarchic Mind: A New Theory of Human Intelligence. New York: Viking Penguin, 1989. Print.



Sternberg, Robert J., B. Torff, and E. L. Grigorenko. “Teaching Triarchically Improves School Achievement.” Journal of Educational Psychology 90 (1998): 374–84. Print.



Vernon, Philip Ewart. Intelligence: Heredity and Environment. San Francisco: W. H. Freeman, 1979. Print.

Saturday, 1 November 2014

As an object falls in free fall, what energy change is taking place?


Before the object begins falling, it has gravitational potential energy which can be calculated by mgh (mass x acceleration due to gravity x height). As it begins falling due to the force of gravity, that gravitational potential energy is converted into kinetic energy, which can be calculated by 1/2mv^2 (1/2 x mass x velocity x velocity). That is the main way that energy is converted in this situation, but some energy is also converted into heat due to fluid friction with the air (commonly called air resistance).  


The Law of Conservation of Energy states that energy can be neither created nor destroyed, only converted from one form to another. In the example of a falling object, the total energy the object starts with needs to equal the total energy the object ends with. For the sake of learning this concept, let's ignore friction for a bit. The object begins with only potential energy, and the instant before it hits the ground it has only kinetic energy because it has lost all of its height. So, the initial potential energy will equal the kinetic energy of the object right before it hits the ground. The potential energy has been converted into kinetic energy.
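The frictionless case described above can be checked with a short numerical sketch (the mass, height, and g values below are arbitrary example numbers). Setting mgh equal to (1/2)mv^2 and solving for v gives the impact speed, and the two energy totals come out equal, just as conservation of energy requires.

```python
import math

m = 2.0     # mass in kg (example value)
g = 9.8     # acceleration due to gravity, m/s^2
h = 10.0    # drop height in m (example value)

potential = m * g * h               # energy before the drop: mgh
v_impact = math.sqrt(2 * g * h)     # from mgh = (1/2)mv^2, mass cancels
kinetic = 0.5 * m * v_impact ** 2   # energy just before impact

# Conservation of energy: the two totals match exactly (no friction).
print(math.isclose(potential, kinetic))  # True
print(round(v_impact, 1))                # 14.0 m/s
```

Note that the impact speed does not depend on the mass, since m appears on both sides of mgh = (1/2)mv^2; with air resistance included, some of the initial mgh would end up as heat instead, and the impact speed would be lower.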

How can a 0.5 molal solution be less concentrated than a 0.5 molar solution?

The answer lies in the units being used. "Molar" refers to molarity, a unit of measurement that describes how many moles of a solu...