Saturday 31 December 2016

What is Ritalin?


History of Use

Although originally synthesized in 1944, Ritalin was not studied for its therapeutic effects in humans until the mid-1950s. Early on, Ritalin was used to treat narcolepsy (a sleep disorder), depression, and chronic fatigue. By the 1960s it was discovered to produce a calming effect in children who had been diagnosed with symptoms of attention-deficit hyperactivity disorder (ADHD).




When Ritalin is administered orally, its absorption is slowed by the gastrointestinal tract, which effectively prevents the user from experiencing a euphoric high. However, when the drug is crushed and snorted or injected intravenously, it can produce intense feelings of pleasure that some have compared to the effects of cocaine.


Ritalin abuse has been on the rise. This increase has been driven by two primary factors. First, there has been an increase in the number of people diagnosed with ADHD. Second, persons without ADHD have learned that Ritalin can be used as a cognitive enhancer for improving academic performance on tasks that require sustained, focused attention.


Estimates indicate that between 3 and 10 percent of school-aged children in the United States meet the diagnostic criteria for ADHD. This trend has increased the overall availability of the drug. Adolescents and young adults who abuse Ritalin most often snort it or ingest larger quantities to experience exhilaration. In 2010, researchers Eric Racine and Cynthia Forlini examined rates of lifetime nonmedical stimulant use and found that the prevalence of using stimulants, including Ritalin, to augment cognition ranged from 3 to 11 percent of college students.




Effects and Potential Risks

Ritalin increases the amount of the neurotransmitter dopamine available in the brain by blocking its reuptake by the cells that release it. Short-term adverse effects include headache, nausea, irregular heartbeat, insomnia, agitation, anxiety, increased blood pressure, and, in rare instances, seizures. Long-term adverse effects include anxiety and sleeplessness. Early reports of growth suppression have been called into question by later studies. Dependence can occur with chronic abuse.




Bibliography


“DrugFacts: Stimulant ADHD Medications: Methylphenidate and Amphetamines.” National Institute on Drug Abuse. Natl. Insts. of Health, Jan. 2014. Web. 28 Oct. 2015.



Iversen, Leslie. Speed, Ecstasy, Ritalin: The Science of Amphetamines. New York: Oxford UP, 2008. Print.



Levinthal, Charles F. Drugs, Behavior, and Modern Society. 8th ed. Boston: Pearson, 2013. Print.



Racine, Eric, and Cynthia Forlini. “Cognitive Enhancement, Lifestyle Choice, or Misuse of Prescription Drugs?” Neuroethics 3 (2010): 1–4. Print.



“Signs and Symptoms of Prescription Drug Use.” Narconon. Narconon International, n.d. Web. 28 Oct. 2015.

Friday 30 December 2016

What is the Minnesota Multiphasic Personality Inventory (MMPI)?


Introduction

The Minnesota Multiphasic Personality Inventory (MMPI) was developed during the late 1930s, reaching publication in 1943. The authors of the test were Starke R. Hathaway, a psychologist, and J. C. McKinley, a physician to whom Hathaway reported at the University of Minnesota Hospitals. The test was originally developed to aid in the assessment of adult psychiatric patients, both to describe the type and severity of their disturbance and to measure patient change over time. It quickly grew in popularity to become the most widely used and researched psychological test ever published.





Three characteristics distinguished the MMPI from the psychological tests of the 1930s. First, it was developed as a broadband test, that is, a multiphasic test that would assess a number of personality attributes in a single administration. Most personality tests up to that time were narrower in their focus. Second, this was the first personality test to use an empirical method of selecting test questions. This procedure involved selecting test items that differentiated between persons making up a normal population and persons in the clinical group of interest (such as individuals diagnosed with schizophrenia, depression, or other psychiatric disorders) at a statistically significant level. Third, the MMPI incorporated validity scales, or measures of test-taking attitude that identified tendencies to either underreport or overreport psychopathology.
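The empirical item-selection procedure can be sketched in code. This is a hypothetical illustration, not the developers' exact methodology: the response counts are invented, and a Pearson chi-square test on a 2x2 table stands in for whatever specific statistical criterion was actually applied.

```python
# Hypothetical response counts for one candidate item:
# how many people in each group answered "true" or "false".
normal = {"true": 30, "false": 70}    # 100 normal respondents
clinical = {"true": 62, "false": 38}  # 100 clinical respondents

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table:
             true  false
    group1    a     b
    group2    c     d
    """
    n = a + b + c + d
    # Expected counts under independence: (row total * column total) / n.
    expected = [
        (a + b) * (a + c) / n, (a + b) * (b + d) / n,
        (c + d) * (a + c) / n, (c + d) * (b + d) / n,
    ]
    observed = [a, b, c, d]
    # Sum (observed - expected)^2 / expected over all four cells.
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

chi2 = chi_square_2x2(normal["true"], normal["false"],
                      clinical["true"], clinical["false"])

# With 1 degree of freedom, chi2 > 3.84 corresponds to p < .05,
# so an item like this one would be retained as discriminating.
print(f"chi-square = {chi2:.2f}, keep item: {chi2 > 3.84}")
```

An item on which the two groups answered similarly would yield a small chi-square value and be discarded; only items that separated the groups reliably survived into the final scales.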




Restandardization

An important limitation of the original MMPI had to do with its normative sample, or the reference group used to represent the normal population (in contrast to the clinical groups). The original normal reference group consisted primarily of a rural, all-white, eighth-grade-educated population who were visiting patients at the University of Minnesota Hospitals. Over time, a number of criticisms were made that this group, predominantly Scandinavian in origin, was not representative of the broader United States population. Other problems with the MMPI emerged as well, including outdated test item content, poorly worded items, and item content objectionable to contemporary test takers (for instance, questions regarding religious beliefs or bodily functions). In response to such concerns, an MMPI restandardization project was begun in 1982, culminating in the publication of the MMPI-2 in 1989. Comparison of the restandardized normal sample to 1990 census data by ethnicity, age, and education indicated that the new normative group was significantly more representative of the United States population than were the original norms, with the exception that well-educated persons were overrepresented. The MMPI-2 also incorporated additional validity measures and newly developed scales reflecting contemporary clinical problems.




Description of the Test

The MMPI-2 is an objectively scored, standardized questionnaire consisting of 567 self-descriptive statements answered as either “true” or “false.” Responses can be either hand- or computer-scored and are summarized on a profile sheet. Interpretation is based on both the configuration of scales on the profile sheet and demographic variables characterizing the test taker. The basic profile sheet is made up of nine validity measures and ten traditional clinical scales. Fifteen additional “content” scales can also be scored, as well as potentially hundreds of supplementary and research scales. The validity scales measure test-taking attitudes, including such characteristics as consistencies in response patterns and tendencies to exaggerate or minimize psychological problems.

The clinical scales are labeled both by a number and with traditional psychiatric diagnostic labels such as depression, paranoia, and schizophrenia. The specific MMPI scale labels may be misleading in that some diagnostic labels are outdated (such as “psychasthenia” or “hysteria”). In addition, the scales do not effectively differentiate diagnostic groups (for instance, an elevation on the paranoia scale is not exclusive to persons with a paranoia diagnosis). It has thus become standard practice to refer to profiles by characteristic scale numbers (such as a “49” profile) and to interpret them according to relevant research rather than by scale labels.

The fifteen content scales reflect the client’s endorsement of test items whose content is obvious and descriptive of particular problem areas such as anxiety, health concerns, or family problems. The many supplementary scales measure a wide range of concerns, ranging from addiction proneness to post-traumatic stress disorder to marital distress. The MMPI-2 is appropriate for use only with those aged eighteen years and older. A shorter version of the test, the MMPI-A, is available for use with fourteen- to eighteen-year-old adolescents.
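The objective scoring procedure described above can be sketched as follows. Every item number, scoring key, and norm in this sketch is invented for illustration (the real MMPI-2 keys and norms are proprietary); the sketch assumes the common convention of reporting scale scores as T-scores with a mean of 50 and a standard deviation of 10.

```python
# Hypothetical scoring keys: which items count toward each scale's
# raw score when answered "true" or when answered "false".
scale_keys = {
    "2 (Depression)": {"true": {3, 8, 15}, "false": {21, 40}},
    "7 (Psychasthenia)": {"true": {5, 15, 33}, "false": {12}},
}

# Invented normative mean and standard deviation of each raw score.
norms = {
    "2 (Depression)": (2.0, 1.2),
    "7 (Psychasthenia)": (1.5, 1.0),
}

def score_profile(answers):
    """answers maps item number -> True/False.
    Returns scale name -> T-score (mean 50, SD 10 in the norm group)."""
    profile = {}
    for scale, keys in scale_keys.items():
        # Raw score: count of responses matching the scale's key.
        raw = sum(1 for item, ans in answers.items()
                  if (ans and item in keys["true"])
                  or (not ans and item in keys["false"]))
        mean, sd = norms[scale]
        profile[scale] = round(50 + 10 * (raw - mean) / sd)
    return profile

answers = {3: True, 8: True, 15: False, 21: False, 40: True,
           5: True, 12: False, 33: False}
print(score_profile(answers))
```

The T-score conversion is what lets scales with different numbers of items be plotted on a single profile sheet and compared against the same elevation cutoffs.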


Although the MMPI-2 is still widely administered, a revised version published in 2008 is known as the MMPI-2 Restructured Form (MMPI-2-RF). The MMPI-2-RF is 338 questions long and scores test takers on a restructured set of clinical scales intended to better reflect current understanding of the psychological issues the test covers.




Bibliography


Ben-Porath, Yossef S. Interpreting the MMPI-2-RF. Minneapolis: U of Minnesota P, 2012. Print.



Butcher, James N., ed. Basic Sources on the MMPI-2. Minneapolis: U of Minnesota P, 2000. Print.



Butcher, James N., ed. International Adaptations of the MMPI-2. Minneapolis: U of Minnesota P, 1996. Print.



Butcher, James N., and Carolyn L. Williams. "Personality Assessment with the MMPI-2: Historical Roots, International Adaptations, and Current Challenges." Applied Psychology: Health and Well-Being 1.1 (2009): 105–35. Print.



Butcher, James N., and John R. Graham. Development and Use of the MMPI-2 Content Scales. 3d ed. Minneapolis: U of Minnesota P, 2007. Print.



Caldwell, Alex B. “What Do the MMPI Scales Fundamentally Measure? Some Hypotheses.” Journal of Personality Assessment 76.1 (2001): 1–17. Print.



Dahlstrom, W. G., D. Lachar, and L. W. Dahlstrom. MMPI Patterns of American Minorities. Minneapolis: U of Minnesota P, 1986. Print.



Framingham, Jane. "Minnesota Multiphasic Personality Inventory (MMPI)." Psych Central. Psych Central, 2011. Web. 20 May 2014.



Friedman, A. F., R. Lewak, D. S. Nichols, and J. T. Webb. Psychological Assessment with the MMPI-2. Mahwah: Erlbaum, 2001. Print.

What are fungal infections?


Types of Fungus

The term “fungus” is a general one for plantlike organisms that do not produce their own food through photosynthesis but live as heterotrophs, absorbing complex carbon compounds from other living or dead organisms. Fungi were formerly classified in the plant kingdom (together with bacteria, all algae, mosses, and green plants); more recently, biologists have realized that there are fundamental differences in cell structure and organization separating the lower plants into a number of groups that merit recognition as kingdoms. Fungi differ from bacteria and actinomycetes in being eukaryotic, that is, in having an organized nucleus with chromosomes within the cell. One division of fungi, which is believed to be distantly related to certain aquatic algae, has spores that swim by means of flagella. These water molds include pathogens of fish and aquatic insect larvae and a few economically important plant pathogens, but none have yet been recorded as causing a defined, nonopportunistic human disease. The other division of fungi lacks flagellated spores at any stage in its life cycle. It encompasses most familiar fungi, including molds, mushrooms, yeasts, wood-rotting fungi, leaf spots, and all fungi reliably reported to cause disease in humans.




Fungi that lack flagellated stages in their life cycles are further divided into three classes and one form-class according to the manner in which the spores are produced. The first of these, the Zygomycetes (for example, Rhizopus, the black bread mold), produce thick-walled, solitary sexual spores as a result of hyphal fusion; they are a diverse assemblage including many parasites of insects. Species in the genus Mucor cause a rare, fulminating, rapidly fatal systemic disease called mucormycosis, generally in acidotic diabetic patients. The Basidiomycetes, characterized by the production of sexual spores externally on a club-shaped structure called a basidium, include mushrooms, plant rusts (such as stem rust of wheat), and most wood-rotting fungi. There is one important basidiomycetous human pathogen (Filobasidiella neoformans) and a few confirmed opportunists. The Ascomycetes, including most yeasts and lichens, many plant pathogens (such as those causing Dutch elm disease and chestnut blight), and a great diversity of saprophytes growing on wood and herbaceous material, produce sexual spores in a saclike structure called an ascus. One ascomycete, Piedraia nigra, regularly produces its characteristic fruiting bodies on its human host; others do so in culture. In addition, there is a form-class, the Deuteromycetes, consisting of fungi that produce only asexual spores. Most are suspected of being stages in the life cycle of Ascomycetes, but some are Basidiomycetes or are of uncertain affinity. Human pathogens, at least as they occur on the host or in typical laboratory culture, are mostly Deuteromycetes.


Medical mycology, the study of fungi that cause disease in humans, would occupy only a single chapter in a book on the relationship of fungi to human affairs. Relatively few fungi have become adapted to living as parasites of human (or even mammalian) hosts, and of these, the most common ones cause superficial and cutaneous mycoses (fungal infections) with annoying but scarcely life-threatening effects. Serious fungal diseases are mercifully rare among people with normally functioning immune systems.


The majority of fungi are directly dependent on green plants as parasites, as symbionts living in a mutually beneficial association with a plant, or as saprophytes on dead plant material. One large, successful group of Ascomycetes lives in symbiotic association with algae, forming lichens. Fungi play a critical ecological role in maintaining stable plant communities. As plant pathogens, they cause serious economic loss, leading in extreme cases to famine. The ability of saprophytic fungi to transform chemically the substrate on which they are growing has been exploited by the brewing industry since antiquity and has been expanded to other industrial processes. Penicillin, other antibiotics, and some vitamins are extracted from fungi, which produce a vast array of complex organic compounds whose potential is only beginning to be explored and which constitutes a fertile field for those interested in genetic engineering.


This same chemical diversity and complexity also enable fungi to produce mycotoxins—chemicals that have an adverse effect on humans and animals. Saprophytic fungi growing on improperly stored food are a troublesome source of toxic compounds, some of which are carcinogenic. The old adage that “a little mold won’t hurt you” is true in the sense that common molds do not cause acute illness when ingested, but it is poor advice in terms of long-term health.


A mycotoxicity problem of considerable medical and veterinary interest is posed by Ascomycetes of the family Clavicipitaceae, which are widespread on grasses. Some species of grasses routinely harbor systemic, asymptomatic infections by these fungi, which produce compounds toxic to animals that graze on them. From the point of view of the grass, the relationship is symbiotic, since it discourages grazing; from the point of view of range management, the relationship is deleterious to stock. Claviceps purpurea, a pathogen of rye, causes a condition known as ergotism in humans, with symptoms including miscarriage, vascular constriction leading to gangrene of the limbs, and hallucinations. Outbreaks of hallucinatory ergotism are thought by some authors to be responsible for some of the more spectacular perceptions of witchcraft in premodern times. Better control of plant disease and a decreased reliance on rye as a staple grain have virtually eliminated ergotism as a human disease in the twentieth century.


Fungi exhibit a bewildering variety of forms and life cycles; nevertheless, certain generalizations can be made. A fungus starts life as a spore, which may be a single cell or a cluster of cells and is usually microscopic. Under proper conditions, the spore germinates, producing a filament of fungal cells oriented end to end, called a hypha. Hyphae grow into the substrate, secreting enzymes that dissolve structures to provide food for the growing fungus and to provide holes through which the fungus can grow. In an asexually reproducing fungus, some of the hyphae become differentiated, producing specialized cells (spores) that differ from the parent hypha in size and pigmentation and are adapted for dispersal, but that are genetically identical to the parent. In a sexually reproducing fungus, two hyphae (or a hypha and a spore from different individuals) fuse, their nuclei fuse, and meiosis takes place before spores are formed. Spores are often produced in a specialized fruiting body, such as a mushroom.


Fungus spores are ubiquitous. Common saprophytic fungi produce airborne spores in enormous quantities; thus it is difficult to avoid contact with them in all but the most hypersterile environments. In culture, fungi (including pathogenic species) produce large numbers of dry spores that can be transmitted in the air from host to host, making working with fungi in a medical laboratory potentially hazardous.




Fungal Diseases and Treatments

Human fungal diseases are generally placed in four broad categories according to the tissues they attack, and they are further subdivided according to specific pathologies and the organisms involved. The categories of disease are superficial mycoses, cutaneous mycoses, subcutaneous mycoses, and systemic mycoses.


Superficial mycoses affect hair and the outermost layer of the epidermis and do not evoke a cellular response. They include tinea versicolor and tinea nigra, deuteromycete infections that cause discolored patches on skin, and black piedra, caused by an ascomycete growing on hair shafts. They can be treated with a topical fungicide, such as nystatin, or, in the case of piedra, by shaving off the affected hair.


Cutaneous mycoses involve living cells of the skin or mucous membrane and evoke a cellular response, generally localized inflammation. The dermatomycoses, caused by dermatophytes that attack skin and hair, include tinea capitis (ringworm of the scalp), tinea pedis (athlete’s foot), and favus, a scaly infection of the scalp. Domestic animals serve as a reservoir for some cutaneous mycoses. The organisms responsible are generally fungi imperfecti in the genera Microsporum and Trichophyton. Cutaneous mycoses can be successfully treated with topical nystatin or oral griseofulvin.



Candida albicans, a ubiquitous pleomorphic fungus with both a yeast and a mycelial form, causes a variety of cutaneous mycoses as well as systemic infections collectively named candidiasis. Thrush is a Candida yeast infection of the mouth that is most common in infants, especially in infants born to mothers with vaginal candidiasis. Vaginal yeast infections periodically affect 18 to 20 percent of the adult female population and more than 30 percent of pregnant women. Candida also causes paronychia, a nailbed infection. Small populations of Candida are normally present in the alimentary tract and genital tract of healthy individuals; candidiasis of the mucous membranes tends to develop in response to antibiotic treatment, which disturbs the normal bacterial flora of the body, or in response to metabolic changes or decreasing immune function.


None of the organisms causing cutaneous mycoses elicits a lasting immune response, so recurring infections by these agents are the rule rather than the exception. Even in temperate climates, under modern standards of hygiene, cutaneous mycoses are extremely common.


Subcutaneous mycoses, affecting skin and muscle tissue, are predominantly tropical in distribution and not particularly common. Chromomycosis and maduromycosis are caused by soil fungi that enter the skin through wounds, causing chronic localized tumors, usually on the feet. Sporotrichosis enters through wounds and spreads through the lymphatic system, causing skin ulcers associated with lymph nodes. Amphotericin B, a highly toxic systemic antifungal agent, has been used to treat all three conditions; potassium iodide is used to treat sporotrichosis, and localized chromomycosis and maduromycosis lesions can be surgically removed.


Systemic mycoses, the most serious of fungal infections, have the ability to become generally disseminated in the body. The main nonopportunistic systemic mycoses known in North America are histoplasmosis, caused by Histoplasma capsulatum; coccidioidomycosis, caused by Coccidioides immitis; blastomycosis, caused by Ajellomyces (or Blastomyces) dermatitidis; and cryptococcosis, caused by Cryptococcus (or Filobasidiella) neoformans. Similar infections, caused by related species, occur in other parts of the world.


Coccidioidomycosis, also called San Joaquin Valley fever or valley fever, will serve as an example of the etiology of systemic mycoses. The causative organism lives in arid soils in the American Southwest; its spores are wind-disseminated. When inhaled, the fungus grows in the lungs, producing a mild respiratory infection that is self-limiting in perhaps 95 percent of cases. The mild form of the disease is common in rural areas. In a minority of cases, a chronic lung disease whose symptoms resemble tuberculosis develops. There is also a disseminated form of the disease, producing meningitis; chronic cutaneous disease, with the production of ulcers and granulomas; and attack of the bones, internal organs, and lymphatic system. A chronic pulmonary infection may become systemic in response to factors that undermine the body’s immune system. The factors governing susceptibility among individuals with intact immune systems are poorly understood.


Histoplasmosis (also known as summer fever, cave fever, cave disease, Mississippi Valley fever, or Ohio Valley disease) is even more common; 90 percent of people tested in the southern Mississippi Valley show a positive reaction to this fungus, indicating prior, self-limiting lung infection. The fungus is associated with bird and bat droppings, and severe cases sometimes occur when previously unexposed individuals are exposed to high levels of inoculum in caves where bats roost. A related organism, Histoplasma duboisii, occurs in central Africa. Blastomycosis causes chronic pulmonary disease, chronic cutaneous disease, and systemic disease, all of which were usually fatal until the advent of chemotherapy with amphotericin B. The natural habitat of the fungus is unclear. Cryptococcus neoformans occurs in pigeon droppings and is worldwide in distribution. The subclinical pulmonary form of the disease is probably common; invasive disease occurs in patients with collagen diseases, such as lupus, and in patients with weakened immune systems. It is the leading cause of invasive fungal disease in patients with acquired immunodeficiency syndrome (AIDS).


Systemic fungal diseases are notoriously difficult to treat. Chemotherapy for a disease caused by an invading organism depends on finding a chemical compound that will selectively kill or inhibit that organism without damaging the host. The more closely the parasite is related biologically to the host, the harder it is to find a compound that acts in such a selective manner. Fungi are, from a biological standpoint, more like humans than they are like bacteria, and antibacterial antibiotics are ineffective against them. If a fungus has invaded the skin or the digestive tract, it can be attacked with toxic substances that are not readily absorbed into the bloodstream, but this approach is not appropriate for a systemic infection. Amphotericin B, itraconazole, and fluconazole, the drugs of choice for systemic fungal infections, can be highly toxic to humans. Thus, dosage is critical, close clinical supervision is necessary, and long-term therapy may not be feasible.




Perspective and Prospects

Medical mycology textbooks written before 1980 tended to focus on two categories of fungal infection: the common, ubiquitous, and comparatively benign superficial and cutaneous mycoses, frequently seen in clinical practice in the industrialized world, and the subcutaneous and deep mycoses, treated as a rare and/or predominantly tropical problem. Opportunistic systemic infections, if mentioned at all, were regarded as a rare curiosity.


The rising population of patients with compromised immune systems, including cancer patients undergoing chemotherapy, people being treated with steroids for various conditions, transplant patients, and people with AIDS, has dramatically changed this clinical picture. Between 1980 and 1986, more than a hundred fungi, a few previously unknown and the majority common inhabitants of crop plants, rotting vegetable debris, and soil, were identified as causing human disease. The number continues to increase steadily. Compared to organisms routinely isolated from soil and plants, these opportunistic fungi do not seem to have any special characteristics other than the ability to grow at human body temperature; however, the possibility that an opportunistic pathogen might mutate into a form capable of attacking healthy humans is worrisome.


Systemic opportunistic human infections have been attributed to Alternaria alternata and Fusarium oxysporum, common plant pathogens that cause diseases of tomatoes and strawberries, respectively. Several species of Aspergillus, saprophytic molds (many of them thermophilic), have long been implicated in human disease. Colonizing aspergillosis, involving localized growth in the lungs of people exposed to high levels of Aspergillus spores (notably agricultural workers handling silage), is not particularly rare among people with normal immune systems, but the more severe invasive form of the disease, in which massive lung lesions form, and disseminated aspergillosis, in which other organs are attacked, almost always involve immunocompromised patients. Ramichloridium schulzeri, described originally from wheat roots, causes “golden tongue” in leukemia patients; fortunately, this infection responds to amphotericin B. Scedosporium inflatum, first isolated from a serious bone infection in an immunocompromised patient in 1984, is being isolated with increasing frequency in cases of disseminated mycosis; it resists standard drug treatment.


Oral colonization by strains of Candida is often the first sign of AIDS-related complex or full-blown AIDS in an individual harboring the human immunodeficiency virus (HIV). Drug therapy with fluconazole is effective against oral candidiasis, but relapse rates of up to 50 percent within a month of the cessation of drug therapy are reported. Reported rates of disseminated candidiasis in AIDS patients range from 1 to 10 percent. Invasive procedures such as intravenous catheterization carry a significant risk of introducing Candida and other common fungi into the bloodstream of patients.



Pneumocystis jiroveci (formerly called Pneumocystis carinii), the organism causing a form of pneumonia that is the single most important cause of death in patients with AIDS, was originally classified as a sporozoan—that is, as a parasitic protozoan—but detailed investigations of the life cycle, metabolism, and genetic material of Pneumocystis have convinced some biologists that it is actually an ascomycete, although an anomalous one that lacks a cell wall. Unfortunately, while antibiotics and corticosteroids are used to treat the illness, it does not respond to therapy with the antifungal drugs currently in use.


In general, antifungal drug therapy for mycoses in AIDS patients is not very successful. In the absence of significant patient immunity, it is difficult to eradicate a disseminated infection from the body entirely, making a resurgence likely once drug therapy is discontinued. Reinfection is also likely if the organism is a common component of the patient’s environment.


Given the increasing number of lethal systemic fungal infections seen in clinical practice, there is substantial impetus for a search for more effective, less toxic antifungal drugs. A number of compounds, produced by bacteria and chemically dissimilar to both antibacterial antibiotics and the most widely used antifungal compounds, have been identified and are being tested. It is also possible that the plant kingdom, which has been under assault by fungi for all its long geologic history, may prove a source for medically useful antifungal compounds.




Bibliography


Alcamo, I. Edward. Microbes and Society: An Introduction to Microbiology. 2d ed. Sudbury, Mass.: Jones and Bartlett, 2008.



Biddle, Wayne. A Field Guide to Germs. 3d ed. New York: Anchor Books, 2010.



Carlile, Michael J., Sarah Watkinson, and Graham W. Gooday. The Fungi. 2d ed. San Diego, Calif.: Academic Press, 2008.



Crissey, John Thorne, Heidi Lang, and Lawrence Charles Parish. Manual of Medical Mycology. Cambridge, Mass.: Blackwell Scientific, 1995.



"Fungal Diseases." Centers for Disease Control and Prevention, Nov. 19, 2012.



"Fungal Infections." MedlinePlus, Jan. 31, 2013.



"Fungal Infections." National Institute of Allergy and Infectious Diseases, Apr. 16, 2006.



Kumar, Vinay, Abul K. Abbas, and Nelson Fausto, eds. Robbins and Cotran Pathologic Basis of Disease. 8th ed. Philadelphia: Saunders/Elsevier, 2010.



Mandell, Gerald L., John E. Bennett, and Raphael Dolin, eds. Mandell, Douglas, and Bennett’s Principles and Practice of Infectious Diseases. 7th ed. New York: Churchill Livingstone/Elsevier, 2010.



Murray, Patrick R., Ken S. Rosenthal, and Michael A. Pfaller. Medical Microbiology. 7th ed. Philadelphia: Mosby/Elsevier, 2013.



Richardson, Malcolm D., and David W. Warnock. Fungal Infection: Diagnosis and Management. 4th ed. Hoboken, NJ: Wiley-Blackwell, 2012.



Rippon, John Willard. Medical Mycology: The Pathogenic Fungi and Pathogenic Actinomycetes. 3d ed. Philadelphia: W. B. Saunders, 1988.



Shaw, Michael, ed. Everything You Need to Know About Diseases. Springhouse, Pa.: Springhouse Press, 1996.



Weedon, David. Skin Pathology. 3d ed. New York: Churchill Livingstone/Elsevier, 2010.

Thursday 29 December 2016

What is experimental psychology?


Introduction


Wilhelm Wundt founded the field of psychology, which he termed “experimental psychology,” upon establishing his laboratory at the University of Leipzig in Germany in 1879. Wundt was the first to identify psychology as a separate science, on par with natural sciences such as biology, physics, and chemistry. Wundt himself was trained as a physiologist and philosopher, and the methods he used in both of those disciplines combined to give structure to the new field. The role of experimental psychology at its founding was to answer philosophical questions using scientific methods. Wundt defined consciousness as the appropriate subject matter for experimentation and devised methods such as introspection (reporting on inner experiences by the subjective observer) to study the activity and structures of the mind (the basis for the school of thought later termed structuralism). Wundt removed psychology from the metaphysical realm by demonstrating that the mind could be studied scientifically. This profoundly affected the development of psychology in the years that followed, establishing an emphasis on the importance of scientific research methods.







Over the first century of its existence and beyond, psychology came to be defined as the scientific study of consciousness, emotions, and behavior, and experimental psychology is no longer the only type. There now are many other subfields in psychology, such as clinical psychology, social psychology, and developmental psychology, but experimental methods still underlie most of them because that is how knowledge is accumulated in each area. Experimental psychology itself has expanded to include both basic and applied research.




Basic and Applied Research

Basic research, the kind that Wundt himself conducted, is undertaken for the purpose of advancing scientific knowledge, even if the knowledge gained is not directly relevant to improving the lives of individuals. This type of research is more likely to take place in laboratory settings, often on university campuses, using undergraduate students or specially bred lab animals as subjects. Although these settings do not approximate the natural environment, they permit factors that could interfere with interpretation of the results to be controlled or eliminated, making conclusions more accurate. Examples of basic research include studying animal behavior, examining the perceptual abilities of humans, and determining the factors contributing to aggressive behavior.


Basic research was the only type of research conducted in experimental psychology until the first decade of the twentieth century, when applied psychology was introduced through the American school of thought termed functionalism. It was at this time that psychologists became interested not only in how the mind works but also in how the mind helps individuals interact with their environment. Much of this newer research was conducted with humans in their natural environments. For example, school psychologists sought effective tests so that students could be taught at the appropriate levels (the first intelligence tests) and ways to control behavioral problems in the school or home. Researchers also tried to determine the factors that would increase efficiency and satisfaction in the workplace. In addition to these scenarios, researchers now attempt to solve such problems as finding effective ways to teach children with developmental disabilities, identifying new therapy techniques for those with psychological disorders, and developing strategies to increase healthy behaviors such as exercise and decrease unhealthy behaviors such as drug abuse. Applied research results tend to be more generalizable to others, but the relative lack of control sometimes limits the conclusions that can be drawn from the results, so caution must be taken when recommending procedures based on experimental findings.




The Scientific Method

The methods used for conducting either basic or applied research in experimental psychology are essentially the same as for conducting research in any other science. The first step in the process is identifying a research problem, a question that can be answered by appealing to evidence. Next comes a search for a theory, a general statement that integrates many observations from various research studies and is testable. From the theory a hypothesis is formed, a more precise version of the theory that makes a specific prediction about the relationship between the variables in the research being conducted. At this point, the research is designed, which involves decisions about how many and what type of participants will be used, where the research will be conducted, what measurement procedures will be developed, and so on. After the relevant data are collected, they must be analyzed visually or statistically. This allows conclusions to be drawn about the findings, which are then communicated to others through presentation or publication. The research process is circular: the more questions are answered, the more new questions arise, and that is how science advances.


There are key characteristics that must be present for good scientific research. Objectivity means that research must be free from bias. Data are to be collected, analyzed, interpreted, and reported objectively so that others are free to draw their own conclusions, even if they are different from those of the researchers. Control of factors that may affect the results of the research is necessary if those factors are not the specific ones being studied. For example, control for the effects of gender can be accomplished by ensuring that research samples include approximately the same number of males and females, unless the researcher is interested in looking for potential gender differences in behavior. In that case, the researcher would still want to control for factors such as age, education, or other characteristics that might be relevant. Control allows researchers to be more confident about the accuracy of their conclusions.


Operationism involves defining the variables to be studied in terms of the way they are measured. Many different operational definitions are possible for a particular concept such as aggression or love, and the results of research studies that use different operational definitions when combined provide more complete knowledge than if only one operational definition were used. Finally, replication is a key part of the research process because the aim of science is to accept only knowledge that has been verified by others. This requirement that results be replicable helps prevent bias and furthers objectivity.




Descriptive Versus Experimental Research

Descriptive research is conducted to describe and predict behavior. Often these results are useful on their own, or such studies provide information to be used in future, more controlled, experiments. Descriptive research can include archival research (the analysis of existing records of behavior), case studies (in-depth analyses of one or a few individuals), naturalistic observation (monitoring the behavior of subjects in their natural environment), and survey research, in which individuals report on their own behavior. It also includes correlational research, which examines relationships between variables that cannot be manipulated (such as gender, family background, or other personal characteristics that are not changeable). Correlational studies make it possible to predict changes in one variable based on observed changes in another, but as in all descriptive research, it is impossible to know whether changes in one variable caused the observed changes in another, so the conclusions that can be drawn are limited.


The only type of research that can explain the causes of behavior is true experimental research, because that is the only type of research in which variables can be manipulated to see the observed effects on behavior. The variable that is manipulated is called the independent variable, and the variable that is measured to see the effects of the manipulation is called the dependent variable. Independent variables can be manipulated by measuring the effects of their presence versus absence (for instance, how reaction times differ when alcohol is consumed), their degree (how reaction times change as more alcohol is consumed), or their type (reaction times when alcohol is consumed as compared to when caffeine is consumed). Dependent variables are measured in terms of their latency (how long it takes for a response to occur) or duration (how long a response lasts), force (how strong the response is), rate or frequency (how often a response occurs within a period of time), or accuracy (the correctness of the response). There can be one or more each of the independent and dependent variables in any experiment, although having more variables increases the complexity of the analysis of the results. Every other variable that is present that could have an effect on the dependent variable in addition to the independent variable is considered an extraneous variable. These must be controlled (kept constant) or eliminated so that the researcher can be sure that changes in the dependent variable are due only to changes in the independent variable.




Bibliography


Christensen, Larry. Experimental Methodology. 10th ed. Boston: Allyn, 2007. Print.



Jahoda, Gustav. "Critical Comments on Experimental, Discursive, and General Social Psychology." Journal for the Theory of Social Behaviour 43.3 (2013): 341–60. Print.



Kantowitz, Barry H., David G. Elmes, and Henry L. Roediger III. Experimental Psychology: Understanding Psychological Research. 9th ed. Belmont: Wadsworth, 2009. Print.



Lundin, Robert W. Theories and Systems of Psychology. 5th ed. Lexington: Heath, 1996. Print.



Myers, Anne, and Christine H. Hansen. Experimental Psychology. Belmont: Thomson, 2012. Print.



Myers, David G. Exploring Psychology. 7th ed. New York: Worth, 2008. Print.



Rose, Anne C. "Animal Tales: Observations of the Emotions in American Experimental Psychology, 1890–1940." Journal of the History of the Behavioral Sciences 48.4 (2012): 301–17. Print.



Smith, Randolph A., and Stephen F. Davis. The Psychologist as Detective: An Introduction to Conducting Research in Psychology. 5th ed. Upper Saddle River: Prentice, 2009. Print.

What is abstinence-based treatment? |


History

Abstinence-based treatment was first developed at Willmar State Hospital and Hazelden Treatment Center in Minnesota in 1949. The treatment was targeted at “hopeless” alcoholics and was based on the principles of Alcoholics Anonymous (AA). Borrowing from the twelve-step meetings of AA, developed in the 1930s, these alcohol treatment centers added residential treatment that included lectures, open discussions, small group therapy, and peer interaction.




First known as the Willmar or Hazelden model, and then as the Minnesota model in the 1970s, abstinence-based treatment centers became the predominant model for treating both alcohol and drug abuse in the 1980s. Private treatment in twenty-eight-day residential centers dominated the treatment landscape but was curtailed by cost-cutting managed care in the 1990s.


Most abstinence-based treatment now occurs in outpatient settings. Treatment focuses on individualized treatment plans, family involvement, and frequent use of group meetings such as AA, Narcotics Anonymous, and Al Anon. Studies show that more than 90 percent of drug and alcohol treatment programs in the United States are abstinence-based, and most use the twelve-step program of AA as a core principle.




Basic Principles

The first treatment principle is that all addiction, no matter the substance, is caused by lifelong physiological, social, and psychological disease processes. No cure exists for the disease of addiction, but recovery is possible through peer support and positive change. This principle removes the guilt that is associated with addiction and focuses on the disease instead of the addicted person. The addicted person begins by admitting that the disease makes him or her powerless over drugs and alcohol.


Recovery involves taking responsibility for the disease and making necessary changes in thinking and behavior. This type of cognitive behavioral therapy may include individual and group therapy. Personal change may include recognizing denial and other self-defeating behaviors and replacing these negative thoughts with gratitude, honesty, forgiveness, and humility. For many addicts and alcoholics, key components of successful abstinence include a spiritual awakening, faith in a higher power, and faith in the power of being part of a recovery community. A final principle is that without continued abstinence, addiction is a progressive and ultimately fatal disease.




Basic Components

Diagnosis should begin with a comprehensive evaluation that recognizes that addiction is a social, biological, and psychological disease. The initial phase of treatment may require medically supervised detoxification. Comorbid diseases related to alcohol or drug abuse and dual diagnoses such as bipolar disorder, attention-deficit/hyperactivity disorder, or depression should also be recognized and treated.


Treatment for primary addiction may include the use of medications that control cravings, individual cognitive behavioral therapy, group therapy, family therapy, and relapse prevention therapy. Abstinence-based treatment may be adapted to a long period of residential treatment or may occur through outpatient care. Because this treatment considers addiction a lifelong disease, addicts are encouraged to attend aftercare programs and twelve-step meetings, where they can benefit from the reinforcement of core principles and the support of other recovering people.




Success and Criticism

Abstinence-based treatment is often criticized for having a low success rate, but because relapse is accepted as part of the natural course of the disease of addiction, it is difficult to give much credence to studies that look at one-year or even five-year success rates. Many addicted people fail initial treatment, have several relapses, and then continue with many years of sustained abstinence. According to the National Institute on Drug Abuse, relapse rates for addictions are similar to those for other chronic diseases, such as diabetes, hypertension, and asthma.


The abstinence-based treatment model also is criticized for being one-size-fits-all; for not allowing other treatment options, such as the harm-reduction model; for not being adaptable to persons who cannot accept the spiritual concept of a higher power; and for encouraging unattainable goals. These criticisms and alternatives are under discussion and study.


Still, most experts agree that abstinence should be the first and primary goal of addiction treatment. In the United States, therefore, abstinence-based treatment remains the treatment of choice for drug and alcohol addiction.




Bibliography


Cherkis, Jason. "Dying to Be Free: There's a Treatment for Heroin Addiction That Actually Works. Why Aren't We Using It?" Huffington Post. TheHuffingtonPost.com, 28 Jan. 2015. Web. 26 Oct. 2015.



Galanter, Marc, Herbert D. Kleber, and Kathleen T. Brady. The American Psychiatric Publishing Textbook of Substance Abuse Treatment. 5th ed. Washington, DC: Amer. Psychiatric Assn., 2015. Print.



Mignon, Sylvia I. Substance Abuse Treatment: Options, Challenges, and Effectiveness. New York: Springer, 2015. Print.



Ries, Richard, and Shannon C. Miller. Principles of Addiction Medicine. Philadelphia: Lippincott, 2009. Print.



Scott, Christy K., et al. “Surviving Drug Addiction: The Effect of Treatment and Abstinence on Mortality.” American Journal of Public Health 101.4 (2010): 737–44. Print.



Spicer, Jerry. The Minnesota Model: The Evolution of the Multidisciplinary Approach to Recovery. Center City: Hazelden, 1993. Print.

Wednesday 28 December 2016

What could people do in order to make nonrenewable resources last longer?


Nonrenewable resources are resources that exist in limited quantities and take a very long time to regenerate, so they may run out if we aren't careful. The most common nonrenewable resources we use are energy resources such as coal, petroleum, and natural gas. These are used for generating electricity and heat, as well as for transportation fuel. There are a number of steps people can take to make sure these resources last longer. Otherwise, at the current rate of consumption, we may run out of them within a century or so.


  • Use of renewable resources: The most significant step would be to switch over to renewable resources, such as solar energy, wind energy, wave energy, etc. This way, our consumption patterns can stay the same while we save nonrenewable resources. People can install solar panels on their roofs and/or can have wind turbines (vertical axis) in the backyard.

  • Reducing consumption: We can also try to reduce our energy consumption by taking simple steps, such as carpooling, biking or walking short distances, judicious use of heating, etc. 

  • More efficient systems: We can also use more efficient devices, such as energy star rated electrical devices, hybrid cars, etc. 

Hope this helps. 

What are ego defense mechanisms?


Introduction

Ego defense mechanisms are complex, largely unconscious mental processes that protect people from becoming overwhelmed by strong emotions. Defense mechanisms protect the mind and nervous system just as the immune system protects the body, and they are essential for healthy functioning and adaptation. However, when they are used maladaptively, psychiatric symptoms can develop and result in psychopathology.













At birth, only rudimentary defenses are in place, so infants require substantial protection from external sources (caretakers) to prevent them from becoming overwhelmed by internal and environmental stresses. Over the course of childhood and continuing into adulthood, increasingly complex defense mechanisms develop and are added to an individual’s defense repertoire. As a result, each individual forms a personal defense system from which to automatically draw when emotions threaten to become too stressful. Some defenses work better in certain situations than others, so optimal adaptation in life is related to having more mature defenses, as well as flexibility in using them.




History

The phenomenon of defense mechanisms was not recognized until it was identified in the last decade of the 1800s by Sigmund Freud, the Austrian founder of psychoanalysis. Freud described defense mechanisms as discrete processes for managing emotion and instincts, but for more than twenty years, he interchangeably used the general term “defense” and the term for one specific defense mechanism, “repression,” which resulted in considerable confusion among his readers. In 1936, Freud clarified that there were many defensive operations used by the ego and referred to a book his daughter, Anna Freud, a famous psychoanalyst in her own right, had just written, entitled Das Ich und die Abwehrmechanismen (1936; The Ego and the Mechanisms of Defense, 1937). Building on this work, other researchers have since described additional defense mechanisms and have elucidated their roles as adaptive processes.




Anxiety

In his seminal work, Sigmund Freud focused primarily on defense mechanisms in their role of protecting the ego from anxiety resulting from internal conflicts. A conflict is caused when two or more equally powerful influences cannot be satisfied at the same time. It is resolved when one of the influences prevails, but this often leads to frustration because one or more of the other goals is thwarted. Most internal conflicts involve the interactions of the id, ego, and superego. For example, one may have a strong id impulse to overeat, but one’s superego may exert an equally powerful influence to remain thin. Thus, the sight of food may cause one to feel anxious without knowing why, because this conflict may be buried in the unconscious.


Conflicts may be either conscious or unconscious; according to Freud, all conflicts are accompanied by anxiety. Anxiety is an unpleasant emotional response that signals impending danger. It is anticipation of danger to be experienced in the future. Only the ego can feel anxiety, and this anxiety can be unbearable. It can occur in the absence of any objective external threat; even when a real threat exists, the emotional reaction is often much greater than warranted. For example, speaking in front of an audience is, in the real sense, not dangerous, but it can cause extreme anxiety in some people. Frequently, the threat that causes anxiety is unconscious, and the person may not be aware of the source.


Anxiety is a signal to take action, so it is both adaptive and self-regulating. That is, when faced with anxiety, the ego automatically attempts to reduce it, which at the same time should reduce the potential danger. In this regard, fear and anxiety are similar. For example, if a person is attacked, the person can fight the attacker or run away. In both cases, the danger will be removed and the fear will subside. Since one of the main functions of the ego is to maintain survival, its typical response is to take actions that will protect itself and the organism. The ego responds in a defensive manner to all types of anxiety, no matter what their source. In the example above, the mode of reducing fear is overt—that is, it is easily observable whether the person fights or runs away. In other situations, the actions taken by the ego to protect itself are said to be covert, which means they are not directly observable. These covert actions of protecting the ego from anxiety are called ego defense mechanisms. According to Freud, they operate at an unconscious level.




Repression

Freud was especially interested in the process of repression, which begins when the ego fully separates itself from the id but probably does not become fully operational until the phallic psychosexual stage of development. In repression, the ego blocks or diverts any ideas, thoughts, feelings, or urges that it finds unacceptable or anxiety producing. For example, a person might have a desire to have sex with his or her boss or teacher, but if this wish is totally unacceptable to the superego, it can be repressed into the unconscious. Allowing this wish to become conscious would result in punishment from the person’s superego in the form of guilt, anxiety, or shame. To avoid this psychological response, the ego prevents the idea from ever becoming conscious. Although there is no memory of this impulse, it is never destroyed; in fact, it maintains all of its energy, remaining immediately under the level of awareness with the potential to surface at any time. Because of this, the person may feel ill at ease or anxious but has no awareness concerning the origin of this distress. Furthermore, the repressed energy continues to seek expression, and it often escapes in a disguised form.


The most important disguised forms of repressed material are neurotic symptoms. According to Freud, repressed energy must be released if the organism is to remain healthy. As the ego puts more and more effort into repressing unacceptable drives, it becomes weaker; sooner or later, something has to give in. Symptoms serve as a compromise, because they allow the repressed ideas to be expressed indirectly in a disguised form without arousing anxiety. The symptoms may be either psychological or physical. Physical symptoms are sometimes called conversion reactions because the energy associated with the original repressed idea is converted into physical symptoms such as paralysis or even blindness, which are attributable to psychological causes rather than any real organic impairment. Thus, Freud delineated the manner in which repression can become maladaptive and result in psychopathology, a conceptualization that was extremely innovative for its time.


Freud hit on the notion of repression when he noticed that his patients were resisting his attempts to help them. In this sense, repression is intimately linked to resistance. According to Freud, when he was using hypnosis to treat his patients, this resistance was hidden; however, as soon as the technique of free association replaced hypnosis, resistance was clearly evident, and psychoanalysis was born.


Freud’s concept of repression (which he first called “defense”) appeared in print in 1894. At that time, most of his patients were women who were suffering from an emotional disorder that was then called hysteria. Freud believed that hysteria was caused primarily by the repression of sexual impulses and that it could be cured by means of a “talking” therapy. At the time, it was a giant leap for psychology, because the prevailing viewpoint of the nineteenth century was that emotional disorders were caused by organic or physical factors. Freud’s theory emphasized a psychological cause and cure for emotional disorders, opening a new area of exploration and setting the stage for clinical psychology and psychiatry.




Post-Freudian Theories

Freud wrote about various defense mechanisms in a number of his works, but his daughter, Anna Freud, is credited with bringing them all together in her book The Ego and the Mechanisms of Defense. In it, she describes the original nine defense mechanisms—repression, regression, undoing, isolation, turning against self, reaction formation, reversal, projection, and introjection—and also adds sublimation and displacement. Over the years, other defense mechanisms, such as denial, rationalization, identification, intellectualization, and idealization, were added. New knowledge was added as well, including the importance of defense with regard to other emotions, such as anger, and the differences between defenses due to the ages at which they first develop, as seen in Joseph Sandler and Anna Freud’s book The Analysis of Defense: The Ego and the Mechanisms of Defense Revisited (1985).


In 1977, George E. Vaillant, a professor of psychiatry at Harvard University, published Adaptation to Life, a landmark study on the mental health and adaptation of a highly select group of male college graduates over a thirty-five-year period of adulthood. In his book, Vaillant documents important shifts in defensive styles during adult development, and he also demonstrates that individual differences in the types of defenses used were dramatically related to variance between the best and worst outcomes, especially with regard to measures of social, occupational, and psychological adjustment. Vaillant believed that there were innumerable defenses, but he selected eighteen of what he thought were the most salient mechanisms and organized them into four levels according to their hypothesized maturity and importance with regard to the development of psychopathology:


• Level 1: Psychotic Mechanisms (delusional projection, denial of external reality, and distortion)


• Level 2: Immature Mechanisms (projection, schizoid fantasy or withdrawal, hypochondriasis, passive-aggressive behavior, and acting out)


• Level 3: Neurotic Defenses (intellectualization, repression, displacement, reaction formation, and dissociation)


• Level 4: Mature Mechanisms (altruism, humor, suppression, anticipation, and sublimation)


Level 1 defenses were noted as common in childhood prior to age five, in dreams of healthy individuals at all ages, and in psychotic types of psychopathology. Level 2 mechanisms were common in healthy children between the ages of three and fifteen and in some types of adult psychopathology, such as severe depression and personality disorders. Level 3 defenses were deemed common in healthy people of all ages after the age of three, in mastering acute adult stress, and in neurotic disorders. Level 4 defenses were listed as common in healthy individuals from age twelve on.


With regard to the study participants, Vaillant found that as adolescents, they were twice as likely to use immature defenses as mature ones, but by middle life, they were four times as likely to use mature defenses rather than immature ones. This developmental shift was not equally obtained by everyone, however. Rather, the thirty men with the best outcomes (termed “generative”) had virtually stopped using immature mechanisms by midlife, with roughly equal use of neurotic and mature defenses. The men with the worst outcomes (termed “perpetual boys”), on the other hand, failed to show any significant shift in defenses after adolescence. Thus, Vaillant demonstrated that ego development, including maturation of defense mechanisms, was distinct from physical maturation as well as from cognitive or intellectual development and that the level of defense maturation was directly related to life adjustment.


Vaillant was especially struck by the importance of suppression as an adaptive defense mechanism. He defined suppression as the conscious or subconscious decision to deliberately postpone attending to conscious conflicts or impulses without avoiding them. This mechanism allows individuals to effectively cope with stress when it is optimal to do so. Vaillant delineated the evolution of this defense as beginning with denial before age five, followed by repression from five to adolescence, with suppression emerging during late adolescence and adulthood when defense maturation is optimal.


Thus, Vaillant helped to better delineate the relationship between the healthy and adaptive need for ego defense mechanisms and the psychopathological outcomes that occur when they are used maladaptively. Moreover, he demonstrated that their development over time is part of the maturation process. Unfortunately, this study involved a highly select group of men and no women, so generalizations to the larger population are difficult to make.




Applications of Defense Mechanisms

In spite of the difficulty with generalization, the body of information regarding defenses underscores the importance of teaching children and adolescents to use increasingly mature mechanisms. Research has shown that this can be done effectively with social and emotional literacy programs, for example, in school classrooms. This application primarily involves prevention and has been growing in use since about 1990.


Applications regarding interventions with individuals showing maladaptive defense use, on the other hand, have been used much longer than prevention. Sigmund Freud developed psychoanalysis at the turn of the twentieth century with this in mind, and other forms of psychotherapy have since evolved that also embrace the importance of defense mechanisms in the development of psychopathology.


One example from psychoanalytic theory provides an illustration of how complex this topic really is. Freud believed that many neurotic symptoms are associated with the sex drive. For example, a man with an unusually strong superego may repress all sexual impulses. Through the process of reaction formation, these impulses may be converted into compulsive hand washing. According to psychoanalytic theory, the symptoms serve as a substitute for the sexual gratification that he is not allowed to obtain in real life. This is an unconscious process, and the man has no idea of the connection between the symptoms and his sex drive. When a person’s behavior is dominated by defense mechanisms, or when symptoms become severe, there may be a need for psychotherapy. The goal of therapy is not to eliminate defense mechanisms but rather to strengthen the ego so that it uses more mature processes and can respond to conflicts in a more adaptive and productive manner.


One of the objectives of psychoanalytic therapy is to uncover repressed material that is responsible for the unconscious conflicts or symptoms, which in turn facilitates the development of suppression. In a sense, people relive their lives in the therapy room so the conflict can be traced to its origin. To help the patient do this, the psychoanalyst uses two major techniques within the important context of the therapeutic relationship. The first is called free association. This involves having the patient talk about anything and everything that enters his or her mind, no matter how trivial or embarrassing it may be. This technique is based on the idea that thoughts and ideas do not enter one’s mind accidentally. There is usually an important reason for their appearance, and eventually thoughts that are related to the conflict are revealed. The second technique is interpretation, which can involve analyzing dreams, actions, the patient’s feelings toward the analyst, and so on. Freud was especially interested in dreams, the “royal road to the unconscious.” During sleep, ego defense mechanisms are weakened; therefore, many unconscious conflicts or desires may emerge, although still in a disguised form that needs to be interpreted by the therapist.


Although brief interventions can sometimes help people cope better with life’s stresses, therapy usually takes a long time, because maturation is generally a slow and complex process. Repression is especially difficult, because once material is repressed, the ego sets up a counterforce that prevents it from becoming conscious in the future. This counterforce is called resistance. It is responsible for a person unconsciously resisting treatment, as removing the symptoms only serves to return the ego to the original anxiety-producing conflict.


In the example above, once the resistance is overcome, the therapist may determine that the compulsive hand-washing behavior is rooted in an unresolved Oedipus complex. In this case, the man’s sexual attraction to his mother was repressed, and eventually all sexual impulses were treated in the same way. Giving careful consideration to timing, the therapist voices an interpretation, which is the method by which the unconscious meaning of a person’s thoughts, behaviors, or symptoms is divulged. One interpretation is not enough to cure the patient, but a slow process of “working through,” which involves many interpretations and reinterpretations, finally leads to insight. This last step occurs when a person fully understands and accepts the unconscious meaning of his or her thoughts and behaviors; at this point, the symptoms often disappear.




Examples of Selected Defense Mechanisms

Regression involves reducing anxiety or other strong feelings by attempting to return to an earlier and less stressful stage of development and engaging in the immature behavior or thinking characteristic of that stage. The most basic type of regression is sleep, which occupies most of the time of infants. For example, in response to an anxiety-producing test, a person might sleep through the alarm and thus miss the test (and avoid anxiety). Other examples of regression are a child engaging in thumb sucking when a new sibling is born and an adult engaging in smoking, both of which have their roots in the oral stage of infancy. Regression is one of the first defense mechanisms to emerge, beginning in the first year of life.


Projection occurs when one first represses one’s own unacceptable or dangerous impulses, attitudes, or behaviors and then assigns them to other persons. For example, a person may blame others for his or her failures. Freud believed that this occurs unconsciously, but some modern psychoanalysts believe that it can occur consciously as well. An example would be a married man with an unconscious desire to have an affair accusing his wife of having done so.


Denial occurs when the ego does not acknowledge anxiety-producing reality. For example, a person may not “see” that his or her marriage is falling apart and may behave as if nothing is wrong; a good student may “forget” that he or she failed a test in school. A form of psychotic denial is the example of a woman who continued to sleep with her husband’s corpse for several days after he had died.


Rationalization occurs when the ego tries to excuse itself logically from blame for unacceptable behaviors. For example, a student declares that he or she failed a test because roommates kept him or her up the night before, or a person gets drunk because he or she had such a “tough day” at the office.


Isolation is the process that separates unpleasant memories from emotions that were once connected to them. In this case, the ideas remain, but only in isolated form. For example, one might vividly remember a childhood situation of being spanked by one’s father but not recall the intense negative feelings one had toward him at that time because such feelings would be painful. This defense mechanism probably begins to emerge in the anal psychosexual stage, but it fully develops between ages three and five.


Introjection is also called identification. It involves modeling or incorporating the qualities of another person, such as one’s parents or teachers. Sometimes people do this with people that they fear; by doing so, the fear associated with them is reduced. Anna Freud calls this “identification with the aggressor.” For example, little boys identify with their fathers to reduce the castration anxiety associated with the Oedipus complex. As a result, boys adopt the social, moral, and cultural values of the father, all of which become incorporated into the superego.


Reaction formation occurs when a person expresses a repressed unconscious impulse by its directly opposite behavior. Hate may be replaced by love, or attraction by repulsion. The original feeling is not lost, but it does not become conscious. For example, a reaction formation to strong sexual impulses may be celibacy, or a parent who unconsciously hates his or her child may “smother” it by being overly protective. Reaction formation is another defense mechanism that is closely related to repression.


Sublimation involves channeling the power of instincts and emotions into scientific or artistic endeavors such as writing books, building cities, doing research, or landing a person on the moon. Freud believed that sublimation was especially important for building culture and society.




Summary

Defense mechanisms were initially discovered and studied in terms of their role in psychiatric symptom formation when used maladaptively. Unfortunately, this led many people to believe that defense mechanisms themselves were dysfunctional, which is not true. As Vaillant and others have shown, defenses are necessary for adaptation, survival, and happiness, but some are more effective for different stages of life than others, and maturational shifts in the development of ego defenses can have profound effects on social, emotional, and occupational adjustment.


On the positive side, Freud’s conceptualization of defense mechanisms led directly to his formulation of psychoanalysis, which was the first major personality theory and treatment method in psychology. Virtually all personality theories and treatment methods since then have been directly or indirectly influenced by the notions of defense and resistance. In addition, the concept of defense mechanisms has become an important part of Western language and culture.




Bibliography


Appignanesi, Richard. Freud for Beginners. Illus. Oscar Zarate. New York: Writers and Readers, 1994. Print.



Beresford, Thomas P. Psychological Adaptive Mechanisms: Ego Defense Recognition in Practice and Research. New York: Oxford UP, 2012. Print.



Diehl, Manfred, et al. "Change in Coping and Defense Mechanisms across Adulthood: Longitudinal Findings in a European American Sample." Developmental Psychology 50.2 (2014): 634–48. Print.



Freud, Anna. The Ego and the Mechanisms of Defense. New York: International UP, 1974. Print.



Freud, Sigmund. The Standard Edition of the Complete Psychological Works of Sigmund Freud. Ed. James Strachey. 24 vols. London: Hogarth, 1953–74. Print.



Metzger, Jesse A. "Adaptive Defense Mechanisms: Function and Transcendence." Journal of Clinical Psychology 70.5 (2014): 478–88. Print.



Perry, J. Christopher. "Anomalies and Specific Functions in the Clinical Identification of Defense Mechanisms." Journal of Clinical Psychology 70.5 (2014): 406–18. Print.



Sandler, Joseph, and Anna Freud. The Analysis of Defense: The Ego and the Mechanisms of Defense Revisited. New York: International UP, 1985. Print.



Thurschwell, Pam. Sigmund Freud. 2nd ed. London: Routledge, 2009. Print.



Vaillant, George E. Adaptation to Life: How the Best and the Brightest Came of Age. Cambridge: Harvard UP, 1985. Print.

If Bill Benty is the most rounded character, how is this depth accomplished when Molly Wells takes over at the end of "After the Baptism" by Carol...

The character of Bill Benty is developed fully so that his sister Molly Wells can act as a dramatic foil to him.

Bill Benty, the paternal grandfather, orchestrates the plan to manage the differences among the relatives and friends who will attend the baptism of his son's daughter so that there will be no unpleasantness before or after the ceremony at their home where a reception is to be held. Benty is the consummate manager, who advises his wife that the biggest concern when the two sides of families get together is the "blood-letting issues." This "unbeatable, humane, wise, experienced administrator" feels that the issues that concern each side need to be aired prior to the big occasion. Therefore, he invites his son and his son's wife, along with their baby, to dinner the week before in order to discuss the baptismal ceremony and what will happen afterwards at the reception in their home. He tells his wife,



"If the issues can be solved to anyone's satisfaction, just solve them. But if they can't be solved at all, have the big fight about them a week ahead. Then everybody is sick of fighting by the time you have the occasion itself."



In addition to this plan, Bill advises his son, "Look for the pleasant moments, son. Whenever you can." As further insurance, Bill informs his daughter-in-law's father, Mr. Oppeldahl, that he will have a bottle of Scotch especially for him, so the man will be content at the party after the baptism as he drinks his "life-restoring glasses" of the liquor.


All seems to be going smoothly as the guests enjoy lobster and champagne and chat amicably. However, Mr. Oppeldahl drinks more Scotch than he should and loses all inhibition. When a guest asks Ms. Wells, who is a godparent, why she cried at the baptism when she has never attended church, Bill tries to intervene, but Oppeldahl loudly encourages Molly Wells to tell her "long story," as she has called it, saying he will tell one if she does not. Bill goes for what he thinks is "the pleasant moment" and encourages Molly, rather than Oppeldahl, to speak.


Molly Wells surprises everyone with revelations of her past life with her husband Jamie. She describes how they lived in the Blue Ridge Mountains and were very close with the freshness of nature surrounding them. She felt safe with Jamie in this simple life, but he became very ill with cancer and died. After he died in the hospital, Molly refused to leave his bedside and witnessed his hand moving as the muscles drew up in death. This movement, she remarks, was the first that he ever made without her.



"I told you about this because I was so surprised to find how my life was not simple at all: it was all tied up in the flesh, this or that about the flesh. And how is flesh ever safe? So when you took that palm oil," she finished, glancing across at Father Geoffrey, "and pronounced little Molly here safe—safe!—in our Lord Jesus Christ forever.... Well, I simply began to cry!"



Molly's story is so unexpected, so unplanned, so raw and candid, that it completely shatters the careful designs of her foil, Bill Benty, to have everything run smoothly. Certainly, Molly brings more to the reception than any other person because, as she has commented regarding Jamie, "no one in my family could ever observe and think that clearly." Indeed, she enlightens the group. The guests are stunned and sit silently. Normally, the narrator interjects, this kind of story would bring everyone to his or her feet, as departure from an uncomfortable situation would be desired. But, unexpectedly, the rain begins to fall.



Then the rain continued so strongly it cleaned the air and made the whole family and their friends feel quiet and tolerant. They felt the classic old refreshment we always hope for in water.



The party has been baptized and cleansed of its sins of pettiness, as its members have received redemption through empathy.

Tuesday 27 December 2016

What is leptin? |


Structure and Functions

Leptin (from the Greek leptos, meaning “thin”) is a protein hormone with important effects in regulating body weight, metabolism, and reproductive function. It is the product of the obese (ob) gene, located on chromosome 7 in humans. Leptin is produced primarily by adipocytes (white fat cells). It is also produced by cells of the epithelium of the stomach and in the placenta. It appears that as adipocytes increase in size because of accumulation of triglycerides (fat molecules), they synthesize more and more leptin. However, the mechanism by which leptin production is controlled is largely unknown. It is likely that a number of hormones modulate leptin output, including corticosteroids and insulin.





Disorders and Diseases

At first leptin was assumed to be simply a signaling molecule involved in limiting food intake and increasing energy expenditure. Studies published as early as 1994 showed a remarkable difference in weight gain in mice deficient in leptin (mice with a nonfunctional ob gene). Daily injections of leptin into these animals resulted in a reduction of food intake within a few days and a 50 percent decrease in body weight within a month.


More recent studies in humans have not been as promising. It appears that leptin’s effects on body weight are mediated through effects on hypothalamic (brain) centers that control feeding behavior and hunger, body temperature, and energy expenditure. If leptin levels are low, appetite is stimulated and use of energy limited. If leptin levels are high, appetite is reduced and energy use stimulated. The most likely target of leptin in the hypothalamus is inhibition of neuropeptide Y, a potent stimulator of food intake. However, this inhibition alone could not account for the effects seen, and studies looking at other hormones are under way.


Leptin also affects reproductive function in humans. It has long been known that very low body fat in human females is associated with cessation of menstrual cycles, and the onset of puberty is known to correlate with body composition (fat levels) as well as age. Several studies have suggested that leptin stimulates hypothalamic output of gonadotropin-releasing hormone, which in turn causes increases of luteinizing and follicle-stimulating hormones from the anterior pituitary gland. These hormones stimulate the onset of puberty. Prepubertal mice treated with leptin become thin and reach reproductive maturity earlier than control mice. One report has also indicated that humans with mutations in the ob gene that prevent them from producing leptin not only become obese but also fail to achieve puberty.


Leptin has been identified in placental tissues; newborn babies show higher levels than those found in their mothers. Leptin has also been found in human breast milk. Together, these findings suggest that leptin aids in intrauterine and neonatal growth and development, as well as in regulation of neonatal food intake.


Finally, leptin appears to have a role in immune system function. Studies have suggested a role for leptin in production of white blood cells and in the control of macrophage function. Mice that lack leptin have depressed immune systems, but the mechanisms for this remain unclear.




Perspective and Prospects

Although early reports claimed that leptin could be useful in treating human obesity, clinical reports to date have not looked promising. It appears that deficiencies in leptin production are a rare cause of human obesity. However, since most obese individuals have plenty of leptin available, additional leptin will have no effect. In those individuals with a genetic deficiency of leptin, clinical use would require either daily injections of leptin or gene therapy. At this point neither of these options looks particularly promising.




Bibliography


Barinaga, Marcia. “Obesity: Leptin Receptor Weighs In.” Science 271 (January 5, 1996): 29.



Castracane, V. Daniel, and Michael C. Henson, eds. Leptin. New York: Springer, 2011.



Goodman, H. Maurice. Basic Medical Endocrinology. 4th ed. Boston: Academic Press/Elsevier, 2009.



Hemling, Rose M., and Arthur T. Belkin. Leptin: Hormonal Functions, Dysfunctions, and Clinical Uses. New York: Nova Science, 2011.



Henry, Helen L., and Anthony W. Norman, eds. Encyclopedia of Hormones. 3 vols. San Diego, Calif.: Academic Press, 2003.



Holt, Richard I. G., and Neil A. Hanley. Essential Endocrinology and Diabetes. 6th ed. Chichester, West Sussex: Wiley-Blackwell, 2012.



Rink, Timothy J. “In Search of a Satiety Factor.” Nature 372 (December 1, 1994): 372–373.



Society for Neuroscience. "Food for Thought: Obesity and Addiction." BrainFacts, April 20, 2012.

What is cognitive dissonance? |


Introduction


Cognitive dissonance theory, developed by social psychologist Leon Festinger, suggests that there is a basic human tendency to strive for consistency between and among cognitions. Cognitions are defined as what people know about their attitudes and behaviors. An attitude is defined as one’s positive or negative evaluations of a person, place, or thing. If an inconsistency does arise—for example, if an individual does something that is discrepant with his or her attitudes—cognitive dissonance is said to occur. Dissonance is an uncomfortable state of physiological and psychological tension. It is so uncomfortable, in fact, that when individuals are in such a state, they become motivated to rid themselves of the feeling. This can be done by restoring consistency to the cognitions in some way.






What exactly does dissonance feel like? Although it is difficult to describe any kind of internal state, the reactions one has when one hurts the feelings of a loved one or breaks something belonging to someone else are probably what Festinger meant by dissonance.




Restoring Consonance

When in a state of dissonance, there are three ways a person can restore consistency or, in the language of the theory, consonance. Consonance is defined as the psychological state in which cognitions are not in conflict. One way to create consonance is to reduce the importance of the conflicting cognitions. The theory states that the amount of dissonance experienced is a direct function of the importance of the conflicting cognitions. Consider, for example, a person who actively pursues a suntan. The potential for dissonance exists with such behavior, because the cognition “I am doing something that is increasing my chances for skin cancer” may be in conflict with the cognition “I would like to remain healthy and live a long life.” To reduce dissonance, this person may convince him- or herself that he or she would rather live a shorter life filled with doing enjoyable and exciting things than live a longer, but perhaps not so exciting, life. The inconsistency still exists, but the importance of the inconsistency has been reduced.


A second way to reduce dissonance is to add numerous consonant cognitions, thus making the discrepancy seem less great. The suntanner may begin to believe he or she needs to be tan to be socially accepted because all of his or her friends have tans. The tanner may also begin to believe that suntanning makes him or her look healthier and more attractive and, indeed, may even come to believe that suntanning does promote health.


The last method Festinger proposed for reducing dissonance is the simplest, and it is the one that caught the attention of many social psychologists. It is simply to change one of the discrepant cognitions. The suntanner could either stop suntanning or convince him- or herself that suntanning is not associated with an increased risk of skin cancer. In either case, the inconsistency would be eliminated.


This latter possibility intrigued social psychologists because it offered the possibility that people’s behaviors could influence their attitudes. In particular, it suggested that if someone does something that is inconsistent with his or her attitudes, those attitudes may change to become more consistent with the behavior. For example, imagine that a person wanted a friend to favor a particular candidate in an upcoming election, and the friend favored the opposing candidate. What would happen if this person convinced the friend to accompany him or her to a rally for the candidate the friend did not support? According to the theory, the friend should experience some degree of dissonance, as the behavior of attending a rally for candidate X is inconsistent with the attitude “I do not favor candidate X.” To resolve the inconsistency, the friend may well begin to convince him- or herself that candidate X is not so bad and actually has some good points. Thus, in an effort to restore consonance, the friend’s attitudes have changed to be more consistent with behavior.




Dissonance-Induced Attitude Change

Changes in behavior cannot always be expected to lead to changes in attitudes. Dissonance-induced attitude change—that is, the adjustment of one's attitude in an effort to be consistent with a behavior—is likely to happen only under certain conditions. For one, there must not be any external justification for the behavior. An external justification is an environmental cause that might explain the inconsistency. If the friend was paid a hundred dollars to attend the rally for the candidate or was promised a dinner at a fancy restaurant, he or she most likely would not have experienced dissonance, because he or she had a sufficient external justification. Dissonance is most likely to occur when no external justification is present for a behavior.


Second, dissonance is most likely to occur when individuals believe that the behavior was done of their own free will—that is, when they feel some sort of personal responsibility for the behavior. If the friend had simply been told that he or she was being taken out for an exciting evening and did not learn of the candidate’s rally until arriving there, the friend most likely would not have experienced dissonance.


Third, dissonance is more likely to occur when the behavior has some sort of foreseeable negative consequences. If the friend knew that each person who attended the rally was required to pay a donation or hand out pamphlets for the candidate and yet still elected to go, he or she would probably have experienced considerable dissonance; now the friend is not only attending a rally for a candidate he or she opposes but also actively campaigning against his or her preferred candidate.




Effect of Rewards

Perhaps the most-researched application of dissonance theory concerns how attitudes are affected by rewarding people for doing things in which they do not believe. In one study, Festinger and J. M. Carlsmith had students perform a boring screw-turning task for one hour. They then asked the students to tell another student waiting to do the same task that the task was very interesting. In other words, they asked the students to lie. Half the students were offered twenty dollars to do this; the other half were offered one dollar. After the students told the waiting student that the task was enjoyable, the researchers asked them what they really thought about the screw-turning task. The students who were paid twenty dollars said they thought the screw-turning task was quite boring. The students who were paid only one dollar, however, said that they thought the task was interesting and enjoyable.


Although surprising, these findings are precisely what dissonance theory predicts. When a student informed a waiting student that the task was enjoyable, the possibility for dissonance arose. The cognition “This task was really boring” is inconsistent with the cognition “I just told someone that this task was quite enjoyable.” The students who were paid twenty dollars had a sufficient external justification for the inconsistency, so there was no dissonance and no need to resolve any inconsistency. The students paid one dollar, however, did not have the same external justification; most people would not consider a dollar to be sufficient justification for telling a lie, so these students were in a real state of dissonance. To resolve the inconsistency, they changed their attitudes about the task and convinced themselves that the task was indeed enjoyable, thereby achieving consonance between attitudes and behavior. Thus, the less people are rewarded for doing things they might not like, the more likely it is that they will begin to like them.




Effect of Punishment

Dissonance theory makes equally interesting predictions about the effects of punishment. In a study by Elliot Aronson and Carlsmith, a researcher asked preschool children to rate the attractiveness of toys. The researcher then left the room, but, before leaving, he instructed the children not to play with one of the toys they had rated highly attractive. This became the “forbidden” toy. The researcher varied the severity of the punishment with which he threatened the children if they played with the forbidden toy. For some children, the threat was relatively mild: the researcher said he would be upset. For others, the threat was more severe: the researcher said that he would be angry, would pack up the toys and leave, and would consider the child a baby.


Both threats of punishment seemed to work, as no children played with the forbidden toy. When the researcher asked the children later to rerate the attractiveness of the toys, however, it was apparent that the severity of the threat did make a difference. For children who were severely threatened, the forbidden toy was still rated as quite attractive. For the mildly threatened children, however, the forbidden toy was rated as much less attractive.


By not playing with the forbidden toy, children were potentially in a state of dissonance. The cognition “I think this is an attractive toy” is inconsistent with the cognition “I am not playing with the toy.” Those in the severe-threat condition had a sufficient external justification for the discrepancy. Hence, there was no dissonance and no motivation to resolve the inconsistency. Those in the mild-threat condition had no such external justification for the inconsistency, so they most likely felt dissonance, and they resolved it by convincing themselves that the toy was not so attractive. Thus, perhaps surprisingly, the milder the threats used to get children not to do something, the more likely it is that they will come to believe that it is not something they even want to do.




Role of Decision Making

A last type of everyday behavior for which dissonance theory has implications is decision making. According to the theory, many times when one makes a decision, particularly between attractive alternatives, dissonance is likely to occur. Before making a decision, there are probably some features of each alternative that are attractive and some that are not so attractive. When the decision is made, two sets of dissonant cognitions result: “I chose something that has unattractive qualities” and “I did not choose something that has attractive qualities.” To resolve this dissonance, people tend to convince themselves that the chosen alternative is clearly superior to the unchosen alternative. Because of this, although each alternative was seen as equally attractive before the decision was made, after the decision, the chosen alternative is seen as much more attractive.


For example, Robert Knox and James Inkster went to a racetrack and asked a sample of people who were waiting in line to place their bets how confident they were that their horse was going to win. They then asked a sample of people who were leaving the betting window the same question. As might have been predicted by now, a bettor was much more confident about a horse’s chances after having placed the bet. Before placing a bet, there is no dissonance. After actually placing money on the horse, the potential for dissonance (“I placed money on a horse that might lose and I did not bet on a horse that might win”) arises. To avoid or resolve this dissonance, bettors become much more confident that their horse will win and, by default, more confident that other horses will not.




Prominent Influence in Psychology

Cognitive dissonance theory was introduced in 1957, at a time when social psychologists' interest in the motives underlying people’s attitudes and behaviors was at a peak. Although dissonance theory has emerged as perhaps the best-known and most-researched theory in social psychology, when it was first developed it was one of a handful of theories, now collectively known as cognitive consistency theories, that proposed that people are motivated to seek consistency among and between thoughts, feelings, and behaviors.


There are numerous explanations for why cognitive dissonance theory has become as important as it has, but two seem particularly intriguing. One concerns the intellectual climate in psychology during the time the theory was introduced. At the time, research in most fields of psychology, including social psychological research on attitude change, was influenced by learning theory. Learning theory suggests that behavior is a function of its consequences: people do those things for which they are rewarded and do not do those things for which they are not rewarded or for which they are punished. Therefore, according to this perspective, to significantly change any form of behavior, from overt actions to attitudes and beliefs, some kind of reward or incentive needs to be offered. The bigger the incentive (or the stronger the punishment), the more change can be expected. Research on attitude change, therefore, also focused on the role of rewards and punishment. What made dissonance theory stand out was its prediction that sometimes less reward or incentive will lead to more change. This counterintuitive prediction, standing in stark contrast to the generally accepted ideas about the roles of reward and punishment, brought immediate attention to dissonance theory not only from the social-psychological community but also from the psychological community in general, and it quickly vaulted the theory to a position of prominence.


A second reason dissonance has become such an important theory has to do with its particular influence on the field of social psychology. Before the theory was introduced, social psychology was identified with the study of groups and intergroup relations. Dissonance theory was one of the first social psychological theories to emphasize the cognitive processes occurring within the individual as an important area of inquiry. As a result, interest in the individual waxed in social psychology, and interest in groups waned. Indeed, the study of groups and intergroup relations began, in part, to be considered the province of sociologists, and the study of the individual in social settings began to define social psychology. Thus, dissonance theory can be credited with significantly changing the focus of research and theory in social psychology.




Bibliography


Allahyani, Mariam Hameed Ahmed. "The Relationship between Cognitive Dissonance and Decision-Making Styles in a Sample of Female Students at the University of Umm Al Qura." Education 132.3 (2012): 641–63. Print.



Aronson, Elliot. “The Theory of Cognitive Dissonance: A Current Perspective.” Advances in Experimental Social Psychology. Vol. 4. Ed. Leonard Berkowitz. New York: Academic, 1969. 2–34. Print.



Cooper, Joel. Cognitive Dissonance: Fifty Years of a Classic Theory. Newbury Park: Sage, 2007. Print.



Festinger, Leon. A Theory of Cognitive Dissonance. Stanford: Stanford UP, 1957. Print.



Gawronski, Bertram. "Back to the Future of Dissonance Theory: Cognitive Consistency as a Core Motive." Social Cognition 30.6 (2012): 652–68. Print.



Harmon-Jones, Eddie, and Judson Mills, eds. Cognitive Dissonance: Progress on a Pivotal Theory in Social Psychology. Washington: APA, 1999. Print.



Martinie, Marie-Amélie, Laurent Milland, and Thierry Olive. "Some Theoretical Considerations on Attitude, Arousal and Affect during Cognitive Dissonance." Social and Personality Psychology Compass 7.9 (2013): 680–88. Print.



McClure, John. Explanations, Accounts, and Illusions: A Critical Analysis. Cambridge: Cambridge UP, 1991. Print.



Tavris, Carol, and Elliot Aronson. Mistakes Were Made (but Not by Me): Why We Justify Foolish Beliefs, Bad Decisions, and Hurtful Acts. Orlando: Harcourt, 2007. Print.

How can a 0.5 molal solution be less concentrated than a 0.5 molar solution?

The answer lies in the units being used. "Molar" refers to molarity, a unit of measurement that describes how many moles of a solu...
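The contrast can be made concrete with a quick calculation. Molarity is moles of solute per liter of solution, while molality is moles of solute per kilogram of solvent. The sketch below compares a 0.5 molal aqueous NaCl solution with the 0.5 molar benchmark; the solution density used is an illustrative assumption, since the exact figure depends on the solute and temperature.

```python
# Compare molality (mol solute / kg solvent) with molarity (mol solute / L solution)
# for 0.5 mol NaCl dissolved in 1.000 kg of water.
# Assumed value: the resulting solution has a density of about 1.018 g/mL.

molar_mass_nacl = 58.44      # g/mol
moles_solute = 0.5           # mol NaCl
mass_solvent_kg = 1.000      # kg of water

molality = moles_solute / mass_solvent_kg  # 0.500 m by construction

# The solution's mass includes the solute, so its volume exceeds 1 L.
mass_solution_g = mass_solvent_kg * 1000 + moles_solute * molar_mass_nacl
density_g_per_ml = 1.018                   # assumed, illustrative
volume_solution_l = mass_solution_g / density_g_per_ml / 1000

molarity = moles_solute / volume_solution_l  # comes out near 0.495 M

print(f"molality = {molality:.3f} m")
print(f"molarity = {molarity:.3f} M")
```

Because the 0.5 mol of solute is spread over slightly more than one liter of solution, the 0.5 molal solution ends up a bit below 0.5 molar, i.e., slightly less concentrated.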