Saturday 30 July 2016

What are death and dying's effects on mental states?


Introduction

People are unique while they are alive, and that uniqueness extends to death and dying. The manner in which people encounter and cope with a terminal disease and the dying process holds endless variations.











The Hippocratic philosophy of medicine declares that a physician must act in the best interests of the patient seeking care. The goal of medical care is to overcome sickness and relieve suffering, thus preserving life. Sometimes, however, it is necessary to add to a patient’s suffering to achieve ultimate relief, as with cancer treatments such as chemotherapy and radiation, and surgery that may result in periods of debilitation. These treatments are generally acceptable if there is a reasonable promise that they may ultimately reduce or eradicate a disease or condition. When only a small possibility of survival exists, however, patients may decide to end or forgo a particular course of treatment. That decision is generally made by the patient and the family in conjunction with the medical team. Religious and philosophical factors as well as age, family values, and family history may enter into the decision. Generational differences may also affect how the patient and the family approach or ultimately accept a terminal diagnosis.




Advance Directives

Death is a natural event, but end-of-life experiences are often shaped by medical, demographic, and cultural trends. Medical professionals have a duty to keep terminally injured or ill patients alive as long as possible by powerful medicines, machines, and aggressive medical care unless the patient desires otherwise. Often, however, patients have not expressed their desires in advance of becoming terminally ill or injured. If patients can no longer speak for themselves, other people must make decisions for them, frequently contrary to what the patients themselves would have wanted. This dilemma can be solved if a person writes a living will or advance directive, a document in which a person’s desires in the case of a terminal illness or injury are recorded in advance of entering such a state.


Advance directives, including durable powers of attorney (DPOA) for health care and do-not-resuscitate orders (DNR), allow legally competent individuals to express their wishes for future health decisions in the event that they are unable to participate directly and actively in medical decisions regarding their care. Patients can also designate a person or surrogate to act as decision maker. Advance directives are valuable because most family members find it difficult, if not overwhelming, to make complex choices about end-of-life care for a loved one. If patients have communicated their wishes about end-of-life care, however, their wishes can be respected. Advance directives are recognized in all fifty states in the United States and are legally binding if executed in accordance with state guidelines.


Advance directives are applicable only in situations in which patients are unable to participate in decisions regarding their health care. Decisions by legally competent patients always supersede written directives. In addition, people may revise or revoke advance directives as long as they remain able to participate in making medical decisions.



Legally competent patients have certain rights, including the right to refuse treatment, the right to discontinue unwanted treatment that has already begun, the right to refuse nutrition and hydration even if that hastens death, and the right to change physicians. If the patient is incompetent, the proxy decision maker can inform the medical team about the patient’s wishes as enunciated in the advance directive.




Dying and the Hospice Movement

Most people do not die in a way of their choosing. During the fifteenth century, the Roman Catholic Church introduced a body of literature called ars moriendi, or the “art of dying,” which centered on the concept that a person’s entire lifetime is a preparation for death. People believed that the only possible attitude toward death was to let it happen once symptoms appeared; the only choice was to die in the best way possible, having made peace with God. Over centuries, that concept has evolved into the idea of a “good death,” and programs such as hospices have developed to manage the process of dying and make it as tranquil as possible. Evidence indicates that if they retain their awareness, the dying wish to be treated as human beings until the moment of their death. Preserving the dignity of the dying often means including them in discussions and decisions surrounding their deaths and continuing to treat them as full members of the family. For a “good death,” or death with dignity, the dying should be treated with compassion, tenderness, dignity, and honesty.


Medical professionals are taught that listening is an important way of gathering information and assessing a patient’s physical and psychological condition. Listening is also a means of providing comfort. Even when the dying can no longer speak, it is widely believed that they can hear, so continuing to speak to the dying may provide physical or spiritual comfort.


The hospice movement, which began in the late 1960s, provides palliative care (comforting rather than curing) for the dying. The dying are given humane and compassionate care with the goal of keeping the patient pain-free and alert as long as possible. The focus of palliative care is not on death but on compassionate, specialized care for the patient’s remaining life. Palliative care may be delivered in a hospital setting while treatments are being given or in a hospice or home setting. Both hospice and palliative care are individualized to suit the particular patient.




The Aging Process and Death

The aging process is explained by two main theories: the wear-and-tear theory, which attributes aging to the progressive damage to cells and organs through the process of carrying out their normal everyday functions, and the genetic theory, which holds that aging involves a genetically predetermined life span that controls the longevity of individual cells, organs, and entire organisms. Environmental factors such as pollutants and toxins are believed to slowly damage the genetic information transmitted by cells, resulting in errors in a cell’s function and leading to its death. Such mutations and cell death are also thought to be caused by free radicals (unstable, highly reactive compounds that can damage cells) and by cross-linkages in people’s deoxyribonucleic acid (DNA). These changes in the organism manifest themselves as aging.


As people age, their bodies change and decrease in complexity, becoming less efficient at carrying out basic processes. For example, as arteries narrow, they begin to lose their ability to carry oxygen and nutrients, and they are less resilient after injury. The ultimate cause of death is generally the result of a progression that involves the entire body: the aging process.


Infection (often in the form of pneumonia) is second only to atherosclerosis (commonly referred to as “hardening of the arteries”) as a cause of death among people eighty-five years of age or older. Alzheimer’s disease (a form of dementia) involves the progressive degeneration and loss of large numbers of nerve cells in those portions of the brain associated with memory, learning, and judgment. Striking more than 12 percent of the United States population over the age of sixty-five, Alzheimer’s disease is projected to reach staggering proportions and strain resources. Other leading causes of death are cancer and stroke.




Dying

An innate life force compels the body to continue living, despite the ravages of disease. Ultimately, however, this life force diminishes until it stops completely and irreversibly. As the body begins the dying process, sleeping increases, food and beverage intake gradually decrease, breathing becomes labored and shallow (dyspnea), and periods of apnea (the absence of breathing) become longer and more frequent. Cyanosis, or a bluish discoloration of the skin due to the lack of oxygen and an increase of carbon dioxide, may indicate an impaired circulatory system. Convulsions may also occur as blood pressure falls, oxygen supply to the brain diminishes, and brain cells malfunction. Decaying flesh may also emit an odor, and fever and sweating may occur. The patient may become restless as an increased heart rate attempts to compensate for the lack of oxygen. The exhausted heart ultimately slows and then stops completely. Hearing and vision decrease, and brain activity slows. The so-called death rattle and foaming at the mouth are also indications of the shutting down of the body.


When death has occurred, the person will no longer respond to word or touch. The eyes will be fixed and the eyelids slightly open, the jaw will be relaxed and slightly open, and the skin will assume a dull and lifeless appearance. Medical or clinical death, when the heartbeat and respiration cease, is the oldest means of determining death. Brain death is the newest criterion for determining that death has occurred. Tiny electrodes are placed on the patient’s scalp to detect electrical activity in the brain by means of an electroencephalogram (EEG). A flat EEG indicates that brain cells are dead. When deprived of oxygen, brain cells die within four to six minutes. A person can live indefinitely in a persistent vegetative state if the brain stem is still functioning, although there is much debate about whether that condition constitutes life.


Despite many signs, it sometimes remains difficult for physicians to declare unequivocally that death has occurred. For that reason, in 1968, the Ad Hoc Committee of the Harvard Medical School published what has become known as the “Harvard Guidelines.” The guidelines recommend that a patient be declared dead only after having been assessed twice over a twenty-four-hour period with no change in the following criteria: unreceptivity and unresponsiveness, no movement or spontaneous breathing, no reflexes (including pupils unresponsive to light), and a flat EEG.




Kübler-Ross Stages of Death

Elisabeth Kübler-Ross, a Swiss psychiatrist, revolutionized care of the terminally ill. Credited with helping to end the taboo in Western culture regarding open discussions and studies of death, she helped change the care of many terminally ill patients by making death less psychologically painful. She encouraged health care professionals to speak openly to dying patients about their experiences in facing death, thereby learning from them. This was a revolutionary step because dying was equated with failure by the medical profession.


In her best seller On Death and Dying (1969), Kübler-Ross identified five stages of death based on interviews with patients and health care professionals. The first stage, denial and isolation, occurs when patients are first confronted with a terminal diagnosis and declare that it just cannot be true. Despite overwhelming medical evidence to the contrary, patients will rationalize, thinking that X rays or pathology reports were mixed up and that they can get a more positive diagnosis elsewhere. Patients seek examination and reexamination. Denial acts as a buffer, allowing patients time to collect themselves and digest the shocking news. Denial as a temporary defense is gradually replaced by partial acceptance.


The second stage involves anger, when patients question why they have a terminal condition and feel resentment, envy, and rage. They begin to face reality and direct hostility toward family, friends, and doctors.


The third stage involves bargaining; patients seek to extend their lives in exchange for doing good deeds. Bargaining is an attempt to postpone death, according to Kübler-Ross, and must include a prize “for good behavior.” Most bargains are made with higher powers (God, in the case of Christians and Jews) and generally remain secret or mentioned only to a chaplain or other religious leader.


The fourth stage involves depression, when people become despondent because they realize that death is imminent and bargaining is unrealistic. Anger and rage are soon replaced by a sense of great loss. Depression involves past losses as well as impending losses (anticipatory grief).


The fifth stage is acceptance, reached when people admit that everything possible has been done. Patients assume a “so-be-it” attitude, neither depressed nor angry. They typically are able to express previous feelings, such as envy for the living and healthy, and anger at those who do not have to face their destiny so soon. Having already mourned meaningful people and places, patients are able to contemplate the coming end of life with quiet and often detached expectation. Acceptance is almost void of feelings, and as peace comes to patients, their interests diminish. Nonverbal communication between family members, patients, and staff assumes a greater significance. Reassurance that the dying person is not alone is important.


Developed initially as a model for helping to understand how dying patients cope with death, the Kübler-Ross model and its five phases have been adopted by many as the stages that survivors experience during the grieving process. The concept also provides insight and guidance for adjusting to personal trauma and change, and for helping others cope with emotional upheaval, whatever the cause.


However, controversy surrounds the categorization of death and dying proposed by Kübler-Ross. Sherwin B. Nuland, in How We Die: Reflections on Life’s Final Chapter (1994), states that experienced clinicians know that many patients do not progress overtly beyond the denial stage and that many patients actually continue denying the inevitable despite repeated attempts by physicians to clarify the issue. Other critics (such as Edwin Shneidman) fault Kübler-Ross’s interviewing techniques, claiming that they rely on intuition, and argue that her conclusions are highly subjective. Others claim that one process does not apply universally to everyone and that patients do not progress smoothly from one stage to the next.




Thanatology


Thanatology is the science that studies the events surrounding death and the social, legal, and psychological aspects of death. Professionals including psychiatrists, forensic pathologists, advanced practice nurses, veterinarians, sociologists, and psychologists are the main members of the thanatology community. Thanatologists may study the causes of death, the legal implications of death such as autopsy requirements, and the social aspects surrounding death. Grief, burial customs, and social attitudes about death are frequent subjects. Thanatology also overlaps with forensics when it focuses on the changes that occur in the body in the period near death and afterward.


Some social issues explored by thanatologists, such as euthanasia and abortion, are subject to ethical and legal controversy. Laws set burial, cremation, and embalming requirements and determine rights over the bodies of the deceased. Clinical autopsies are generally required in cases of unexplained or violent death or when suicide or drug overdose is suspected; they may also be requested by the deceased’s family, for example when a medical error is suspected or to confirm certain diseases.




Bibliography


Beresford, Larry. The Hospice Handbook: A Complete Guide. Boston: Little, 1993. Print.



Daoust, Ariane, and Eric Racine. "Depictions of 'Brain Death' in the Media: Medical and Ethical Implications." Journal of Medical Ethics 40.4 (2014): 253–59. Print.



Despelder, Lynne Ann, and Albert Lee Strickland. The Last Dance: Encountering Death and Dying. 5th ed. Mountain View: Mayfield, 1999. Print.



Green, James W. Beyond the Good Death: The Anthropology of Modern Dying. Philadelphia: U of Pennsylvania P, 2008. Print.



Kelly, Christine M. J. "What is a Good Death?" New Bioethics 20.1 (2014): 35–52. Print.



Kessler, David. The Needs of the Dying: A Guide for Bringing Hope, Comfort, and Love to Life’s Final Chapter. 10th ed. New York: Harper, 2007. Print.



Knox, Jean. Death and Dying. Philadelphia: Chelsea House, 2001. Print.



Kübler-Ross, Elisabeth. On Death and Dying. 1969. Reprint. New York: Routledge, 2009. Print.



L., G. "Death." New Scientist 20 Oct. 2012: 32–36. Print.



Mappes, Thomas A., and David DeGrazia. Biomedical Ethics. 6th ed. Boston: McGraw, 2006. Print.



Nuland, Sherwin B. How We Die: Reflections on Life’s Final Chapter. New York: Knopf, 1994. Print.



Parnia, Sam. What Happens When We Die: A Groundbreaking Study into the Nature of Life and Death. Carlsbad, Calif.: Hay House, 2006. Print.



Wanzer, Sidney H., and Joseph Glenmullen. To Die Well: Your Right to Comfort, Calm, and Choice in the Last Days of Life. Cambridge: Da Capo, 2007. Print.

Friday 29 July 2016

What is Arabidopsis thaliana?


Natural History

Although common as an introduced species in North America and Australia, Arabidopsis thaliana (often referred to simply by its genus name, Arabidopsis) is found in the wild throughout Europe, the Mediterranean, the East African highlands, and eastern and central Asia (where it probably originated). Arabidopsis is a low winter annual (standing about thirty-five centimeters, according to the Missouri Botanical Garden) that flowers in disturbed habitats from March through May. Arabidopsis was first described by Johannes Thal (hence thaliana as the specific epithet) in the sixteenth century in Germany’s Harz Mountains, but he named it Pilosella siliquosa. After systematic revisions and several name changes, the little plant was finally named Arabidopsis thaliana in 1842.









Several characteristics of Arabidopsis make it a useful model organism. First, it has a short life cycle; it goes from germination of a seed to seed production in only six weeks to three months (different strains have different generation times). Each individual plant is prolific, yielding thousands of seeds. Genetic crosses are easy to perform, for Arabidopsis normally self-pollinates (so recessive mutations are easily made homozygous and expressed), but it can also be outcrossed. Second, the plants are small, comprising a flat rosette of leaves from which emerges a flower stalk that grows up to about 35 centimeters (13.8 inches) high. The plants are easy to grow and manipulate, so many genetic screens can be done in petri dishes, with a thousand seedlings examined inside just one dish. Third, the genome of Arabidopsis is relatively small, with about 125 million base pairs (Mbp), about 33,600 genes, and five chromosomes containing all the requisite information to encode an entire plant (similar to the functional complexity of the fruit fly Drosophila melanogaster, long a favorite model organism among geneticists). In comparison to the genome of corn (Zea mays), which the National Center for Biotechnology Information estimated in 2009 to be at least 2,400 Mbp, the Arabidopsis genome is almost twenty times smaller. The sequence of the Arabidopsis genome was completed in 2000. Furthermore, Arabidopsis is easily transformed using the standard vector Agrobacterium tumefaciens to introduce foreign genes. In the floral-dip method, immature flower clusters are dipped into a solution of Agrobacterium containing the DNA to be introduced and a detergent. The flowers then develop seeds, which are collected and studied. This transformation method is rapid because there is no need for tissue culture and plant regeneration. Arabidopsis is also easy to study under the light microscope because young seedlings and roots are somewhat translucent. Collections of T-DNA (transfer DNA from Agrobacterium) tagged strains and insertional mutant strains exist, a large number of other mutant lines and genomic resources are available at stock centers, and a cooperative multinational research community of academic, government, and industrial laboratories works with Arabidopsis.
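For readers who want to check the size comparison above, here is a minimal Python sketch using only the estimates already cited (about 125 Mbp for Arabidopsis and at least 2,400 Mbp for corn); the variable names are illustrative only, and the exact ratio depends on which assembly estimates are used.

# Genome sizes cited above, in megabase pairs (Mbp)
arabidopsis_mbp = 125      # Arabidopsis thaliana, ~125 Mbp
corn_mbp = 2_400           # Zea mays, 2009 NCBI estimate of at least 2,400 Mbp

ratio = corn_mbp / arabidopsis_mbp
print(round(ratio, 1))     # 19.2, i.e., "almost twenty times smaller"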




History of Experimental Work with Arabidopsis

The earliest report of a mutant was probably made in 1873 by A. Braun, and Friedrich Laibach first compiled the unique characteristics of Arabidopsis thaliana as a model organism for genetics in 1943 (he had published the correct chromosome number of five much earlier, in 1907, a count later confirmed by other investigators). Erna Reinholz (a student of Laibach) submitted her thesis in 1945, published in 1947, on the first collection of x-ray–induced mutants. Peter Langridge established the usefulness of Arabidopsis in the laboratory in the 1950s, as did George Redei and other researchers, including J. H. van der Veen in the Netherlands, J. Veleminsky in Czechoslovakia, and G. Robbelen in Germany, in the 1960s.


Maarten Koornneef and his coworkers published the first detailed genetic map for Arabidopsis in 1983. A genetic map allows researchers to observe the approximate positions of genes and regulatory elements on chromosomes. The 1980s saw the first steps in analysis of the genome of Arabidopsis. Tagged mutant collections were developed. Physical maps, which give distances between genes in terms of DNA length and were based on restriction fragment length polymorphisms (RFLPs), were also made. Physical maps allow genes to be located and characterized, even if their identities are not known.


In the 1990s, scientists outlined long-range plans for Arabidopsis through the Multinational Coordinated Arabidopsis Genome Research Project, which called for genetic and physiological experimentation necessary to identify, isolate, sequence, and understand Arabidopsis genes. In the United States, the National Science Foundation (NSF), US Department of Energy (DOE), and Agricultural Research Service (ARS) funded work done at Albany directed by Athanasios Theologis. NSF and DOE funds went also to Stanford, Philadelphia, and four other US laboratories. Worldwide communication among laboratories and shared databases (particularly in the United States, Europe, and Japan) were established. Transformation methods became much more efficient, and a large number of Arabidopsis mutant lines, gene libraries, and genomic resources have been made and are now available to the scientific community through public stock centers. The expression of multiple genes has been followed, too. Teresa Mozo provided the first comprehensive physical map of the Arabidopsis genome, published in 1999; she used overlapping fragments of cloned DNA. These fundamental data provide an important resource for map-based gene cloning and genome analysis. The Arabidopsis Genome Initiative, an international effort to sequence the complete Arabidopsis genome, was created in the mid-1990s, and the results of this massive undertaking were published on December 14, 2000, in Nature.




Comparative Genomics

With full sequencing of the genome of Arabidopsis completed, the first catalog of genes involved in the life cycle of a typical plant became available, and the investigational emphasis shifted to functional and comparative genomics. Scientists began looking at when and where specific genes are expressed in order to learn more about how plants grow and develop in general, how they survive in a changing environment, and how gene networks are controlled or regulated. Potentially, this research can lead to improved crop plants that are more nutritious, more resistant to pests and disease, less vulnerable to crop failure, and capable of producing higher yields with less damage to the natural environment. Given how many people worldwide die from malnutrition, the Arabidopsis genome takes on far greater importance than one might think. Plants are also fundamental to all ecosystems, and their energy input into those systems is essential.


Already the genetic research on Arabidopsis has boosted production of staple crops such as wheat, tomatoes, and rice. The genetic basis for every economically important trait in plants—whether pest resistance, vegetable oil production, or even wood quality in paper products—has been under intense scrutiny in Arabidopsis.


Although Arabidopsis is considered a weed, it is closely related to a number of vegetables, including broccoli, cabbage, Brussels sprouts, and cauliflower, which are very important to humans nutritionally and economically. A mutation observed in Arabidopsis results in its floral structures assuming the basic shape of a head of cauliflower. This mutation, not surprisingly, is referred to simply as “cauliflower” and was isolated by Martin Yanofsky’s laboratory. When the analogous gene from the cauliflower plant was examined, it was discovered that the cauliflower plant already carried a mutation in this gene. From the study of Arabidopsis, therefore, researchers have uncovered why a head of cauliflower looks the way it does.


In plants, an ethylene-signaling pathway (ethylene is a plant hormone) regulates fruit ripening, plant senescence, and leaf abscission. The genes necessary for the ethylene-signaling pathway have been identified in Arabidopsis, including genes coding for the ethylene receptors. As expected, mutations in these receptors leave the Arabidopsis plant unable to sense ethylene. Using the knowledge gained from Arabidopsis, researchers have since identified ethylene receptors in other plant species. Harry Klee’s laboratory, for example, has found a mutation in a tomato ethylene receptor that prevents ripening. Moreover, when the mutant Arabidopsis receptor is expressed in other plants, the transformed plants also exhibit this insensitivity to ethylene and fail to undergo the processes it normally triggers. Therefore, the mechanism of ethylene perception seems to be conserved in plants, and modifying ethylene receptors can induce change in a plant.


Once the sequence of Arabidopsis was determined, there was a coordinated effort to determine the functions of the genome (functional genetics). The Arabidopsis Information Resource (TAIR) is an online repository of Arabidopsis genomic data. The November 2010 TAIR 10 Arabidopsis genome annotation indicated 27,416 protein-coding genes, 4,827 pseudogenes or transposable elements, and 1,359 noncoding RNAs, for a total of 33,602 genes. There are ongoing studies of the genome to determine the patterns of transcription, epigenetic (methylation) patterns, proteomics, and metabolic profiling. Arabidopsis is a model organism for plant molecular biology and genetics, for the understanding of plant flower development, and for determining how plants sense light. Ongoing Arabidopsis projects include determining genome-wide transcription networks of TGA factors (transcription regulators), an analysis throughout the genome of novel Arabidopsis genes predicted by comparative genomics, and completing the expression catalog of the Arabidopsis transcriptome using real-time PCR (RT-PCR). The e-journal The Arabidopsis Book (TAB), produced by the American Society of Plant Biologists, summarizes the current understanding of Arabidopsis biology. TAB includes articles on such subjects as cell cycle division, peroxisome biogenesis, seed dormancy and germination, guard cell signal transduction, the cytoskeleton, mitochondrial biogenesis, and meiosis.
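As a quick arithmetic check on the TAIR 10 figures quoted above, the following minimal Python sketch simply re-adds the three annotation categories; the counts are those cited in this paragraph, and the variable names are illustrative only.

# TAIR 10 annotation counts cited above (November 2010 release)
protein_coding_genes = 27_416
pseudogenes_or_transposable_elements = 4_827
noncoding_rnas = 1_359

total = protein_coding_genes + pseudogenes_or_transposable_elements + noncoding_rnas
print(total)  # 33602, consistent with the "about 33,600 genes" figure given earlier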


Advances in evolutionary biology and medicine are expected from Arabidopsis research, too. Robert Martienssen of Cold Spring Harbor Laboratory has indicated that the completion of the Arabidopsis genome sequence has a major impact on human health as well as on plant biology and agriculture. Surprisingly, some Arabidopsis genes are extremely similar or even identical to human genes linked to certain illnesses. No doubt there are many more mysteries to unravel with the proteome analysis of Arabidopsis (analysis of how proteins function in the plant), and the biological roles of the tens of thousands of Arabidopsis genes will keep scientists busy for some time to come.




Key Terms




Brassicaceae: the mustard family, a large, cosmopolitan family of plants with many wild species, some of them common weeds, including widely cultivated edible plants like cabbage, cauliflower, radish, rutabaga, turnip, and mustard




genetic map: a “map” showing distances between genes in terms of recombination frequency




TILLING (targeting induced local lesions in genomes): a method used to create mutations throughout the genome by chemical mutagenesis, followed by the polymerase chain reaction (PCR) to amplify regions of the genome, denaturing high-pressure liquid chromatography (HPLC) to screen for mutants, and finally determination of the phenotype





Bibliography


Borevitz, Justin O., and Joseph R. Ecker. “Plant Genomics: The Third Wave.” Annual Review of Genomics and Human Genetics 5 (2004): 443–77. Print.



Griffiths, Anthony J. F., Susan R. Wessler, Sean B. Carroll, and John Doebley. Introduction to Genetic Analysis. 10th ed. New York: Freeman, 2012. Print.



MacLachlan, Allison. "One More Way Plants Help Human Health." Inside Life Science. National Institute of General Medical Sciences, National Institutes of Health, 19 Nov. 2012. Web. 21 July 2014.



Memelink, Johan. “The Use of Genetics to Dissect Plant Secondary Pathways.” Current Opinion in Plant Biology 8 (2005): 230–35. Print.



Salinas, Julio, and José J. Sánchez-Serrano, eds. Arabidopsis Protocols. 3rd ed. New York: Humana, 2014. Print.



TAIR. "Genome Snapshot." Arabidopsis.org. Phoenix Bioinformatics Corp., 22 Nov. 2010. Web. 21 July 2014.



Zhang, X., et al. “Agrobacterium-Mediated Transformation of Arabidopsis thaliana Using Floral Dip Method.” Nature Protocols 1.2 (2006): 641–46. Print.

Thursday 28 July 2016

What is MDMA?


History of Use

MDMA was first synthesized in Germany in 1912 for Merck Pharmaceuticals, which patented the substance in 1914. MDMA originally was intended to be an anticoagulant but was ineffective for that purpose. During World War I, however, the drug was taken as an appetite suppressant.




In the 1970s, attempts were made to use MDMA to facilitate psychotherapy. It was then that the drug was found to have hallucinogenic properties, which in 1985 led to its being declared an illegal substance. MDMA was classified as a Schedule I controlled substance; Schedule I drugs are those that have a high potential for abuse and no recognized medical value.


Because MDMA increases energy, endurance, and arousal, allowing people to stay up and dance all night, it became widely used in underground dance clubs in England in the 1980s. These all-night events came to be called raves, and the drug itself became known as a club drug, party drug, or recreational drug. Its use in this manner spread to the United States around 1990. MDMA has remained popular in the club scene because it is both a hallucinogen and a psychoactive stimulant.




Effects and Potential Risks

MDMA is derived from methamphetamine and differs from it by a single chemical modification that makes it resemble the hallucinogen mescaline. As such, it has the characteristics of both a stimulant and a hallucinogen. Its action is explained mainly by its effects on serotonin pathways in the body, particularly its interference with serotonin reuptake.


Most of the short-term effects of ecstasy are attributable to the psychological changes from increased serotonin in the brain. These effects include feelings of pleasure, mood elevation, and heightened perception. Negative short-term effects include difficulty thinking clearly, agitation, and physical symptoms such as sweating, dry mouth, tachycardia (rapid heartbeat), fatigue, muscle spasms (especially jaw-clenching), and increased temperature.


Some ecstasy users engage in behavior known as “stacking,” or taking multiple doses of ecstasy in one night. This may occur if the person wishes the positive effects of the drug to continue as they begin to wear off. Stacking can result in serious or even fatal physical problems. High blood pressure, extreme elevation of temperature, or cardiac arrhythmias may occur, sometimes resulting in death.


MDMA use often leads to aftereffects too, including depression, restlessness, and difficulty sleeping. Evidence shows that with long-term MDMA use, serotonin levels remain low in the brain, thus affecting brain function over time.


At the same time, as of 2014, scientists had been working with the pure form of MDMA in clinical trials approved by the Food and Drug Administration in the hopes of finding a way to use the drug as part of a psychotherapy treatment for post-traumatic stress disorder (PTSD). The studies aimed to determine whether MDMA’s potential medical benefits could outweigh its negative health effects and often involved war veterans suffering from PTSD. In 2015, the Drug Enforcement Administration approved a plan from the Multidisciplinary Association for Psychedelic Studies to conduct trials regarding the use of MDMA in treating anxiety in the terminally ill.




Bibliography


Baylen, Chelsea A., and Harold Rosenberg. “A Review of the Acute Subjective Effects of MDMA/Ecstasy.” Addiction 101 (2006): 933–47. Print.



Chason, Rachel. "Studies Ask Whether MDMA Can Cure PTSD." USA Today. USA Today, 11 July 2014. Web. 28 Oct. 2015.



De la Torre, R., et al. “Non-Linear Pharmacokinetics of MDMA (‘Ecstasy’) in Humans.” British Journal of Clinical Pharmacology 49.2 (2000): 104–9. Print.



Eisner, Bruce. Ecstasy: The MDMA Story. Berkeley: Ronin, 1993. Print.



Mills, Edward M., et al. “Uncoupling the Agony from the Ecstasy.” Nature 426 (2003): 403–4. Print.



Wing, Nick. "DEA Approves Study of Psychedelic Drug MDMA in Treatment of Seriously Ill Patients." Huffington Post. TheHuffingtonPost.com, 18 Mar. 2015. Web. 28 Oct. 2015.

What is attention deficit hyperactivity disorder (ADHD)?


Causes and Symptoms

Studies indicate that 2 to 10 percent of children may have attention deficit hyperactivity disorder (ADHD), depending on the diagnostic criteria used and the population studied. The cause of ADHD is unknown, although the fact that it often occurs in families suggests some degree of genetic inheritance. Boys are two times more likely to be affected than girls. ADHD is usually diagnosed when a child enters school, but it may be discovered earlier. According to the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), published by the American Psychiatric Association in 2013, several of an individual's symptoms must be present prior to the age of twelve years for the diagnosis of ADHD.


An abnormality in the central dopaminergic and noradrenergic tone is thought to be the pathophysiologic basis for ADHD. Some genetic causes of ADHD that have been suggested include a possible mutation of the dopamine D4 receptor (DRD4) gene or a phenotypic variation in the catechol-O-methyltransferase (COMT) gene. Other risk factors being explored include head injury before the age of two years; exposure to emotionally traumatic situations such as abuse, neglect, or violence; and childhood exposure to environmental contaminants such as lead and organophosphate pesticides, substances such as alcohol and nicotine in utero, or secondhand smoke in childhood. Some studies have suggested that a high level of television viewing between the ages of one and three years is modestly associated with ADHD. Other studies suggest that food dyes and preservatives such as artificial colors or sodium benzoate may increase hyperactivity. Other possible factors that may increase the risk of ADHD include maternal urinary tract infection during pregnancy, premature birth, complex congenital heart disease, and Turner syndrome.



Individuals who do not have ADHD may, at times, display some of the symptoms of this disorder, but those who are diagnosed with ADHD must display symptoms most of the time and across multiple settings—in school, at home, and/or during other activities. According to the DSM-5, a child must display six or more ADHD symptoms for six months or longer to be diagnosed with the disorder, while adults must display five or more symptoms to be eligible for diagnosis. Prior to diagnosis, the referring pediatrician and specialist should rule out any undetected hearing or vision problems, learning disabilities, undetected seizures, and anxiety or depression that may be causing ADHD-like symptoms. The symptoms of ADHD are usually grouped into three main categories: inattention, hyperactivity, and impulsiveness.


Individuals who have symptoms of inattention often make careless mistakes or do
not pay close attention to details in school, social settings, or at work. They
may have problems sustaining attention over time and frequently do not seem to
listen when spoken to, especially in groups. Individuals with ADHD have difficulty
following instructions and often fail to finish chores or schoolwork. They do not
organize well and may have messy rooms and desks at school. They also frequently
lose things necessary for school, work, or other activities. Because they have
trouble sustaining attention, individuals with ADHD dislike tasks that require
this skill and will try to avoid them. One of the key symptoms is distractibility,
which means that people with ADHD are often paying attention to extraneous sights,
sounds, smells, and thoughts rather than focusing on the task that they should be
doing. ADHD may also be characterized by forgetfulness in daily activities,
despite numerous reminders about such common, everyday activities as dressing,
hygiene, manners, and other behaviors. People with ADHD seem to have a poor sense
of time; they are frequently late or think that they have more time to do a task
than they really do.


Not all individuals with ADHD have symptoms of hyperactivity, but many have
problems with fidgeting, or squirming. It is common for these individuals to be
constant talkers, often interrupting others. Other symptoms of hyperactivity
include leaving their seat in school, work, church, or similar settings and moving
around excessively in situations where they should be still. Some people with ADHD
seem to be driven by a motor or are continuously on the go.


Individuals with ADHD may also have some symptoms of impulsiveness, such as
blurting out answers before questions are completed. Another example of
impulsiveness would be interrupting or intruding upon others in conversation or in some
activity. They may also have difficulty standing in lines or waiting for their
turn.


It is important to recognize that children with ADHD are not bad children who are
hyperactive, impulsive, and inattentive on purpose. Rather, they are usually
bright children who would like to behave better and to be more successful in
school, in social life with peers, and in family affairs, but they simply cannot.
One way to think about ADHD is to consider it a disorder of the ability to inhibit
impulsive, off-task, or undesirable attention. Consequently, an individual with
ADHD cannot separate important from unimportant stimuli and cannot sort
appropriate from inappropriate responses to those stimuli. It is easy to
understand how someone whose brain is trying to respond to a multitude of stimuli,
rather than sorting stimuli into priorities for response, will have difficulty
focusing and maintaining attention to the main task.


Individuals with ADHD may also have a short attention span, particularly for
activities that are not fun or entertaining. They will be unable to concentrate
because they will be distracted by peripheral stimuli. They may also have poor
impulse control so that they seem to act on the spur of the moment. They may be
hyperactive or clumsy, resulting in their being labeled “accident-prone.” They may
also have problems completing tasks that require a lot of organization and
planning—often first seen when the individual is in the third grade or beyond.
They may display attention-demanding behavior and/or show resistant or
overpowering social behaviors. Last, children with ADHD often act as if they were
younger, and “immaturity” is a frequent label. Along with this trait, they have
wide mood swings and are seen as emotional.


Many experts think that ADHD may be related to problems with brain development. Studies have shown that the prefrontal cortex, striatum, and cerebellum in the brains of individuals with ADHD are less activated on functional magnetic resonance imaging (fMRI) than age-matched controls without ADHD. These regions of the brain are rich in dopaminergic and noradrenergic pathways and are associated with executive function. Other researchers have proposed the hypothesis that a developmental abnormality of the inferior frontal gyrus might cause the inhibition difficulties seen in ADHD.


Hyperactive symptoms may improve in adolescence, although adolescents with ADHD may continue to have problems with impulsive behavior and inattention. They may have considerable difficulty complying with rules and following directions. They may be poorly organized, causing problems both with starting projects and with completing them. Adolescents with ADHD may have problems in school in spite of average or above-average potential. They may have poor self-esteem and a low frustration tolerance. Because of these and other factors related to ADHD, they may also be at greater risk of developing substance use problems and other mental health problems. ADHD may also persist into adulthood, in which case it is referred to as adult ADHD. The same diagnostic criteria apply, including the presence of the disorder since childhood.


Several other neurologic or psychiatric disorders have symptoms that can overlap with ADHD, so accurate diagnosis can be difficult. When an individual is suspected of having ADHD, he or she should have a thorough medical interview with, and physical examination by, a physician familiar with child development, ADHD, and related conditions. A psychological evaluation to determine intelligence quotient (IQ) and areas of learning and performance strengths and weaknesses should be obtained. A thorough family history and a discussion of family problems such as divorce, violence, alcoholism, or drug abuse should be part of the evaluation, as symptoms of ADHD may arise after a significant and sudden change in a child's life. Other conditions that might be found to exist along with ADHD, or to be the underlying cause of symptoms thought to be ADHD, include oppositional defiant disorder, conduct disorder (usually seen in older children), depression, anxiety, or a substance abuse disorder.


"Attention-deficit disorder (ADD) with or without hyperactivity" was first defined
in the third edition of the American Psychiatric Association's Diagnostic
and Statistical Manual of Mental Disorders
(1980), or DSM-III, and its
definition has evolved since then. The name ADD was changed to ADHD in the revised
third edition, the DSM-III-R (1987). In 1998, the National Institutes of Health
(NIH) held a Consensus Development Conference on the
Diagnosis and Treatment of Attention Deficit Hyperactivity Disorder. While most
experts supported the ADHD diagnosis criteria, the final report noted a need for
further research into the validity of the diagnosis. The fifth edition of the
DSM (DSM-5), published in 2013, updated the definition of ADHD
to reflect the growing body of evidence that shows the condition can last beyond
childhood in order to help clinicians diagnose and treat adults with ADHD.
Approximately 50 percent of children with ADHD continue to have ADHD into
adulthood.




Treatment and Therapy

Treatment and therapy for ADHD usually begin with the diagnostic process. Generally, treatment starts with some combination of counseling, education, and behavioral therapy. Behavioral therapy may be administered by a parent, teacher, or counselor. In some cases, family counseling may be indicated. This counseling may help family members learn about ADHD. Parent training interventions may help to improve some symptoms of ADHD in children. Family counseling also may be recommended when family issues are thought to be related to the type or severity of symptoms that the child may be experiencing. For instance, if the family is undergoing a stressful event, such as a divorce, a serious loss or death, or other problems such as economic stress, then the symptoms of ADHD may worsen. Therefore, treatment may focus on trying to minimize the impact of such stressors on the child. In addition, neurofeedback may reduce inattentive behaviors and impulsivity.


If behavioral and nonpharmacologic interventions do not lead to improvement and if there is a moderate to severe functional disturbance caused by ADHD, medications may be considered. Stimulants have the best evidence for the treatment of ADHD; stimulant medications include methylphenidate (Ritalin), extended-release dexmethylphenidate (Focalin), and amphetamines (Adderall). These medications are generally thought to be safe and effective, although they can have such adverse effects as headache, stomachache, mood changes, heart rate changes, appetite suppression, and interference with falling asleep. All children receiving medication must be monitored at regular intervals by a physician.


Nonstimulant medications are the second-line pharmacological treatment of ADHD,
especially if stimulant medications are ineffective or poorly tolerated. Other
medications that may be used for ADHD include atomoxetine (Strattera),
antidepressants such as desipramine or bupropion, and alpha-2 adrenergic agonists
such as clonidine and extended-release guanfacine.


Costs and risks for adverse effects should be discussed with the physician who has
made the diagnosis of ADHD before implementing any treatment, to ensure safety and
a reasonable expectation of efficacy.




Perspective and Prospects

Attention deficit hyperactivity disorder remains controversial due to the subjective nature of its symptoms and the possible overdiagnosis and overtreatment of the disorder. Historically, experts have estimated that ADHD affects between 2 and 10 percent of the general population. However, in March 2013, the New York Times reported that data from the Centers for Disease Control and Prevention showed that approximately 11 percent of children between the ages of four and seventeen have been diagnosed with ADHD, representing a 16 percent increase since 2007 and a 53 percent rise over the previous decade. This follows a general increase in rates of diagnosis beginning in the 1970s. ADHD experts caution against attributing the increase to any single factor, though possible contributors include misdiagnosis and increased pharmacological treatment of milder forms of the disorder.


Many experts also suggest that shifts in how ADHD is diagnosed, increasing tendencies to prescribe medications, and a rise in public awareness and media attention have all contributed to an increase in diagnoses if not an increase in actual cases. These trends have also led some researchers to warn against overdiagnosis of ADHD, cautioning that not all children with high energy or difficulty focusing necessarily have the disorder, especially at very young ages. Some even challenge the concept of ADHD altogether, claiming that it is a case of applying a medical diagnosis to a range of behaviors that may go against social norms but have historically not been seen as a medical issue. Still, the majority of medical professionals do recognize ADHD as a legitimate condition, if one surrounded by continued controversies and misunderstandings.


For individuals with ADHD, the disorder is a real issue that can cause great harm
and impairment if not recognized and managed correctly. Diagnosis should be based
on family history, careful examination, and thorough psychological assessment.
Treatment should always begin with behavioral interventions before medication.


Individuals with ADHD in the United States can share experiences and resources
through organizations that assist families dealing with attention deficit
hyperactivity disorder. The national organization Children and Adults with
Attention Deficit/Hyperactivity Disorder (CHADD) has state and local chapters
helping individuals and families cope with the condition. CHADD chapters often
have libraries and provide resources on ADHD management. The Learning Disabilities
Association of America (LDA) also has state and local chapters helping schools and
families cope with a wide range of learning disabilities, including ADHD.




Bibliography


American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders: DSM-5. Arlington: Author, 2013. Print.


Accardo, Pasquale J., ed. Attention Deficits and Hyperactivity in Children and Adults: Diagnosis, Treatment, Management. 2nd ed. New York: Dekker, 2000. Print.


Asherson, Philip. Handbook for Attention Deficit Hyperactivity Disorder in Adults. London: Springer Healthcare, 2013. Print.


"Attention Deficit Hyperactivity Disorder (ADHD)." Natl. Inst. of Mental Health. US Dept. of Health and Human Services, n.d. Web. 5 Aug. 2015.


Barkley, Russell A., ed. Attention-Deficit Hyperactivity Disorder: A Handbook for Treatment and Diagnosis. 4th ed. New York: Guilford, 2014. Print.


Breggin, Peter R. Talking Back to Ritalin. Rev. ed. Cambridge: Perseus, 2001. Print.


McGough, James J. ADHD. New York: Oxford UP, 2014. Print.


Phelan, Thomas W. All About Attention Deficit Disorder: Symptoms, Diagnosis, and Treatment—Children and Adults. 2nd ed. London: Gardner’s, 2006. Print.


Quinn, Patricia O., ed. AD/HD and the College Student: The Everything Guide to Your Most Urgent Questions. Washington, DC: Magination, 2012. Print.


Quinn, Patricia O., and Judith M. Stern. Putting on the Brakes: Understanding and Taking Control of Your ADD or ADHD. 2nd ed. Washington: Magination, 2008. Print.


Ramsay, J. Russell. Nonmedication Treatments for Adult ADHD: Evaluating Impact on Daily Functioning and Well-Being. Washington: American Psychological Association, 2009. Print.


Rief, Sandra F. The ADHD Book of Lists: A Practical Guide for Helping Children and Teens with Attention Deficit Disorders. San Francisco: Jossey, 2003. Print.


Silva, Desiree, et al. "Environmental Risk Factors by Gender Associated with Attention-Deficit/Hyperactivity Disorder." Pediatrics 133.1 (2014): E14–E22. Web. 21 Aug. 2014.


Steiner, Naomi J., et al. "In-School Neurofeedback Training for ADHD: Sustained Improvements from a Randomized Control Trial." Pediatrics 133.3 (2014): 483–92. Web. 21 Aug. 2014.


Van Dongen-Boomsma, Martine, et al. "A Randomized Placebo-Controlled Trial of Electroencephalographic (EEG) Neurofeedback in Children with Attention-Deficit/Hyperactivity Disorder." Journal of Clinical Psychiatry 74.8 (2013): 821–27. Print.


"What Is ADHD?" KidsHealth. Nemours Foundation, 2015. Web. 5 Aug. 2015.

What is ecological psychology?


Introduction

Respected scientific organizations have been warning of impending and even imminent ecological disasters in their publications. Exemplary among these are the Intergovernmental Panel on Climate Change, established by the United Nations (Climate Change 2013, 2008), the National Research Council of the US National Academy of Sciences and the Royal Society (“Understanding and Responding to Climate Change,” 2014), the international Union of Concerned Scientists ("How to Avoid Dangerous Climate Change," 2007), and the Worldwatch Institute (State of the World 2013: Is Sustainability Still Possible?, 2013). Such conclusions have even been heralded in the popular media, for example by Al Gore in his 2006 book and film An Inconvenient Truth; Gore went on to share the 2007 Nobel Peace Prize with the IPCC for his efforts to publicize climate change.








Problems cited include global climate change caused by greenhouse gases; weakening of plant and aquatic life by acid rain; the destruction of the ozone layer by chlorofluorocarbons (CFCs); the chemical pollution of soil and groundwater and, increasingly, even oceans; the consequences of deforestation for global warming and depletion of oxygen; the exhaustion and destruction of fisheries and other habitats; and the consequent species extinction and reduction in planetary biodiversity.


The environmental movement, often loosely called ecology or environmentalism, arose over the last third of the twentieth century to address these problems before they become cataclysmic. Psychologists have joined this quest, forming an interdisciplinary approach that brings psychological expertise to ecological issues. This collaboration was formed on the basis of several different aims, with the result that the field has divided into branches pursuing a variety of goals. The primary impetus driving the rapid growth of this collaboration was the simple recognition that environmental problems are caused by human action. Once environmental destruction was linked to human behavior, the attitudes, thoughts, and beliefs underlying those behaviors became a subject of great interest. That interest soon led to the realization that the dynamics of these behavior-guiding attitudes are the least understood aspect of these problems and that solving the environmental crisis therefore requires addressing its underlying human basis, a project for which psychology is uniquely situated.




Developments

The field of ecological psychology has just begun to coalesce and is still defining itself—even naming itself—so no universally accepted tasks or even labels yet exist. Support for particular developments emerges and changes quickly. Nevertheless, the field appears to have developed three major branches, with a conservation, a therapeutic, and a holistic focus. Each direction offers richly innovative prospects that will probably become increasingly significant.




The Conservation Focus


Conservation, the largest of these emphases, is devoted to researching the attitudes, beliefs, and behaviors that contribute to the environmental crises and to discovering how to change them to effectively promote the key conservation actions: reduce, reuse, and recycle. Before psychology’s involvement, most efforts to bring about such behavioral change were simply information-intensive mass media advertising campaigns to promote greater participation in sustainable living. These campaigns made little use of psychological research or expertise and were generally ineffective.


The involvement of psychological researchers led to clarification of the roles of beliefs and assumptions in environmentally relevant behaviors. Among these findings are the effects of cognitive presuppositions and perceptual biases on the persuasive efficacy of warnings of environmental consequences, on assessments about risk, and on judgments and prejudgments about relative cost-benefit issues. Such research shows how these presuppositions and biases lead to misjudgments that ultimately culminate in choices and behaviors that undermine conservation. Common biases include, for example, failing to include the indirect costs in considering the environmentally destructive potential of one’s actions and unduly discounting the significance of the long-term consequences by overrating the importance of the short-term consequences. As research proceeds, a variety of controversies are emerging, which are yet to be resolved. Among these debates are the relative importance for behavioral change of shifting a person’s values and beliefs versus altering the behavioral contexts in which a person operates, and the relative impact of individual versus corporate actions in worsening environmental problems.


Among many prominent researchers in this area are George Howard, Paul Stern, Stuart Oskamp, and Doug McKenzie-Mohr. This branch, more typically called "environmental psychology," is the least controversial in focus and so is already widely accepted by mainstream psychology. For example, American Psychologist, the flagship journal of the American Psychological Association (APA), often publishes related articles and dedicated a special issue in May–June 2011 to global climate change and psychology's role in dealing with it. In addition, the association’s initiative on what it termed “society’s grand challenges” includes a segment devoted specifically to global climate change. Other examples of the scope of this branch include such publications as the Journal of Environmental Psychology, Robert B. Bechtel’s Handbook of Environmental Psychology (2003), and The Oxford Handbook of Environmental and Conservation Psychology (2012).




The Therapeutic Focus

The second major branch of ecological psychology emphasizes the therapeutic value of a person's relationship with the natural world. This focus, sometimes called "ecotherapy," extends beyond mainstream psychotherapy and draws heavily from humanistic psychology. It rests on the premise that a person's relationship with the natural world is important to psychological well-being. Ecotherapy holds that a deficient person-world relationship can contribute to psychological dysfunction and that enhancing this relationship can help relieve it.


Qualities of engagement with the natural world that facilitate mental health include awe, harmony, balance, aliveness, at-homeness, and openness. Research has shown that deepening a person's relationship with the natural world can relieve a wide range of psychopathological symptoms, including anxiety, depression, addiction, and violence. In addition to benefiting people with mental health problems, a therapeutic connection with the natural world has been found to provide many broadly beneficial psychological changes, such as empowerment, inner peace, aliveness, compassion, decreased fatigue, mental clarity, enhanced creativity, relaxation, stress reduction, restored well-being, and relief of alienation. Such work with psychologically healthier people is sometimes called "ecoeducation." Research has also begun to explore the benefits of a therapeutic relationship with nature for healthy child and adolescent development.


Researchers in this branch develop, apply, and assess practices designed to enhance the quality of a person's relationship with the natural world, with the aim of building a repertoire of effective modalities. These usually take the form of specific exercises intended to train or deepen a person's capacity to sense particular features of the natural world more fully and openly. Often these exercises are undertaken during extended stays in wilderness settings. Some practices are borrowed from indigenous cultures whose relationships with the natural world are less altered by the artifices of modern industrialized life. For example, some practices include a "vision quest" component, in which a portion of the person's time in the wilderness is spent alone, with the aim of discovering a significant insight. Sometimes therapeutic goals are pursued through a reciprocal interaction in which one is both "nurtured by" and "nurturing of" the earth.


Among the most prominent innovators in this branch are Michael Cohen, founder of the Institute of Global Education and director of Project Nature Connect; John Davis and Steven Foster, directors of the School of Lost Borders; Laura Sewell; Paul Shepard; and Howard Clinebell. Universities that offer programs in ecological psychology, such as Naropa University, City University of New York, the Institute of Global Education, and John F. Kennedy University, tend to emphasize this area.




The Holistic Focus

In contrast with the other two branches, both of which are applied in one way or another, the third branch represents "deep" ecological psychology: a fundamental inquiry into the foundational meanings and significance of the relationship between humans and nature. The holistic focus aims at nothing less than studying the depletion and restoration of the fullness of the human spirit by healing the disconnection of person and world. This radical ontological inquiry typically goes by the name ecopsychology and aligns with the movement within environmentalism known as deep ecology, as formulated by Arne Naess and presented in Deep Ecology for the Twenty-First Century (1995), edited by George Sessions. It also draws from developments in contemporary physics that emphasize a systems or wholeness perspective. Two physicists have contributed greatly: Fritjof Capra, founder of the Center for Ecoliteracy and author of The Hidden Connections: Integrating the Biological, Cognitive, and Social Dimensions of Life into a Science of Sustainability (2002), and David Bohm, author of Wholeness and the Implicate Order (1981).


As the most radical approach, the holistic focus is the least established within traditional psychology; it tends to draw support mainly from phenomenological and transpersonal psychologists. Its basic premises, increasingly supported by research, are that all aspects of the world are interconnected within a reciprocal and synergistic whole and that experiencing this holism helps overcome the dualistic perspective that disconnects humans from nature, an alienation in which nature is seen as a storehouse of commodities to be exploited. This holistic vision has a breadth and depth that extends to exchanging insights with spiritual traditions, especially the more explicitly nondualistic ones. These have most commonly been the Native American, Wiccan, and Buddhist traditions, although they have increasingly included Christianity as well.


Longstanding leaders include Theodore Roszak, founder of the Ecopsychology Institute at California State University, Hayward, who coined the term "ecopsychology"; Ralph Metzner, founder of the Green Earth Foundation; and Joanna Macy, a Buddhist activist and ecofeminist. Scholars of note include Elizabeth Roberts, Andy Fisher, Warwick Fox, Mary Gomes, and Allen Kanner. Many organizations, including the International Community of Ecopsychology, have formed to support this work.




Bibliography


Clayton, Susan D., ed. The Oxford Handbook of Environmental and Conservation Psychology. New York: Oxford UP, 2012. Print.



Fisher, Andy. Radical Ecopsychology: Psychology in the Service of Life. 2nd ed. Albany: State U of New York P, 2013. Print.



Gardner, Gerald, and Paul C. Stern. Environmental Problems and Human Behavior. 2nd ed. Boston: Pearson, 2002. Print.



Howard, George S. Ecological Psychology: Creating a More Earth-Friendly Human Nature. Notre Dame: U of Notre Dame P, 1997. Print.



Metzner, Ralph. Green Psychology: Transforming Our Relationship to the Earth. Rochester: Inner Traditions, 1999. Print.



Roszak, Theodore, Mary Gomes, and Allen Kanner, eds. Ecopsychology. San Francisco: Sierra Club, 1995. Print.



Sewell, Laura. Sight and Sensibility: The Ecopsychology of Perception. Los Angeles: Tarcher, 1999. Print.



Winter, Deborah D. Ecological Psychology: Healing the Split between Planet and Self. Mahwah: Erlbaum, 2003. Print.



Winter, Deborah D., and Susan M. Koger. The Psychology of Environmental Problems: Psychology for Sustainability. New York: Psychology, 2011. Digital file.

Wednesday 27 July 2016

What is the history of alternative medicine?


Overview

Alternative medicine (AM), often paired with the term "complementary," is the practice of various healing techniques in place of conventional medicine. AM is not commonly taught in medical schools, and most AM practices are not covered by health insurance in the United States. AM practices derive from ancient methods and beliefs, from social behaviors and spirituality, and from newer approaches. AM bases good health on a balance of body systems (mental, spiritual, and physical), whereas conventional medicine views good health as the absence of disease.


Much of AM, with the exception of herbal supplements, is based on the principle that all aspects of the person are intertwined, a principle called holism. Disharmony that undermines the balance among these aspects is believed to stress the body and lead to illness. Therefore, in an effort to alleviate sickness, therapies focus on bolstering the body's own defenses while restoring balance. Like Western medicine, AM emphasizes proper nutrition and preventive practices.


Before the 1990s, AM was dismissed by most American medical professionals, mostly because there was no supporting scientific evidence of its therapeutic effects. With an increasing number of AM practitioners, and with health consumer acceptance, it has become more common to integrate alternative therapies into mainstream health care. AM journals, organizations, courses of study, Web sites, and government-supported clinical trials are now common in the United States.




Mechanisms of Action

Only theories exist regarding the mechanisms of action of alternative remedies, and many advocates believe that the scientific method does not apply to this type of practice. Instead, AM advocates rely on anecdotes and theories, including the claim that AM works outside known biological mechanisms and should therefore be understood as less harmful than conventional methods. In many cases, simply publishing anecdotes in popular books and magazines is taken as sufficient evidence for therapeutic claims.


Oftentimes, alternative remedies are discovered through trial and error. A specific alternative method may work for one person but not for another. Practitioners sometimes have to try several different approaches for the same issue in different persons. Also, one type of approach could be useful for several different health issues.


Language is another obstacle to understanding the way alternative therapies work. For instance, there are no direct translations for the types of energy in Ayurvedic medicine known as vata, pitta, and kapha, making it impossible to integrate these components into controlled scientific trials for the purpose of determining a mechanism of action.




Uses

Alternative medicine is commonly used for relatively minor health problems (such as fatigue, insomnia, or back pain). For the most part, AM is utilized for health enhancement in a relatively healthy patient.


An increasingly popular application of alternative therapies is integrative medicine, the combination of alternative and conventional remedies. Integrative medicine is entering mainstream medical practice because of supporting clinical evidence of its benefits. One example of integrative medicine is the use of aromatherapy to minimize nausea after a course of chemotherapy.




Early History

The term "alternative medicine" has been in use since the late eighteenth century. The Greek physician Hippocrates, known as the founder of medicine, introduced the underlying concept at a time when people questioned whether the practice of medicine was an art. Hippocrates also believed that the mind and body both play a role in the healing process. Notably, this mind/body healing process is essentially the basis of many alternative therapies.


Several healing systems existed in the nineteenth century. Treatment procedures ranged from bleeding and purging to folk medicine and quackery. Many of these approaches were dangerous and often fatal, leading people to revolt against such extreme measures of medical practice. By midcentury, the general public had grown disappointed with standard therapies and began to turn to alternative methods. As a result, the first alternative medicine system in the West was implemented by Samuel Thomson, who used botanicals for healing. The plant drugs, he believed, either evacuated or heated the body. After his death in 1840, the Thomsonian system fell from use.



Homeopathy was promoted by Samuel Hahnemann, a German physician who treated many disease symptoms with a series of drug dilutions. Hahnemann also coined the term "allopathy" to describe mainstream medical practice. Mainstream medicine adopted allopathy as a standard medical term, and it has remained a part of health-care terminology.


Also at midcentury, Americans were introduced to hydropathy. This Austrian treatment called for a variety of baths (usually cold) to eliminate toxins and for strict lifestyle changes (such as in diet, exercise, and sleep). Other popular remedies of the period were magnetic and hypnotic healing, which grew out of the animal magnetism introduced by Franz Mesmer.


With so many AM options available, the New York-based physician Wooster Beach decided to combine various treatment approaches based on clinical expertise, calling his new approach eclectic medicine. Eclectic medicine advocated care that incorporates more than one type of therapy or method; a modern form is acupuncture combined with chiropractic or osteopathic care. Eclectic medicine was well received from the 1820s through the 1930s.


The second generation of alternative medical systems began in the second half of the nineteenth century. In the 1870s, Andrew Taylor Still pioneered the technique of musculoskeletal manipulation, better known as osteopathy. Following closely was Daniel David Palmer, who introduced chiropractic medicine. By the late nineteenth century, osteopathic and chiropractic schools were offering formal training. Naturopathy, using the body's natural healing powers, also became increasingly popular near the end of the century.




The Twentieth Century

By 1900, about 20 percent of all practitioners were AM physicians. With the discovery of novel drugs such as antibiotics in the 1930s and 1940s, the once highly acclaimed alternative therapies became nearly obsolete. Even doctors of osteopathic and chiropractic medicine were pressured to stop treating patients, and schools that once offered training in these disciplines had to close their doors.


With immigration on the rise, especially in the 1970s, American physicians began to discover acupuncture, Chinese herbal medicine, and Ayurvedic medicine.


These philosophies of healing now faced much questioning by American physicians. Controversy erupted between medical doctors and AM practitioners. AM was denounced as unscientific, and AM practice was considered unethical. The American Medical Association's code of ethics even prohibited medical doctors from consulting with persons who used alternative remedies.


By the late twentieth century, physicians were again allowed to consult with AM practitioners, and osteopathy and chiropractic were increasingly accepted by the medical mainstream. The general public, meanwhile, had become dissatisfied with conventional medicine: Americans felt that health care was impersonal, that pharmaceuticals caused harm, and that medical care was costly.


In 1992, the National Institutes of Health established the Office of Alternative Medicine (now called the National Center for Complementary and Alternative Medicine) in an effort to examine and report on the efficacy of alternative methods. By 1995, the first journal dedicated to alternative therapies and health was in circulation. The notion of mind/body healing was regaining respect in mainstream medical practice.


A 1998 government report documented that four out of every ten American adults had used some type of alternative therapy in 1997 and that Americans had spent more than $20 billion on alternative health care. By 2002, three out of four American adults had used some type of alternative remedy. With alternative medicine on the rise in the United States, the need for evidence-based evaluation of alternative methods became clear.




Scientific Evidence

Testing alternative therapies for scientific relevance presents several challenges. First, many therapies existed long before the development of Western scientific, analytical methods; chiropractic procedures, for instance, were developed before scientific understanding of the nervous system. Second, the mechanisms of action and proposed outcomes of alternative therapies are not clearly understood. Third, interventions may combine several treatments. An Ayurvedic practitioner, for example, may prescribe herbal supplements, yoga, and dietary restrictions, which makes it difficult to determine which intervention relieved a given problem. Finally, designing standardized placebo-controlled clinical trials is difficult, as illustrated by the challenge of creating sham yoga, chiropractic, or tai chi procedures.


New methods and study designs are needed to investigate alternative therapies for scientific support. It is encouraging to know, however, that thousands of trials are under way.




Conclusions

The combination of limited knowledge of the effects of AM and its increased use by health consumers produces a dangerous situation. Many products and procedures are not regulated, creating potential risks. Herbal products may be contaminated or made from poor-quality ingredients, and herb-drug interactions may occur. Finally, not all AM practitioners are licensed or formally trained.




Bibliography


Alternative Medicine Center. http://www.altmed.net. A user-friendly guide to alternative medicine.



Goldberg, Burton. Alternative Medicine: The Definitive Guide. Tiburon, Calif.: Future Medicine, 1998. This book provides an overview of many different alternative medicine approaches.




Journal of Alternative and Complementary Medicine. http://www.liebertpub.com. A Web-based journal for practitioners seeking to integrate alternative medicine into their practice.



Marti, James E. The Alternative Health and Medicine Encyclopedia. 2d ed. Detroit: Visible Ink Press, 1997. This edition offers more than three hundred therapies for more than seventy disease states. It is easy to read and offers basic facts on a variety of alternative therapies.



Micozzi, Marc. Fundamentals of Complementary and Alternative Medicine. 3d ed. St. Louis, Mo.: Saunders/Elsevier, 2006. This book offers good background on the foundation and context of alternative therapies. Each entry also includes a list of further readings and related organizations.



Nash, Barbara. From Acupressure to Zen: An Encyclopedia of Natural Therapies. Upland, Pa.: Diane, 1998. This book provides an overview of basic information on many different alternative medicine approaches and natural therapies.



National Center for Complementary and Alternative Medicine. http://nccam.nih.gov. A U.S. government site that offers research-based information on complementary and alternative therapies.

What is race?


Conflicting Definitions of Race

Few ideas have had as contentious a history as the use of the term "race." Racial categorization has relied on salient traits such as skin color, body form, and hair texture to classify humans into distinct subcategories. The term "race" is currently believed to have little biological meaning, in great part because of advances in genetic research. Studies have revealed that a person's genes cannot define their ethnic heritage and that no gene exists exclusively within one race or ethnocultural group. Biomedical scientists remain divided over "race" and how it may be used in treating human genetic conditions.








For a racial or subspecies classification scheme to be objective and biologically meaningful, researchers must decide carefully which heritable characteristics (those passed to future generations genetically) will define the groups. Several principles are considered. First, the defining traits must be discrete, not varying by small degrees between populations. Second, everyone placed within a specific race must possess the selected trait's defining variant; all the selected characteristics must be found consistently in each member of the group. For example, if blue eyes and brown hair are chosen as defining characteristics, everyone designated as belonging to that race must share both characteristics, and individuals placed in other races should not exhibit this particular combination. Third, individuals of the same race must have descended from a common ancestor unique to those people, so that many of the characteristics shared by members of the race can be traced to that ancestor by heredity. Based on these criteria (discrete traits, concordance of traits, and common ancestry), pure representatives of each racial category should be detectable.


Most researchers maintain that traditional races do not conform to scientific principles of subspecies classification. For example, the traits used to define traditional human races are rarely discrete. Skin color, a prominent characteristic employed, is not a well-defined trait. Approximately eleven genes influence skin color significantly, but fifty or so are likely to contribute. Pigmentation in humans results from a complex series of biochemical pathways regulated by amounts of enzymes (molecules that control chemical reactions) and enzyme inhibitors, along with environmental factors. Moreover, the number of melanocytes (cells that produce melanin) does not differ from one person to another, while their level of melanin production does. Like most complex traits involving many genes, human skin color varies along a continuous gradation: from lightest to darkest, all intermediate pigmentations are represented, and color may vary widely even within the same family. The boundary between black and white is an arbitrary, human-made border, not one imposed by nature.
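To see why a trait shaped by many genes grades continuously rather than falling into discrete categories, consider a minimal simulation. The gene count and allele effects below are arbitrary placeholders, not a model of actual pigmentation genetics; the point is only that summing many small contributions yields a smooth, roughly bell-shaped spread of trait values.

```python
# Hypothetical sketch: when a trait is the sum of small contributions from many
# genes, the population shows a continuous, roughly bell-shaped gradation rather
# than a few discrete categories. Gene count and effects are arbitrary.
import random

random.seed(1)
NUM_GENES = 11        # illustrative; not a model of real pigmentation genetics
POPULATION = 10_000

def trait_score():
    # Each gene contributes 0, 1, or 2 "dark" alleles chosen at random.
    return sum(random.randint(0, 2) for _ in range(NUM_GENES))

scores = [trait_score() for _ in range(POPULATION)]

# Crude text histogram over all possible scores (0 to 2 * NUM_GENES).
for value in range(2 * NUM_GENES + 1):
    count = scores.count(value)
    print(f"score {value:2d}: {'#' * (count // 50)}")
```

The histogram shows every intermediate value represented, with no natural break at which to draw a category boundary.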


In addition, traditional defining racial characteristics, such as skin color and facial features, are not found in all members of a race. For example, many Melanesians, indigenous to Pacific islands, have pigmentation as dark as any human but are not classified as "black." Another example is found in members of the Cherokee Nation who have Caucasoid facial features and very dark skin yet have no European ancestry. When traditional racial characteristics are examined closely, many people fit no conventional racial group. No "pure" genetic representatives of any traditional race exist.


Common ancestry must also be considered. Genetic studies have shown that Africans do not belong to a single “black” heritage. In fact, several lineages are found in Africa. An even greater variance is found in African Americans. Besides a diverse African ancestry, on average 13 percent of African American ancestry is Northern European. Yet all black Americans are consolidated into one race.


The true diversity found in humans is not patterned according to accepted standards of a subspecies. Only at extreme geographical distances are notable differences found. Human populations in close proximity have more genetic similarities than distant populations. Well-defined genetic borders between human populations are not observed, and racial boundaries in classification schemes are most often formed arbitrarily.




History of Racial Classifications

Efforts to classify humans into distinct types date back at least to the Nineteenth Dynasty of ancient Egypt. The sacred text known as the Book of Gates described four distinct groups, "Egyptians," "Asiatics," "Libyans," and "Nubians," defined using both physical and geographical characteristics. Applying scientific principles to divide people into distinct racial groups has been a goal for much of human history.


In 1758, the founder of biological classification, Swedish botanist Carolus Linnaeus, arranged humans into four principal races: Americanus, Europaeus, Asiaticus, and Afer. Although geographic location was his primary organizing factor, Linnaeus also described the races according to subjective traits such as temperament. Despite his use of archaic criteria, Linnaeus did not assign superior status to any of the races.


Johann Friedrich Blumenbach, a German naturalist and admirer of Linnaeus, developed a classification with lasting influence. Blumenbach maintained that the original forms, which he named “Caucasian,” were those primarily of European ancestry. His final classification, published in 1795, consisted of five races: Caucasian, Malay, Ethiopian, American, and Mongolian. The fifth race, the Malay, was added to Linnaeus’s classification to show a step-by-step change from the original body type.


After Linnaeus and Blumenbach, many variations of their categories were formulated, chiefly by biologists and anthropologists. Classification “lumpers” combined people into only a few races (for example, black, white, and Asian). “Splitters” separated the traditional groups into many different races. One classification scheme divided all Europeans into Alpine, Nordic, and Mediterranean races. Others split Europeans into ten different races. No one scheme of racial classification came to be accepted throughout the scientific community.




Genetics and Theories of Human Evolution

Advances in DNA technology have greatly aided researchers in their quest to reconstruct the history of Homo sapiens and its diversification. Analyses have been performed on both nuclear and mitochondrial DNA. The nucleus is the organelle that contains the majority of the cell's genetic material; mitochondria are organelles responsible for generating cellular energy, and each mitochondrion contains a single, circular DNA molecule. Research suggests that Africa was the root of all humankind and that humans first arose there 100,000 to 200,000 years ago. Several lines of research, including DNA analysis of hominid fossils, provide further evidence for this theory.


Many scientists are using genetic markers to decipher the migrations that fashioned past and present human populations. For example, DNA comparisons revealed three Native American lineages. Some scientists believe one migration crossed the Bering Strait, most likely from Mongolia. Another theory states that three separate Asian migrations occurred, each bringing a different lineage.




Genetic Diversity Among Races

Three primary forces produce the genetic components of a population: natural selection, nonadaptive genetic change, and mating between neighboring populations. The first two factors may produce differences between populations, and reproductive isolation, whether voluntary or caused by geographic separation, perpetuates the distinctions. Natural selection refers to the persistence of genetic traits favorable in a specific environment. A widely held assumption concerns skin color, primarily a result of the pigment melanin, which offers some shielding from ultraviolet solar rays. According to this theory, people living in regions with concentrated ultraviolet exposure have increased melanin synthesis and therefore dark skin color, which confers protection against skin cancer. Individuals with genes for increased melanin have enhanced survival rates and reproductive opportunities, and their offspring inherit those same genes. Over generations, this process raises the percentage of the population carrying genes for elevated melanin production; such genes are therefore favorable and persist in these environments.


The second factor contributing to the genetic makeup of a population is nonadaptive genetic change, which involves random genetic mutations (alterations). For example, certain genes are responsible for eye color, and individuals carry alternate forms of these genes, or alleles, which result in different eye colors. Because these traits are neutral with respect to environmental influences, they may endure from generation to generation, and different populations will spontaneously produce, sustain, or lose them.


The third factor, mating between individuals from neighboring groups, tends to merge traits from several populations. This genetic mixing often results in offspring with blended characteristics.


Several studies have compared the overall genetic complement of various human populations. On average, any two people, whether of the same or of different races, diverge genetically by a mere 0.1 percent, and only about 0.012 percent of the genome is estimated to contribute to traditional racial variation. Hence, most of the genetic differences found between a person of African descent and a person of European descent are also found between two individuals of the same ancestry. The genes themselves do not differ; it is the proportion of individuals carrying a specific allele that varies from population to population.


On closer examination, the continent of Africa has proved unequaled in cumulative genetic diversity. Numerous distinct lineages are found in Africa, with the Khoisan peoples of southern Africa being the most distinct. Consequently, two people of different ethnicities who do not have recent African ancestry (for example, a Northern European and a Southeast Asian) may be more genetically similar to each other than two members of distinct African ethnic groups are. This finding supports theories of early human migration in which humans first evolved in Africa and a subset left the continent, experienced a population bottleneck, and then established the human populations around the world.




Human Genome Diversity Project and Advances in Research

Many scientists are attempting to address the negative legacy associated with racial studies. The Human Genome Diversity Project (HGDP) was initiated by Stanford University in 1993 and functions independently of the Human Genome Project. The HGDP aims to collect and store DNA from ethnically diverse populations around the world, creating a library of samples that represents global human diversity. Results of future studies may aid gene therapy treatments and improve the success of organ transplantation. As a result, a more thorough understanding of the genetic diversity and unity of the species Homo sapiens will be possible.


At the population level, human genetic diversity is greater within racial or cultural groups than between them. Geneticists who studied the genetic diversity of human populations were originally limited to data from very few genetic loci (locations of interest in the genome), whereas recent studies can analyze hundreds to thousands of loci simultaneously. It is currently estimated that 90 percent of human genetic variation is found within each purported racial group, while differences between the groups account for only the remaining 10 percent.
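The 90/10 split described above is, at bottom, a variance decomposition. The following toy sketch shows the arithmetic; the allele counts are invented and deliberately contrived so that the within-group share lands near the 90 percent figure quoted in the text.

```python
# Toy variance partition: the allele counts below are invented and contrived so
# that the within-group share comes out near the 90 percent figure quoted above.
from statistics import mean, pvariance

groups = {
    "group_A": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_B": [1, 2, 1, 0, 2, 1, 1, 0],
    "group_C": [2, 1, 2, 1, 1, 2, 1, 0],
}

all_values = [v for g in groups.values() for v in g]
grand_mean = mean(all_values)
total_var = pvariance(all_values, mu=grand_mean)

# Law of total variance (equal group sizes):
# total = average within-group variance + variance of the group means.
within_var = mean(pvariance(g) for g in groups.values())
between_var = pvariance([mean(g) for g in groups.values()], mu=grand_mean)

print(f"within-group share : {within_var / total_var:.0%}")
print(f"between-group share: {between_var / total_var:.0%}")
```

Running the sketch prints a 90 percent within-group share and a 10 percent between-group share, mirroring the partition reported for real multilocus data.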


A second method of studying human genetic diversity is to compare ethnically diverse individuals and search for similarities and differences in their genomes. Early studies involved only a few dozen genetic loci and, as a result, did not find that individuals clustered (grouped together) according to geographic origin. Recent studies, however, have analyzed substantially more loci, yielding data with stronger statistical power. These studies focused on individuals from three distinct geographic areas: Europe, sub-Saharan Africa, and East Asia. Individuals did indeed share more genetic similarities with others from the same geographic region. Participants from Africa showed the greatest diversity, in agreement with population-level studies; another cluster consisted exclusively of Europeans, and a third comprised the Asian individuals. However, when individuals from intermediate regions, such as South Indians, were also analyzed, they showed similarities to both East Asians and Europeans, a finding that may be explained by the numerous migrations between Europe and India over the past ten thousand years. Many individuals did not cluster with their geographic cohorts, demonstrating that people cannot be sorted into neat racial groups even though they tend to share more genetic similarities with others from their region.
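A much-simplified sketch of the clustering idea follows: when allele frequencies differ only modestly between regions, averaging across many loci still makes individuals from the same region genetically closer, on average, than individuals from different regions. The region labels, locus count, and frequency shifts are invented purely for illustration and are not drawn from the studies described above.

```python
# Hypothetical sketch: simulate genotypes at many loci whose allele frequencies
# differ only slightly by region, then compare average genetic distances within
# and between regions. Regions, locus count, and frequency shifts are invented.
import random
from statistics import mean

random.seed(0)
NUM_LOCI = 1000
REGIONS = ["region_1", "region_2", "region_3"]
PEOPLE_PER_REGION = 15

# Each locus has a shared base frequency plus a small region-specific shift.
freqs = {r: [] for r in REGIONS}
for _ in range(NUM_LOCI):
    base = random.uniform(0.2, 0.8)
    for r in REGIONS:
        freqs[r].append(min(0.95, max(0.05, base + random.gauss(0, 0.1))))

def genotype(region):
    # 0, 1, or 2 copies of the variant at each locus (two random draws per locus).
    return [sum(random.random() < p for _ in range(2)) for p in freqs[region]]

people = [(r, genotype(r)) for r in REGIONS for _ in range(PEOPLE_PER_REGION)]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

within, between = [], []
for i in range(len(people)):
    for j in range(i + 1, len(people)):
        d = distance(people[i][1], people[j][1])
        (within if people[i][0] == people[j][0] else between).append(d)

print(f"average genetic distance, same region     : {mean(within):.0f}")
print(f"average genetic distance, different regions: {mean(between):.0f}")
```

The same-region average comes out smaller than the different-region average, even though individual pairs overlap heavily, which is why single individuals often fail to cluster neatly.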


Race or an individual's ancestry can sometimes provide useful information in medical decision making, as gender or age often do. Certain genetic conditions are more common in particular ethnocultural groups. For example, hemochromatosis is more prevalent among Northern Europeans and Caucasians, whereas sickle-cell disease is more often found in Africans and African Americans. Other genetic diseases, such as spinal muscular atrophy (SMA), are equally prevalent across racial groups. If a disease-causing gene is common, it is likely to be relatively ancient and thus shared across ethnicities. Moreover, some genetic conditions remain prevalent in populations because they provide an adaptive advantage, as seen in sickle-cell disease carriers being protected against malarial infection. Likewise, an individual's response to drugs may be mediated by their genetic makeup. A gene called CYP2D6 is involved in the metabolism, or breakdown, of many important drugs, such as codeine and morphine. Some individuals have no working copy of this gene, whereas others have one or two properly functioning copies. Individuals of European heritage are the most likely to have no working copies (26 percent), whereas fewer Asian (6 percent) and African (7 percent) individuals fall into this category. It may therefore be tempting to make medical decisions based on a patient's ethnic heritage, but doing so can lead to inaccurate diagnoses (missing sickle-cell disease in an Asian individual) or inappropriate drug administration (withholding codeine from a Caucasian person). Ideally, medical decisions should be based on each individual's genetic makeup rather than on ethnic heritage. Future patients may be able to request an analysis of their genome, which would aid their physicians in making genetically appropriate medical decisions.
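Using only the percentages quoted above, a small sketch shows why ethnicity is a weak stand-in for genotype in this setting. The "best guess" decision rule and the assumption that a genotype test is essentially error-free are simplifications for illustration.

```python
# Sketch using the percentages quoted above. The "best guess" rule and the
# assumption that a genotype test is essentially error-free are simplifications.
no_working_copy_rate = {   # share of each group with no working CYP2D6 copies
    "European": 0.26,
    "Asian": 0.06,
    "African": 0.07,
}

for group, rate in no_working_copy_rate.items():
    # From ethnicity alone, the best single guess is the group's majority status.
    predicted_no_copies = rate > 0.5
    error_rate = (1 - rate) if predicted_no_copies else rate
    print(f"{group:8s}: ethnicity-based guess misclassifies {error_rate:.0%} "
          f"of patients; an individual genotype test avoids this uncertainty")
```

Even in the group with the highest quoted rate, guessing from group membership alone misclassifies roughly a quarter of patients, which is the gap a direct genomic analysis is meant to close.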




Sociopolitical Implications

Race is often portrayed as a natural, biological division, the result of geographic isolation and adaptation to local environments. However, confusion between biological and cultural classification obscures perceptions of race. When individuals describe themselves as "black," "white," or "Hispanic," for example, they are usually describing cultural heritage as well as biological similarities. The relative importance of perceived cultural affiliation or genetics varies with circumstances, and examples illustrating the ambiguities are abundant. Nearly all people with African American ancestry are labeled black, even if they have a white parent. In addition, dark skin color designates one as belonging to the black race, including Africans and Aboriginal Australians, who share no common genetic lineage. Some state laws, on the books until the late 1960s, required a "Negro" designation for anyone with one-eighth black heritage (one black great-grandparent).


Unlike biological boundaries, cultural boundaries are sharp, repeatedly motivating discrimination, genocide, and war. In the early and mid-twentieth century, the eugenics movement, advocating the genetic improvement of the human species, translated into laws against interracial marriage, sterilization programs, and mass murder. Harmful effects include accusations of deficiencies in intelligence or moral character based on traditional racial classification.


The frequent use of biology to devalue certain races and excuse bigotry has profound implications for individuals and society. Blumenbach selected Caucasians (who inhabit regions near the Caucasus Mountains, a Russian and Georgian mountain range) as the original form of humans because in his opinion they were the most beautiful. All other races deviated from this ideal and were, therefore, less beautiful. Despite Blumenbach’s efforts not to demean other groups based on intelligence or moral character, the act of ranking in any form left an ill-fated legacy.


In conclusion, race remains a contentious issue both in many fields of science and within the greater society. Recent genomic studies at both the individual and the population level have shown that the majority of human genetic composition is universal and shared across all ethnocultural groups. Shared genetics is most commonly found in individuals who originate from the same geographic region. However, there is no scientific support for the concept of distinct, “pure,” and nonoverlapping races. Unfortunately throughout human history, the use and abuse of the term “race” has been pursued for sociopolitical gains or to justify bigotry toward and abuses of individuals. It is now known that human genetic diversity is a continuum, with natural selection, nonadaptive genetic change, and mating as the true driving forces for human genetic diversity.




Key Terms


eugenics: a movement concerned with the improvement of human genetic traits, predominantly by the regulation of mating

Human Genome Diversity Project: an extension of the Human Genome Project in which DNA of native people around the world is collected for study

population: a group of geographically localized, interbreeding individuals

race: a collection of geographically localized populations with well-defined genetic traits





Bibliography


Cavalli-Sforza, Luigi L. The Great Human Diasporas: A History of Diversity and Evolution. Translated by Serah Thorne. Reading, Mass.: Addison-Wesley, 1995. Argues that humans around the world are more similar than different.



_______, et al. The History and Geography of Human Genes. Princeton, N.J.: Princeton University Press, 1996. Often referred to as a “genetic atlas,” this volume contains fifty years of research comparing heritable traits, such as blood groups, from more than one thousand human populations.



Fish, Jefferson M., ed. Race and Intelligence: Separating Science from Myth. Mahwah, N.J.: Lawrence Erlbaum, 2002. An interdisciplinary collection disputing race as a biological category and arguing that there is no general or single intelligence and that cognitive ability is shaped through education.



Garcia, Jorge J. E. Race or Ethnicity? On Black or Latino Identity. Ithaca, N.Y.: Cornell University Press, 2007. Essays discuss whether racial identity matters and consider issues associated with assimilation, racism, and public policy.



Gates, E. Nathaniel, ed. The Concept of “Race” in Natural and Social Science. New York: Garland, 1997. Argues that the concept of race, as a form of classification based on physical characteristics, was arbitrarily conceived during the Enlightenment and is without scientific merit.



Gibbons, A. “Africans’ Deep Genetic Roots Reveal Their Evolutionary Story.” Science 324 (2009): 575. Describes the largest study ever conducted of African genetic diversity, which reveals Africans are descendants from 14 distinct ancestral groups that often correlate with language and cultural groups.



Gould, Stephen Jay. The Mismeasure of Man. Rev. ed. New York: W. W. Norton, 1996. Presents a historical commentary on racial categorization and a refutation of theories espousing a single measure of genetically fixed intelligence.



Graves, Joseph L., Jr. The Emperor’s New Clothes: Biological Theories of Race at the Millennium. New Brunswick, N.J.: Rutgers University Press, 2001. Argues for a more scientific approach to debates about race, one that takes human genetic diversity into account.



Herrnstein, Richard J., and Charles Murray. The Bell Curve: Intelligence and Class Structure in America. New York: Free Press, 1994. The authors maintain that IQ is a valid measure of intelligence, that intelligence is largely a product of genetic background, and that differences in intelligence among social classes play a major part in shaping American society.



Jorde, L. B., and S. P. Wooding. “Genetic Variation, Classification, and ’Race.’” Nature Genetics 36, no. 11 (2004): S28. A review article that provides an overview of human variation and discusses whether current data support historic ideas of race, and what these findings imply for biomedical research and medicine.



Kevles, Daniel J. In the Name of Eugenics: Genetics and the Uses of Human Heredity. Cambridge, Mass.: Harvard University Press, 1995. Discusses genetics both as a science and as a social and political perspective, and how the two often collide to muddy the boundaries of science and opinion.



Royal, C., and G. Dunston. “Changing the Paradigm from ’Race’ to Human Genome Variation.” Nature Genetics 36 (2004): S5-S7. Commentary suggests we begin to think outside the box and see ethnic groups as genomic diversity rather than distinct races.



Valencia, Richard R., and Lisa A. Suzuki. Intelligence Testing and Minority Students: Foundations, Performance Factors, and Assessment Issues. Thousand Oaks, Calif.: Sage Publications, 2000. Historical and multicultural perspective on intelligence and its often assumed relation with socioeconomic status, home environment, test bias, and heredity.

How can a 0.5 molal solution be less concentrated than a 0.5 molar solution?

The answer lies in the units being used. "Molar" refers to molarity, a unit of measurement that describes how many moles of a solu...