Saturday 31 August 2013

How could I start an argumentative essay about how racism still exists?


Evidence for the continued existence of racism can be found on a number of different levels. Examining racial differences in incarceration rates and rates of police brutality, contemporary racial hate groups, and representation in American media might be a good starting point for finding a variety of different types of racism.

Some evidence for the continued existence of racism can be found in Michelle Alexander's book, The New Jim Crow. In this book, Alexander uses statistics to show that African Americans are incarcerated for drug crimes at much higher rates than white Americans, despite the fact that white Americans are more likely to commit drug crimes. Alexander argues that these differing incarceration rates have created a "racial caste system" that denies opportunity to African Americans. African Americans are additionally more likely to be assaulted and even killed by police officers than non-black Americans.

Other evidence for the existence of racism includes the continued activity of hate groups such as the KKK and the lack of representation of nonwhite people in American media. According to UCLA's 2015 Hollywood Diversity Report, people of color are greatly underrepresented in all media jobs; although minorities constitute 40% of the U.S. population, white lead actors in broadcast TV shows outnumber nonwhite leads by a ratio of six to one.

What is Morgellons disease?


Causes and Symptoms

Morgellons disease is a pattern of dermatologic symptoms first described several centuries ago. Patients typically complain of insectlike sensations, such as persistent itching, stinging, biting, pricking, burning, and crawling. They often have skin lesions that can vary from very minor to disfiguring. Some patients, however, have no visible changes in the skin. In some cases, fiberlike material can be obtained from the skin lesions; patients describe this material as “fibers,” “fiber balls,” and “fuzz balls.” In other cases, “granules” can be removed from the skin, described by patients as “seeds,” “eggs,” and “sands.” The majority of patients report disabling fatigue, reduced capacity to exercise, joint pain, and sleep disturbances. Additional symptoms may include hair loss, neurological symptoms, weight gain, recurrent fever, orthostatic intolerance, tachycardia, decline in vision, memory loss, and endocrine abnormalities (such as type 2 diabetes, Hashimoto’s thyroiditis, hyperparathyroidism, or adrenal hypofunction).







The disease may occur at any age and has a wide geographic distribution. It occurs in both males and females, although cases involving elderly women living alone appear to be reported most frequently. Physical stress has been reported to be a common precursor. Rural residence and exposure to unhygienic conditions (contact with soil or waste products) are often described. Results from routine laboratory tests are often variable and inconsistent.


The vast majority of these patients have been diagnosed with psychosomatic illness. A prior psychiatric diagnosis (such as bipolar disorder, paranoia, schizophrenia, depression, or drug abuse) has been recorded in more than 50 percent of patients. Patients are obsessively focused on the skin symptoms, both in their complaints and in their measures to eradicate the disease and prevent contagion. They typically seek help from between ten and forty physicians and complain of not being understood or taken seriously. Usually, patients are intensely anxious and not open to the idea that they may have a psychological or neurological pathology. They often experience extreme frustration.



Morgellons disease has also been reported in association with conditions that are characterized by itching, such as renal disease, malignant lymphoma, or hepatic disease.


The etiology of Morgellons disease remains under investigation. So far, no examinations, biopsies, or tests have provided evidence supporting any proposed cause. Skin biopsies from patients with Morgellons disease typically reveal nonspecific pathology or an inflammatory process with no observable pathogen. In a 2012 study from the US Centers for Disease Control and Prevention (CDC), researchers found no evidence that Morgellons is caused by either an environmental substance or an infectious agent.




Treatment and Therapy

The management of patients with Morgellons disease is symptomatic and supportive. It can include skin care with baths, topical ointments, and emollients. It is important for the treating physician (in most cases, a dermatologist) to refer the patient to a psychiatrist or to prescribe appropriate psychoactive medication. Long-term treatment with pimozide (0.5 to 2 milligrams once daily) has been suggested. Risperidone and aripiprazole have also been reported to be effective. Patients may need to be persuaded that the medication will be required for months or years.




Perspective and Prospects

Morgellons disease was initially described in 1674 by Sir Thomas Browne. “The Morgellons” was the term used to describe dermal complaints, such as hairlike extrusions and sensations of movement beneath the skin, reported in children in the Languedoc region of France. By the early eighteenth century, this condition was thought to be caused by the parasite Dracunculus (later called Dracontia), and the suggested treatment consisted of filament removal from the skin. In 1682, Michel Ettmuller produced the only known drawing of “The Morgellons,” the objects associated with what was then believed to be a parasitic infestation in children.


The name “Morgellons disease” was coined in 2002 to describe patients presenting with this clinical set of symptoms and to provide an alternative to “delusion of parasitosis.” Although the condition was first described many centuries ago, much attention has recently been given to the disease because of the Internet, mass media, and the online support group Morgellons Research Foundation at www.morgellons.com. There is still debate over whether Morgellons disease is very similar, if not identical, to “delusion of parasitosis.” Thus, whether Morgellons disease is a delusional disorder or even a disease at all has remained a mystery for more than three hundred years. So far, research on Morgellons is sparse and limited. General practitioners, mental health professionals, and the general public need to be aware of the signs and symptoms of this mysterious condition. Some authors suggest the term “syndrome” instead of “disease.”




Bibliography


"CDC Study of an Unexplained Dermopathy." Centers for Disease Control and Prevention, January 25, 2012.



Fair, Brian. "Morgellons: Contested Illness, Diagnostic Compromise and Medicalisation." Sociology of Health & Illness 32, no. 4 (May 2010): 597–612.



Harvey, William T., et al. “Morgellons Disease: Illuminating an Undefined Illness—A Case Series.” Journal of Medical Case Reports 3 (2009): 8243.



Koblenzer, Caroline S. “The Challenge of Morgellons Disease.” Journal of the American Academy of Dermatology 55 (2006): 920–22.



"Morgellons Disease: Managing a Mysterious Skin Condition." Mayo Clinic, April 11, 2012.



Savely, Virginia R., Mary M. Leitao, and Raphael B. Stricker. “The Mystery of Morgellons Disease: Infection or Delusion?” American Journal of Clinical Dermatology 7, no. 1 (2006): 1–5.

How was Uncle Tom's Cabin by Harriet Beecher Stowe used in the slavery abolition movement?


The popularity of Harriet Beecher Stowe's Uncle Tom's Cabin helped galvanize the slavery abolition movement just prior to the Civil War. The novel focused on the impact of slavery on individuals, adding a personal element to the national conversation about the political and economic impacts of abolishing slavery. It provided perspective on the experiences of enslaved families and mothers to free white readers who may not have previously considered the humanity of enslaved people. This emotional appeal became an important part of the anti-slavery movement.


Another impact of the novel was its development of Christian theology as an argument against slavery. Stowe's argument resonated with readers because she thoroughly explored the nature of Christianity as it relates to slavery, concluding that Christianity is incompatible with enslaving people. This interpretation added another layer to the case against slavery and gave the abolitionist movement a further justification for immediate abolition. The novel's popularity during a time of tension between slavery supporters and abolitionists added emotional and religious elements to the conversation, galvanizing abolitionists to resist the institution of slavery.

Friday 30 August 2013

What is hysteria?


Introduction

The concept of hysteria has a rich history that dates back to early civilizations. Ancient Egyptian papyri provide the first medical records of hysteria. Egyptian physicians believed that the somatic and emotional problems of certain unstable women were caused by a migratory uterus. They prescribed the topical use of sweet- or foul-smelling herbs to entice or repel the uterus back to its original position. This theme of sexual etiology has pervaded theories of hysteria throughout the centuries.















Greco-Roman Views

There is considerable continuity between Egyptian and Greco-Roman views of hysteria. Hippocrates, often considered the founder of medicine, included the condition in the Corpus Hippocraticum (fifth to third centuries b.c.e.; The Genuine Works of Hippocrates, 1849), and solidified its connection with the uterus by assigning the appellation “hysteria,” which is derived from the Greek term for the organ, “hystera.” The Greeks were the first to connect hysteria with sexual activity; they believed that the condition occurred primarily in adult women deprived of sexual relations for extended periods, resulting in the migration of the uterus. Aromatic remedies were also prescribed by the Greeks, but the recommended remedy was to marry and, if possible, become pregnant. Some skeptics of the day denied the motility of the uterus. For example, the Roman physician Galen proposed that hysteria was instead caused by the retention of a substance analogous to sperm in the female, which was triggered by long-term abstinence.




The Dark Ages

As the Middle Ages approached, magical thinking and superstition increased. Some Christian writers, especially Saint Augustine, condemned sex as the work of such unholy spirits as incubi, succubi, and witches. Numerous behavioral afflictions, particularly the peculiar and transient symptoms of hysteria, were viewed as the result of witchcraft. Many hysterics became victims of the witch craze, a long and dark chapter in Western history. The Malleus Maleficarum (c. 1486; Malleus Maleficarum, 1928), a manual whose title means “the witches’ hammer,” was written by two Dominican monks, Heinrich Kraemer and Jakob Sprenger. This book outlined the “telltale” signs of witchcraft, which were widely used and regarded as diagnostic by medieval inquisitors. Hysterical patients became both accusers, who came forth with complaints that spells had been cast on them, and confessors, who were willing to implicate themselves by weaving accounts of their participation in strange sexual rituals and witchcraft.




The Renaissance to the Victorian Era

With the arrival of the Renaissance, views on hysteria changed to accommodate natural causes. Medical writers of the day recognized the brain as the source of the affliction. As a result, hysteria soon became a topic of interest for neurologists. In addition, physicians suggested emotional contributions to hysteria, including melancholy (which resembles modern depression). Largely as a result of the writings of physician Thomas Sydenham, hysteria came to be considered an affliction of the mind. At this time, some proposed a male analogue to hysteria termed “hypochondriasis.”


Throughout history, the symptoms of hysteria have reflected prevailing sociocultural norms and expectations. In the nineteenth century, ideal women were physically and emotionally delicate, which was reflected in their greater susceptibility to hysteria and in the nature of their symptoms. Over time, infrequent but spectacular hysterical paroxysms gave way to milder chronic symptoms. Fainting spells, euphemistically called “the vapors,” were accepted as a natural reaction of the vulnerable female to emotional distress. Some clinicians of this era considered hysteria a form of “moral insanity” and emphasized hysterical patients’ penchant for prevarication, flamboyant emotional displays, and nearly constant need for attention. Others viewed hysterics with a patronizing compassion, as unfortunate victims of the natural weakness of femininity.




Hypnosis and Psychoanalytic Underpinnings

Conceptions of hysteria were shaped substantially by the work of French neurologist Jean-Martin Charcot. Charcot emphasized the importance of suggestibility in the etiology of hysterical behavior; he found that under hypnosis, some hysterical patients’ symptoms could be made to appear or disappear largely at will. Charcot was also the first to assign significance to a pathogenic early environment in producing hysterical episodes.


The young Austrian neurologist Sigmund Freud began his career by studying the blockage of sensation by chemicals. This interest extended to hysteria (known for its anesthetic symptoms), which brought him into contact with Viennese internist Josef Breuer. Breuer’s account of the famous hysterical patient Anna O. and her treatment provided the early foundations of psychoanalytic theory. Breuer found that, under hypnosis, Anna recalled the psychological trauma that had ostensibly led to her hysteria. Moreover, he found that her symptoms improved or disappeared after this apparent memory recovery. Freud studied hypnosis under Charcot and extended Breuer’s concepts and treatments to develop his own theory of hysteria. He reintroduced sexuality into the etiology of hysteria, particularly the notion of long-forgotten memories of early sexual trauma. Freud himself eventually concluded that most or all of these “recovered” memories were fantasies or confabulations, a view shared by many modern memory researchers.




Current Status

The term “hysteria” has long been regarded as vague and needlessly pejorative, and it is no longer a part of the formal diagnostic nomenclature. The broad concept of hysteria was splintered with the appearance of the third edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-III) in 1980 and is currently encompassed by a broad array of conditions, including somatic disorders, dissociative disorders, and histrionic personality disorder. The separation of somatic from dissociative disorders in the current diagnostic system is controversial, because these two broad groupings of disorders often covary substantially with one another. Some researchers have argued that somatic and dissociative disorders should be reunited under a single broad diagnostic umbrella.


Somatic disorders are a group of ailments in which the presence of physical symptoms suggests a medical condition but in which the symptoms are produced psychologically and involuntarily. Somatic symptom disorder, conversion disorder, illness anxiety disorder (formerly known as hypochondriasis), and factitious disorder are the major conditions in this group. Dissociative disorders are characterized by disruptions in the integrated functioning of consciousness, memory, identity, or perception. Dissociative amnesia, dissociative fugue, dissociative identity disorder (formerly multiple personality disorder), and depersonalization/derealization disorder belong to this category. The causes of some dissociative disorders, particularly dissociative identity disorder, are controversial, as some writers maintain that these conditions are largely a product of inadvertent therapeutic suggestion and prompting. This controversy has been fueled by the fact that diagnoses of dissociative identity disorder have become much more frequent.


Histrionic personality disorder (HPD), formerly hysterical personality disorder, is the most direct descendant of the concept of hysteria. This disorder involves excessive emotionality and attention-seeking behaviors and is often a correlate of somatic and perhaps dissociative disorders. Because of its roots in diagnoses of hysteria and the fact that it is more commonly diagnosed in women, HPD remains controversial. While the claim that it is used to pathologize normal female behavior is widely regarded as untrue, a somewhat better-respected theory holds that HPD is not actually distinct from antisocial personality disorder but is instead the result of societal factors causing antisocial personality disorder to manifest differently in women. There was some speculation in the psychological community that the two disorders would be merged in the DSM-5, but HPD remains a separate diagnosis.


A paucity of behavior-genetic studies leaves the relative contributions of genetic and environmental factors to somatic and dissociative disorders a mystery. The precise sociocultural expressions of hysteria are also unclear. Many authors have suggested that hysteria has been manifested in a plethora of different conditions over time and across cultures, including dissociative identity disorder, somatoform disorders, purported demonic possession, mass hysteria, and even such religious practices as glossolalia (speaking in tongues). According to these authors, such seemingly disparate conditions are all manifestations of a shared predisposition that has been shaped by sociocultural norms and expectancies. “Hysteria” as a diagnostic label is no longer accepted, but its protean manifestations may be here to stay.




Bibliography


Bartholomew, Robert E., Robert J. M. Rickard, and Glenn Dawes. Mass Hysteria in Schools: A Worldwide History since 1566. Jefferson: McFarland, 2014. Print.



Bollas, Christopher. Hysteria. New York: Routledge, 2000. Print.



Chodoff, Paul, and Henry Lyons. “Hysteria, the Hysterical Personality, and 'Hysterical' Conversion.” American Journal of Psychiatry 114 (1958): 734–740. Print.



Hustvedt, Asti. Medical Muses: Hysteria in Nineteenth-Century Paris. New York: Norton, 2011. Print.



Kraemer, Heinrich Institoris, and Jakob Sprenger. Malleus Maleficarum. Trans. Montague Summers. New York: Blom, 1970. Print.



Pickren, Wade E., and Alexandra Rutherford. A History of Modern Psychology in Context. Hoboken: Wiley, 2010. Print.



Sarkar, Jaydip, and Gwen Adshead. Clinical Topics in Personality Disorder. London: Royal College of Psychiatrists, 2012. Print.



Scull, Andrew. Hysteria: The Disturbing History. Oxford: Oxford UP, 2011. Print.



Shapiro, David. “Hysterical Style.” Neurotic Styles. New York: Basic, 2000. Print.



Spanos, Nicholas P. Multiple Identities and False Memories: A Sociocognitive Perspective. Washington, DC: American Psychological Assn., 1996. Print.



Veith, Ilza. Hysteria: The History of a Disease. Northvale: Aronson, 1993. Print.



Widiger, Thomas A. The Oxford Handbook of Personality Disorders. Oxford: Oxford UP, 2012. Print.



Yarom, Nitza. The Matrix of Hysteria: Psychoanalysis of the Struggle Between the Sexes Enacted in the Body. New York: Routledge, 2005. Print.

What are quasi-experimental designs?


Introduction

The feature that separates psychology from an area such as philosophy is its reliance on the empirical method for its truths. Instead of arguing deductively from premises to conclusions, psychology progresses by using inductive reasoning, in which psychological propositions are formulated as hypotheses that can be tested by experiments. The outcome of the experiment determines whether the hypothesis is accepted or rejected. Therefore, the best test of a hypothesis is one that can be interpreted unambiguously. True experiments are considered the best way to test hypotheses, because they are the best way to rule out plausible alternative explanations (confounds) to the experimental hypothesis. True experiments are studies in which the variable whose effect the experimenter wants to understand, the independent variable, is randomly assigned to the experimental unit (usually a person); the researcher observes the effect of the independent variable through responses on the outcome measure, the dependent variable.




For example, if one wanted to study the effects of sugar on hyperactivity in children, the experimenter might ask, “Does sugar cause hyperactive behavior?” Using a true experiment, one would randomly assign half the children in a group to be given a soft drink sweetened with sugar and the other half a soft drink sweetened with a sugar substitute. One could then measure each child’s activity level; if the children who were assigned the sugar-sweetened drinks showed more hyperactivity than the children who received the other drinks, one could confidently conclude that sugar caused the hyperactivity. A second type of study, called a correlational study, would result if one investigated this hypothesis by simply asking or observing which children selected sugar-sweetened drinks and then comparing their behavior to that of the children who selected the other drinks. The correlational study, however, would not be able to show whether sugar actually caused hyperactivity; it would be equally plausible that children who are hyperactive simply prefer sugar-sweetened drinks. Such correlational studies have a major validity weakness in that they do not control for plausible rival alternative hypotheses. A rival hypothesis is one that differs from the experimenter’s preferred hypothesis and offers another reasonable explanation for the experimental results. Quasi-experimental designs stand between true experiments and correlational studies in that they control for as many threats to validity as possible.
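To make this logic concrete, the short simulation below may help. It is a minimal sketch with invented data and an invented activity model: sugar is given no real effect at all, yet the self-selected (correlational) comparison still shows a spurious difference, because more active children are modeled as more likely to choose the sugary drink, while the randomly assigned comparison shows essentially none.

```python
import random

# Minimal illustrative sketch: invented data, invented "activity" model.
# Sugar is given NO real effect, so any group difference is a confound.
random.seed(42)

baselines = [random.gauss(50, 10) for _ in range(10_000)]  # latent activity

def observed_activity(baseline):
    # Measured activity = latent activity + measurement noise (no sugar term).
    return baseline + random.gauss(0, 5)

# True experiment: drink assigned at random, independent of baseline.
exp_sugar, exp_control = [], []
for b in baselines:
    (exp_sugar if random.random() < 0.5 else exp_control).append(observed_activity(b))

# Correlational study: more active children more often pick the sugary drink.
corr_sugar, corr_control = [], []
for b in baselines:
    (corr_sugar if random.random() < b / 100 else corr_control).append(observed_activity(b))

mean = lambda xs: sum(xs) / len(xs)
print(f"true experiment difference: {mean(exp_sugar) - mean(exp_control):+.2f}")   # near 0
print(f"correlational difference:   {mean(corr_sugar) - mean(corr_control):+.2f}") # clearly positive
```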




Plausible Alternative Explanations


Experimental and Quasi-Experimental Designs for Research (1966), by Donald T. Campbell and Julian Stanley, describes the major threats to validity that need to be controlled for so that the independent variable can be correctly tested. Major plausible alternative explanations may need to be controlled when considering internal validity. (“Controlled” does not mean that the threat is not a problem; it means only that the investigator can judge how probable it is that the threat actually influenced the results.)


An external environmental event may occur between the beginning and end of the study, and this historical factor, rather than the treatment, may be the cause of any observed difference. For example, highway fatalities decreased in 1973 after an oil embargo led to the establishment of a speed limit of 55 miles per hour. Some people believed that the cause of the decreased fatalities was the 55-mile-per-hour limit. If the oil embargo caused people to drive less because they could not get gasoline or because it was higher priced, however, either of those events could be a plausible alternative explanation. The number of fatalities may have declined simply because people were driving less, not because of the speed-limit change.


Maturation occurs when natural changes within people cause differences between the beginning and end of the study. Even over short periods of time, for example, people become tired, hungry, or bored. It may be these changes rather than the treatment that causes observed changes. An investigation of a treatment for sprained ankles measured the amount of pain people had when they first arrived for treatment and then measured their pain again four weeks after treatment. Finding a reduction in reported pain, the investigator concluded that the treatment was effective. Since a sprained ankle will probably improve naturally within four weeks, however, maturation (in this case, the natural healing process) is a plausible alternative explanation.


Testing is a problem when the process of measurement itself leads to systematic changes in measured performance. A study was done on the effects of a preparatory course on performance on the American College Test (ACT), a college entrance exam. Students were given the ACT, then given a course on improving their scores, then tested again; they achieved higher scores, on the average, the second time they took the test. The investigator attributed the improvement to the prep course, when actually it may have been simply the practice of taking the first test that led to improvement. This plausible alternative explanation suggests that even if the students had not taken the course, they would have improved their scores on the average on retaking the ACT. The presence of a control group (a group assembled to provide a comparison to the treatment group results) would improve this study.


A change in the instruments used to measure the dependent variable will also cause problems. This is a problem particularly when human observers are rating behaviors directly. The observers may tire, or their standards may shift over the course of the study. For example, if observers are rating children’s “hyperactivity,” they may see later behavior as more hyperactive than earlier behavior not because the children’s behavior has changed but because, through observing the children play, the observers’ own standards have shifted. Objective measurement is crucial for controlling this threat.


Selection presents a problem when the results are caused by a bias in the choice of subjects for each group. For example, a study of two programs designed to stop cigarette smoking assigned smokers who had been addicted for ten years to program A and smokers who had been addicted for two years to program B. It was found that 50 percent of the program B people quit and 30 percent of the program A people quit. The investigators concluded that program B is more effective; however, it may be that people in program B were more successful simply because they were not as addicted to their habit as the program A participants.


Mortality, or attrition, is a problem when a differential dropout rate influences the results. For example, in the preceding cigarette study, it might be that of one hundred participants in program A, ninety of them sent back their post-test form at the end of the study; for program B, only sixty of the participants sent their forms back. It may be that people who did not send their forms back were more likely to have continued smoking, causing the apparent difference in results between programs A and B.
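One way to see why this matters is to bound each program's true quit rate under worst-case assumptions about the missing forms. This small sketch uses the hypothetical numbers from the example above:

```python
# Hypothetical numbers from the example above. Reported quit rates cover
# only returned forms; the bounds assume non-respondents all smoked (low)
# or all quit (high).

def quit_rate_bounds(n_total, n_returned, reported_rate):
    quitters = reported_rate * n_returned
    low = quitters / n_total
    high = (quitters + (n_total - n_returned)) / n_total
    return low, high

print("program A:", quit_rate_bounds(100, 90, 0.30))  # (0.27, 0.37)
print("program B:", quit_rate_bounds(100, 60, 0.50))  # (0.30, 0.70)
# The intervals overlap, so attrition alone could explain B's apparent edge.
```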


When subjects become aware that they are in a study, and awareness of being observed influences their reactions, reactivity has occurred. The famous Hawthorne studies on a wiring room at a Western Electric plant were influenced by this phenomenon. The investigators intended to do a study on the effects of lighting on work productivity, but they were puzzled by the fact that any change they made in lighting—increasing it or decreasing it—led to improved productivity. They finally decided it was the workers’ awareness of being in an experiment that caused their reactions, not the lighting level.


Statistical regression is a problem that occurs when subjects are selected to be in a group on the basis of their extreme scores (either high or low) on a test. Their group can be predicted to move toward the average the next time they take the test, even if the treatment has had no effect. For example, if low-scoring students are assigned to tutoring because of the low scores they achieved on a pretest, they will score higher on the second test (a post-test), even if the tutoring is ineffective.
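The artifact is easy to demonstrate with a simulation. In this invented sketch, the "tutoring" given between the two tests does nothing at all, yet the group selected for low pretest scores still improves on the post-test:

```python
import random

# Illustrative sketch: scores are invented, and nothing changes between
# the tests; ability is stable and only the luck component is redrawn.
random.seed(0)

ability = [random.gauss(500, 50) for _ in range(10_000)]
pretest = [a + random.gauss(0, 40) for a in ability]   # ability + luck
posttest = [a + random.gauss(0, 40) for a in ability]  # ability + fresh luck

# Assign the lowest-scoring 10% on the pretest to (ineffective) tutoring.
cutoff = sorted(pretest)[len(pretest) // 10]
selected = [i for i, score in enumerate(pretest) if score <= cutoff]

pre = [pretest[i] for i in selected]
post = [posttest[i] for i in selected]
print(f"selected pretest mean:   {sum(pre) / len(pre):.1f}")   # far below 500
print(f"selected post-test mean: {sum(post) / len(post):.1f}") # regresses upward
```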




External Threats to Validity

External threats to validity constitute the other major validity issue. Generally speaking, true experiments control for internal threats to validity by experimental design, but external threats may be a problem for true experiments as well as quasi-experiments. Since a scientific finding is one that should hold true in different circumstances, external validity (the extent to which the results of any particular study can be generalized to other settings, people, and times) is a very important issue.


An interaction between selection and treatment can cause an external validity problem. For example, since much of the medical research on the treatment of diseases has been performed by selecting only men as subjects, one might question whether those results can be generalized to women. The interaction between setting and treatment can be a problem when settings differ greatly. For example, can results obtained on a college campus be generalized to a factory? Can results from a factory be generalized to an office? The interaction of history and treatment can be a problem when the specific time the experiment is carried out influences people’s reaction to the treatment. The effectiveness of an advertisement for gun control might be judged differently if measured shortly after an assassination or a mass murder received extensive media coverage.




Examining Social Phenomena and Programs


Quasi-experimental designs have been most frequently used to examine the effects of social phenomena and social programs that cannot be or have not been investigated by experiments. For example, the effects of the public television show Sesame Street have been the subject of several quasi-experimental evaluations. One initial evaluation of Sesame Street concluded that it was ineffective in raising the academic abilities of poor children, but a reanalysis of the data suggested that statistical regression artifacts had contaminated the original evaluation and that Sesame Street had a more positive effect than was initially believed. This research showed the potential harm that can be done by reaching conclusions without controlling for all the threats to validity. It also showed the value of doing true experiments whenever possible.


Many of the field-research studies carried out on the effects of violent television programming on children’s aggressiveness have used quasi-experimental designs to estimate the effects of violent television. Other social-policy studies have examined the effects of no-fault divorce laws on divorce rates, of crackdowns on drunken driving on the frequency of drunken driving, and of strict speed-law enforcement on speeding behavior and accidents. The study of the effects of speed-law enforcement represents an excellent use of the “interrupted time series” quasi-experimental design, which can be used when a series of pretest and post-test data points is available. In this case, the governor of Connecticut abruptly announced that people convicted of speeding would have their licenses suspended for thirty days on the first offense, sixty days on a second offense, and longer for any further offenses. By comparing the number of motorists speeding, the number of accidents, and the number of fatalities during the period before the crackdown with the same measures in the period after the crackdown, the investigators could judge how effective the crackdown was. The interrupted time series design provides control over many plausible rival alternative hypotheses and is thus a strong quasi-experimental design. The investigators concluded that the crackdown probably did have a somewhat positive effect in reducing fatalities but that a regression artifact may also have influenced the results; the regression artifact in this study would be a decrease in fatalities simply because the fatality rate was unusually high before the crackdown.
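In its simplest form, the interrupted time series compares the level of the outcome series before and after the intervention point. The sketch below uses invented monthly counts; a real analysis would also model trend, seasonality, and the regression artifact discussed above.

```python
# Invented monthly fatality counts around a hypothetical crackdown date;
# the simplest interrupted-time-series summary is the change in level.
before = [324, 310, 339, 301, 317, 298, 330, 312]
after = [285, 297, 270, 288, 262, 279, 291, 268]

mean_before = sum(before) / len(before)
mean_after = sum(after) / len(after)
print(f"mean level before: {mean_before:.1f}")
print(f"mean level after:  {mean_after:.1f}")
print(f"estimated change:  {mean_after - mean_before:+.1f}")
```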




Use in Organizational Psychology


Organizational psychology has used quasi-experimental designs to study such issues as the effects of strategies to reduce absenteeism in businesses, the effects of union-labor cooperation on grievance rates, and the effects of different forms of employee ownership on job attitudes and organizational performance. The last of these studies compared three different conversions to employee ownership and found that employee ownership had positive effects on a company to the extent that it enhanced participative decision making and led to group work norms supportive of higher productivity. Quasi-experimental studies are particularly useful in circumstances where it is impossible to carry out true experiments but policymakers still want to reach causal conclusions. A strong knowledge of quasi-experimental design principles helps prevent incorrect causal conclusions.




Research Approach

Psychology has progressed through the use of experiments to establish a base of facts that support psychological theories; however, there are many issues about which psychologists need to have expert knowledge that cannot be investigated by performing experiments. There are not many social situations, outside a university laboratory, in which a psychologist can randomly assign individuals to different treatments. For example, psychologists cannot dictate to parents which television programs their children will watch, they cannot tell the managers of a group of companies how to implement an employee stock option plan, and they cannot make a school superintendent randomly assign different classes to different instructional approaches. All these factors in the social environment vary naturally, and quasi-experimental designs can be used to extract the most knowledge possible from the natural environment.


The philosophy of science associated with traditional experimental psychology argues that unless a true experiment is done it is impossible to reach any causal conclusion. The quasi-experimental view argues that a study is valid unless and until it is shown to be invalid. What is important in a study is the extent to which plausible alternative explanations can be ruled out. If there are no plausible alternative explanations to the results except for the experimenter’s research hypothesis, then the experimenter’s research hypothesis can be regarded as true.




Evolution of Practice

The first generally circulated book that argued for a quasi-experimental approach to social decision making was William A. McCall’s How to Experiment in Education, published in 1923. Education has been one of the areas where there has been an interest in and willingness to carry out quasi-experimental studies. Psychology was more influenced by the strictly experimental work of Ronald A. Fisher that was being published at around that time, and Fisher’s ideas on true experiments dominated psychological methods through the mid-1950s.


The quasi-experimental view gained increasing popularity during the 1960s as psychology was challenged to become more socially relevant and make a contribution to understanding the larger society. At that time, the federal government was also engaged in many social programs, sometimes collectively called the War on Poverty, which included housing programs, welfare programs, and compensatory educational programs. Evaluation of these programs was needed so that what worked could be retained and what failed could be discontinued. There was an initial burst of enthusiasm for quasi-experimental studies, but the ambiguous results that they produced were discouraging, and this has led many leading methodologists to re-emphasize the value of true experiments.


Rather than hold up the university-based laboratory true experiment as a model, however, they called for implementing social programs and other evaluations using random assignment to treatments in such a way that stronger causal conclusions could be reached. The usefulness of true experiments and quasi-experiments was also seen to be much more dependent on psychological theory: the pattern of results obtained by many different types of studies became a key factor in the progress of psychological knowledge. The traditional laboratory experiment, on which many psychological theories are based, was recognized as being very limited in external validity, and the value of true experiments—carried out in different settings, with different types of people, and replicated many times over—was emphasized. Since politicians, business managers, and other social policymakers have not yet appreciated the advantages in knowledge to be gained by adopting a true experiment approach to social innovation, quasi-experimental designs are still an important and valuable tool in understanding human behavior.




Bibliography


Abbott, Martin, and Jennifer McKinney. Understanding and Applying Research Design. Hoboken: Wiley, 2013. Digital file.



Campbell, Donald Thomas, and Julian C. Stanley. Experimental and Quasi-Experimental Designs for Research. Chicago: Rand McNally, 1966. Reprint. Belmont: Wadsworth, 2011. Print.



Cook, Thomas D., and Donald T. Campbell. Quasi-Experimentation: Design and Analysis Issues for Field Settings. Chicago: Rand, 1979. Print.



Cronbach, Lee J. Designing Evaluations of Educational and Social Programs. San Francisco: Jossey, 1987. Print.



Kerlinger, Fred N., and Howard B. Lee. Foundations of Behavioral Research. 4th ed. Belmont: Wadsworth, 2000. Print.



Maruyama, Geoffrey. Research Methods in Social Relations. [N.p.]: Wiley, 2014. Digital file.



Thyer, Bruce A. Quasi-Experimental Research Designs. New York: Oxford UP, 2012. Digital file.



Trochim, William M. K., ed. Advances in Quasi-Experimental Design and Analysis. San Francisco: Jossey-Bass, 1986. Print.

From the Kantian perspective, is assisted suicide morally commendable? Why?


In answering this question, we first have to remember that Kant’s ethics say that specific acts are not morally commendable or condemnable in and of themselves. There is no such thing as an act that is morally good or bad.  Instead, our motives make our actions good or bad.  Therefore, in order to say whether assisted suicide is morally commendable, we would have to know why the person is helping someone else commit suicide.  The person’s reasons for helping the other person die will determine whether the act is moral.


According to Kant, we have to use the categorical imperative to determine whether our actions are morally commendable.  The categorical imperative states that one must



Act only on that maxim through which you can at the same time will that it should become a universal law [of nature].



In other words, we have to look at what rule we are using to justify a given action.  We then have to ask if we would be willing to have that rule apply to everyone in our situation.


From this, we can see that assisted suicide could be morally good or morally bad.  If I help my father die because I want a larger inheritance and I want it sooner, my maxim/rule is something like “Person A should help Person B die if Person B’s death will help Person A financially.”  This is surely a terrible universal law as it gives anyone the right to help me kill myself just so they can get money.  In such a case, assisted suicide is not morally commendable.


But now let us say that my father, who is of sound mind, wants to die because he is in terrible pain and his life is miserable.  Let us further say that I will not benefit from his death in any way.  Now, my maxim/rule is “Person A should help Person B die if Person B is suffering terribly and if Person B freely and competently decides that they want to die.”  In this case, we can at least argue that assisted suicide is morally commendable.  This maxim is one that some people could accept as a universal law of nature (though some will not).


When thinking about Kantian ethics, we must remember that acts are not morally good or bad.  It is only our motives for actions that are commendable or blameworthy. 

Thursday 29 August 2013

What are the main events in Siobhan Dowd's The London Eye Mystery?

One of the first major events in Siobhan Dowd's The London Eye Mystery occurs when the family reaches the Eye. Kat is surprised when a stranger approaches them and offers to give them his one ticket. After thinking things over, Kat decides that Salim should have the ticket since he wants to ride the Eye so badly and is soon leaving England.

A second major event occurs when Salim never emerges from his pod. Ted and Kat keep a very careful eye on Salim's pod, and Ted is certain it is the one due to land at 12:02 PM. When the pod arrives and all passengers exit, Salim is not among them. He is also not in any of the later pods.

A third major event is the moment Detective Inspector Pearce reads Marcus's statement. Marcus's statement confirms Ted's deductions that Salim had switched identities with the girl wearing the "sunglasses and a pink fluffy jacket," who also rode in Salim's pod (50). Marcus's testimony further explains that the person the kids thought was a girl in a pink jacket boarding the pod was actually Marcus, and that Salim later emerged wearing the girl's disguise. According to Marcus, he and Salim intentionally pulled a con that day on the Eye because Salim wanted to run away back to Manchester, since he did not want to move to New York City with his mother. Marcus states in his testimony that Salim changed his mind that day, and that the last time he saw the still-missing Salim, Salim was waving goodbye and about to head back to the Sparks' home.

A fourth major event occurs when Ted and Kat figure out exactly what happened to Salim that day as he arrived on the Sparks' city block. Ted figures out that Salim had been drawn to the tall, abandoned building called the Barracks, which was soon to be demolished, and couldn't resist exploring it and taking photographs from its highest point:


[In the pod he] wasn't looking at Manhattan, Kat. Or the sun. He was looking at something that reminded him of Manhattan. A big tower block. He was looking at the Barracks (303).



Ted further figures out that their father, who was responsible for overseeing and locking up the building, had unknowingly locked Salim inside that night. Salim had been trapped there for three days and is finally found by the Sparks at the end of the story.

What is a thyroidectomy?


Indications and Procedures

A thyroidectomy is performed in order to remove thyroid tumors, to treat thyrotoxicosis (a condition in which the thyroid gland secretes very large amounts of thyroid hormone), to evaluate a mass, or to excise an enlarged thyroid that is causing problems with breathing, swallowing, or speaking.



The thyroid gland is located at the base of the neck. It is composed of two lobes that straddle the trachea (windpipe) and a third lobe that lies in the middle of the neck. Surgery on the thyroid is usually performed under general anesthesia. The patient’s neck is extended, and an incision along a natural fold or crease is made through the skin, platysma muscle, and fascia that lie over the thyroid. The muscle is cut high up to minimize damage to the nerve that controls it.


The thyroid is then carefully freed from surrounding structures (blood vessels, nerves, and the trachea). One at a time, the upper portion of each lobe is freed to allow identification of the veins that take blood from the thyroid. The veins are ligated (tied) in two places and cut between the ties. The ligaments that suspend the thyroid are cut next. It is important for the surgeon to avoid damaging the superior laryngeal nerve. Once the nerve has been protected, other blood vessels are clamped, tied, and cut. A similar procedure is followed for the lower lobes: ligating and cutting veins, protecting the inferior and recurrent laryngeal nerves, and freeing the remainder of the thyroid lobes.


Four parathyroid glands, each about the size of a pea, are embedded in the thyroid gland. At least one of these must be preserved, since they play a vital role in regulating calcium. Once these glands are identified, the tissue of the thyroid is cut away, leaving the parathyroids intact.


The remnants of the thyroid gland are folded in and sutured to the trachea to control bleeding. A final inspection for bleeding is made. The fascia is sutured closed over the thyroid; any muscles that were cut are sewn back together. Finally, the edges of skin are carefully brought together and sutured with very fine material; occasionally, clips are used. The instruments needed for a tracheostomy are left nearby to cope with any emergency that might occur during the next twenty-four hours.




Uses and Complications

Approximately one week after a thyroidectomy, the patient returns for a postoperative checkup, and sutures or clips are removed. Many, but not all, individuals having this procedure must take a synthetic thyroid hormone to make up for the tissue removed during the thyroidectomy.


Thyroid surgery is not uncommon. In the past, radiation was used to shrink the thyroid, but this procedure led to many cancers and has been discontinued. Laser techniques may reduce the size of the incision, thus reducing the size of the resulting scar in the neck.


Complications that may occur as a result of a thyroidectomy include bleeding into the neck, causing difficulty breathing; a surge of thyroid hormones into the blood, called thyroid storm or thyrotoxic crisis; and injury to the vocal cords, which can result in changes in voice pitch.





What is Gilbert's syndrome?


Risk Factors

Individuals who have family members with Gilbert’s syndrome (autosomal dominant trait) are at risk for the disorder. People who have the disorder have a 50 percent chance of passing it on to each of their children. Males are also at an increased risk of developing the syndrome.












Etiology and Genetics

Patients with Gilbert’s syndrome have reduced activity of an enzyme known as bilirubin glucuronosyltransferase. This is a complex enzyme composed of several polypeptides, and the molecular defect is in UGT1A10 (also known as UGT1J), the gene that encodes the A10 polypeptide of the UDP glucuronosyltransferase 1 family (formerly known as the UDP glycosyltransferase 1 family). This gene is found on the long arm of chromosome 2 at position 2q37. Molecular genetics studies have revealed the interesting fact that the mutation is not within the coding region of the gene itself but rather in a controlling element called the promoter region. A two-base-pair repeat (insertion) in the mutated promoter causes drastically reduced levels of the protein to be synthesized.


Bilirubin is always present in small amounts in the bloodstream, since it is a waste product produced by the breakdown of hemoglobin in old red blood cells. In healthy individuals, the bilirubin is broken down further in the liver and excreted. This process is greatly slowed in individuals with Gilbert’s syndrome, so bilirubin accumulates in the blood and may cause yellowing of the skin or eyes.


In most cases, Gilbert’s syndrome is inherited as an autosomal recessive disorder, meaning that both copies of the gene must be deficient in order for the individual to show the trait. Typically, an affected child is born to two unaffected parents, both of whom are carriers of the recessive mutant allele. The probable outcomes for children whose parents are both carriers are 75 percent unaffected and 25 percent affected. If one parent has Gilbert’s syndrome and the other is a carrier, there is a 50 percent probability that each child will be affected. In some cases, however, carrier individuals will show some features of the syndrome even though only one of their two copies of the gene is mutant. Other studies have noted that, for unexplained reasons, some people who have two mutated copies of the gene do not develop Gilbert’s syndrome.
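These ratios follow from simple allele counting, which a tiny sketch can verify (the labels "G" and "g" are hypothetical shorthand for the normal and mutant promoter alleles):

```python
from itertools import product

# Hypothetical labels: "G" = normal allele, "g" = mutant promoter allele.
# Under recessive inheritance, a child is affected only with two "g" copies.

def offspring(parent1, parent2):
    # All equally likely allele combinations (a 2 x 2 Punnett square).
    return ["".join(pair) for pair in product(parent1, parent2)]

def fraction_affected(children):
    return sum(child.count("g") == 2 for child in children) / len(children)

print(fraction_affected(offspring("Gg", "Gg")))  # 0.25: two carrier parents
print(fraction_affected(offspring("gg", "Gg")))  # 0.50: affected parent x carrier
```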




Symptoms

There often are no symptoms of Gilbert’s syndrome. However, people who do have symptoms may experience jaundice (yellowing) of the whites of the eyes, jaundice of the skin, abdominal pain, loss of appetite, fatigue and weakness, and darkening of the urine.




Screening and Diagnosis

The doctor will ask a patient about his or her symptoms and medical history and will perform a physical exam. Tests may include a complete blood count (CBC) and liver function tests. Blood tests are also done to rule out more serious liver diseases, such as hepatitis. Sometimes, a liver biopsy may also need to be done to rule out other liver diseases.




Treatment and Therapy

No treatment is necessary for Gilbert’s syndrome. Symptoms usually will disappear on their own.




Prevention and Outcomes

There is no way to prevent Gilbert’s syndrome. However, patients may prevent symptoms if they avoid skipping meals or fasting. Individuals should also avoid dehydration (too little fluid in the body), vigorous exercise, repeated bouts of vomiting, and stress or trauma.





Wednesday 28 August 2013

What is duty to warn?



The duty to warn is the legal responsibility of health care providers to forgo their commitment to confidentiality when they determine that a patient is at risk of harming him- or herself or others. In practice, the duty to warn is most commonly associated with the mental health profession and the psychologists, counselors, and others who are tasked with assessing the mental well-being and stability of their patients. While various duty-to-warn laws have been enacted by state governments since the 1970s, concerns over the potentially harmful implications of such legislation still linger.






Overview

Under normal circumstances and regardless of field, medical professionals are expected to uphold the principle of doctor-patient confidentiality at all times. Health care providers are required to keep patients' personal information private and may not divulge any such information without proper consent. In certain situations, however, these providers may be ethically or legally bound to violate a patient's normal right to confidentiality to ensure the patient's safety and the safety of others. More often than not, this sort of breach occurs when it becomes apparent that a patient threatens or seems likely to do harm to him- or herself or others or presents some significant risk to public health. In such instances, health care providers may be required to warn the appropriate authorities and/or potential victims of any dangers posed by patients.


Although it applies to all health care providers, the duty to warn is of greatest concern to mental health professionals. Because they work with patients who may be unable to control their own impulses or may lack the intellectual capacity to understand the consequences of their actions, mental health professionals must occasionally make the decision to forgo confidentiality in favor of acting on their duty to warn. This, it is supposed, will help to prevent others from falling victim to acts of violence perpetrated by mentally unstable aggressors.




Historical Background

The duty to warn first arose in the legal aftermath of a criminal case involving the murder of a young woman at the hands of a mentally unstable admirer. In October 1969, Prosenjit Poddar, an Indian student enrolled in a graduate program at the University of California at Berkeley, killed Tatiana Tarasoff, a fellow student who had scorned his romantic advances. The two had become friends a year earlier but had a falling-out when Tarasoff admitted that she did not have feelings for Poddar. Following this admission, Poddar suffered a severe mental breakdown. During the summer of 1969, Poddar underwent psychological treatment while Tarasoff spent a few months in Brazil. During the course of his treatment, Poddar told Dr. Lawrence Moore that he intended to kill Tarasoff when she returned. While Moore did notify campus authorities about the threat, he did not inform Tarasoff or her family. Shortly after Tarasoff's return, Poddar followed through on his threat, shooting and stabbing her to death.


After Poddar was convicted in a criminal proceeding and later deported back to India, Tarasoff's family filed a wrongful-death civil lawsuit against the university and its health department for not warning their daughter about Poddar's threat. The case eventually reached the California Supreme Court, which ruled that Moore's duty extended beyond preserving Poddar's confidentiality: he also had a duty to warn Tarasoff of the potential danger she faced. In the years that followed, the California Supreme Court's precedent-setting decision in the Tarasoff case led to the enactment of duty-to-warn laws (called the Tarasoff rule in California) and other similar regulations across the country.




In Practice

As variations on the concept of duty to warn have become law in most states, mental health practitioners have faced increasing pressure to accurately identify patients who present a legitimate risk to themselves or others. In most cases, potentially dangerous patients are identified through the careful use of risk assessment tools that help professionals gauge a subject's likelihood of committing an act of self-harm or outward violence. Regardless of their approach, however, practitioners ultimately have sole responsibility for determining whether a patient's behavior has become enough of a public threat to warrant overriding doctor-patient confidentiality and acting on the duty to warn.




Alternative Applications

While the duty to warn is most frequently thought of in relation to mental health, it also applies in a variety of other circumstances. A doctor who is treating a patient diagnosed with a dangerous and easily communicable disease—the Ebola virus, for example—may have to reveal details of the patient's condition to the appropriate authorities to prevent a public health crisis. In another scenario, a doctor who believes that a patient under his or her care may have been subjected to abuse or neglect might have to report such suspicions to law enforcement. In such instances, the duty to warn supersedes the need to maintain confidentiality.




Criticism

Though various duty-to-warn guidelines have been adopted in all fifty states, not all health care providers agree that such an approach is beneficial for everyone involved. Some argue that making duty to warn mandatory by law leads to an overabundance of exceptions to the right of confidentiality. Others also argue that mandating duty to warn may discourage people from seeking the help they need or from being open about any potentially dangerous intentions. Finally, some concern exists that the risk of legal liability related to the duty to warn may discourage providers from treating potentially problematic patients.




Bibliography


"Mental Health Professionals' Duty to Warn." National Conference of State Legislatures. Web. 27 Jan. 2015. http://www.ncsl.org/research/health/mental-health-professionals-duty-to-warn.aspx



Millner, Vaughn S. "Duty to Warn and Protect." Encyclopedia of Counseling. Vol. 2. Eds. Frederick T. L. Leong, Elizabeth M. Altmaier, and Brian D. Johnson. Thousand Oaks, CA: Sage Publications, 2008. 575–78. Print.



Stankowski, Joy E. "Duty to Warn." Wiley Encyclopedia of Forensic Science. Vol. 2. Eds. Allan Jamieson and Andre Moenssens. Chichester, UK: Wiley, 2009. 885–90. Print.



Weiss, Marcia J. "Tarasoff Rule." Forensic Science. Vol. 3. Eds. Ayn Embar-Seddon and Allan D. Pass. Pasadena, CA: Salem Press, 2009. 968–71. Print.

What is scleroderma? |


Causes and Symptoms


Scleroderma is a connective tissue disease characterized by fibrosis and hardening of the skin and internal organs. The word “scleroderma” is derived from the Greek sclero, meaning “hard,” and derma, meaning “skin.” Women are affected about four times more often than men, and the disease generally affects persons between the ages of thirty and fifty.



It is believed that scleroderma is autoimmune in origin. The exact cause of the disease has yet to be discovered, but an overproduction of collagen has been observed in skin biopsies of patients with scleroderma. Two types of the disease have been recognized: localized scleroderma and systemic sclerosis. The localized form is more common in children and can affect small areas of skin or muscle, or it can be widespread, manifesting as morphea and/or linear scleroderma. Morphea affects the skin with gradually enlarging inflammatory plaques or patches; the lesions typically last for months to years and may regress spontaneously over time. The skin over the lesions appears firm to hard, and the lesions themselves are ivory or yellow in color. Linear scleroderma usually affects a limb or the forehead and, if present early in childhood, can result in permanent limb shortening. This type may also affect the muscles and joints, causing limited joint mobility.


Systemic sclerosis, on the other hand, is a more widespread disease that affects multiple organs, such as the skin, esophagus, gastrointestinal tract, muscles, joints, blood vessels, heart, kidneys, lungs, and other internal organs. This disease usually manifests in adults with one or more of the following symptoms: Raynaud’s phenomenon (extreme sensitivity of the extremities to cold temperatures, with a tingling sensation and the limb turning blue, red, or white upon exposure to cold); thickening of the skin, which takes on a leathery, shiny appearance (sclerodactyly when it affects the fingers); fibrosis and thickening of the joints with decreased mobility; swelling of the hands and feet, with pain and stiffness of the joints; and orofacial abnormalities from thickening of the skin. Some patients may experience symptoms of esophageal, heart, lung, or kidney disease. Systemic sclerosis should be suspected whenever a patient, especially a middle-aged woman, reports difficulty swallowing and heartburn. Patients may complain of nonspecific problems, such as bloating of the abdomen, weight loss, fatigue, generalized weakness, diarrhea, constipation, shortness of breath, and vague aching of joints and muscles. They may also exhibit dryness and redness of the conjunctiva and mucous membranes (Sjögren’s syndrome, or keratoconjunctivitis sicca).


Some patients experience the CREST syndrome, an acronym for calcinosis, Raynaud’s phenomenon, esophageal dysmotility, sclerodactyly, and telangiectasia. Another form of the disease is limited cutaneous systemic sclerosis, which affects mainly the skin of the hands, face, feet, and forearms, with Raynaud’s phenomenon as the primary symptom.


Diagnosis of the disease is difficult, especially in the initial stages, as the symptoms are common to a variety of immunologically mediated diseases such as rheumatoid arthritis, Sjögren’s syndrome, and systemic lupus erythematosus (SLE). The diagnosis is mainly based on clinical findings, an elevated erythrocyte sedimentation rate (ESR), and a skin biopsy showing elevated collagen levels. Sometimes, a positive antinuclear antibody test and a positive rheumatoid factor test may be seen. About 30 percent of patients are positive for the Scl-70 antibody, which is highly specific for the disease. X-rays and lung function tests are used to determine the extent of the disease.


Those with the systemic form are prone to various complications, including heart failure, kidney failure, respiratory problems, and intestinal malabsorption.




Treatment and Therapy

As of the beginning of the twenty-first century, no cure for scleroderma had been found. Each symptom, however, can be treated effectively, and quality of life can be greatly improved if the disease is detected early in its course. Because of the severity of its course and the difficulty of diagnosis, the disease is primarily managed by rheumatologists and dermatologists. Calcium-channel blockers are used to decrease the symptoms of Raynaud’s phenomenon; joint pain and stiffness can be treated with nonsteroidal anti-inflammatory drugs (NSAIDs); esophageal dysmotility and the resulting heartburn are treated with antacids and antireflux measures; lung inflammation and fibrosis can be treated with cyclophosphamide; and heart failure and renal failure are treated with the appropriate drugs. Penicillamine and corticosteroids are used to treat the fibrosis seen in the disease. In addition, physical and occupational therapy is instituted to improve joint mobility.


Morphea or localized scleroderma can be managed by the application of cortisone ointment to the lesions. This will not reverse or treat the disease completely, but it appears to slow the progression and provide symptomatic relief. Patients are also advised to use sunscreen lotions and moisturizers to soften the skin and prevent sunburn. Plastic surgery may be employed to correct serious deformities.




Perspective and Prospects

Scleroderma manifests differently in each patient, which makes diagnosis all the more difficult and complicated. The disease is not contagious, and it is not believed to be heritable. It is thought that certain people are inherently more susceptible and develop the disease only when environmental or physical triggers, such as stress, come into play. The prognosis for the localized form of scleroderma is good, with the lesions often resolving spontaneously, and the five-year survival rate for those with systemic disease is 80 to 85 percent. Many clinical trials are under way for such treatments as the use of stem cells as a “rebooting” mechanism, alpha interferon, ultraviolet therapy, and even psychotherapy. These approaches appear promising and aim at least to improve the quality of life of patients with scleroderma, if not to cure the disease.




Bibliography


Alan, Rick, and Rosalyn Carson-DeWitt. "Scleroderma." Health Library, Sept. 1, 2011.



Brown, Michael. Scleroderma: A New Role for Patients and Families. Los Angeles: Scleroderma Press, 2002.



Fauci, Anthony S., et al., eds. Harrison’s Principles of Internal Medicine. 18th ed. New York: McGraw-Hill, 2012.



Frazier, Margaret Schell, and Jeanette Wist Drzymkowski. Essentials of Human Diseases and Conditions. 5th ed. St. Louis, Mo.: Saunders/Elsevier, 2013.



Mayes, Maureen D. The Scleroderma Book: A Guide for Patients and Families. Rev. ed. New York: Oxford University Press, 2005.



Rakel, Robert E., ed. Textbook of Family Practice. 8th ed. Philadelphia: W. B. Saunders, 2011.



"Scleroderma." MedlinePlus, May 13, 2013.



Tapley, Donald F., et al., eds. The Columbia University College of Physicians and Surgeons Complete Home Medical Guide. Rev. 3d ed. New York: Crown, 1995.



"What Is Scleroderma?" National Institute of Arthritis and Musculoskeletal and Skin Diseases, Aug. 2010.

Tuesday 27 August 2013

On a lever what will happen if the fulcrum is moved closer to the effort?

A lever is a simple machine used to lift or move heavy loads with relatively small forces. It pivots on a fixed support or hinge known as a fulcrum, about which a rigid bar moves. Effort is the force applied to the lever, and resistance is the load that needs to be lifted or moved. The distance between the fulcrum and the effort end is known as the length of the effort arm; the distance between the fulcrum and the load, or resistance, end is known as the length of the resistance arm. The lever's capability is called mechanical advantage, which is defined as:

Mechanical advantage = length of effort arm / length of load arm


We want the mechanical advantage to be greater than 1, so that a smaller effort can move a larger load.


It also follows from this equation that we prefer a greater length in the effort arm and a smaller length in the load arm. Thus, the fulcrum is ideally placed close to the load end and as far away as possible from the effort end. 


Thus, if we move the fulcrum close to the effort end, the mechanical advantage decreases and we have to use more effort to move the same load.
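
To see the effect numerically, here is a minimal Python sketch comparing the two fulcrum positions for the same lever and load. All of the numbers (a 2-meter bar and a 600-newton load) are made up purely for illustration:

# Hypothetical numbers: a 2 m lever lifting a 600 N load.

def mechanical_advantage(effort_arm_m, load_arm_m):
    # Mechanical advantage = length of effort arm / length of load arm.
    return effort_arm_m / load_arm_m

load_n = 600.0  # weight to lift, in newtons (assumed)

# Fulcrum near the load: effort arm 1.5 m, load arm 0.5 m.
ma = mechanical_advantage(1.5, 0.5)        # 3.0
print("Effort needed:", load_n / ma, "N")  # 200.0 N

# Fulcrum moved toward the effort: effort arm 0.5 m, load arm 1.5 m.
ma = mechanical_advantage(0.5, 1.5)        # about 0.33
print("Effort needed:", load_n / ma, "N")  # 1800.0 N

Moving the fulcrum toward the effort end cuts the mechanical advantage from 3 to about one third, so the required effort grows ninefold.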


Hope this helps. 

What is ionizing radiation? |





Related cancers:
Lung, bone, bone marrow (leukemia), thyroid, breast, liver, skin





Definition:
Ionizing radiation is energy released from the disintegration of unstable atomic nuclei during radioactive decay. It may take the form of X radiation or of various subatomic particles emitted from both natural and artificial sources. Some substances decay faster, and are correspondingly less stable, than others.
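
To make the notion of decay rate concrete, the fraction of a radioactive sample remaining after a given time follows the half-life relation: remaining = (1/2)^(elapsed time / half-life). The short Python sketch below illustrates this; the only physical constant used is the roughly 3.8-day half-life of radon-222:

# Fraction of a radioactive sample remaining after a given time,
# computed from the half-life relation.

def remaining_fraction(elapsed_days, half_life_days):
    # remaining = 0.5 ** (elapsed time / half-life)
    return 0.5 ** (elapsed_days / half_life_days)

# Radon-222 has a half-life of roughly 3.8 days.
for days in (3.8, 7.6, 38.0):
    print(days, "days:", remaining_fraction(days, 3.8))
# Prints about 0.5, 0.25, and 0.001: the shorter a substance's
# half-life, the faster it decays and the less stable it is.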



Exposure routes:

Inhalation, ingestion, direct external exposure



Where found: Ionizing radiation is both naturally occurring and artificially produced. Radon gas accounts for about 55 percent of exposure; much of the rest of the natural component comes from radioactive elements in the earth. Non-natural sources include military weapons, nuclear reactors, and electronic products. Technologically enhanced naturally occurring radioactive materials (TENORMs) concentrate ionizing radiation in solid sludge, water treatment facilities, aluminum oxide reactions, fertilizers, coal ash, and concrete aggregates; additional artificial sources include diagnostic medical procedures, cable insulation, security screening equipment, and equipment used to kill microorganisms in food.



At risk: Children, pregnant women, industry workers, medical personnel, military personnel, residents in high background radiation areas



Etiology and symptoms of associated cancers: Ionizing radiation, regardless of the source, damages the deoxyribonucleic acid (DNA), the genetic material inside cells. The damage can take the form of chromosomal breaks, cell mutations, and outright cell transformation, and its consequences range from immediate cell death to transformation into cells that become malignant over time. The ability of ionizing radiation to kill cells explains its use in treating many cancers: because cancer cells divide more rapidly than normal cells, they are more vulnerable to radiation, so a beam whose trajectory is focused on a tumor can shrink it by killing its cells.


Immediate symptoms of exposure vary according to the type of particle, the dose, the length of exposure, and the route of exposure. Radiation sickness, or acute radiation syndrome, results from immediate excessive high-dose exposure. Whole-body penetration damages the cardiovascular and central nervous systems: blood pressure drops (hypotension), the brain swells, and nausea, vomiting, convulsions, and confusion follow. Death is inevitable when exposure exceeds 3,000 rads (30 grays).



History: In 1896, Henri Becquerel presented his discovery of radioactivity to the Academy of Sciences in Paris. In the years that followed, scientists such as Marie and Pierre Curie, Dmitry Mendeleyev, and Wilhelm Conrad Röntgen defined the properties of ionizing radiation, and many of those who worked with it died as a result of their exposures. In 1970 the Environmental Protection Agency (EPA) began regulating ionizing radiation.



Bibliography


Belotserkovsky, Eduard, and Ziven Ostaltsov. Ionizing Radiation: Applications, Sources, and Biological Effects. New York: Nova, 2012. Print.


Christensen, Doran M., Carol J. Iddins, and Stephen L. Sugarman. "Ionizing Radiation Injuries and Illnesses." Emergency Medicine Clinics of North America 32.1 (2014): 245–65. CINAHL Plus with Full Text. Web. 26 Jan. 2015.


DeWerd, L. A., and Michael Kissick. The Phantoms of Medical and Health Physics: Devices for Research and Development. New York: Springer, 2014. Print.


Ryan, Julie L. "Ionizing Radiation: The Good, the Bad, and the Ugly." Journal of Investigative Dermatology 132 (2012): 985–93. Print.


Santivasi, Wil L., and Fen Xia. "Ionizing Radiation-Induced DNA Damage, Response, and Repair." Antioxidants & Redox Signaling 21.2 (2014): 251–59. Academic Search Premier. Web. 26 Jan. 2015.

What are developmental theories? |


Introduction

Developmental theory has changed greatly over time. Theories from different periods in history have emphasized different aspects of development. The Puritans of the sixteenth and seventeenth centuries, for example, focused on the moral development of the child; they believed that Original Sin was inherent in children and that children had to be sternly disciplined to make them morally acceptable. In contrast was the developmental theory of the eighteenth-century French philosopher Jean-Jacques Rousseau, who held that children were born good and were then morally corrupted by society. Sigmund Freud was interested in psychosexual development and in mental illness; his work therefore focused on these areas. John B. Watson, B. F. Skinner, and Albert Bandura worked during a period when the major impetus in psychology was the study of learning; not surprisingly, this was the focus of their work.









As developmental theorists worked intently within given areas, they often arrived at extreme positions, philosophically and scientifically. For example, some theorists focused on the biology of behavior; impressed by the importance of “nature” (genetic and other inherited sources) in development, they tended to neglect “nurture” (learning and other input received from parents, the world, and society). Others focused on societal and social-learning effects and concluded that nurture was the root of behavior, relegating nature to a subsidiary role in physiological and anatomical development. Similar conflicts have arisen concerning developmental continuity or discontinuity, the relative activity or passivity of children in contributing to their own development, and a host of other issues in the field.


These extreme positions would at first appear to be damaging to the understanding of development; however, psychologists are now in a position to evaluate the extensive bodies of research conducted by adherents of the various theoretical positions. It has become evident that the truth, in general, lies somewhere in between. Some developmental functions proceed in a relatively stepwise fashion, as Jean Piaget or Freud would hold; others are much smoother and more continuous. Some development results largely from the child’s rearing and learning; other behaviors appear to be largely biological. Some developmental phenomena are emergent processes (any process of behavior or development that was not necessarily inherent in or predictable from its original constituents) of the way in which the developing individual is organized, resulting from both nature and nurture in intricate, interactive patterns that are only beginning to be understood. These findings, and the therapeutic and educational applications that derive from them, are comprehensible only when viewed against the existing corpus of developmental theory. This corpus in turn owes its existence to the gradual construction and modification of developmental theories of the past.




Theoretical Questions and Properties

Theoretical perspectives on development derive from a wide variety of viewpoints. Although there are numerous important theoretical issues in development, three questions are central for most theories. The first is the so-called nature-nurture question: Does most behavioral development derive from genetics or from the environment? The second is the role of children in their own development: Are children active contributors to their own development, or do they simply and passively react to the stimuli they encounter? Finally, there is the question of whether development is continuous or discontinuous: Does development proceed by a smooth accretion of knowledge and skills, or by stepwise, discrete developmental stages? Perspectives within developmental psychology represent very different views on these issues.


Useful developmental theories must possess three properties. They must be parsimonious, or as simple as possible to fit the available facts. They must be heuristically useful, generating new research and new knowledge. Finally, they must be falsifiable, or testable. A theory that cannot be tested can never be shown to be right or wrong. Developmental theories can be evaluated in terms of these three criteria.




Psychodynamic Theories

Arguably the oldest developmental theoretical formulation still in use is the psychodynamic model, which gave rise to the work of Erik H. Erikson and Carl Jung and has as its seminal example the theory of Sigmund Freud. Freud’s theory holds that all human behavior is energized by dynamic forces, many of which are consciously inaccessible to the individual. There are three parts to the personality in Freud’s formulation: the id, which emerges first and consists of basic, primal drives; the ego, which finds realistic ways to gratify the desires of the id; and the superego, the individual’s moral conscience, which develops from the ego. A primary energizing force for development is the libido, a psychosexual energy that invests itself in different aspects of life during the course of development. In the first year of life (Freud’s oral stage), the libido is invested in gratification through oral behavior, including chewing and sucking. Between one and three years of age (the anal stage), the libido is invested in the anus, and the primary source of gratification has to do with toilet training. From three to six years, the libido becomes invested in the genitals; it is during this phallic stage that the child begins to achieve sexual identity. At about six years of age, the child enters latency, a period of relative psychosexual quiet, until the age of twelve years, when the genital stage emerges and normal sexual love becomes possible.


Freud’s theory is a discontinuous theory, emphasizing stage-by-stage development. The theory also relies mainly on nature, as opposed to nurture; the various stages are held to occur across societies and with little reference to individual experience. The theory holds that children are active in their own development, meeting and resolving the conflicts that occur at each stage.


The success of psychodynamic theory has been mixed. Its parsimony is open to question: there are clearly simpler explanations of children’s behavior. Its falsifiability is also highly questionable, because the theories are quite self-contained and difficult to test. Psychodynamic theory, however, has proven enormously heuristic, generating further research and theory; hundreds of studies have set out to test its ideas, and these studies have significantly contributed to developmental knowledge.




Behaviorist Theories

In contrast to psychodynamic theories, the behaviorist theories pioneered by John B. Watson and B. F. Skinner hold that development is a continuous process, without discrete stages, and that the developing child passively acquires and reflects knowledge. For behaviorists, development results from nurture, from experience, and from learning, rather than from nature. The most important extant behaviorist theory is the social learning theory of Albert Bandura, which holds that children learn by watching others around them and imitating others’ actions. For example, Bandura demonstrated that children were far more inclined to commit violent acts (toward a toy) if someone else, particularly an adult, committed the acts first. The children were especially disposed to imitate if they perceived the acting individual as powerful or as rewarded for his or her violent actions.




Organic Lamp Theories

The behaviorist theories are relatively parsimonious and heuristic. They are also testable, and it has been shown that, although many of the findings of the behaviorists have stood the test of time, there are developmental findings that do not fit this framework. To understand these findings, one must turn to the so-called organic lamp theories. This term comes from the fact that within these theories, children are seen as active contributors to their own development, and certain developmental processes are held to be “emergent”: As fuel combusts to produce heat and light in a lamp, hereditary and environmental factors combine in development to produce new kinds of behavior. This framework was pioneered by Kurt Goldstein and Heinz Werner, but the most significant extant organic lamp theory is the cognitive development theory of Jean Piaget.




Piaget’s Contributions

Piaget’s theory involves a discontinuous process of development in four major stages. The sensorimotor stage (birth to two years) is followed by the preoperational stage (two to seven years), the concrete operational stage (seven years to adolescence), and the formal operational stage (adolescence to adulthood). During the sensorimotor stage, the child’s behavior is largely reflexive, lacking coherent conscious thought; the child learns that self and world are actually different, and that objects exist even when they are not visible. During the preoperational stage, the child learns to infer the perspectives of other people, learns language, and discovers various concepts for dealing with the physical world. In the concrete operational stage, the ability to reason increases, but children still cannot deal with abstract issues. Finally, in formal operations, abstract reasoning abilities develop. The differences among the four stages are qualitative differences, reflecting significant, discrete kinds of behavioral change.


Piaget’s theory is not entirely accurate; it does not apply cross-culturally in many instances, and children may, under some experimental circumstances, function at a higher cognitive level than would be predicted by the theory. In addition, some aspects of development have been shown to be more continuous in their nature than Piaget’s ideas would indicate. Yet Piaget’s formulation is relatively parsimonious. The various aspects of the theory are readily testable and falsifiable, and the heuristic utility of these ideas has been enormous. This theory has probably been the most successful of the several extant perspectives, and it has contributed significantly to more recent advances in developmental theory. This progress includes the work of James J. Gibson, which emphasizes the active role of the organism, embedded in its environment, in the development of perceptual processes; the information-processing theories, which emphasize cognitive change; and the ethological or evolutionary model, which emphasizes the interplay of developmental processes, changing ecologies, and the course of organic evolution.




Modern-Day Applications

Developmental theory has been important in virtually every branch of medicine and education. The psychoanalytic theories of Freud were the foundation of psychiatry and still form a central core for much of modern psychiatric practice. These theories are less emphasized in modern clinical psychology, but the work of Freud, Erikson, Jung, and later psychodynamicists is still employed in many areas of psychotherapy.


The behavioristic theories have proved useful in the study of children’s learning for educational purposes, and they have considerable relevance for social development. An example is seen in the area of media violence. Bandura’s work and other research stemming from social learning theory have repeatedly demonstrated that children tend to imitate violent acts that they see in real life or depicted on television and in other media, particularly if the individuals who commit these acts are perceived as powerful or as rewarded for their actions. Although the media in particular dispute the connection, most authorities agree that excessive exposure to televised violence leads to real-world violence, largely through the mechanisms described by social learning theorists. Social learning theory has contributed significantly to the understanding of such topics as school violence, gang violence, and violent crime.




Interplay of Nature Versus Nurture

The organic lamp views have provided developmentalists with useful frameworks against which to understand the vast body of developmental data. Work within the Piagetian framework has shown that both nature and nurture contribute to successful development. One cannot, for example, create “superchildren” by providing preschoolers with college-level material. In general, they are simply not ready as organisms to cope with the abstract thinking required. On the other hand, the work of researchers on various Piagetian problems has shown that even very young children are capable of complex learning.


Organic lamp theory has demonstrated the powerful interplay between biological factors and the way in which children are reared. An example is seen in the treatment of Down syndrome, a chromosomal condition that results in intellectual disability. The condition occurs when there are three copies of chromosome 21 rather than the usual two. Clearly, this is a biological condition, and it was long believed to be relatively impervious to environmental intervention. It has now been shown, however, that children with Down syndrome develop much higher intelligence when reared in an intellectually stimulating environment, as opposed to the more sterile, clinical environments typically employed in the past. The child’s intellect is not entirely determined by biology; it is possible to ameliorate the biological effects of the syndrome by means of an environmental intervention. This type of complex interplay of hereditary and environmental factors is the hallmark of applied organic lamp theory.


The most important application of developmental theory generally, however, lies in its contribution to the improved understanding of human nature. Such an understanding has considerable real-world importance. For example, among other factors, an extreme faith in the nature side of the nature-nurture controversy led German dictator Adolf Hitler to the assumption that entire races were, by their nature, inferior and therefore should be exterminated. His actions, based on this belief, led to millions of human deaths during World War II. Thus, one can see that developmental theories, especially if inadequately understood, may have sweeping applications in the real world.




Bibliography


Dowling, John E. The Great Brain Debate: Nature or Nurture? Princeton: Princeton UP, 2007. Print.



Gollin, Eugene S., ed. Developmental Plasticity: Behavioral and Biological Aspects of Variations in Development. New York: Academic Press, 1981. Print.



Lerner, Richard M. On the Nature of Human Plasticity. New York: Cambridge UP, 1984. Print.



Miller, Patricia H. Theories of Developmental Psychology. 4th ed. New York: Worth, 2002. Print.



Piaget, Jean. Biology and Knowledge. Chicago: U of Chicago P, 1971. Print.



Pickren, Wade E., Donald A. Dewsbury, and Michael Wertheimer. Portraits of Pioneers in Developmental Psychology. New York: Psychology Press, 2012. Print.



Revelle, Glenda. "Applying Developmental Theory and Research to the Creation of Educational Games." New Directions for Child & Adolescent Development 2013.139 (2013): 31–40. Print.



Shaffer, David Reed. Developmental Psychology: Childhood and Adolescence. 7th ed. Belmont: Wadsworth, 2007. Print.



Siegler, Robert S. Emerging Minds: The Process of Change in Children’s Thinking. New York: Oxford UP, 1998. Print.



Thompson, Dennis, John D. Hogan, and Philip M. Clark. Developmental Psychology in Historical Perspective. Malden: Wiley, 2012. Print.

How can a 0.5 molal solution be less concentrated than a 0.5 molar solution?

The answer lies in the units being used. "Molar" refers to molarity, a unit of measurement that describes how many moles of a solute are dissolved in one liter of solution, while "molal" refers to molality, the number of moles of solute dissolved in one kilogram of solvent.
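
To make the comparison concrete, the Python sketch below converts a 0.5 molal aqueous sodium chloride solution into molarity. The density figure is an assumed approximation for a solution of this strength; the molar mass of NaCl is about 58.44 g/mol:

# Convert 0.5 molal aqueous NaCl to molarity.
# The density (1.017 g/mL) is an assumed approximate value.

moles_nacl = 0.5          # mol NaCl dissolved in 1.000 kg of water (0.5 molal)
molar_mass_nacl = 58.44   # g/mol
density_g_per_ml = 1.017  # approximate solution density (assumed)

solute_g = moles_nacl * molar_mass_nacl            # 29.22 g NaCl per kg water
solution_g = 1000.0 + solute_g                     # total mass of the solution
volume_l = solution_g / density_g_per_ml / 1000.0  # about 1.012 L of solution

molarity = moles_nacl / volume_l                   # same moles, larger volume
print(round(molarity, 3))                          # about 0.494 M, below 0.5 M

Because the 0.5 mol of solute ends up dissolved in slightly more than one liter of solution, the 0.5 molal solution works out to roughly 0.494 molar, slightly less concentrated than a 0.5 molar solution.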