Tuesday 31 December 2013

How do the endocrine and nervous systems combine to activate the body in stressful situations?

When there is an emergency or other stressful experience, the nervous system and the endocrine system work together to prepare the body to react to the event.

Information is instantly processed by the central and peripheral nervous systems. Your brain (the central nervous system) processes the thoughts and emotions and ultimately decides how your body will react. The peripheral nervous system mounts a sympathetic response (its autonomic component), which causes changes in the body: increased heart rate and blood pressure, dilated pupils, slowed digestion, and sweating. This is known as the fight-or-flight response.


Concurrently, the endocrine system is activated. The hypothalamus, a small region at the base of your brain, signals the pituitary gland as to which hormones need to be released to the other glands of the body in response to the event. The two systems work together effectively under stress: the central nervous system (the brain) processes the information and its urgency, while at the same time the peripheral nervous system gets the body amped up and ready physically.


Meanwhile, the endocrine system receives this information and decides which hormones are needed, and in what quantities, to deal with the physical challenges and changes that are occurring. The hormones it releases influence behavior, affect emotions, and regulate our most basic human needs (hunger, thirst, sex, and sleep).

What is acidosis?


Causes and Symptoms


Acidosis is a state of excess acidity in the body’s fluids. For metabolism to function normally and for oxygen to be delivered properly to tissues and organs, blood pH must remain within a narrow, slightly alkaline range between 7.35 and 7.45. Acids and bases (alkalies) in bodily fluids drive the chemical reactions that sustain life. The organs that largely regulate the acid-base balance are the kidneys and the lungs.




Acidosis arises from either a decrease of base bicarbonate (bicarbonate combined with such minerals as calcium, magnesium, or phosphorus—a base) or an increase of carbonic acid (combined carbon dioxide and water—an acid). Medicine recognizes both metabolic and respiratory acidosis, although clinically the two are often intertwined. Metabolic acidosis involves the kidneys, which metabolize, neutralize, or excrete any acid but carbon dioxide. Respiratory acidosis involves the ability of the lungs to exhale carbon dioxide, which is a gas but readily combines with water to form carbonic acid. Acidosis can be verified through an analysis of arterial blood gases.
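

To make the distinction concrete, the sketch below sorts an arterial blood gas result into the categories described above. It is illustrative only, not a clinical tool: the pH range of 7.35 to 7.45 comes from this article, while the bicarbonate (22-26 mEq/L) and carbon dioxide (35-45 mmHg) cutoffs are typical laboratory reference ranges assumed for the example.

```python
# Illustrative only: a toy classifier for arterial blood gas (ABG) values.
# The pH range 7.35-7.45 is taken from the article; the bicarbonate and
# pCO2 reference ranges are typical laboratory values assumed for this
# example. Real acid-base interpretation is considerably more involved.

def classify_acid_base(ph: float, hco3: float, pco2: float) -> str:
    """Return a rough primary acid-base label from pH, bicarbonate (mEq/L),
    and partial pressure of carbon dioxide (mmHg)."""
    if 7.35 <= ph <= 7.45:
        return "pH within the normal, slightly alkaline range"
    if ph < 7.35:
        if hco3 < 22:
            return "metabolic acidosis (decreased bicarbonate)"
        if pco2 > 45:
            return "respiratory acidosis (retained carbon dioxide)"
        return "acidosis of mixed or unclear origin"
    return "alkalosis (outside the scope of this article)"

print(classify_acid_base(7.30, 18, 40))  # metabolic acidosis (decreased bicarbonate)
print(classify_acid_base(7.28, 24, 55))  # respiratory acidosis (retained carbon dioxide)
```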


Metabolic acidosis occurs when metabolism is impaired, sometimes by ingestion of a substance that exceeds the capacity of the kidneys to buffer. Acidifying ingestibles include wood alcohol (methanol), antifreeze (ethylene glycol), and salicylates such as aspirin in large doses. Certain infections can cause metabolic acidosis. Poorly controlled type 1 diabetes mellitus causes metabolic acidosis when inadequate insulin levels allow ketone bodies from fat breakdown to acidify the blood. When exertion outpaces the ability of the blood to supply oxygen to the muscles, lactic acid is produced; the same occurs when poor blood supply prevents normal oxidative metabolism. To buffer acidic blood, the body leaches minerals such as calcium, magnesium, sodium, and potassium from organs and bones.


Respiratory acidosis occurs when the lungs cannot expel enough carbon dioxide. It is often caused by diseases such as emphysema, chronic bronchitis, advanced pneumonia, pulmonary edema, or asthma, all of which damage the lungs or bronchi. It can also occur with diseases of the nerves or muscles that move the chest or diaphragm. Apnea, which interrupts breathing during sleep, and hypoventilation can also cause respiratory acidosis, as can oversedation with narcotic opioids or strong sedatives that slow breathing. In cardiac arrest, metabolic and respiratory acidoses are concurrent.


Acidosis can be chronic or acute. Its symptoms can range from unnoticeable to vague and nonspecific to disturbing or life-threatening. Because their causes differ, metabolic and respiratory acidosis also vary in their symptoms. Symptoms of metabolic acidosis include nausea, vomiting, and fatigue from blood-borne acids and a tendency (which may be absent in infants) to breathe quickly and deeply as the body attempts to expel more carbon dioxide. Respiratory acidosis can cause headache, confusion, and breathing that is shallow, slow, or both in the effort to expel carbon dioxide.




Treatment and Therapy

Metabolic acidosis is treated by determining the cause and correcting the imbalance. The acidic substance in the blood must be removed or buffered, although this may prove difficult. Treatments vary for metabolic acidosis caused by kidney disease, shock, cancer, poisoning, diabetes, and cardiac arrest. In mild cases, acidosis may be corrected by giving intravenous fluids. In severe cases, bicarbonate can be given intravenously. This provides only temporary relief, however, and may create further problems with excessive sodium.


Respiratory acidosis is generally controlled by improving lung function. Diseases such as asthma or emphysema may respond to bronchodilators, which open the airways. Children under five years of age may develop simultaneous metabolic and respiratory acidosis from salicylate poisoning; vomiting can then be induced with ipecac syrup. Whenever breathing is impaired sufficiently to prevent the exchange of carbon dioxide and oxygen, mechanical ventilation may be required.




Perspective and Prospects

Properly functioning bodily processes require oxygen. The more acidic the blood, however, the less oxygen it can carry. The simple process of living produces quantities of metabolic and respiratory acids that, in health, the body continually and silently buffers. In the Western world, people further challenge the acid-base balance through diet, with animal products such as meat, eggs, and dairy as well as with processed foods such as white flour and sugar and beverages such as coffee and soft drinks. Many common drugs are acid-forming, as are artificial sweeteners. As long as they function normally, the lungs and kidneys oppose the tendencies of bodily fluids to exceed normal pH parameters. Having healthy lung and kidney function, and avoiding or balancing acid in the diet, can help maintain optimal internal chemistry.




Bibliography


"Acidosis." MedlinePlus, Mar. 22, 2013.



Gennari, F. John, et al., eds. Acid-Base Disorders and Their Treatment. Boca Raton, Fla.: Taylor & Francis, 2005.



Hogan, Mary Ann, et al. Fluids, Electrolytes, and Acid-Base Balance. 2d ed. Upper Saddle River, N.J.: Pearson/Prentice Hall, 2007.



Lewis, James L., III. “Acidosis.” In The Merck Manual Home Health Handbook: Third Home Edition, edited by Robert S. Porter. Whitehouse Station, N.J.: Merck Research Laboratories, 2009.



"Metabolic Acidosis." MedlinePlus, Mar. 22, 2013.



"Respiratory Acidosis." MedlinePlus, Mar. 22, 2013.

Monday 30 December 2013

What is psychopharmacology?


Introduction

Psychopharmacology is the scientific study of the effect of drugs on psychological processes. The field of psychopharmacology has two interrelated goals. The first is to understand the way that drugs interact in the brain and the effects that these interactions have on behavior, consciousness, cognition, emotion, and other psychological processes. The second goal is to use knowledge of the effects of drugs to improve human psychological welfare. In most cases, this involves studying drug effects in the hope of developing and improving drugs used to treat psychiatric disorders such as depression and schizophrenia. In other cases, drug effects may be studied in the hope of learning ways to prevent people from taking drugs in ways that cause harm to both individuals and society.


Although psychopharmacological research may involve any drug with the capacity to alter psychological experience (a psychotropic drug), most studies have focused on the examination of drugs that fall into two broad categories: those that are useful in the treatment of psychological disorders (therapeutic drugs) and those that have the capacity to produce compulsive patterns of use and abuse in the people who take them (drugs of abuse).




History

Though psychopharmacology is a relatively new scientific field, archaeological evidence indicates that human beings have been using drugs to manipulate their psychological experiences and to treat disease since prehistory. Many psychotropic substances are naturally occurring or produced by natural processes, and early people were often adept at exploiting and, in many cases, cultivating the plants from which they came. Many of these psychotropic substances, including opium, cocaine, alcohol, and peyote (a hallucinogen), were used both for their ability to influence individual experience and in social, cultural, and ceremonial contexts. In some cases, they were used medicinally to treat a variety of physical and psychological ailments.


The ancient Greeks used a variety of drugs to treat mental illness and had formed a specific theory of illness and health that guided their use. Hippocrates theorized that mental and physical illnesses were caused by an imbalance in one or more of the four bodily fluids (humors): blood, phlegm, yellow bile (choler), and black bile. Specific drugs and other treatments were used to increase or decrease these humors in an effort to reestablish the balance. For example, symptoms thought to be caused by an excess of blood, such as mania, might be treated by bleeding the patient. Bleeding would, indeed, often slow the patient down, and this would be taken as evidence of the validity of the underlying theory. Though modern medicine requires a different level of proof of efficacy, the strategy of treating symptoms by using remedies that produce opposite or counteracting effects to the symptoms of interest remains a mainstay of pharmacological approaches to treating psychological disorders.


The modern psychiatric drug era got its start in the 1950s. During this decade, early examples of virtually all major classes of psychiatric drugs were discovered, in many cases by accident. In 1949, John Cade was conducting a series of experiments involving injecting uric acid into guinea pigs. When he added
lithium in an effort to increase the water solubility of the solution, he noted that the animals were much calmer, so he tested the effects of lithium in people with mania. Lithium has become a commonly prescribed mood stabilizer, effective in the treatment of bipolar disorder. Though originally investigated as an antihistamine and surgical sedative, the first antipsychotic medication, chlorpromazine (Thorazine), was found to be useful in treating patients with schizophrenia in 1952. Iproniazid (Marsilid), an early antidepressant drug approved for use in 1958, was initially intended to treat tuberculosis. Meprobamate (Miltown, Equanil, Meprospan), an anxiolytic or antianxiety medication, became available about the same time.


These medications produced a revolution in the treatment of mental illness. Before the advent of modern medications, the available treatment options for patients with serious mental illnesses were rarely effective and in some cases were downright barbaric. Most patients with serious mental illness were confined in psychiatric hospitals and asylums. The new drugs dramatically reduced the number of institutionalized psychiatric patients and improved the quality of life of countless individuals. Nevertheless, none of these drugs were completely effective, and they all had a tendency to produce troubling side effects in a significant number of the people who took them.


In the decades that followed, many chemical modifications of these drugs were explored, occasionally creating minor improvements in effectiveness or reducing some of the side effects. By the late 1980s, a smaller breakthrough occurred with the successful reintroduction of clozapine (Clozaril), a new, atypical antipsychotic medication, and the development of fluoxetine (Prozac), an antidepressant. These drugs were somewhat novel in the way they acted in the brain. They offered distinct advantages in practice and stimulated new avenues for research. As understanding of the underlying causes of psychiatric disorders improves, researchers hope to develop safer and more effective medications for mental illness.




Drug Effects

Before any drug can alter psychological experience, it must reach target receptors in the brain, typically from some other part of the body. Pharmacokinetics refers to the study of drug movements throughout the body over time. The speed of drug onset and the duration of drug effect are important variables in determining the qualitative experience of taking the drug. For example, drugs of abuse that are absorbed and distributed quickly tend to produce stronger rewarding properties than those with more gradual onsets. For this reason, drug abusers may try to change the way that drugs are administered in an effort to speed their onset.
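

As a rough illustration of these pharmacokinetic ideas, the sketch below uses the simplest standard textbook model, a one-compartment model with first-order elimination, in which the concentration of a drug falls by half every half-life. The model, the dose, and the half-life are assumptions added here for illustration; they are not drawn from this article and do not describe any particular drug.

```python
import math

# A minimal one-compartment, first-order elimination sketch: C(t) = C0 * e^(-kt).
# The starting concentration and half-life are invented for illustration and
# do not describe any particular drug.

def concentration(c0: float, half_life_h: float, t_h: float) -> float:
    """Plasma concentration t_h hours after dosing, given the initial
    concentration c0 and the elimination half-life in hours."""
    k = math.log(2) / half_life_h  # first-order elimination rate constant
    return c0 * math.exp(-k * t_h)

c0, half_life = 100.0, 6.0  # hypothetical units and a hypothetical 6-hour half-life
for t in (0, 6, 12, 24):
    print(f"t = {t:2d} h: concentration = {concentration(c0, half_life, t):.1f}")
# The concentration falls by half every 6 hours.
```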


Psychopharmacologists also need to understand specifically what drugs do once they reach their targets in the brain. This topic is referred to as pharmacodynamics. Psychotropic drugs produce their effects by interacting with proteins in the membranes of individual neurons in the brain. These proteins normally interact with naturally occurring chemicals in the brain called neurotransmitters, which serve as chemical bridges across the spaces between neurons, called synapses, so that signals can be transmitted from one neuron to another. There are many different neurotransmitters, and for each of these, there may be a number of different specific receptor proteins. The various receptor proteins are not evenly distributed throughout the brain; different regions of the brain vary in terms of the density of different types of receptors for different neurotransmitters. Membrane proteins on both sides of the synapse are involved in the process of releasing, receiving, and recycling these chemical messengers, and psychotropic drugs can interact at any point along the way.


Psychotropic drugs can interact with these proteins in a host of different ways, thereby altering the activity of naturally occurring neurotransmitters in various regions of the brain. In general, drug effects are classified as either agonist or antagonist effects. Agonists are drugs that increase the natural activity of the neurotransmitter in some way. An agonist drug might mimic the effects of the neurotransmitter at the receptor itself, or it might stimulate the release of a neurotransmitter or prolong its effectiveness by preventing the neuron’s normal process of eliminating the neurotransmitter once it is used. Because all these potential effects would ultimately enhance the function of the neurotransmitter, they would all be classified as agonist effects. In contrast, antagonist effects serve to reduce the functioning of the neurotransmitter. Sample antagonist effects include blocking receptor proteins or preventing the storage of neurotransmitters.


"Affinity" refers to the degree to which a drug interacts with membrane proteins. Drugs with high affinity interact strongly, readily, or for relatively long durations. Low-affinity drugs interact weakly, incompletely, or briefly. Thus, the effects of various drugs can be described in terms of how strongly they interact (affinity) and in what ways they interact (agonist or antagonist) with which specific receptors for which specific neurotransmitters. In principle, drugs can be designed to be quite specific, though in practice most drugs that are commonly used have multiple effects.

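
As a rough numerical illustration of affinity, the sketch below uses the standard Hill-Langmuir (law of mass action) relation, in which the fraction of receptors occupied at equilibrium is the drug concentration divided by the sum of that concentration and the dissociation constant Kd; a smaller Kd means higher affinity. The relation and the example values are textbook assumptions added for illustration rather than anything derived in this article.

```python
# Sketch of how affinity relates to receptor occupancy, using the standard
# Hill-Langmuir (law of mass action) relation. The dissociation constants
# (Kd) and the drug concentration below are invented for the example.

def occupancy(drug_conc_nm: float, kd_nm: float) -> float:
    """Fraction of receptors bound at equilibrium: [drug] / ([drug] + Kd)."""
    return drug_conc_nm / (drug_conc_nm + kd_nm)

for kd in (1.0, 10.0, 100.0):  # high, moderate, and low affinity
    bound = occupancy(10.0, kd)  # the same 10 nM concentration of each drug
    print(f"Kd = {kd:5.1f} nM -> {bound:.0%} of receptors occupied")
# Higher-affinity drugs (smaller Kd) occupy more receptors at the same concentration.
```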



Classes of Psychiatric Medications

The main classes of psychiatric medications are antipsychotics, antidepressants, mood stabilizers, and anxiolytics.



Antipsychotics

Antipsychotics are used to treat symptoms of psychosis related to a range of conditions or disorders, including mania, delusional disorders, and psychotic depression. Most commonly, however, they are used in patients suffering from schizophrenia. There are many antipsychotic medications available, but they can be broadly classified into two groups: typical (first generation) and atypical (second generation). The typical antipsychotics, also called neuroleptics or major tranquilizers, are dopamine antagonists. They work primarily by blocking the D2 subclass of receptor proteins for the neurotransmitter dopamine. By blocking these receptors, typical antipsychotics reduce dopamine activity in specific circuits within the brain. The more strongly a typical antipsychotic binds or interacts with the D2 receptor, the more potent the drug. This, and other evidence, has suggested a dopamine hypothesis of schizophrenia—that is, that schizophrenia is caused by overactivity of dopamine circuits in the brain.


Though generally effective, the typical antipsychotics have important limitations. D2 receptors are concentrated in regions of the brain that are important to the regulation of movement, and blocking these receptors produces significant movement-related side effects. In addition, typical antipsychotics are effective in treating only the more overt, or positive, symptoms of schizophrenia, such as hallucinations; they do little to help with the negative symptoms, such as emotional flatness and social withdrawal. These drugs also are simply not effective in some patients.


Since the 1990s, a flurry of antipsychotic development has occurred, based largely on the finding that a specific antipsychotic drug, clozapine, can be effective in treating psychosis without producing the movement-related side effects. Clozapine and similar drugs that were developed later are referred to as atypical agents because although they are relatively weak D2 receptor blockers, they were thought to be at least equal to the typical drugs in treating the positive symptoms of schizophrenia and better at treating the negative symptoms, with a low risk of movement-related side effects. However, further study has shown that this may not necessarily be the case, and some scientists have questioned whether the distinction between typical and atypical antipsychotics is truly a meaningful one.




Antidepressants

Antidepressants are a diverse class of drugs that are effective in the treatment of depression. Though different individuals may respond better or worse to any particular drug, overall the drugs are roughly equal in clinical effectiveness. They are not, however, equal in the side effects that they produce or in the effects that they have at the synapse. The oldest class of antidepressant drugs is the
monoamine oxidase (MAO) inhibitors, or MAOIs. Monoamine oxidase is an enzyme that usually functions to degrade three different neurotransmitters: dopamine, serotonin, and norepinephrine. The MAO inhibitors attach to this enzyme and prevent it from doing its work. Therefore, these drugs are ultimately agonists for the three neurotransmitters. This mechanism of action, however, has an inconvenient and potentially dangerous side effect. Monoamine oxidase is used not only in the brain but also in the human digestive system to metabolize tyramine, a substance found in aged cheeses and meats, some nuts and beans, and assorted other foods. People prescribed MAOI antidepressants need to be quite careful about what foods they eat to avoid those high in tyramine. If they do not, serious and potentially dangerous elevations in blood pressure can occur.


A second category of antidepressant drug is the tricyclic antidepressants. The tricyclics are also agonists for the neurotransmitters serotonin and norepinephrine, but they accomplish this in a different way. In addition to being degraded by MAO inside the cell, these neurotransmitters are also reabsorbed by the neuron once they have been released. The process is called reuptake. By blocking reuptake, the tricyclics allow these neurotransmitters to stay in the synapse for a longer period of time, enabling them to repeatedly interact with receptor proteins in adjacent cells. Like the MAOIs, the tricyclics have numerous side effects, but the most troubling is the fact that these drugs can be very toxic, indeed lethal, in overdose.


The second wave of drug development was led by the antidepressant fluoxetine (Prozac)—an example of a third category of antidepressant referred to as
selective serotonin reuptake inhibitors (SSRIs). Like the tricyclics, the SSRIs block reuptake. However, they are more specific to the neurotransmitter serotonin and have a much better safety profile than the other classes of antidepressants. The most troubling problems associated with the SSRIs are sexual side effects.


A fourth class of antidepressant is often simply referred to as atypicals. These drugs are quite varied in terms of their effects in the brain and do not fit neatly into any other category. Some drugs combine reuptake blocking with effects on postsynaptic receptors. Others are relatively specific to the neurotransmitter dopamine. Like the other antidepressants discussed, these drugs are roughly equal in effectiveness, although their side effects can differ considerably.




Mood Stabilizers

Mood stabilizers are used in the treatment of bipolar spectrum disorders (formerly known as manic-depressive illnesses). These disorders are characterized by emotional volatility; the individual has some combination of periods of mania, depression, or both interspersed with more normal functioning. Therefore, there are three treatment issues: the treatment of mania, the treatment of depression, and the stabilization of mood over time.


The most common substance used to stabilize mood is lithium. Lithium is typically considered to be an effective treatment for stabilizing mood across time as well as an effective, if slow, treatment for reducing mania. Its effectiveness as an antidepressant is less clear. The precise mechanism of action that makes lithium effective is unclear, as it has many effects in the brain. A number of medications, including anticonvulsants, antipsychotics, and antidepressants, have been proposed to treat bipolar disorder either in place of or in addition to lithium. Combination therapy (prescribing multiple drugs from different categories to treat the disorder) is a common practice in treating bipolar disorder.




Anxiolytics

Anxiolytic is another name for an anxiety-reducing drug, also known as a tranquilizer. Most of the typical or traditional anxiolytics are central nervous system depressants. These drugs operate by enhancing the effects of an inhibitory neurotransmitter called gamma-aminobutyric acid (GABA), which reduces the electrical activity of the brain. The most common anxiolytics, a class of drugs called the benzodiazepines, have a neuromodulatory effect at the GABA receptor. When these drugs interact with the receptor, naturally occurring GABA is more effective. The benzodiazepines are highly effective in treating symptoms of anxiety in the short term and are commonly used in treating anxiety associated with generalized anxiety disorder and panic attacks. They are less effective in dealing with some symptoms associated with other anxiety disorders, such as post-traumatic stress disorder and obsessive-compulsive disorders. An additional limitation is that the benzodiazepines, though generally safe to use as prescribed, can become a substance of abuse if taken inappropriately, for long periods of time, or in relatively high doses. In addition, when taken in high doses or with alcohol or other central nervous system depressants, these drugs can be quite dangerous.


Alternatives to the benzodiazepines include the novel antianxiety drug buspirone (BuSpar) and several of the antidepressant medications discussed earlier. These drugs have several important advantages over the benzodiazepines and one major drawback. They are not central nervous system depressants and therefore do not cause sedation or interact with alcohol or other central nervous system depressants. However, they are very slow to take effect in comparison to benzodiazepines. It typically takes a week before even the initial responses are observed and several weeks before the full clinical effect is reached.





Drugs of Abuse and Substance Dependence

Although the majority of drugs used in the treatment of psychological disorders are not prone to abuse, a broad range of other drugs with differing therapeutic purposes and different synaptic functions are. In
substance dependence, the effects that a drug produces change across time in two ways. First, tolerance and withdrawal may occur. In this case, more and more of the drug is required to achieve the same intoxicating effect (tolerance), and negative effects occur when the drug is removed (withdrawal). Second, substance dependence is characterized by loss of control over use of the substance; dependent individuals will use the drug in greater quantity or frequency than they intend and will be unable to curtail their use.


The synaptic changes associated with drug tolerance and drug withdrawal depend on the particular type of drug that has been used. In the case of the opiate drugs, for example, receptors in a region of the brain called the locus coeruleus decrease their responding when the drugs are taken. However, over time and over repeated administrations of the drug, this region of the brain will begin to adapt. The cells become less responsive to the opiate drugs by altering the sensitivity of those receptors with which the drugs interact. Therefore, more of the drug is needed to produce the original effect, and the system becomes dysregulated if the drug is abruptly removed. Synaptic changes of this nature occur with dependence involving other drugs as well, though the precise details of these changes differ from drug to drug. Although important in the treatment of drug addictions, changes associated with tolerance and withdrawal are not complete descriptions of the common denominator that links diverse substances as drugs of abuse.


A feature that all drugs of abuse share is that they are potent reinforcers—they feel good to humans (and animals). All rewards, whether natural or drug, activate dopamine release in a region of the brain called the nucleus accumbens. Direct electrical stimulation of pathways related to the nucleus accumbens will also serve as a powerful reinforcer, and this system is activated with drugs of abuse. Drugs as diverse as amphetamines, cocaine, alcohol, opiates, nicotine, phencyclidine (PCP), and marijuana all trigger the release of dopamine in this reward circuit. Similar to the changes seen with tolerance and withdrawal, the sensitivity of dopamine receptors in this circuit changes when repeatedly activated by drugs of abuse. In time, the sensitivity of the reward system is adjusted; stronger rewards—that is, more drugs—become necessary to activate the system.




Bibliography


Advokat, Claire D., Joseph E. Comaty, and Robert M. Julien. Julien's Primer of Drug Action: A Comprehensive Guide to the Actions, Uses, and Side Effects of Psychoactive Drugs. 13th ed. New York: Worth, 2014. Print.



Ebmeier, K. P., C. Donaghey, and J. D. Steele. “Recent Developments and Current Controversies in Depression.” Lancet 367.9505 (2006): 153–67. Print.



Hyman, S. E., and R. C. Malenka. “Addiction and the Brain: The Neurobiology of Compulsion and Its Persistence.” Nature Reviews: Neuroscience 2.10 (2001): 695–703. Print.



Lieberman, Jeffrey A., et al. "Effectiveness of Antipsychotic Drugs in Patients with Chronic Schizophrenia." New England Journal of Medicine 353.12 (2005): 1209–23. Print.



López-Muñoz, F., et al. “Half a Century since the Clinical Introduction of Chlorpromazine and the Birth of Modern Psychopharmacology.” Progress in Neuro-Psychopharmacology & Biological Psychiatry 28.1 (2004): 205–8. Print.



Maj, Mario. “The Effect of Lithium in Bipolar Disorder: A Review of Recent Research Evidence.” Bipolar Disorders 5.3 (2003): 180–88. Print.



Marin, Humberto, and Javier I. Escobar. Clinical Psychopharmacology: A Practical Approach. Hackensack: World Sci., 2013. Print.



Meltzer, Herbert Y. “Mechanism of Action of Atypical Antipsychotic Drugs.” Neuropsychopharmacology: The Fifth Generation of Progress. Ed. Kenneth L. Davis et al. Philadelphia: Lippincott, 2002. 819–31. Print.



Meyer, Jerrold S., and Linda F. Quenzer. Psychopharmacology: Drugs, the Brain, and Behavior. 2nd ed. Sunderland: Sinauer, 2013. Print.



Sobel, Stephen V. Successful Psychopharmacology: Evidence-Based Treatment Solutions for Achieving Remission. New York: Norton, 2012. Print.



Stahl, Stephen M. Stahl's Essential Psychopharmacology: Neuroscientific Basis and Practical Applications. 4th ed. New York: Cambridge UP, 2013. Print.



Tyrer, Peter, and Tim Kendall. "The Spurious Advance of Antipsychotic Drug Therapy." Lancet 373.9657 (2009): 4–5. Print.

What are natural treatments for alcoholic hepatitis?


Introduction

The liver is a sophisticated chemical laboratory, capable of
carrying out thousands of chemical transformations on which the body depends. The
liver produces some important chemicals from scratch and modifies others to allow
the body to use them better. In addition, the liver neutralizes an enormous range
of toxins.




A number of influences can severely damage the liver, of which alcohol is the most common. This powerful liver toxin harms the liver in three stages: alcoholic fatty liver, alcoholic hepatitis, and cirrhosis. Although the first two stages of injury are usually reversible, cirrhosis is not. Generally, liver cirrhosis is a result of more than ten years of heavy alcohol abuse.


Usually, alcoholic hepatitis is discovered through blood tests that detect levels of enzymes released from the liver. The blood levels of these enzymes, which are known by acronyms such as SGOT, SGPT, ALT, AST, and GGT, rise as damage to the liver (by any cause) progresses.


If blood tests show that a person has alcoholic hepatitis (or any other form of liver disease), it is essential that the person stop drinking. There is little in the way of specific treatment beyond this.




Principal Proposed Natural Treatments

Several herbs and supplements have shown promise for protecting the liver from alcohol-induced damage. However, none of these has been conclusively proven effective, and cutting down (or eliminating) alcohol consumption is undoubtedly more effective than any other treatment. Following is a discussion of the treatments used specifically to treat early liver damage caused by alcohol.



Milk thistle. Numerous double-blind, placebo-controlled studies
enrolling several hundred people have evaluated whether the herb milk thistle
can successfully counter alcohol-induced liver damage. However, these studies have
yielded inconsistent results. For example, a double-blind, placebo-controlled
study performed in 1981 followed 106 Finnish soldiers with alcoholic liver disease
over a period of four weeks. The treated group showed a significant decrease in
elevated liver enzymes and improvement in liver structure, as evaluated by biopsy
in twenty-nine subjects.


Two similar studies enrolling approximately sixty people also found benefits. However, a three-month, double-blind, placebo-controlled study of 116 people showed little to no additional benefit, perhaps because most participants reduced their alcohol consumption and almost one-half of them stopped drinking entirely. Another study found no benefit in seventy-two persons who were followed for fifteen months.


A 2007 review of published and unpublished studies on milk thistle as a treatment for liver disease concluded that benefits were seen only in low-quality trials, and even in those, milk thistle did not show more than a slight benefit. A subsequent 2008 review of nineteen randomized trials drew a similar conclusion for alcoholic liver disease generally, although it did find a modest reduction in mortality for persons with severe liver cirrhosis.



Other proposed natural treatments. The supplement
S-adenosylmethionine (SAMe) has also shown some promise for
preventing or treating alcoholic hepatitis, but there is no reliable evidence to
support its use for this purpose. The supplement trimethylglycine helps the body create its own SAMe and has
also shown promise in preliminary studies.




Herbs and Supplements to Avoid

High doses of the supplements beta-carotene and vitamin A
might cause alcoholic liver disease to develop more rapidly in people who abuse
alcohol. Nutritional supplementation at the standard daily requirement level
should not cause a problem.


Although one animal study suggests that the herb kava might
aid in alcohol withdrawal, the herb can cause liver damage; therefore, it should
not be used by people with alcoholic liver disease (and probably not by anyone).
Numerous other herbs possess known or suspected liver-toxic properties, including
coltsfoot, comfrey, germander, greater celandine, kombucha, pennyroyal, and
various prepackaged Chinese herbal remedies. For this reason, people with
alcoholic liver disease should use caution before taking any medicinal herbs.




Bibliography


Abittan, C. S., and C. S. Lieber. “Alcoholic Liver Disease.” Current Treatment Options in Gastroenterology 2 (1999): 72-80.



Leo, M. A., and C. S. Lieber. “Alcohol, Vitamin A, and Beta-carotene: Adverse Interactions, Including Hepatotoxicity and Carcinogenicity.” American Journal of Clinical Nutrition 69 (1999): 1071-1085.



McClain, C. J., et al. “S-adenosylmethionine, Cytokines, and Alcoholic Liver Disease.” Alcohol 27 (2002): 185-192.



Ni, R., et al. “Toxicity of Beta-carotene and Its Exacerbation by Acetaldehyde in HepG2 Cells.” Alcohol and Alcoholism 36 (2001): 281-285.



Rambaldi, A., and C. Gluud. “S-adenosyl-l-methionine for Alcoholic Liver Diseases.” Cochrane Database of Systematic Reviews (2001): CD002235. Available through EBSCO DynaMed Systematic Literature Surveillance at http://www.ebscohost.com/dynamed.



Rambaldi, A., B. Jacobs, and C. Gluud. “Milk Thistle for Alcoholic and/or Hepatitis B or C Virus Liver Diseases.” Cochrane Database of Systematic Reviews (2007): CD003620. Available through EBSCO DynaMed Systematic Literature Surveillance at http://www.ebscohost.com/dynamed.

Sunday 29 December 2013

Describe the historical concepts of mental illness.


Introduction

People are social creatures who learn how to behave appropriately in families and in communities. What is considered appropriate, however, depends on a host of factors, including historical period, culture, geography, and religion. Thus, what is valued and respected changes over time, as do sociocultural perceptions of aberrant or deviant behavior. How deviancy is treated depends a great deal on the extent of the deviancy—is the person dangerous, a threat to self or to the community, in flagrant opposition to community norms, or is the person just a little odd? How the community responds also depends on its beliefs as to what causes aberrant behavior. Supernatural beliefs in demons, spirits, and magic were common in preliterate societies; in the medieval Western world, Christians believed that the devil was in possession of deranged souls. Hence, the mad were subjected to cruel treatments justified by the idea of routing out demons or the devil. For centuries, the prevailing explanation for madness was demonic possession.







Prior to the nineteenth century, families and communities cared for the mad. If they were unmanageable or violent, the mad were incarcerated in houses of correction or dungeons, where they were manacled or put into straitjackets. If a physician ever attended someone who was deemed mad by the community, it was to purge or bleed the patient to redress a supposed humoral imbalance. Most medical explanations before the advent of scientific medicine were expressed in terms of the four humors: black and yellow bile, blood, and phlegm. Imbalances usually were treated with laxatives, purgatives, astringents, emetics, and bleeding. In the late eighteenth century, however, understanding moved from the holistic and humoral to the anatomical, chemical, and physiological. Views of humans and their rights also changed enormously around this time as a consequence of the American and French Revolutions.


During the nineteenth and twentieth centuries, madhouses were first replaced by more progressive lunatic asylums and then by mental hospitals and community mental health centers. In parallel fashion, custodians and superintendents of madhouses became mad-doctors or alienists in the nineteenth century and psychiatrists, psychologists, and counselors of various kinds in the twentieth century. Similarly, the language changed: Madness was variously called lunacy, insanity, derangement, or alienation. The contemporary term is mental disorder. These changes reflect the rejection of supernatural and humoral explanations of madness in favor of a disease model with varying emphases on organic or psychic causes.




Early Views of Madness

One of the terrible consequences of the belief in supernatural possession by demons was the inhumane treatment in which it often resulted. An example is found in the book of Leviticus in the Bible, which many scholars believe is a compilation of laws that had been handed down orally in the Jewish community for as long as a thousand years until they were written down, perhaps about 700 b.c.e. Leviticus 20:27, in the King James version, reads, “A man or a woman that hath a familiar spirit . . . shall surely be put to death: they shall stone him with stones.” The term “familiar spirit” suggests demonic possession, and death was the response for dealing with demons in their midst.


There were exceptions to the possession theory and the inhumane treatment to which it often led. Hippocrates, who lived around 400 b.c.e. in Greece and who is regarded as the father of medicine, believed that mental illness had biological causes and could be explained by human reason through empirical study. Although Hippocrates found no cure, he did recommend that the mentally ill be treated humanely, as other ill people would be treated.


The period of Western history that is sometimes known as the Dark Ages was particularly dark for the mad. Folk belief, theology, and occult beliefs and practices of all kinds often led to terrible treatment. Although some educated and thoughtful people, even in that period, held humane views, they were in the minority regarding madness.




Eighteenth and Nineteenth Century Views

It was not until what could be considered the modern historical period, the end of the eighteenth century, that major changes took place in the treatment of the insane. Additionally, there was a change in attitudes toward the insane, in approaches to their treatment, and in beliefs regarding the causes of their strange behaviors. One of the pioneers of this new attitude was the French physician Philippe Pinel. Pinel was appointed physician-in-chief of the Bicêtre Hospital in Paris in 1792. The Bicêtre was one of a number of “asylums” that had developed in Europe and in Latin America over several hundred years to house the insane. Often started with the best of intentions, most of the asylums became hellish places of incarceration.


In the Bicêtre, patients were often chained to the walls of their cells and lacked even the most elementary amenities. Under Pinel’s guidance, the patients were freed from their confinement—popular myth has Pinel removing the patients’ shackles personally, risking death if he should prove to be wrong about the necessity for confinement, but in fact it was Pinel’s assistant Jean Baptiste Pussin who performed the act. Pinel also discarded the former treatment plan of bleeding, purging, and blistering in favor of a new model that emphasized talking to patients and addressing underlying personal and societal causes for their problems, using medical treatments such as opiates only as a last resort. Talking to his patients about their symptoms and keeping careful notes of what they said allowed Pinel to make advances in the classification of mental illnesses as well.


This change was occurring in other places at about the same time. After the death of a Quaker in Britain’s York Asylum, the local Quaker community founded the York Retreat, where neither chains nor corporal punishment were allowed. In America, Benjamin Rush, widely regarded as the father of American psychiatry, applied his version of moral treatment, which was not entirely humane, as it involved physical restraints and fear as therapeutic agents. Toward the middle of the nineteenth century, the American crusader Dorothea Dix fought for the establishment of state mental hospitals for the insane. Under the influence of Dix, thirty-two states established at least one mental hospital. Dix had been influenced by the moral model, as well as by the medical sciences, which were rapidly developing in the nineteenth century. Unfortunately, the state mental hospital often lost its character as a “retreat” for the insane.


The nineteenth century was the first time in Western history that a large number of scientists turned their attention to abnormal behavior. For example, the German psychiatrist Emil Kraepelin spent much of his life trying to develop a scientific classification system for psychopathology. Sigmund Freud attempted to develop a science of mental illness. Although many of Freud’s ideas have not withstood empirical investigation, perhaps his greatest contribution was his insistence that scientific principles apply to mental illness. He believed that abnormal behavior is not caused by supernatural forces and does not arise in a chaotic, random way, but that it can be understood as serving some psychological purpose.




Modern Medicines

Many of the medical and biological treatments for mental illness in the first half of the twentieth century were frantic attempts to deal with very serious problems—attempts made by clinicians who had few effective therapies to use. The attempt to produce convulsions (which often did seem to make people “better,” at least temporarily) was popular for a decade or two. One example was insulin shock therapy, in which convulsions were induced in mentally ill people by insulin injection. Electroshock therapy was also used; originally it was applied primarily to patients who had schizophrenia, a severe form of psychosis. Although it was not very effective with schizophrenia, it was found to be useful with patients who had depressive psychosis. Now known as electroconvulsive therapy, it continues to be used in cases of major depression or bipolar disorder that are resistant to all other treatments. Another treatment sometimes used, beginning in the 1930s, was prefrontal lobotomy. Many professionals today would point out that the use of lobotomy indicates the almost desperate search for an effective treatment for the most aggressive or the most difficult psychotic patients. As originally used, lobotomy was an imprecise slashing of the frontal lobe of the brain.


The real medical breakthrough in the treatment of psychotic patients was associated with the use of certain drugs from a chemical family known as phenothiazines. Originally used in France as tranquilizers for surgical patients, these drugs attracted the interest of psychiatrists and other mental health workers because of their potent calming effect. One drug of this group, chlorpromazine, was found to reduce or eliminate psychotic symptoms in many patients. This and similar medications came to be referred to as antipsychotic drugs. Although their mechanism of action is still not completely understood, there is no doubt that they worked wonders for many severely ill patients while causing severe side effects for others. The drugs allowed patients to function outside the hospital and often to lead normal lives. They enabled many patients to benefit from psychotherapy. The approval of chlorpromazine as an antipsychotic drug in the United States in 1955 revolutionized the treatment of many mental patients. Individuals who, prior to 1955, might have spent much of their lives in a hospital could now control their illness effectively enough to live in the community, work at a job, attend school, and be a functioning member of a family.


In 1955, the United States had approximately 559,000 patients in state mental hospitals; seventeen years later, in 1972, that population had fallen by roughly half, to approximately 276,000. Although not all of this decline can be attributed to the advent of the psychoactive drugs, they undoubtedly played a major role. The phenothiazines had finally given medicine a real tool in the battle with psychosis. One might believe that the antipsychotic drugs, combined with a contemporary version of the moral treatment, would enable society to eliminate mental illness as a major human problem. Unfortunately, good intentions sometimes go awry. The “major tranquilizers” can easily become chemical straitjackets, and those who prescribe the drugs are sometimes minimally involved in subsequent treatment. In the late 1970s, the makers of social policy saw what appeared to be the economic benefits of reducing the role of the mental hospital by discharging patients and closing some mental hospitals. However, they did not foresee that, as a consequence of deinstitutionalization, large numbers of people with untreated psychosis would end up living on the streets. The plight of the homeless in the early part of the twenty-first century continues to be a serious national problem in the United States.




Disorder and Dysfunction

The twentieth century saw the exploration of many avenues in the treatment of mental disorders. Treatments ranging from classical psychoanalysis to cognitive and humanistic therapies to the use of therapeutic drugs were applied. Psychologists examined the effects of mental disorders on many aspects of life, including cognition and personality. These disorders affect the most essential of human functions, including cognition, which has to do with the way in which the mind thinks and makes decisions. Cognition does not work in “ordinary” ways in the person with a serious mental illness, making his or her behavior very difficult for family, friends, and others to understand. Another aspect of cognition is perception. Perception has to do with the way that the mind, or brain, interprets and understands the information that comes to a person through the senses. There is a general consensus among most human beings about what they see and hear, and perhaps to a lesser extent about what they touch, taste, and smell. The victim of mental illness, however, often perceives the world in a much different way. This person may see objects or events that no one else sees, phenomena called hallucinations. The hallucinations may be visual—for example, the person may see a frightening wild animal that no one else sees—or the person may hear a voice accusing him or her of terrible crimes or behaviors that no one else hears.


A different kind of cognitive disorder is the delusion. Delusions are untrue and often strange ideas, usually growing out of the psychological needs or problems of a person who may have only tenuous contact with reality. A woman, for example, may believe that other employees are plotting to harm her in some way when, in fact, they are merely telling innocuous stories around the water cooler. Sometimes people with mental illness will be disoriented, which means that they do not know where they are in time (what year, what season, or what time of day) or in space (where they live, where they are at the present moment, or where they are going).


In addition to experiencing cognitive dysfunction that creates havoc, mentally ill persons may have emotional problems that go beyond the ordinary. For example, they may live on such an emotional “high” for weeks or months at a time that their behavior is exhausting both to themselves and to those around them. They may exhibit bizarre behavior; for example, they may talk about giving away vast amounts of money (which they do not have), or they may go without sleep for days until they drop from exhaustion. This emotional “excitement” seems to dominate their lives and is called mania. The word “maniac” comes from this terrible emotional extreme.


At the other end of the emotional spectrum is clinical depression. This does not refer to the ordinary “blues” of daily life, with all its ups and downs, but to an emotional emptiness in which the individual seems to have lost all emotional energy. The individual often seems completely apathetic. The person may feel that life is not worth living and may have anhedonia, which refers to an inability to experience pleasure of almost any kind.




Treatment Approaches

Anyone interacting with a person suffering from a severe mental disorder may come to think of him or her as being different from normal human beings. The behavior of those with mental illness is regarded, with some justification, as bizarre and unpredictable. They are often labeled with a term that sets them apart, such as “crazy” or “mad.” There are many words in the English language that have been, or are, used to describe these persons—many of them quite cruel and derogatory. Since the nineteenth century, professionals have used the term “psychotic” to denote severe mental illness or disorder. Interestingly, one translation of “psychotic” is “of a sickness of the soul,” which reflects the earlier belief regarding the etiology, or cause, of mental illness. This belief is still held by some therapists and pastoral counselors in the twenty-first century. Until the end of the twentieth century, the term “neurosis” connoted more moderate dysfunction than the term “psychosis.” However, whether neurosis is always less disabling or disturbing than psychosis has been an open question. An attempt was made to deal with this dilemma in 1980, when the third edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-III) officially dropped the term “neurosis” from its diagnostic terminology.


The contemporary approach to mental disorder, at its best, offers hope and healing to patients and their families. However, much about the etiology of mental disorder remains unknown to social scientists and physicians.


In 1963, President John F. Kennedy signed the Community Mental Health Act. Its goal was to set up centers throughout the United States offering services to mentally and emotionally disturbed citizens and their families, incorporating the best that had been learned and that would be learned from science and from medicine. Outpatient services in the community, emergency services, “partial” hospitalizations (adult day care), consultation, education, and research were among the programs supported by the act. Although imperfect, it nevertheless demonstrated how far science had come from the days when witches were burned at the stake and the possessed were stoned to death.


When one deals with mental disorder, one is dealing with human behavior—both the behavior of the individual identified as having the problem and the behavior of the community.
The response of the community is critical for the successful treatment of disorder. For example, David L. Rosenhan, in a well-known 1973 study titled “On Being Sane in Insane Places,” showed how easy it is to be labeled “crazy” and how difficult it is to get rid of the label. He demonstrated how one’s behavior is interpreted and understood on the basis of the labels that have been applied. (The “pseudopatients” in the study had been admitted to a mental hospital and given a diagnosis—a label—of schizophrenia. Consequently, even their writing of notes in a notebook was regarded as evidence of their illness.) To understand mental disorder is not merely to understand personal dysfunction or distress, but also to understand social and cultural biases of the community, from the family to the federal government. The prognosis for eventual mental and emotional health depends not only on appropriate therapy but also on the reasonable and humane response of the relevant communities.




Bibliography


American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders: DSM-5. 5th ed. Washington, DC: American Psychiatric Assn., 2013. Print.



Berrios, German E., and Roy Porter. A History of Clinical Psychiatry: The Origin and History of Psychiatric Disorders. New Brunswick: Athlone, 1999. Print.



Chung, Man Cheung, and Michael Hyland. History and Philosophy of Psychology. Chichester: Wiley, 2012. Print.



Frankl, Viktor Emil. Man’s Search for Meaning. New York: Washington Square, 2006. Print.



Freud, Sigmund. The Freud Reader. Ed. Peter Gay. 1989. Reprint. New York: Norton, 1995. Print.



Grob, Gerald N. The Mad Among Us: A History of the Care of America’s Mentally Ill. New York: Free Press, 1994. Print.



Porter, Roy. The Greatest Benefit to Mankind: A Medical History of Humanity. New York: Norton, 1999. Print.



Porter, Roy. Madness: A Brief History. New York: Oxford UP, 2003. Print.



Robinson, Daniel N. An Intellectual History of Psychology. 3d ed. Madison: U of Wisconsin P, 1995. Print.



Rosenhan, David L. “On Being Sane in Insane Places.” Science 179 (January 19, 1973): 250–258. Print.



Rudnick, Abraham. Recovery of People with Mental Illness: Philosophical and Related Perspectives. Oxford: Oxford UP, 2012. Print.



Shiraev, Eric. A History of Psychology: A Global Perspective. Thousand Oaks: Sage, 2011. Print.



Torrey, E. Fuller, and Judy Miller. The Invisible Plague: The Rise of Mental Illness from 1750 to the Present. New Brunswick: Rutgers UP, 2007. Print.



Wallace, Edwin R., and John Gach. History of Psychiatry and Medical Psychology: With an Epilogue on Psychiatry and the Mind-Body Relation. New York: Springer, 2008. Print.

What is a low-carbohydrate diet?


Overview

Mainstream groups such as the American Heart Association and the American Dietetic Association traditionally endorse a unified set of dietary guidelines for people who wish to lose weight: Eat a low-fat diet and cut calories. However, many popular weight-loss and diet books take a very different approach. The Atkins diet, the Zone diet, Protein Power, and numerous other dietary approaches reject the low-fat guideline. Instead, these methods recommend cutting down on carbohydrates. According to proponents of these theories, when a person reduces the carbohydrates in his or her diet (and, correspondingly, increases protein or fat, or both), that person will find it much easier to reduce calorie intake and may even lose weight without cutting calories.


The controversy over these contradictions has been heated. Proponents of the low-fat diet claim that low-carbohydrate (low-carb) diets are ineffective and even dangerous, while low-carb proponents say much the same about the low-fat approach. However, an article published in the Journal of the American Medical Association suggests that neither side has a strong case. Researchers concluded, essentially, that a calorie is a calorie, regardless of whether it comes from a low-carb or a low-fat diet. They did not find any consistent evidence that the low-carb diet makes it easier to lose weight than the low-fat diet, but neither did they find any consistent evidence for the reverse. Furthermore, the authors of the review did not find any compelling reason to conclude that low-carb diets are unsafe, although they did point out that the long-term safety of such diets remains unknown.


Subsequent studies confirmed these findings for a variety of low-carb diets. In some studies, one particular diet method may do better than others, but in other studies a different diet will stand out. Researchers reviewing thirteen studies comparing low-carb with low-fat/low-calorie diets in overweight participants for a minimum of six months concluded that the low-carb diets tended to perform better at reducing weight and cardiovascular disease factors for up to one year. Nevertheless, a consensus has yet to emerge among nutrition scientists as to what diet performs better overall. Many of the foregoing studies suggest that if a diet causes weight loss, cholesterol will improve regardless of the diet used to achieve that weight loss. However, the manner of change in a person’s cholesterol profile differs between the two approaches. Low-fat diets tend to improve total and LDL (low-density lipoprotein, or bad) cholesterol levels, but they tend to worsen HDL (high-density lipoprotein, or good) cholesterol and triglyceride levels; low-carb, high-fat diets have the opposite effect.


The Mediterranean diet, which is relatively high in fiber and monounsaturated fats (such as olive oil), has also attracted the attention of nutrition researchers. There is good evidence that it is as effective as low-carb diets for weight reduction and probably more effective than low-fat diets. It also seems to have the added advantage of benefiting persons with diabetes more than the other two diets.


However, if a person undertakes a low-carb (or low-fat) diet that does not cause weight loss, that person’s cholesterol profile will probably not improve significantly. In addition, there is little to no evidence that the low-carb approach improves blood sugar control except insofar as it leads to weight loss. However, there is some evidence that a low-carb diet that is high in monounsaturated fats reduces blood pressure to a slightly greater extent than does a high-carb, low-fat diet. Contrary to claims by some low-carb proponents, low-fat, high-carb diets do not seem to backfire metabolically and promote weight gain.


Based on this information, it seems that the most sensible course to take to lose weight is to experiment with different diets and determine which one cuts the most calories (and keeps them cut). If the low-carb diet approach works, one should continue with it. However, if it does not help one lose weight, it should not be continued indefinitely. Additionally, most health experts suggest that dieting according to fads is ineffective, as any immediate weight loss is typically not sustained long term. Instead, most recommend following a balanced diet in combination with regular exercise and other healthy habits as the best way to manage weight and promote overall well-being.




Risks

Any form of extreme dieting can cause serious side effects or even death. All people who intend to adopt an unconventional diet should first seek medical advice. Furthermore, people with kidney failure should not use low-carb, high-protein diets, as high protein intake can easily overstress failing kidneys. (High-protein diets are probably not harmful for people with healthy kidneys.)


In addition, people who take the blood thinner warfarin (Coumadin) may need to have their blood coagulation tested after beginning a high-protein, low-carb diet. Two case reports suggest that such diets may decrease the effectiveness of warfarin, requiring a higher dose. Conversely, a person who is already on warfarin and a high-protein, low-carb diet and then goes off the diet may need to reduce his or her warfarin dose.







Bibliography


Berglund, L., et al. “Comparison of Monounsaturated Fat with Carbohydrates as a Replacement for Saturated Fat in Subjects with a High Metabolic Risk Profile: Studies in the Fasting and Postprandial States.” American Journal of Clinical Nutrition 86 (2007): 1611-1620.



Dansinger, M. L., et al. “Comparison of the Atkins, Ornish, Weight Watchers, and Zone Diets for Weight Loss and Heart Disease Risk Reduction.” Journal of the American Medical Association 293 (2005): 43-53.



Ebbeling, C. B., et al. “Effects of a Low-Glycemic Load vs Low-Fat Diet in Obese Young Adults.” Journal of the American Medical Association 297 (2007): 2092-2102.



Gardner, C. D., et al. “Comparison of the Atkins, Zone, Ornish, and LEARN Diets for Change in Weight and Related Risk Factors Among Overweight Premenopausal Women: The A to Z Weight Loss Study.” Journal of the American Medical Association 297 (2007): 969-977.



Hession, M., et al. “Systematic Review of Randomized Controlled Trials of Low-Carbohydrate vs. Low-Fat/Low-Calorie Diets in the Management of Obesity and Its Comorbidities.” Obesity Reviews 10 (2009): 36-50.



Howard, B. V., J. E. Manson, et al. “Low-Fat Dietary Pattern and Weight Change over Seven Years: The Women’s Health Initiative Dietary Modification Trial.” Journal of the American Medical Association 295 (2006): 39-49.



Howard, B. V., L. Van Horn, et al. “Low-Fat Dietary Pattern and Risk of Cardiovascular Disease: The Women’s Health Initiative Randomized Controlled Dietary Modification Trial.” Journal of the American Medical Association 295 (2006): 655-666.



"Low-Carb Diet: Can It Help You Lose Weight?" Mayo Clinic. Mayo Foundation for Medical Education and Research, 20 Sept. 2014. Web. 27 Jan. 2016.



"Low-Carbohydrate Diets." Nutrition Source. Harvard T. H. Chan School of Public Health, 2016. Web. 27 Jan. 2016.



Luscombe-Marsh, N. D., et al. “Carbohydrate-Restricted Diets High in Either Monounsaturated Fat or Protein Are Equally Effective at Promoting Fat Loss and Improving Blood Lipids.” American Journal of Clinical Nutrition 81 (2005): 762-772.



Nordmann, A. J., et al. “Effects of Low-Carbohydrate vs Low-Fat Diets on Weight Loss and Cardiovascular Risk Factors: A Meta-analysis of Randomized Controlled Trials.” Archives of Internal Medicine 166 (2006): 285-293.



Shai, I., et al. “Weight Loss with a Low-Carbohydrate, Mediterranean, or Low-Fat Diet.” New England Journal of Medicine 359 (2008): 229-241.



Tay, J., et al. “Metabolic Effects of Weight Loss on a Very-Low-Carbohydrate Diet Compared with an Isocaloric High-Carbohydrate Diet in Abdominally Obese Subjects.” Journal of the American College of Cardiology 51 (2008): 59-67.



Wal, J. S., et al. “Moderate-Carbohydrate Low-Fat Versus Low-Carbohydrate High-Fat Meal Replacements for Weight Loss.” International Journal of Food Sciences and Nutrition 58 (2007): 321-329.

Saturday 28 December 2013

How do Common Core Standards oppress children, their imagination, and their education?

This question suggests a very strong bias against Common Core Standards, and I am aware this subject has become highly politicized.  There may very well be some value to the standards, depending on how they are used. Nevertheless, it is possible to provide some support for these characterizations of the Common Core Standards. 


In the early years, K-3, it appears that the standards are particularly oppressive and harmful to children's natural tendency to be imaginative, and if this is the case, it clearly harms their education.  Children learn best by playing in the early grades.  This is how children get to exercise their imaginations and how they learn.  Inquiry is to a large degree imagination-based. A child wants to know something, hypothesizes some possible answers, and tries them out in play. These standards do not allow for this. Furthermore, children's development is strikingly uneven in the early years, such that some four-year-olds are ready to read while some six-year-olds are just getting ready to do so.  This is not accounted for in the Common Core Standards. This is oppressive, of course, trying to force very young children into a one-size-fits-all curriculum.  According to the Washington Post, early childhood experts were not consulted in the formulation of the standards, and this is what we have reaped as a result.


Once children are reading for themselves, what I find of great concern is a focus on non-fiction instead of fiction in the language arts. I am guessing the intent was to make students more job-ready by forcing them to focus more on how to gather, analyze, and judge information.  That is a fine goal, but there is great value in teaching texts of fiction, value that teaching non-fiction can simply not replace.  Even older students need to develop imaginatively, best done with fiction, and studies have made clear that reading literary fiction socializes us and develops our empathy, both attributes all students need to develop to succeed in their professional and personal lives. 


I have read numerous criticisms of the math standards, which seem to emphasize process over product. But I suspect the primary problem is that children are being taught math in a different way than their parents were, and this makes parents confused, unhappy, and defensive. It certainly does matter in math, though, that one get the answer right, and no matter how well one understands the process, it seems to me that an elevation of process over product could easily have planes falling to earth and two sides of a bridge not meeting in the middle.


Should there be standards? Certainly. Should there be uniform federal standards? That is not so clear. In a small country like Japan, with a homogeneous population, this is not a controversial idea. But in the United States, with a very delicate balance between states' rights and the power of the federal government, with so much diversity in its people and even its geography, it is possible that this is not the best idea.  



What is bone grafting?


Indications and Procedures

Ideally, the grafting procedure involves the transfer of bone tissue from one site to another on the same individual, which is termed an autogenous graft. This method eliminates the chance of rejection, allowing the transplantation of entire functional units of tissue: arteries, veins, and even nerves, as when a toe is used to replace a finger or thumb (toe-digital transfer). Autogenous rib or fibula grafts may be utilized for the reconstruction of the face or extremities.



Bone grafts are often used in situations in which a bone fracture is not healing properly. A fracture that fails to heal in the usual time is considered to be a delayed union. Cancellous material from the bone (the spongy inner material), usually obtained from the iliac crest of the pelvis or from the ends of the long bones, is placed around the site. The fracture must then be immobilized for several months, allowing the grafted material to infiltrate and repair the fracture.




Uses and Complications

The grafting of bone tissue is carried out to correct a bone defect, to provide support tissue in the case of a severe fracture, or to encourage the growth of new bone. The source of the skeletal defect may be congenital malformation, disease, or trauma. For example, reconstruction may be necessary following cancer surgery, particularly for the jaw or bones elsewhere in the face.


If the autogenous bone supply is inadequate to fill the need, allogeneic bone grafts, the transplantation of bone from an individual other than an identical twin, may be necessary. Such foreign tissue is more likely to undergo rejection, reducing the chance of a successful procedure; the more closely the tissues of the two persons are matched, the less likely rejection will be a problem.


If the graft is able to vascularize quickly and to synthesize new tissue, the procedure is likely to be successful. The graft itself may provide structural support, or it may gradually be replaced by new bone at that site, completing the healing process.




Bibliography


Aho, O. M. "The Mechanism of Action of Induced Membranes in Bone Repair." Journal of Bone and Joint Surgery 95.7 (2013): 597–604.



Bentley, George, and Robert B. Greer, eds. Orthopaedics. 4th ed. Oxford, England: Linacre House, 1993.



Callaghan, John J., Aaron Rosenberg, and Harry E. Rubash, eds. The Adult Hip. 2d ed. Philadelphia: Lippincott Williams & Wilkins, 2007.



Doherty, Gerard M., and Lawrence W. Way, eds. Current Surgical Diagnosis and Treatment. 12th ed. New York: Lange Medical Books/McGraw-Hill, 2006.



Dowthwaite, S. A. "Comparison of Fibular and Scapular Osseous Free Flaps for Oromandibular Reconstruction: A Patient-Centered Approach to Flap Selection." JAMA Otolaryngology-Head & Neck Surgery 139.3 (2013): 285–292.



Eiff, M. Patrice, Robert L. Hatch, and Walter L. Calmbach. Fracture Management for Primary Care. 2d ed. Philadelphia: W. B. Saunders, 2003.



Lindholm, T. Sam. Advances in Skeletal Reconstruction Using Bone Morphogenetic Proteins. London: World Scientific, 2002.



Tapley, Donald F., et al., eds. The Columbia University College of Physicians and Surgeons Complete Home Medical Guide. Rev. 3d ed. New York: Crown, 1995.



Tierney, Lawrence M., Stephen J. McPhee, and Maxine A. Papadakis, eds. Current Medical Diagnosis and Treatment 2007. New York: McGraw-Hill Medical, 2006.



Wood, Debra. "Bone graft." Health Library, December 21, 2011.

Friday 27 December 2013

What is a Skinner box?


Introduction

The modern Skinner box consists of a chamber housed in a sound- and light-attenuating shell and connected to a computer through an interface. This arrangement ensures a uniform, controlled environment that minimizes extraneous and distracting stimuli during an experiment. For rats and mice, the chamber is usually equipped with one or more manipulanda (for example, a lever to be pressed or a chain to be pulled). Responses are detected electronically (through closure of a microswitch) and recorded by computer software. The rodent chamber typically has a device that dispenses food (for example, 20- or 45-milligram pellets) or a liquid (for example, water or sugar solutions) into a magazine tray located near the manipulanda, and is equipped with speakers (to present auditory stimuli) and lights. Presentations of these events are programmed using computer software. Operant chambers for rodents are manufactured with grid floors that can be set up for delivery of faradic stimulation (electric shock) for use in Pavlovian studies of fear conditioning and instrumental studies of punishment or escape and avoidance learning.











Operant chambers for pigeons generally have either a projector to display visual stimuli on a response key that is pecked by the pigeon or an LCD panel for displaying computer-generated images, and a hopper to present grain that can be accessed through an opening in the magazine tray. Head entries into the magazine tray to retrieve a reward can be detected automatically by interruption of a photobeam. Operant chambers may be modified for use with small primates.


The operant chamber is typically used for the study of changes in behavior as a result of its consequences.
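

For concreteness, the response-consequence contingency described above can be sketched in a few lines of code. What follows is a rough, hypothetical illustration rather than any manufacturer's actual control software: it polls a stand-in microswitch, timestamps each lever press (a crude cumulative record), and triggers a stand-in pellet dispenser on a fixed-ratio schedule.

    import random
    import time

    # Hypothetical sketch of a fixed-ratio (FR-5) contingency: every fifth
    # lever press triggers the pellet dispenser. The hardware calls below are
    # stand-ins, not a real interface.

    FIXED_RATIO = 5
    SESSION_SECONDS = 10

    def lever_pressed():
        """Stand-in for polling the microswitch; simulated as a rare random event."""
        return random.random() < 0.05

    def dispense_pellet():
        """Stand-in for activating the food dispenser."""
        print("pellet delivered")

    responses = []              # timestamped responses (a crude cumulative record)
    presses_since_reward = 0
    start = time.time()

    while time.time() - start < SESSION_SECONDS:
        if lever_pressed():
            responses.append(time.time() - start)
            presses_since_reward += 1
            if presses_since_reward >= FIXED_RATIO:
                dispense_pellet()
                presses_since_reward = 0
        time.sleep(0.01)        # polling interval

    print(f"{len(responses)} responses recorded in {SESSION_SECONDS} seconds")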




History

The Skinner box was one of many inventions created by the radical behaviorist B. F. Skinner. While a graduate student in psychology at Harvard, Skinner began experiments to understand the variability of behavior and to investigate the conditions that affected the strength of behavior. He constructed an experimental chamber with a feeding device that registered a rat’s contacts as it retrieved measured quantities of food. Later, Skinner equipped his box with a horizontal bar that, when pushed down by the rat, would cause a feeder to dispense a pellet into the magazine tray. In The Behavior of Organisms (1938), Skinner devotes three pages to a discussion of his use of the lever press response as an operant. The typical lever was a 1/8-inch brass rod, 6 centimeters (cm) long, mounted 8 to 10 cm above the floor and protruding 1 cm into the chamber. The rat had to exert about 10 grams of pressure to depress the lever and register a response on a cumulative recorder. Among the merits of the lever press response mentioned by Skinner were the ease and spontaneous frequency with which the response was made by rats. Another advantage he noted was that this operant required stimulus support and therefore could not occur in the absence of the actual lever.


Throughout the 1930s, Skinner used an operant chamber fitted with a lever for rats. Following his work on Project Pigeon in the early 1940s, Skinner constructed a modified operant chamber for pigeons and in subsequent work used pecking at a response key as the operant. The pigeon’s superior vision and longer life contributed to his decision to switch organisms.




Impact on Research

The Skinner box provided researchers in the 1930s with a laboratory apparatus that had several highly desirable advantages over mazes and runways. Data collection was substantially less labor- and time-intensive. Larger numbers of animals could be studied, which increased the power of statistical analysis and the likelihood of detecting lawful properties of behavior by averaging across individuals. The cumulative recorder, another of Skinner’s ingenious inventions, provided an immediate and continuous record of the rate of behavior (a measure of its strength) that could be mechanically averaged across subjects by the Summarizer, also developed by Skinner.


In addition to these practical benefits, there were significant theoretical impacts of the new apparatus. Arguably the most important was the distinction Skinner was prompted to make on the basis of his studies with the operant chamber between two kinds of responses, respondent (Pavlovian) and operant (instrumental). Whereas respondents were elicited by preceding stimuli, operants were controlled by their consequences (the reinforcement contingency).


Shaping by successive approximation was a technique developed by Skinner to train animals to perform complex actions. The operant chamber also allowed him to develop his theory of how schedules of reinforcement influenced behavior in lawful ways. It is no accident that slot machines use a variable ratio schedule of reinforcement.
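

To illustrate why responding on a variable-ratio schedule is so persistent, the following toy simulation (the schedule value and the random draw rule are arbitrary choices made for illustration) delivers reinforcement after an unpredictable number of responses whose average equals the schedule value.

    import random

    # Toy simulation of a variable-ratio (VR-10) schedule: each reinforcer is
    # earned after an unpredictable number of responses averaging ten, so the
    # next response always might pay off.

    def vr_requirement(mean_ratio=10):
        """Draw the next response requirement at random (1 to 2*mean - 1)."""
        return random.randint(1, 2 * mean_ratio - 1)

    def simulate_vr(total_responses=1000, mean_ratio=10):
        requirement = vr_requirement(mean_ratio)
        count = 0
        reinforcers = 0
        for _ in range(total_responses):
            count += 1
            if count >= requirement:
                reinforcers += 1
                count = 0
                requirement = vr_requirement(mean_ratio)
        return reinforcers

    # Roughly total_responses / mean_ratio reinforcers are delivered.
    print(simulate_vr())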


Contemporary psychology reflects the impact of the modern operant chamber in many ways. Empirical challenges to classical temporal contiguity theory (for example, blocking and contingency) emerged from studies using changes in the rate of a free operant to assess learning. Postconditioning manipulations of a response-contingent outcome have illuminated the processes involved in instrumental learning. Behavior modification programs use the principles of operant conditioning.


Effects of Skinner’s work with the operant chamber have extended beyond psychology to behavioral economics, behavioral pharmacology, and personalized instruction. Skinner did not raise his daughter Deborah in an operant chamber, contrary to urban legend, but he did build the baby tender, which was designed to provide a safe and comfortable environment for an infant. Skinner’s Walden Two (1948) describes a fictional utopian community that raised its children using the principles of operant conditioning.




Bibliography


Ferster, Charles B. “The Use of the Free Operant in the Analysis of Behavior.” Psychological Bulletin 50.4 (1953): 263–74. Print.



Lattal, Kennon A. “JEAB at 50: Coevolution of Research and Technology.” Journal of the Experimental Analysis of Behavior 89.1 (2008): 129–35. Print.



O’Donohue, William, and Kyle E. Ferguson. The Psychology of B. F. Skinner. Thousand Oaks: Sage, 2001. Print.



Rutherford, Alexandra. Beyond the Box: B. F. Skinner's Technology of Behavior from Laboratory to Life, 1950s–1970s. Toronto: U of Toronto P, 2009. Print.



Schacter, Daniel L., Daniel T. Gilbert, and Daniel M. Wegner. “B. F. Skinner: The Role of Reinforcement and Punishment.” Psychology. 3rd ed. New York: Worth, 2014. 278–80. Print.



Vargas, Julie. “Biographical Information.” B. F. Skinner Foundation. B. F. Skinner Foundation, n.d. Web. 25 June 2014.

What are retroviruses?



Biology of Retroviruses

Retroviruses are members of the viral family Retroviridae. They are enveloped, positive sense (+) RNA viruses about 100 nanometers in diameter that replicate within the host’s cytoplasm through a double-stranded DNA intermediate that is integrated into the host genome. In addition to the +RNA, there is a cellular tRNA hydrogen bonded to the +RNA that serves as a primer for reverse transcriptase.



The viral RNA genome, its associated nucleoprotein, reverse transcriptase, and integrase are surrounded by a protein capsid. Immediately external to the capsid is the matrix protein. The outer layer of the retrovirus is a lipid bilayer envelope derived from the host’s plasma membrane that is acquired as the virus emerges from the host cell. Within the envelope are two glycoproteins that are encoded by the virus genome and serve as plasma membrane attachment sites during entry into the cell.


The retrovirus genome consists of two 7 kilobase to 11 kilobase +RNA molecules that code for only a few proteins, including gag, which codes for the matrix, capsid, and nucleoprotein; pol, which codes for reverse transcriptase, RNAse, integrase, and a protease; and env, which codes for the envelope glycoproteins.


The retrovirus binds to plasma membrane receptors via the viral envelope glycoproteins. When the retrovirus enters the cell, the viral RNA is released along with its reverse transcriptase. A double-stranded DNA is synthesized from the +RNA using viral reverse transcriptase. Integrase catalyzes the incorporation of the double-stranded DNA molecule into the host genome. When integrated, the viral DNA is referred to as a provirus and replicates with the host genome. Host RNA polymerase transcribes the viral genes, making copies of the viral genome and mRNA molecules that can be translated into viral proteins. Viral RNA and proteins are assembled into new viral particles that emerge from the plasma membrane by budding.
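

The sequence-level logic of that replication cycle can be illustrated with a toy sketch. The RNA fragment below is made up, and details such as the tRNA primer, RNase H activity, and strand transfers are omitted; the point is simply that the (+) RNA is copied into a complementary DNA strand, which is then copied again to yield the double-stranded DNA that integrase inserts into the host genome.

    # Toy illustration of reverse transcription: (+) RNA -> complementary DNA
    # (first strand) -> double-stranded DNA. The sequence is invented.

    RNA_TO_DNA = {"A": "T", "U": "A", "G": "C", "C": "G"}
    DNA_TO_DNA = {"A": "T", "T": "A", "G": "C", "C": "G"}

    def reverse_transcribe(plus_rna):
        """First-strand synthesis: the DNA complement of the (+) RNA."""
        return "".join(RNA_TO_DNA[base] for base in plus_rna)

    def second_strand(cdna):
        """Second-strand synthesis: completes the double-stranded DNA."""
        return "".join(DNA_TO_DNA[base] for base in cdna)

    plus_rna = "AUGGCUCAGUUUGA"            # invented fragment of a (+) RNA genome
    minus_dna = reverse_transcribe(plus_rna)
    plus_dna = second_strand(minus_dna)

    print(minus_dna)   # TACCGAGTCAAACT
    print(plus_dna)    # ATGGCTCAGTTTGA (same as the RNA, with T in place of U)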


Some retroviruses such as Rous sarcoma virus (RSV), feline leukemia virus (FLV), and mouse mammary tumor virus (MMTV) can induce tumors in their host species. More than twenty-five cancer-causing (oncogenic) retroviruses have been isolated. The retrovirus gains oncogenic potential when it inadvertently acquires a eukaryotic gene during infection. Although the eukaryotic gene may not be oncogenic when first acquired, after several generations it may mutate or otherwise become altered, transforming it into one that is oncogenic. Retroviruses such as FLV that do not carry an oncogene can still transform cells, either by integrating within a normal gene and disrupting its function or by integrating next to it so that the neighboring gene comes under the control of the viral promoter, resulting in gene overexpression and cellular proliferation.




Perspective and Prospects

The study of retroviruses dates to 1910 with the work of Peyton Rous, who discovered that certain sarcomas in chickens are caused by an agent later identified as a virus. The virus was subsequently named Rous sarcoma virus. In 1970, the laboratories of Howard Temin and David Baltimore independently discovered that certain RNA viruses have an enzyme, now known as reverse transcriptase, that permits the viruses to reverse transcribe their RNA genomes into double-stranded DNA. In the early 1970s, the laboratory of J. Michael Bishop and Harold Varmus demonstrated that Rous sarcoma virus has a gene, now known as src, responsible for transforming normal cells into tumor cells. Uninfected cells, including human cells, have a normal src gene that is related to the viral src gene. At some point in the past, an RSV infected a chicken and incorporated the host src gene into its own genome. The src gene acquired by the virus became altered over time so that it now causes cancer when an RSV infects a chicken cell.


There are many examples of retroviruses, including human T-cell leukemia virus (HTLV), the first pathogenic human retrovirus, discovered in 1980 by Bernard J. Poiesz, Robert Gallo, and their colleagues at the National Institutes of Health and by Mitsuaki Yoshida in Japan. Human immunodeficiency virus (HIV), which causes acquired immunodeficiency syndrome (AIDS), is also a retrovirus; it was discovered in 1983 by Luc Montagnier, Françoise Barré-Sinoussi, and their colleagues at the Pasteur Institute in France.


Since reverse transcriptase does not have the proofreading activities associated with DNA polymerase, retroviruses mutate and evolve more rapidly than DNA viruses, making the development of drugs and vaccines difficult.


Recombinant retroviruses are often used as vectors for genetic engineering. Retroviruses that are modified by removing the genes that make them harmful and replacing them with normal eukaryotic genes can be used to deliver a normal copy of a gene to a defective cell. The DNA copy of the recombinant retrovirus can integrate into the host genome and genetically modify the cell.




Bibliography


Cullen, Bryan R. Human Retroviruses. New York: Oxford University Press, 1993.



Dudley, Jaquelin. Retroviruses and Insights into Cancer. New York: Springer, 2011.



Gallo, Robert. Virus Hunting: AIDS, Cancer, and the Human Retrovirus—A Story of Scientific Discovery. New York: Basic Books, 1991.



Gallo, Robert C., Dominique Stehelin, and Oliviero E. Varnier. Retroviruses and Human Pathology. Totowa, N.J.: Humana Press, 1986.



Holmes, Edward C. The Evolution and Emergence of RNA Viruses. New York: Oxford University Press, 2009.



Kurth, Reinhard, and Norbert Bannert, eds. Retroviruses: Molecular Biology, Genomics, and Pathogenesis. Norfolk, England: Caister Academic Press, 2010.



Singh, Sunit K., and Daniel Ruzek, eds. Neuroviral Infections: RNA Viruses and Retroviruses. Boca Raton, Fla.: CRC Press, 2013.

What is cystoscopy?


Indications and Procedures


Cystoscopy is indicated in patients for whom visual inspection of the urethra, bladder mucosa, and ureteral orifices is likely to yield a diagnosis. This includes patients who have hematuria (blood in the urine), incontinence, and irritative bladder symptoms for whom all obvious causes have been ruled out. In addition, patients who have undergone difficult abdominal or pelvic surgery may receive cystoscopy to verify that the bladder and the ureters, the tubes that carry urine from the kidneys to the bladder, are intact.



Cystoscopy is performed with the patient in a supine position with legs in stirrups. The cystoscope consists of a small metal tube, through which distension medium is passed. The light source, which enables visualization, also passes through this tube. The cystoscope can be attached to a video screen, or the clinician can visualize the urethral and bladder mucosas directly through the cystoscope. The cystoscope may be angled at 0 degrees, 30 degrees, or 70 degrees to facilitate visualization of different parts of the bladder. The procedure involves passing the cystoscope into the urethra and then the bladder under direct visualization. Cystoscopy is performed in a systematic fashion to ensure complete coverage of the urethral and bladder mucosas. Abnormal areas can be biopsied. The ureteral orifices can be visualized using the cystoscope, and the presence of urine flow from the orifices confirms patency (lack of obstruction) of the ureters.




Uses and Complications

Cystoscopy can be used to diagnose a variety of benign and malignant conditions of the lower urinary tract. Among the benign conditions commonly found through cystoscopy are endometriosis of the bladder, interstitial cystitis, foreign bodies, and anatomic abnormalities such as fistulas (communicating tracts between the bladder and another organ such as the bowels) or diverticula (small outpouchings of the bladder or urethra). By filling the bladder with distension fluid during cystoscopy, it is also possible to perform limited bladder function tests. Malignant conditions that may be found on cystoscopy include bladder cancers and cancers of adjacent pelvic organs, such as the cervix, which may invade the bladder.


Cystoscopy is an extremely safe procedure. Theoretical risks include the possibility of bladder injury or perforation from the cystoscope.




Bibliography


Doherty, Gerard M., and Lawrence W. Way, eds. Current Surgical Diagnosis and Treatment. 13th ed. New York: Lange Medical Books/McGraw-Hill, 2010.



Miller, Brigitte E. An Atlas of Sigmoidoscopy and Cystoscopy. Boca Raton, Fla.: Parthenon, 2002.



Randall, Brian. "Cystoscopy." Health Library, April 17, 2013.



Rock, John A., and Howard W. Jones III, eds. Te Linde’s Operative Gynecology. 10th ed. Philadelphia: Lippincott Williams & Wilkins, 2011.



Stenchever, Morton A., et al. Comprehensive Gynecology. 5th ed. St. Louis, Mo.: Mosby/Elsevier, 2007.



Vorvick, Linda J. "Cystoscopy." MedlinePlus, June 18, 2012.

How can a 0.5 molal solution be less concentrated than a 0.5 molar solution?

The answer lies in the units being used. "Molar" refers to molarity, a unit of measurement that describes how many moles of a solu...
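

For illustration, here is a small worked calculation for a 0.5 molal aqueous sodium chloride solution, using an assumed solution density of 1.02 g/mL (a value chosen only for this example). Because the liter of solution includes the dissolved salt, the resulting molarity comes out slightly below 0.5 M.

    # Worked example: 0.5 molal NaCl in water versus 0.5 molar.
    # Assumption (for illustration only): solution density of 1.02 g/mL.

    MOLAR_MASS_NACL = 58.44        # g/mol
    moles_solute = 0.5             # 0.5 molal = 0.5 mol per kg of solvent
    mass_water_g = 1000.0          # 1 kg of water (the solvent)
    density_g_per_ml = 1.02        # assumed solution density

    mass_solution_g = mass_water_g + moles_solute * MOLAR_MASS_NACL
    volume_solution_l = (mass_solution_g / density_g_per_ml) / 1000.0

    molality = moles_solute / (mass_water_g / 1000.0)   # mol per kg of solvent
    molarity = moles_solute / volume_solution_l         # mol per liter of solution

    print(f"molality = {molality:.3f} m, molarity = {molarity:.3f} M")
    # -> molality = 0.500 m, molarity = 0.496 M, so the 0.5 molal solution
    #    is slightly less concentrated than a 0.5 molar one.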