Saturday 31 October 2015

What are protease inhibitors?


Definition

Protease inhibitors (PIs) are a class of drugs that treat or prevent infection by viruses. They belong to a larger therapeutic category, antiretroviral drugs, and are used primarily to treat human immunodeficiency virus (HIV) infection and hepatitis C.







Pharmacology

Viruses that are blocked by protease inhibitors are prevented from maturing, infecting, and replicating. Protease inhibitors act at a very late stage of viral replication: they block the viral protease enzyme that cuts newly made polyproteins into the functional pieces needed to assemble mature virus particles, so the particles that are produced remain immature and noninfectious.




Risk Factors

Protease inhibitors have dramatically improved the life expectancy of persons
with HIV and hepatitis C, but PIs have a tendency to interact with other drugs,
causing undesirable side effects. There is also a risk of drug-resistant mutated
viruses. Persons who take PIs may experience kidney
stones, nausea, diarrhea, and abnormal sensations around the
mouth. Most of these side effects are not serious and tend to resolve rapidly.


Persons with acquired immunodeficiency syndrome (AIDS) who are taking PIs risk liver dysfunction, particularly if they are also infected with hepatitis B or hepatitis C. Excess bleeding and blood clots
are rare side effects. Persons taking PIs also report side effects such as high
blood sugar, abdominal obesity, high triglycerides, fatty tissue disorders,
insulin resistance, sexual dysfunction, and pancreatitis.




Treatment and Therapy

To reduce the risks of PI side effects and drug
resistance, clinicians often implement combinations of drugs.
For example, clemizole increases the effectiveness of PIs, enabling them to be
used in smaller doses. Physicians have also had some success in treating persons
with drug combinations that do not involve PIs. However, the research-based
recommendation on this practice is to be cautious about removing a person from PI
therapy if he or she has already done well on it. Preliminary studies are underway
to see whether PIs might be used to treat cancer.




Impact

Pharmaceutical researchers developed the first protease inhibitors between 1989
and 1994. Additional drugs are under investigation, and a series of new PIs have
been brought to market for treatment. PIs are the largest class of drugs in the
fight against HIV infection. In virologic, immunologic, and clinical terms, PIs offer patients a length and quality of life that were previously unattainable.




Bibliography


Carr, Andrew, et al. “A Syndrome of Peripheral Lipodystrophy, Hyperlipidemia, and Insulin Resistance in Patients Receiving HIV Protease Inhibitors.” AIDS 12 (1998): F51-F58.



Centers for Disease Control and Prevention. “Hepatitis C.” Available at http://www.cdc.gov/hepatitis/hcv.



John, Mina, et al. “Hepatitis C Virus-Associated Hepatitis Following Treatment of HIV-Infected Patients with HIV Protease Inhibitors: An Immune Restoration Disease?” AIDS 12 (1998): 2289-2293.



Kilby, J. Michael. “Switching HIV Therapies: Competing Host and Viral Factors.” The Lancet 375 (2010): 352.



Moatti, Jean-Paul, et al., eds. AIDS in Europe: New Challenges for the Social Sciences. New York: Routledge, 2000.



Villani, Paola, et al. “Antiretrovirals: Simultaneous Determination of Five Protease Inhibitors and Three Nonnucleoside Transcriptase Inhibitors in Human Plasma.” Therapeutic Drug Monitoring 23 (2001): 380-388.



Wit, Ferdinand W. N. M., Joep M. A. Lange, and Paul A. Volberding. “New Drug Development: The Need for New Antiretroviral Agents.” In Global HIV/AIDS Medicine, edited by Paul A. Volberding et al. Philadelphia: Saunders/Elsevier, 2008.

What literary device is in the following excerpt from Shakespeare's Romeo and Juliet? "And shake the yoke of inauspicious stars / from this...


Near the end of Shakespeare's Romeo and Juliet, Romeo believes that his wife, Juliet, is dead, and he prepares to drink poison so that he can die and be with her.  Bad luck has plagued them from the beginning of their relationship: they are from rival families, Romeo killed Juliet's cousin Tybalt, Juliet is betrothed to another man even after she has secretly wed Romeo, Romeo is banished for the slaying of Tybalt, and so on.  Standing by her body, Romeo speaks aloud his disbelief that she could remain so beautiful even in death and his intention never to leave her side again.  He says, "Oh, here / Will I set up my everlasting rest, / And shake the yoke of inauspicious stars / From this world-wearied flesh" (5.3.118-121).  He means that he is preparing to die here, beside her, and, in doing so, will remove the "yoke" of unluckiness and misfortune that has afflicted him.  He compares, via metaphor, his bad luck to a yoke, such as would be worn by an ox to allow him to drag a heavy load, and himself to a brute animal, like an ox, whose only purpose seems to be to pull such loads.

Friday 30 October 2015

How does the poet convey the feeling that the Creator was unafraid to handle the dreaded tiger once its heart started beating?


William Blake's entire poem is filled with rhetorical questions that show awe toward the creative force behind the powerful and dangerous tiger. The first question asks "what immortal hand or eye could frame" such an impressive beast. In the second stanza the speaker asks what hand could "dare seize the fire." This suggests that the maker of the tiger had to be more bold and daring than the beast itself—he had to be able to grasp the beast without being harmed himself.


The questions that arise after the "heart began to beat" are somewhat ambiguous. They are "What dread hand? & what dread feet?" The point of these questions seems to be: "What dreadful hands and feet could handle the beating of such a powerful heart?" By asking such a rhetorical question, the poet acknowledges that the hands and feet of the Creator are, indeed, capable of controlling the thing he has made.


The lines that indicate the Creator can handle his creation are the lines that say he can "seize the fire" and that he himself has "dread hands" and "dread feet." Although the poem presents these qualities of the Creator in question form, the answers to the questions are self-evident. The poem acknowledges the superiority of the Creator over the creation even as it seems to question the wisdom of creating a beast as dreadful as the tiger.

Thursday 29 October 2015

One amu is equal to the mass of one ___.

One amu (or atomic mass unit) is approximately equal to the mass of one nucleon (either a proton or a neutron), each of which has a mass of roughly 1.67 x 10^-27 kg.


Since hydrogen has one proton and no neutrons, the mass of one hydrogen atom can also be thought of as about 1 amu. The current definition states that one amu is equal to 1/12 of the mass of one carbon-12 atom in its ground state.


The amu is commonly used to state the masses of atoms and molecules. It is also written as the dalton (Da), the form commonly used to report the masses of proteins. For example, hydrogen has a mass of about 1 Da and sodium has a mass of about 23 Da.
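
As a quick check of the definition, here is a sketch using only the rounded molar mass of carbon-12 and Avogadro's number:

$$
1\ \mathrm{amu} \;=\; \frac{m(^{12}\mathrm{C})}{12} \;=\; \frac{1}{12}\cdot\frac{12\ \mathrm{g/mol}}{N_A} \;=\; \frac{10^{-3}\ \mathrm{kg/mol}}{6.022\times 10^{23}\ \mathrm{mol}^{-1}} \;\approx\; 1.66\times 10^{-27}\ \mathrm{kg},
$$

which agrees with the approximate nucleon mass of 1.67 x 10^-27 kg quoted above.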


Hope this helps.  


What is a low-glycemic diet?


Overview

Mainstream organizations, such as the American Heart Association and the American Dietetic Association, endorse a unified set of guidelines for the optimum diet. According to these organizations, the majority of calories in the daily diet should come from carbohydrates (55 to 60 percent), fat should provide no more than 30 percent of total calories, and protein should be kept to 10 to 15 percent.


However, many popular diet books turn the standard diet upside down. The Atkins
diet, the Zone
diet, Protein Power, and other alternative dietary approaches
reject carbohydrates and advocate increased consumption of fat or
protein, or both. According to theory, the low-carbohydrate (carb) approach aids
in weight loss (and provides a variety of other health benefits) by reducing the
body’s production of insulin.


The low-glycemic-index (low-GI) diet splits the difference between the low-carb
and low-fat approaches. It maintains the low-carb diet’s focus on insulin, but
it suggests choosing certain carbohydrates over others rather than restricting
carbohydrate intake.


Evidence suggests that carbohydrates are not created equal. Some carbohydrates,
such as pure glucose, are absorbed quickly and create a rapid, strong
rise in both blood sugar and insulin. However, other carbohydrates (such as brown
rice) are absorbed much more slowly and produce only a modest blood sugar and
insulin response. According to proponents of the low-GI diet, eating foods in the
latter category will enhance weight loss and improve health. However, despite some
promising theory, there is no solid evidence that low-GI diets enhance weight
loss.


Besides weight loss, preliminary evidence suggests that the low-GI approach
(or, even better, a related method called low-glycemic load, which is discussed
later in this article) may help prevent heart disease. The low-GI approach has
also shown promise for treating and possibly preventing diabetes.




What Is the Glycemic Index?

The precise measurement of the glucose-stimulating effect of a food is called its glycemic index. The lower a food’s GI, the less potent its effects on blood sugar (and, therefore, on insulin).


The GI of glucose is arbitrarily set at 100. The ratings of other foods are determined as follows. First, researchers calculate a portion size for the food to supply 50 grams (g) of carbohydrates. Next, they give that amount of the food to a minimum of eight to ten people and measure the blood sugar response. (By using a group of people rather than one person, researchers can ensure that the idiosyncrasies of one person do not skew the results.) On another occasion, researchers also give each participant an equivalent amount of glucose and perform the same measurements. The GI of a food is then determined by comparing the two outcomes. For example, if a food causes one-half of the blood sugar rise of glucose, it is assigned a GI of 50; if it causes one-quarter of the rise, it is assigned a GI of 25. The lower the GI, the better.
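
The following is a minimal sketch of that averaging procedure, using hypothetical response values rather than data from any actual study; in practice each value would be a measured blood sugar response (such as the area under the glucose curve) after a portion of food supplying 50 g of carbohydrate.

```python
# Minimal sketch of the glycemic index calculation described above.
# Response values are hypothetical; in practice each number would be the
# measured blood sugar response after a 50 g carbohydrate portion.

def glycemic_index(food_responses, glucose_responses):
    """Average each participant's ratio of food response to glucose response,
    scaled so that pure glucose scores 100."""
    ratios = [food / glucose * 100
              for food, glucose in zip(food_responses, glucose_responses)]
    return sum(ratios) / len(ratios)

# Hypothetical blood sugar responses for ten participants, each tested with
# the food on one occasion and with pure glucose on another.
food_responses = [110, 95, 130, 105, 120, 98, 112, 125, 101, 118]
glucose_responses = [210, 190, 260, 200, 235, 205, 220, 250, 195, 230]

print(round(glycemic_index(food_responses, glucose_responses)))  # about 51
```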


When scientists first began to determine the GI of foods, some of the results drew skepticism. It did not surprise anyone when jellybeans turned out to have a high GI of 80 (after all, jellybeans are mostly sugar). Also, it was not unexpected that kidney beans have a low GI of 27 because they are notoriously difficult to digest. However, when baked potatoes showed an index of 93, researchers were stunned. This rating is higher than that of almost all other foods, including ice cream (61), sweet potatoes (54), and white bread (70). Based on this finding, low-GI diets recommend that people avoid potatoes.


There are other surprises hidden in the GI tables. For example, fructose (the sweetener in honey) has an extraordinarily low GI of 23, lower than brown rice and almost three times lower than white sugar. Candy bars also tend to have a relatively good (low) GI, presumably because their fat content makes them digest slowly.


It is difficult to predict the GI of a food without specifically testing it, but there are some general factors that can be recognized. Fiber content tends to reduce the GI of a food, presumably by slowing down digestion. For this reason, whole grains usually have a lower GI score than refined, processed grains. Fat content also reduces GI score. Simple carbohydrates (such as sugar) often have a higher GI score than complex carbohydrates (such as brown rice).


However, there are numerous exceptions to these rules. Factors such as the acid content of food, the size of the food particles, and the precise mixture of fats, proteins, and carbohydrates can substantially change the GI measurement. For a measurement like the GI to be meaningful, it has to be generally reproducible among people. In other words, if a potato has a GI of 93 in one person, it should have nearly the same GI when given to another person. Science suggests that the GI passes this test. The GI of individual foods is fairly constant among people, and even mixed meals have a fairly predictable effect according to most studies.


Thus, the GI of a food really does indicate its propensity to raise insulin levels. Whether a diet based on the index will aid in weight loss, however, is a different issue.




Following a Low-Glycemic-Index Diet

Following a low-GI diet is fairly easy. Basically, one should follow the typical diet endorsed by authorities such as the American Dietetic Association, but in doing so, one should choose carbohydrates that fall toward the lower end of the GI scale. Popular books such as The Glucose Revolution (1999) give a great deal of information on how to make these choices.




Do Low-Glycemic-Index Diets Aid in Weight Loss?

There are two primary theoretical reasons given why low-GI diets should help reduce weight. The most prominent reason given in books on the low-GI approach involves insulin levels. Basically, these books show that low-GI diets reduce insulin release, and then take almost for granted the idea that reduced insulin levels should aid in weight loss. However, there is little justification for the second part of this argument. Excess weight is known to lead to elevated insulin levels, but there is little meaningful evidence for the reverse: that reducing insulin levels will help remove excess weight.


Books on the low-GI diet give another reason for using their approach. They state that low-GI foods fill a person up more quickly than do high-GI foods and that they also keep one feeling full for longer. However, there is more evidence against this belief than for it.



The satiety index. A measurement called the satiety index assigns a numerical quantity to the filling quality of a food. These numbers are determined by feeding people fixed caloric amounts of those foods and then determining how soon they get hungry again and how much they eat at subsequent meals. The process is similar to the methods used to establish the GI.


The results of these measurements do not corroborate the expectations of low-GI diet proponents. As it happens, foods with the worst (highest) GI are often the most satiating, exactly the reverse of what low-GI-theory proponents would say. For example, the satiety index claims that potatoes are among the most satiating of foods. However, the GI analysis gave potatoes a bad rating. According to the low-GI theory, one should feel hunger pangs shortly after eating a big baked potato. In real life, this does not happen.


There are numerous other contradictions between research findings and the low-GI/high-satiety theory. For example, one study found no difference in satiety between fructose (fruit sugar) and glucose when taken as part of a mixed meal, even though fructose has a GI more than four times lower than glucose.


Some studies do seem to suggest that certain low-GI foods are more filling than high-GI foods. However, in these studies the bulkiness and lack of palatability of the low-GI foods chosen may have played a more important role than the foods’ GI. Thus, the satiety argument for low-GI diets does not appear to hold up to scrutiny.




Is the Glycemic Index the Right Measurement?

There is another problem with the low-GI approach: It is probably the wrong way to assess the insulin-related effects of food. The GI measures blood sugar response per gram of carbohydrate contained in a food, not per gram of the food. This leads to some odd numbers. For example, a parsnip has a GI of 98, almost as high as pure sugar. If taken at face value, this figure suggests that dieters should avoid parsnips. However, parsnips are mostly indigestible fiber, and a person would have to eat a few bushels to trigger a major glucose and insulin response.


The reason for the high number is that the GI rates the effects per gram of carbohydrate rather than per gram of total parsnip, and the sugar present in minute amounts in a parsnip itself is highly absorbable. The high GI rating of parsnips is thus extremely misleading. Books such as The Glucose Revolution address issues like this on a case-by-case basis by arguing, for example, that one can consider most vegetables “free foods” regardless of their GI. In fact, the same considerations apply to all foods and distort the meaningfulness of the scale as a whole.


A different measurement, the glycemic load (GL), takes this into account. The GL is derived by multiplying the GI by the percent carbohydrate content of a food. In other words, it measures the glucose/insulin response per gram of food rather than per gram of carbohydrate in that food. Using this system, the GL of a parsnip is 10, while glucose has a relative load of 100. Also, the GL of a typical serving of potato is only 27.
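
A minimal sketch of that conversion, using the glucose and parsnip figures from the text (the carbohydrate percentage assigned to the parsnip is an illustrative assumption):

```python
# Glycemic load as defined above: the GI scaled by the fraction of the food
# that is absorbable carbohydrate, so the score reflects a gram of food
# rather than a gram of carbohydrate. Carbohydrate percentages are
# illustrative assumptions.

def glycemic_load(gi, carbohydrate_percent):
    return gi * carbohydrate_percent / 100.0

print(glycemic_load(100, 100))  # pure glucose: GL = 100.0
print(glycemic_load(98, 10))    # parsnip, roughly 10% carbohydrate: GL = 9.8
```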




Scientific Evidence

Theory is one thing and practice is another. It is certainly possible that making sure to focus on low-GI or low-GL foods will help a person lose weight, even if the theoretical justification for the idea is weak. However, there is only preliminary positive evidence to support this possibility, and the largest and longest-term trial failed to find benefit.


In one of the positive studies, 107 overweight adolescents were divided into two groups: a low-GI group and a low-fat group. The low-GI group was counseled to follow a diet consisting of 45 to 50 percent carbohydrates (preferably low-GI carbohydrates), 20 to 25 percent protein, and 30 to 35 percent fat. Calorie restriction was not emphasized. The low-fat group received instructions for a standard low-fat, low-calorie diet divided into 55 to 60 percent carbohydrates, 15 to 20 percent protein, and 25 to 30 percent fat. In about four months, participants on the low-GI diet lost about 4.5 pounds, while those on the standard diet lost just under 3 pounds.


This study does not say as much about the low-GI approach as it might seem. Perhaps the most obvious problem is that the low-GI diet used here was also a high-protein diet. It is possible that high-protein diets might help weight loss regardless of the GI of the foods consumed. (In fact, this is precisely what proponents of high-protein diets claim.)


Another problem is that participants were not assigned to the two groups randomly. Rather, researchers consciously picked what group each participant should join. This is a major flaw because it introduces the possibility of intentional or unintentional bias. It is quite possible, for example, that researchers placed adolescents with greater self-motivation into the low-GI group, based on an unconscious desire to see results from the study. This is not merely an academic problem, and modern medical studies routinely use randomization to circumvent it.


Finally, researchers made no effort to determine how well participants followed their diets. It might be that those in the low-fat diet group simply did not follow the rules as well as those in the low-GI diet group because the rules were more challenging. Despite these many flaws, the study results are still promising. Losing weight without deliberately cutting calories is potentially a great thing.


In another study, thirty overweight women with excessively high insulin levels were put on either a normal low-calorie diet or one that supplied the same amount of calories but used low-GI foods. The results during twelve weeks showed that women following the low-GI diet lost several pounds more than those following the normal diet.


In yet another small study, this one involving overweight adolescents, a conventional reduced-calorie diet was compared with a low-GL diet that did not have any calorie restrictions. The results showed that simply by consuming low-GI foods, without regard for calories, the participants on the low-GL diet were able to lose as much weight as, or more than, those on the low-calorie diet.


However, in a large and long-term study, an eighteen-month trial of 203 Brazilian women, the use of a low-GI diet failed to prove more effective than a high-GI diet. Additionally, a smaller study failed to find a low-GI diet more effective for weight loss than a low-fat diet except in people with high levels of circulating insulin.




Possible Health Benefits

There is some evidence that a low-GI diet (or, even better, a low-GL diet) might help prevent cancer and heart disease. The low-GI approach has also shown promise for preventing or treating diabetes.



Heart disease prevention. One large observational study evaluated
the diets of more than 75,000 women and found that those women whose diets
provided a lower GL had a lower incidence of heart
disease. In this study, 75,521 women age thirty-eight to
sixty-three years were followed for ten years. Each filled out detailed
questionnaires regarding her diet. Using this data, researchers calculated the
average GL of each participant. The results showed that women who consumed a diet
with a high GL were more likely to experience heart disease than those who
consumed a diet of low GL.


Other observational studies suggest that the consumption of foods with lower GL may improve cholesterol profile: specifically, reduced triglyceride levels and higher HDL (good cholesterol) levels. These effects, in turn, might lead to decreased risk of heart disease. However, other observational studies have found little or no relationship between heart disease and GI or GL.


These contradictory results are not surprising, but even if the observational study results were entirely consistent, it would not prove the case for a low-GI approach. Conclusions based on observational studies are notoriously unreliable because of the possible presence of unidentified confounding factors. For example, because there is an approximate correlation between fiber in the diet and GL, it is possible that benefits, when seen, are from fiber intake instead. Factors such as this one may easily obscure the effects of the factor under study, leading to contradictory or misleading results.


Intervention trials (studies in which researchers actually intervene in participants’ lives) are more reliable, and some have been conducted to evaluate the low-GI diet. For example, in the foregoing large weight-loss trial, the low-GI diet failed to prove more effective than a high-GI diet in terms of weight loss. The results did suggest, though, that a low-GI diet can improve cholesterol profile. However, this study was not primarily designed to look at effects on cholesterol.


A study that primarily focused on this outcome followed thirty people with high lipid levels for three months. During the second month, low-GI foods were substituted for high-GI foods, while other nutrients were kept similar. Improvements were seen in total cholesterol, LDL (bad) cholesterol, and triglycerides, but not in HDL. A close analysis of the results showed that only participants who had high triglycerides at the beginning of the study showed benefit. Another controlled trial found that a high carb, low-GL diet optimized lipid profile compared with several other diets. However, another study found that low-fat and low-GI diets were about equally effective in terms of profile.


Another approach to the issue involves analysis of effects on insulin
resistance. Evidence suggests that increased resistance of
the body to its own insulin raises the risk of heart disease. One study found that
the use of a low-GI diet versus a high-GI diet improved the body’s sensitivity to
insulin in women at risk for heart disease. Similar results were seen in a group
of people with severe heart disease and in healthy people. While these results are
preliminary, taken together they do suggest that consumption of low-GI foods might
have a beneficial effect on heart disease risk.



Low-GL diet and diabetes. Two large observational studies, one
involving men and the other involving women, found that diets with lower GLs were
associated with a lower rate of diabetes. For example, one trial followed 65,173
women for six years. Women whose diets had a high GL had a 47 percent increased
risk of developing diabetes compared with those whose diets had the lowest GL.
Fiber content of diet also makes a difference. People who
consumed a diet that was both low in fiber and high in GL had a 250 percent
increased incidence of diabetes.


However, as always, the results of these observational studies have to be taken with caution. It is quite possible that unrecognized factors are responsible for the results seen. For example, magnesium deficiency is widespread and may contribute to the development of diabetes; whole grains contain magnesium and are also low-GI foods. Therefore, it could be that the benefits seen in these studies are actually caused by increased magnesium intake in the low-GI group, rather than by effects on blood sugar and insulin.


Furthermore, one observational study found no connection between the glycemic values of foods and the incidence of diabetes. Another observational study did find a correlation between carbohydrate intake (especially pastries) and the onset of diabetes, but no consistent relationship with GI. Other studies have found no relationship between sugar consumption (a high-GI food) and diabetes onset.


Thus, reducing dietary GL may help prevent diabetes, but this is not known for sure. Whether or not low-GI diets can prevent diabetes, going on a low-GI diet might improve blood sugar control for people who already have diabetes. However, the benefits seem to be small at most.




Other Uses and Applications

Weak evidence hints that a low-GI diet might help prevent macular
degeneration. Although there are theoretical reasons to
believe that the use of white sugar and other high-GI foods might promote colon
cancer, a large observational study failed to find any association between colon
cancer rates and diets high in sugar, carbohydrates, or GL.


It has been proposed that low-GI foods may enhance sports performance. One study involving a simulated sixty-four-kilometer bicycle race found no performance differences between the use of honey (low GI) and the use of dextrose (high GI) as a carbohydrate source. However, another study did find benefit with the consumption of a low-GI snack before endurance exercise. Finally, one study compared a low-GL diet with a high-carb diet in people with acne and found evidence that the low-GL diet reduced acne symptoms.




Conclusion

The evidence that a low-GI diet will help one lose weight is not impressive. Its theoretical foundation is weak, and it appears to be using the wrong method of ranking foods regarding their effects on insulin. Conversely, however, there is no reason to believe a low-GI diet causes harm.


While the most popular low-GI-diet books, such as The Glucose Revolution and Sugar Busters (1995), recommend a diet that is generally reasonable and should be safe, it is easy to design some fairly extreme low-GI diets. For example, a diet consisting of nothing but lard would be a very, very low-GI diet, because the GI of lard is 0. Although it no longer seems that saturated fat is as harmful as it was once thought to be, a pure lard diet is probably not a good idea. Any diet book or other source that recommends achieving a low GI by consuming an extreme diet should be approached with caution.




Bibliography


Chiu, C. J., et al. “Dietary Glycemic Index and Carbohydrate in Relation to Early Age-Related Macular Degeneration.” American Journal of Clinical Nutrition 83 (2006): 880-886.



Clapp, J. F., and B. Lopez. “Low- Versus High-Glycemic Index Diets in Women: Effects on Caloric Requirement, Substrate Utilization, and Insulin Sensitivity.” Metabolic Syndrome and Related Disorders 5 (2007): 231-242.



Ebbeling, C. B., et al. “Effects of a Low-Glycemic Load vs Low-Fat Diet in Obese Young Adults.” Journal of the American Medical Association 297 (2007): 2092-2102.



Noakes, M., et al. “The Effect of a Low Glycaemic Index (GI) Ingredient Substituted for a High GI Ingredient in Two Complete Meals on Blood Glucose and Insulin Levels, Satiety, and Energy Intake in Healthy Lean Women.” Asia Pacific Journal of Clinical Nutrition 14, suppl. (2005): S45.



Pittas, A. G., et al. “The Effects of the Dietary Glycemic Load on Type 2 Diabetes Risk Factors During Weight Loss.” Obesity 14 (2006): 2200-2209.



Smith, R. N., et al. “A Low-Glycemic-Load Diet Improves Symptoms in Acne Vulgaris Patients.” American Journal of Clinical Nutrition 86 (2007): 107-115.



Tavani, A., et al. “Carbohydrates, Dietary Glycaemic Load, and Glycaemic Index, and Risk of Acute Myocardial Infarction.” Heart 89 (2003): 722-726.



Wu, C. L., and C. Williams. “A Low Glycemic Index Meal Before Exercise Improves Endurance Running Capacity in Men.” International Journal of Sport Nutrition and Exercise Metabolism 16 (2006): 510-527.

What are hematomas?


Causes and Symptoms

A hematoma is caused by blood leakage through the wall of an artery, capillary, or vein; subsequent pooling in surrounding tissue; and resultant coagulation in a semisolid mass. This leakage may occur spontaneously due to fragility of a vessel wall or due to an aneurysm. Leakage may also occur posttrauma due to events ranging from a violent sneeze to bodily injury. Symptoms usually consist of localized edema, inflammation, and pain.



A hematoma can occur anywhere along the circulatory system pathway and may be given a descriptive label indicative of its location. Superficial hematomas include aural, intramuscular, scalp, septal, subcutaneous, and subungual hematomas. An aural (ear) hematoma is a blood mass that accumulates between the ear cartilage and the perichondrium (connective tissue) as a result of blunt force trauma to the external ear. Symptoms include ecchymosis (discoloration to the area) and swelling. An intramuscular hematoma is a blood mass that accumulates within a muscle, often in the forearm or lower leg, as a result of blunt force trauma that leaves skin intact but damages muscle fibers and connective tissue. Symptoms include ecchymosis and swelling. A scalp hematoma is a blood mass that accumulates in the skin and muscle layer covering the skull as a result of head injury. Although not usually serious, it nonetheless could be indicative of bleeding within the skull. A septal hematoma is a blood mass that accumulates
in the nasal septum, usually in conjunction with a broken nose or injury to nearby soft tissue. Symptoms include nasal congestion, septal swelling, and resultant difficulty breathing. A subcutaneous hematoma is a blood mass that accumulates under the skin as a result of damage to superficial blood vessels. It occurs more frequently to those who take anticoagulants. Symptoms include ecchymosis and swelling. A subungual hematoma is a blood mass that accumulates under the nail plate of a finger or toe. Symptoms include pain due to pressure buildup in the nail bed.


Internal hematomas include cranial, fracture site, and intraabdominal hematomas. Cranial hematomas can be epidural, subdural, or intracerebral; all are potentially life threatening. An epidural hematoma (also called extradural hematoma) is a blood mass that accumulates in the epidural space (inside the skull but outside the dura mater, the membrane that covers the brain), often as a result of damage to the middle meningeal artery, located in the temple area, following skull fracture. Symptoms include asthenia (weakness), confusion, dizziness, drowsiness, nausea and vomiting, severe headache, unmatched pupil size, and often intermittent loss of consciousness.


An acute, subacute, or chronic subdural hematoma
(also called subdural hemorrhage) is a blood mass that accumulates in the subdural space (inside the dura mater but outside the brain tissue) as a result of damage to cerebral veins, most often due to head injury. Symptoms in adults include asthenia, balance difficulties, confusion or lethargy, headache, nausea and vomiting, seizures, speech difficulties, and visual disturbances. Symptoms in infants include bulging fontanelles, high-pitched crying, increased head circumference, seizures, and vomiting. Symptom onset is more gradual than for epidural hematoma due to a slower leakage rate for venous blood compared to arterial blood and a larger space for blood to fill before pressure buildup is sufficient to affect brain function. For an acute-onset hematoma, which is associated with the highest rate of death or permanent injury, symptoms usually occur immediately after severe head injury. For a subacute-onset hematoma, symptoms may occur days or weeks after injury occurrence. For a chronic-onset hematoma, symptoms may occur weeks after a less severe head injury.


An intracerebral hematoma (also called intraparenchymal hematoma) is a blood mass that accumulates in the brain tissue as a result of aneurysm, anticoagulant use, arteriovenous malformation, autoimmune diseases, bleeding disorders, brain tumor, drug abuse (amphetamines, cocaine), encephalitis (central nervous system
infection), or uncontrolled chronic hypertension. It may be accompanied by shear injury—tearing of the axon portion of the cranial nerves located in the substantia alba (white matter of the brain)—resulting in severe brain damage due to loss of ability to transmit neural impulses from the brain to the body. A fracture site hematoma is a blood mass that accumulates near a fracture, especially that of the femur (thigh), humerus (upper arm), or pelvis, all of which can result in significant internal hemorrhage. An intraabdominal hematoma is a blood mass that accumulates somewhere within the abdomen—in any of the abdominal organs, in any abdominal component of the gastrointestinal tract, in the peritoneum, or in the retroperitoneal space.




Treatment and Therapy

Initial treatment of superficial hematomas consists of rest-ice-compression-elevation (RICE) of the affected area, if possible, as well as oral administration of nonsteroidal anti-inflammatory drugs (NSAIDs) or other analgesics, if pain management is required.


For an aural hematoma, more aggressive treatment may be necessary due to the potential compromise of blood supply and subsequent cartilage atrophy resulting in a deformity of the pinna (outer ear) that is commonly known as cauliflower ear. Treatment consists of lancing and draining the hematoma followed by application of a compression bandage to enable reperfusion of the cartilage and to prevent hematoma reformation. This bandage is usually removed after three to seven days.


For intramuscular hematoma, more aggressive treatment may be necessary due to the potential compromise of blood supply and subsequent damage to the muscle, connective tissues, and nerves, a condition known as compartment syndrome. It most commonly occurs in muscles of the forearm and lower leg. Treatment consists of surgical intervention to drain the hematoma.


For septal hematoma, more aggressive treatment may be necessary due to the potential compromise of blood supply and subsequent cartilage atrophy resulting in perforation of the septum. Treatment consists of lancing and draining the hematoma followed by application of a gauze sponge or cotton ball in the nasal cavity.


For subungual hematoma, more aggressive treatment may be necessary to relieve pressure between the nail plate and the nail bed. Treatment consists of trephination (hole boring) of the nail plate and drainage of the hematoma.


Cranial hematomas (epidural, subdural, and intracerebral) are potentially life threatening and require immediate medical attention at the onset of signs or symptoms due to the risk of irreversible brain damage and possible death. Administration of anticonvulsant medication may be necessary to control or prevent seizures, and administration of corticosteroid medication may be necessary to reduce cerebral edema (brain swelling).


For epidural hematoma, diagnosis of increased intracranial pressure and location of hematoma are confirmed via computed tomography (CT) scan. Treatment consists of prompt surgical intervention to drain or remove the hematoma. For acute, subacute, and chronic subdural hematoma, diagnosis and location of the hematoma are confirmed via CT scan or magnetic resonance imaging (MRI) scan. Increased risk factors include advanced age, alcohol abuse, and daily use of anticoagulants, anti-inflammatory medication, or aspirin. Treatment consists of prompt surgical intervention to drain or remove the hematoma.


For intracerebral hematoma, diagnosis of increased intracranial pressure and location of hematoma are confirmed via CT scan or MRI scan. Treatment may consist of surgical intervention to drain or remove the hematoma.




Perspective and Prospects

Although intrinsic factors—such as aneurysm, arteriovenous malformations, autoimmune diseases, bleeding disorders, brain tumor, encephalitis, or uncontrolled chronic hypertension—and extrinsic factors—such as anticoagulant use, alcohol abuse, or drug abuse (amphetamines, cocaine)—may increase the likelihood of hematoma formation, the most common cause is trauma. Minor traumas, ranging from a violent sneeze to a mild sports injury, as well as major traumas, including car accidents and severe falls, all have the potential to cause hematoma formation.


While the size, type, and severity of hematomas vary according to location and causality, a common complication is infection risk as a result of the colonization of bacteria in stagnant blood. Attenuation or avoidance of complications may be achieved by early diagnosis and, if warranted, prompt medical treatment.


As is the case with all undesirable medical conditions, prevention is preferable to treatment. Although trauma prevention may not always be possible, risk may be minimized via lifestyle choices and proper use of safety equipment.




Bibliography


Beers, M. H., ed. The Merck Manual of Medical Information. 2d ed. Whitehouse Station, N.J.: Merck, 2003.



Bluestone, C. D., S. E. Stool, C. M. Alper, et al. Pediatric Otolaryngology. 4th ed. Philadelphia: W. B. Saunders, 2002.



DeBerardino, Thomas, and Mark D. Miller. Blunt Trauma Injuries in the Athlete. Philadelphia: Elsevier, 2013.



Hockberger, R. S., R. M. Walls, and J. A. Marx. Rosen’s Emergency Medicine: Concepts and Clinical Practice. 6th ed. Philadelphia: Mosby/Elsevier, 2006.



Lawton, Michael T. Seven Aneurysms: Tenets and Techniques for Clipping. New York: Thieme Medical Publishers, 2011.



Neff, Deanna M. "Subdural Hematoma." Health Library, November 26, 2012.



Raimondi, Anthony J., Maurice Choux, and Concezio Di Rocco. Head Injuries in the Newborn and Infant. New York: Springer-Verlag, 2013.



Salazar, Misael F. Garza, and Araceli Ruiz Mendoza. Hematomas: Types, Treatments and Health Risks. New York: Nova Biomedical Publishers, 2012.

Wednesday 28 October 2015

What are burns and scalds?


Causes and Symptoms

Burns are injuries to tissues caused by contact with dry heat (fire), moist heat (steam or a hot liquid, also called scalds), chemicals, electricity, lightning, or radiation. The word “burn” comes from the Middle English brinnen or brennen (to burn) and from the Old English byrnan (to be on fire) combined with baernan (to set afire). As of 2014, the American Burn Association reported that nearly a half million people receive medical attention for burns each year, with over thirty thousand hospitalized in burn centers. According to the World Health Organization in 2015, globally, an estimated 265,000 deaths occur annually due to fires; fire-related deaths alone rank among the fifteen leading causes of death among individuals aged five to twenty-nine years (these figures do not include burns from scalding, electricity, chemicals, or radiation). Burns are most common in children and older people and in low-income countries; many burns are caused by accidents in the home that are preventable.



The depth of the injury is proportional to the intensity of the heat of the causative agent and the duration of exposure. Burns can be classified according to the agent causing the damage. Some examples of burns according to this classification are brush burns, caused by friction of a rapidly moving object against the skin or ground into the skin; chemical burns, caused by exposure to a caustic chemical; flash burns, caused by very brief exposure to intense radiant heat (the typical burn of an atomic explosion); radiation burns, caused by exposure to radium, x-rays, or atomic energy; and respiratory burns, caused by inhalation of steam or explosive gases.


Burns can also be classified as major or severe (involving more than 20 percent of the body and any deep burn of the hands, face, feet, or perineum), moderate (a burn that requires hospitalization but not specialized care, as with burns covering 5 to 20 percent of the body but without deep burns of hands, face, feet, or perineum), or minor (a superficial burn involving less than 5 percent of the body that can be treated without hospitalization).


While many domestic burns are minor and insignificant, more severe burns and scalds can prove to be dangerous. The main danger for a burn patient is the shock that arises as a result of loss of fluid from the circulating blood at the site of the burn. This loss of fluid leads to a fall in the volume of the circulating blood in the area. The maintenance of an adequate blood volume is essential to life, and the body attempts to compensate for this temporary loss by withdrawing fluid from the uninjured areas of the body into the circulation. In the first forty-eight hours after a severe burn is received, fluid from the blood vessels, salt, and protein pass into the burned area, causing swelling, blisters, low blood pressure, and very low urine output. The body loses fluids, proteins, and salt, and the potassium level is raised. Such low fluid levels are followed by a shift of fluid in the opposite direction, resulting in excess urine, high blood volume, and low concentration of blood electrolytes. If carried too far, this condition begins to affect the viability of the body cells. As a result, essential body cells such as those of the liver and kidneys begin to suffer, eventually causing the liver and kidneys to cease proper function. Liver and renal failure are revealed by the development of jaundice and the appearance of albumin in the urine. In addition, the circulation begins to fail, with a resultant lack of oxygen in the tissues. The victim becomes cyanosed, restless, and collapsed, and in some cases death ensues. Other possible problems related to burns include collapse of the circulatory system, shutdown of the digestive and excretory systems, shock, pneumonia, and stress ulcers.


In addition, particularly with severe burns, there is a strong risk of infection. Severe burns can leave a large area of raw skin surface exposed and extremely vulnerable to any microorganisms. The infection of extensive burns may cause fatal complications if effective antibiotic treatment is not given. The combination of shock and infection can often be life-threatening unless expert treatment is immediately available.


The immediate outcome of a burn is more determined by its extent (amount of body area affected) than by its depth (layers of skin affected). The “rule of nines” is used to assess the extent of a burn in relation to the surface of a body. The head and each of the arms cover 9 percent of the body surface; the front of the body, the back, and each leg cover 18 percent; and the crotch accounts for the remaining 1 percent. The greater the extent of a burn, the more seriously ill the victim will become from loss of fluid. The depth of the burn (unless it is very great) is mainly of importance when the question arises as to how much surgical treatment, including skin grafting, will be required. An improvement over the rule of nines in the evaluation of the seriousness of burns is the Berkow formula, which takes into account the age of the patient.
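
Expressed as a small calculation, the rule of nines works as sketched below; the example regions are hypothetical, real assessments use charts, and the Berkow formula adjusts these percentages for age.

```python
# Rule-of-nines estimate of the extent of a burn (adult percentages, as
# described above). The regions marked as burned in the example are
# illustrative, not a clinical protocol.

RULE_OF_NINES = {
    "head": 9,
    "left arm": 9,
    "right arm": 9,
    "front of trunk": 18,
    "back of trunk": 18,
    "left leg": 18,
    "right leg": 18,
    "perineum": 1,   # the remaining 1 percent
}

def burned_surface_percent(burned_regions):
    """Sum the rule-of-nines percentages for the body regions involved."""
    return sum(RULE_OF_NINES[region] for region in burned_regions)

# Example: burns involving the head and one arm.
print(burned_surface_percent(["head", "left arm"]))  # 18 percent of body surface
```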


A burn caused by chemicals differs from a burn caused by fire only in that the outcome of the chemical burn is usually more favorable, since the chemical destroys the bacteria on the affected part and reduces the chance of infection. Severe burns can also be caused by contact with electric wires. As current meets the resistance in the skin, high temperatures are reached and burning of the victim takes place. Exposure to 220 volts burns only the skin, but higher voltage can cause severe underlying damage to any tissue in its path. Electrical burns normally cause minimal external skin damage, but they can cause serious heart damage and require evaluation by a physician. Explosions and the action of acids and other chemicals also cause burns. Severe and extensive fire burns are most frequently produced by the clothes catching fire.




Treatment and Therapy

General treatment of a burn injury includes pain relief, the control of infection, the maintenance of the balance of fluids and electrolytes in the system, and a good diet. A high-protein diet with supplemental vitamins is prescribed to aid in the repair of damaged tissue. The specific treatment depends on the severity of the burn. Major burns should be treated in a specialized treatment facility, while minor burns can be treated without hospitalization. A moderate burn normally requires hospitalization but not specialized care.


In the case of minor burns or scalds, all that may be necessary is to hold the body part in cool water until the pain is relieved, as cooling is one of the most effective ways of relieving the pain of a burn. However, the application of ice to a burn may cause more harm to the skin, as ice will restrict blood flow to the affected area and slow the healing process. If the burn involves the distal part of a limb—for example, the hand and forearm—one of the most effective ways of relieving the pain is to immerse the burned part in lukewarm water and add cold water until the pain disappears. If the pain does not return when the water warms up, the burn can be dressed in the usual way (a piece of sterile gauze covered by cotton with a bandage on top). The part should be kept at rest and the dressing dry and clean until healing takes place. Blisters can be pierced with a sterile needle, but the skin should not be cut away. No ointment or oil should be applied, and an antiseptic is not always necessary. Even a superficial burn can be serious if it covers as much as two-thirds of the body area. On a child, such burns are dangerous on an even smaller area of the skin, and special attention should be given to the patient.


In the case of moderate burns or scalds, it is advisable to use antiseptics (such as chlorhexidine, bacitracin, and neomycin), and the patient should be taken to a doctor. Treatment may consist of applying a dressing with a suitable antibiotic or an antiseptic or pain-relieving cream and covering the burn with a dressing sealed at the end. This dressing is left on for four to five days and removed if there is evidence of infection or if pain occurs.


For severe burns and scalds, the patient must go to the hospital. Unless there is a need for resuscitation, or attention to other injuries, nothing should be done on the spot except to make sure that the patient is comfortable and to cover the burn with a sterile cloth. Clothing should be removed from the burned area only if this does not traumatize the skin further. Burned clothing should be sent to the burn center, as it may help determine the chemicals and other substances that either caused or entered the wound. Once the victim is in the hospital, the first thing to check is the extent of the burn and whether a transfusion is necessary. If the burn covers more than 9 percent of the body surface, a transfusion is required. It is essential to prevent infection or to bring it under control. A high-protein diet with ample fluids is needed to compensate for the protein that has been lost along with the fluid from the circulation. The process of healing is slow and tedious, including careful nursing, physiotherapy, and occupational therapy. The length of hospital stay can vary from a few days in some cases to many weeks in the case of severe and extensive burns.


In some cases, depending on the extent of the burn, it will be necessary to consider skin grafting, in which a graft of skin from one part of the body (or from another individual) is implanted over another part. Skin grafting is done soon after the initial injury. The donor skin is best taken from the patient, but when this is not possible, the skin of a matched donor can be used. Prior to grafting, or in some cases as a substitute for it, the burn may be covered with either cadaver or pig skin to keep it moist and free from exogenous bacterial infection. Artificial skin holds great promise for treating severe burns.


In the case of chemical burns, treatment can be specific and depends on the chemical causing the burn. For example, phenol or lysol can be washed off promptly, while acid or alkali burns should be neutralized by washing with sodium bicarbonate or acetic acid, respectively, or with a buffer solution for either one. In many cases, flushing with water to remove the chemical is the first method of action.


Victims who have inhaled smoke may develop swelling and inflammation of the lungs, and they may need special care for burns of the eyes. People who have suffered an electrical burn may suffer from shock and may require artificial respiration, which should begin as soon as contact with the current has been broken.




Perspective and Prospects

Burns have been traditionally classified according to degree. The French surgeon Guillaume Dupuytren divided burns into six degrees, according to their depth. A first-degree burn is one in which there is simply redness; it may be painful for a day or two. This level of burn is normally seen in cases of extended exposure to sunlight or x-rays. A second-degree burn affects the first and second layers of skin. There is great redness, and the surface is raised up in blisters accompanied by much pain. Healing normally occurs without a scar. A third-degree burn affects all skin layers. The epidermis is entirely peeled off, and the true skin below is destroyed in part, so as to expose the endings of the sensory nerves. This is a very painful form of burn, and a scar follows on healing. With a fourth-degree burn, the entire skin of an area is destroyed with its nerves, so that there is less pain than with a third-degree burn. A scar forms and later contracts, and it may produce great deformity in the affected area. A fifth-degree burn will burn the muscles as well, and still greater deformity follows. In a sixth-degree burn, a whole limb is charred, and it separates as in gangrene.


In current practice, burns are referred to as superficial (or partial thickness), in which there is sufficient skin tissue left to ensure regrowth of skin over the burned site, and deep (or full thickness), in which the skin is totally destroyed and grafting will be necessary. It is difficult to determine the depth of a wound at first glance, but any burn involving more than 15 percent of the body surface is considered serious. As far as the ultimate outcome is concerned, the main factor is the extent of the burn—the greater the extent, the worse the outlook.


Unfortunately, burns are most common in children and older people, those for whom the outcome is usually the worst. Many burns are caused by accidents in the home, which are usually preventable. In fact, among the primary causes of deaths by burns, house fires account for the majority of the incidents. Safety measures in the home and on the job are extremely important in the prevention of burns. Severe and extensive burns most frequently occur when the clothes catch fire. This rule applies especially to cotton garments, which burn quickly. Particular care should always be exercised with electric fires and kettles or pots of boiling water in houses where small children or elderly people are present.


In the United States, most severely burned patients are given emergency care in a local hospital and are then transferred to a large burn center for intensive long-term care. The kind of environment provided in special burn units in large medical centers varies, but all have as their main objective avoiding contamination of the wound, as the major cause of death in burn victims is infection. Some special units use isolation techniques and elaborate laminar air-flow systems to maintain an environment that is as free of microorganisms as possible.


The patient who has suffered some disfigurement from burns will have additional emotional problems in adjusting to a new body image. Burn therapy can be long and tedious for the patient and for family members. They will need emotional and psychological support as they work their way through the many problems created by the physical and emotional trauma of a major wound.




Bibliography


"Burns." Health Library, September 30, 2012.



"Burns." World Health Organization. WHO, n.d. Web. 12 Feb. 2015.



"Burns: First Aid." Mayo Clinic, February 1, 2012.



"Fire Deaths and Injuries: Fact Sheet." Centers for Disease Control and Prevention, October 11, 2011.



Glanze, Walter D., Kenneth N. Anderson, and Lois E. Anderson, eds. The Signet Mosby Medical Encyclopedia. New York: Signet, 1996.



Jeschke, Marc G. Burn Care and Treatment: A Practical Guide. Medford: Springer, 2013.



Landau, Sidney I., ed. International Dictionary of Medicine and Biology. New York: John Wiley & Sons, 1986.



Leikin, Jerrold B., and Martin S. Lipsky, eds. American Medical Association Complete Medical Encyclopedia. New York: Random House Reference, 2003.



Marcovitch, Harvey, ed. Black’s Medical Dictionary. 42d ed. Lanham, Md.: Scarecrow Press, 2010.



Miller, Benjamin F., Claire Brackman Keane, and Marie T. O’Toole. Miller-Keane Encyclopedia and Dictionary of Medicine, Nursing, and Allied Health. Rev. 7th ed. Philadelphia: Saunders/Elsevier, 2005.



Sheridan, Robert Leo. Burns: A Practical Approach to Immediate Treatment and Long-Term Care. London: Manson Publishing, 2012.

Tuesday 27 October 2015

What is drug testing?


Drug Screening Basics

Many organizations testing for drugs use a drug panel that tests urine for multiple drugs. Drug screens often test for the metabolites of substances, the chemical products left in the body after a drug is processed. A positive urine test is followed by a confirming test for the specific drug. For example, an initial marijuana screen detects tetrahydrocannabinol (THC) and its metabolites. If the first test is positive for THC, then a confirming test for delta-9-tetrahydrocannabinol-9-carboxylic acid, the principal THC metabolite, is performed, confirming (or ruling out) marijuana use. The opiate screening will test for opiate metabolites, and a confirming test will test specifically for the opiate drugs codeine or morphine.




Government and private employers are major users of drug testing methods, and many private firms have created their own drug testing programs. The American Management Association estimated in 2004 that nearly two-thirds (62 percent) of employers in the United States use drug testing.


Drug screens are used for six primary reasons. The first reason is to prescreen potential job candidates for drug use. The second reason is to randomly test workers as a deterrent to drug use on the job and to identify safety hazards in persons in high-risk occupations. Third, tests may be performed if there is a reasonable suspicion that a specific person is abusing drugs. Fourth, tests may be ordered subsequent to an employee accident, and fifth, upon the return to work of an employee involved in an earlier accident. Sixth, employers will use a follow-up test to recheck an employee following a positive drug screen. Employers will often use toxicology screens to test an employee before a promotion or to test an employee during that employee’s annual physical examination.


One key reason for the implementation of drug testing in the twentieth century was to deter workplace drug use, and screening appears to be working. According to the US Substance Abuse and Mental Health Services Administration (SAMHSA), the rate of positive drug tests at worksites around the United States plummeted from 13.6 percent in 1998 to 3.6 percent in 2009. However, according to Quest Diagnostics' analysis of more than ten million workplace drug test results, the positivity rate for urine drug tests (about 6.6 million of the tests analyzed) rose in 2013 for the first time in years and rose again, to 4.7 percent, in 2014.


Any positive screening test is typically followed by a more sophisticated confirmatory test, one that uses gas chromatography/mass spectrometry (GC/MS) to identify specific substances. If the confirmatory test is also positive, the result is forwarded to a physician knowledgeable about drug abuse for further review.




Law Enforcement Drug Screening

Law enforcement officials test for drugs among persons involved in automobile and other vehicle accidents to determine if alcohol or drugs may have been a factor in causing the accident. Law enforcement often uses drug screens to test persons who have been arrested to determine if that person was under the influence of drugs at the time the alleged crime was committed. Incarcerated persons are subject to drug screening, as are those who are on probation or parole. A positive drug screen for a person on probation or parole often means he or she will be sent back to jail or prison, because a positive drug screen is usually a violation of the terms of probation or parole.


Drug courts throughout the United States manage the cases of persons convicted of drug offenses, and drug testing is an integral part of their programs. More than one thousand such drug courts are estimated to be in operation.


Drug testing also may be sought by law enforcement because some drugs are known to escalate the risk of violence. As a result, a person in custody for committing a violent act may be tested for recent drug use, especially for a drug, such as methamphetamine or cocaine, that is linked to violent behavior. According to SAMHSA research (2002–2004) on adolescents age twelve to seventeen years, 69.3 percent of those who had abused methamphetamine in the past year had engaged in violent behavior during that year, as had 61.8 percent of those who had abused cocaine and 61.4 percent of those who had abused hallucinogens. Nearly one-half (49.7 percent) of adolescents who had used marijuana had engaged in violent behavior.




Other Reasons for Drug Testing

Competitive athletes are barred from using drugs such as anabolic steroids, which are known to increase muscle mass and endurance but which also have serious health effects. Athletic organizations therefore use drug screening to ensure that high school, college, and professional athletes remain drug free, both because drugs affect athletic performance and because they can provide an unfair advantage in competition.


In 2015, it was announced that the Electronic Sports League, one of the largest leagues in competitive video gaming, would be forming guidelines to institute a testing program for players involved in these e-sports. This decision came after it was revealed that some professional gamers had taken drugs such as Adderall to sharpen their focus during competitive gaming tournaments.


A 2002 ruling by the US Supreme Court (Pottawatomie County v. Earls) allows schools to administer random drug tests to students beyond athletes, including those involved in other competitive extracurricular activities. Schools that adopt such programs believe that random testing deters students from abusing drugs and that it allows for the identification of students with drug problems who would benefit from counseling.


Pain management doctors may test their patients to ensure they are taking only the drugs that are prescribed to them and not taking any additional drugs of abuse. Some people who are prescribed drugs such as opiates, amphetamines, and benzodiazepines divert (mostly sell) their drugs to others. In this case, a negative test for the prescribed drug is problematic, indicating that the person is not taking the prescribed drug. Pain management doctors also want their patients to take only scheduled drugs that are prescribed; doctors are alerted if the test reveals the presence of nonprescribed drugs of abuse. Some pain management doctors require patients to sign a contract that they are willing to be tested randomly for drugs. If the patient refuses to sign the contract, the doctor will not provide treatment.


Persons admitted to emergency rooms with an altered mental state are often tested for drugs to help medical professionals determine whether the behavior is likely caused by drug abuse or by mental illness. One complicating factor is that some mentally ill persons also abuse drugs. It should be noted that few mentally ill persons are violent; however, research indicates that the abuse of alcohol and drugs escalates the risk of violence among people with mental illness.


Substance abuse treatment facilities may require drug screening to ensure that patients in the facility are not using drugs that they have acquired illicitly, that is, drugs brought to the facility by visitors or others. Child protection workers may request drug screening to verify that former addicts who had abused or neglected their children in the past are no longer using drugs of abuse. Sometimes young children are tested for drugs, particularly if it is known that a parent has abused drugs in the past. Investigators may test the child’s hair or urine for traces of drug use. If drugs are found in the body fluids or hair of a child, the child may be removed from his or her home. Additionally, small children sometimes ingest drugs carelessly left out by drug abusers; these children are at risk of cardiovascular or neurological symptoms, even death.




Pros and Cons of Testing Methods

Although urine is the most common fluid screened, organizations also use other screening methods, each with advantages and disadvantages. For example, because hair grows about one-half inch per month, hair testing can detect drug use from several months before the test. In contrast, urine screens generally detect only drugs used within the previous hours or days, with some exceptions. As a result, if recent drug abuse is the concern, then urine or blood testing is preferable; if evidence of long-term drug abuse is sought, then hair testing may be preferable.
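
Because hair grows about one-half inch per month, the span of time a hair sample can cover is a matter of simple arithmetic; the short sketch below illustrates that calculation only and is not a validated forensic formula.

```python
# Rough detection-window estimate for a hair sample, using the growth
# rate cited in the text (about 0.5 inch per month). Illustrative only.

GROWTH_RATE_INCHES_PER_MONTH = 0.5

def hair_window_months(segment_length_inches):
    """Approximate number of months of history covered by a hair segment."""
    return segment_length_inches / GROWTH_RATE_INCHES_PER_MONTH

# A 1.5-inch segment cut close to the scalp covers roughly the prior 3 months.
print(hair_window_months(1.5))  # 3.0
```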


Another factor in determining what test to use is the speed at which the test results are needed. Urine and blood test results usually can be obtained rapidly. In contrast, hair must be sent to a specialized laboratory for analysis. Oral fluid testing, often referred to as saliva testing, can be done on-site, although this test is not as commonly used as urine or blood testing.


The reliability of a given test is another factor. For example, hair testing can be affected by hair bleaches and dyes used by the person providing the sample; up to 60 percent of drugs may be removed through such processes. The drugs least affected by cosmetic hair treatments include cannabis (marijuana and hashish) and opiates. Urine testing and blood testing are highly reliable, although false positives can occur with urine testing. In addition, collection of urine for testing is often not directly observed, as it is with testing of blood, hair, or oral fluids. As a result, some people deliberately attempt to alter urine test results by, for example, submitting the urine of another (likely drug-free) person.


The invasiveness of a given test is sometimes a consideration in choosing the type of test. Saliva testing is considered noninvasive because it requires only that the person spit several times into a special container. Hair testing is noninvasive because it requires cutting only a few strands of hair close to the scalp. Urine testing is not considered invasive, although observed or monitored collection can be embarrassing. Conversely, blood testing is the most invasive form of drug testing because it requires penetrating the skin with a needle to collect the sample.




Drug Positives and False Positives

In most cases, a person who tests positive for a drug has in fact used the drug. Positive drug screens may lead to job termination or to not being hired. However, some people may test positive for a drug, especially on a urine screen, even if they have not used the drug in question.


It is also true that a person will test positive for a drug that he or she has been prescribed. A person being treated for attention-deficit hyperactivity disorder, for example, will test positive for amphetamine use. While tests can determine the presence of, in this case, amphetamine, they cannot determine whether the person is using the drug lawfully.


Some medications may cause a false positive on a urine screen. For example, nonsteroidal anti-inflammatory drugs (NSAIDs) may give a false positive for marijuana use. NSAIDs may also give a false positive urine result for barbiturates, a type of controlled drug included in some drug screens. The use of a Vicks inhaler may give a false positive urine result for amphetamine. Sertraline (Zoloft), a commonly used antidepressant, may give a false positive for benzodiazepines. Other antidepressants, such as bupropion, desipramine, and trazodone, may give a false positive result for amphetamine use.
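
The medication and false-positive pairs named in this paragraph can be gathered into a simple lookup table; the sketch below records only those pairs and is illustrative rather than an exhaustive clinical reference.

```python
# Lookup table built only from the false-positive examples given in the
# text; it is illustrative, not a clinical reference.
FALSE_POSITIVE_RISKS = {
    "NSAIDs": ["marijuana (THC)", "barbiturates"],
    "Vicks inhaler": ["amphetamine"],
    "sertraline (Zoloft)": ["benzodiazepines"],
    "bupropion": ["amphetamine"],
    "desipramine": ["amphetamine"],
    "trazodone": ["amphetamine"],
}

def possible_false_positives(medications):
    """Return screen results that could be false positives given the
    medications a person reports taking."""
    flagged = set()
    for med in medications:
        flagged.update(FALSE_POSITIVE_RISKS.get(med, []))
    return sorted(flagged)

print(possible_false_positives(["sertraline (Zoloft)", "NSAIDs"]))
# ['barbiturates', 'benzodiazepines', 'marijuana (THC)']
```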




Bibliography


American Management Association. “Medical Testing 2004 Survey.” American Management Association. Amer. Management Assn., 3 Sept. 2003. Web. 11 Mar. 2011.



Heller, Jacob. “Toxicology Screen.” MedlinePlus. US Natl. Library of Medicine, 12 Feb. 2009. Web. 8 Mar. 2011.



"Illicit Drug Positivity Rate Increases Sharply in Workplace Testing." Quest Diagnostics. Quest Diagnostics, 9 June 2015. Web. 29 Oct. 2015.



Moller, Monique, Joey Gareri, and Gideon Koren. “A Review of Substance Abuse Monitoring in a Social Services Context: A Primer for Child Protection Workers.” Canadian Journal of Clinical Pharmacology 17.1 (2010): 177–93. Web. 5 Mar. 2012.



Nasky, Kevin M., George L. Cowan, and Douglas R. Knittel. “False-Positive Urine Screening for Benzodiazepines: An Association with Sertraline? A Two-Year Retrospective Chart Analysis.” Psychiatry 6.7 (2009): 36–39. Print.



Reynolds, Lawrence A. “Historical Aspects of Drugs-of-Abuse Testing in the United States.” Drugs of Abuse: Body Fluid Testing. Eds. Raphael C. Wong and Harley Y. Tse. Totowa: Humana, 2010. Print.



US Department of Health and Human Services. “Mandatory Guidelines for Federal Workplace Drug Testing Programs.” Federal Register. Federal Register, 25 Nov. 2008. Web. 11 Mar. 2011.



Vincent, E. Chris, Arthur Zebelman, and Cheryl Goodwin. “What Common Substances Can Cause False Positives on Urine Screens for Drugs of Abuse?” Journal of Family Practice 55.10 (2006). Web. 5 Mar. 2012.



Wingfield, Nick, and Conor Dougherty. "Drug Testing Is Coming to E-sports." New York Times. New York Times, 23 July 2015. Web. 29 Oct. 2015.

Monday 26 October 2015

What does psychology tell us about advertising?


Introduction


Advertising is a process of persuading an audience to buy products, contract services, or support a candidate or issue. Advertising creates a reality for the consumer—both the image of the product, company, or candidate and the need for a product or service. Advertisements try to change consumer attitudes toward a product, company, or candidate. Attitudes consist of three components: belief, affect (emotion), and intention to act. The ultimate goal of the advertiser is to persuade the consumer to act—to buy the product, support the candidate or company, or use the service.









Advertising is one form of mass communication. A classified ad from around 1000 b.c.e. offered a reward (a gold coin) to anyone finding and returning a runaway slave. Johannes Gutenberg, an inventor and metallurgist, developed the movable-type printing press in Europe in the fifteenth century, which allowed for the printed mass communication of advertising. The Industrial Revolution of the nineteenth century cultivated commercialism and the transportation of national publications, including a large number of magazines. Advertising proliferated on radio after 1920, on television after 1945, and on the Internet starting in the mid-1990s.


Psychologists study advertising as a form of communication in the context of cognition and psycholinguistics. Consumers “read” advertisements, whether in print, on television, or online, similarly to the way they read books. Therefore, one way to examine how consumers understand and react to advertisements is by researching comprehension of narrative scripts. For advertisements to effectively change consumer attitudes, their message must be understood. Psychologists study how consumers process the information in the advertisement. Sometimes information processing leads to miscomprehension which, often unintentionally, can create in the consumer false ideas about the product.


From a more social cognitive and humanistic perspective, psychologists look at the appeals advertisers make to human needs. Advertisements associate basic needs and natural responses to those needs with their products. Advertisers classically condition consumers to respond to their products as they would to any stimulus naturally satisfying a need. Sometimes these associations are not consciously made.




Narrative Script

The first step in changing consumer attitudes toward an advertised product is to persuade the consumer that the informational content of the advertisement is true. One way advertisers create belief in the consumer is to follow a narrative script, a simple plot such as a child might hear when a parent reads a story.


The narrative script is a knowledge structure composed of exposition, complication, and resolution. The exposition introduces the characters and settings of the story. The complication is a developing problem. The resolution is the solution to the problem. Many advertisements take the form of the narrative script to facilitate comprehension and belief. For example, John, Jane, and their daughter Judy are playing at the park (exposition). While swinging, Judy falls on the ground, scraping her knee (complication). Jane soothes Judy and dresses her wound by applying a plastic bandage coated with an antibacterial agent (resolution). The consumer is comfortable with the narrative script as an understandable and entertaining format. The advertiser is able to hold the audience’s attention. The resolution is associated with the product (bandages).
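
Because the narrative script is a three-part knowledge structure, it can be written down as a simple record; the sketch below merely restates the bandage story from this paragraph in that form.

```python
# The three-part narrative script from the text, represented as a simple
# record. The story content restates the bandage example above.
from dataclasses import dataclass

@dataclass
class NarrativeScript:
    exposition: str    # characters and setting
    complication: str  # the developing problem
    resolution: str    # the solution, tied to the product

bandage_ad = NarrativeScript(
    exposition="John, Jane, and their daughter Judy are playing at the park.",
    complication="While swinging, Judy falls and scrapes her knee.",
    resolution="Jane dresses the wound with an antibacterial plastic bandage.",
)
print(bandage_ad.resolution)
```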




Information Processing

The consumer’s belief in the advertisement is affected by how the consumer processes the information presented in it. There are eight stages of information processing involved in the comprehension of advertisements. The belief component of the consumer’s attitude toward the product can be formed or modified at any stage. The first stage is exposure. The consumer must have the opportunity to perceive the advertisement. The second stage is attention. The consumer may pay attention to part or all of an advertisement. The third stage is comprehension. The consumer must understand the information in the ad. The fourth stage is evaluation. The consumer assesses the information presented in the ad. The fifth stage is encoding. The consumer encodes, or saves, the advertised information in long-term memory. Later, the sixth stage, retrieval, can occur: The consumer retrieves the encoded information. The seventh stage is decision. The consumer decides to buy (or not buy) the advertised product. The final stage is the action of buying the product.
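
One way to make the eight stages concrete is to treat them as a funnel in which only a fraction of consumers passes each stage; the stage names below come from this paragraph, but the pass-through rates are invented purely for illustration and are not empirical estimates.

```python
# The eight information-processing stages from the text, treated as a
# simple funnel. The per-stage pass-through rates are invented for
# illustration; they are not empirical estimates.
STAGES = [
    ("exposure", 0.60), ("attention", 0.50), ("comprehension", 0.80),
    ("evaluation", 0.70), ("encoding", 0.60), ("retrieval", 0.50),
    ("decision", 0.40), ("action", 0.90),
]

def funnel(audience_size):
    """Trace how many consumers remain after each stage."""
    remaining = audience_size
    for stage, pass_rate in STAGES:
        remaining *= pass_rate
        print(f"{stage:13s} {remaining:10.0f}")
    return remaining

funnel(1_000_000)  # of a million people reached, only a small fraction buys
```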




Miscomprehension

Advertisers may persuade consumers to buy their product by intentionally inducing miscomprehension. The basis of the miscomprehension is the tendency for people to encode inferences, or interpretations, of stated advertising claims. Therefore, the consumer later remembers inferences made, but not explicitly stated, about a product. At no point is the advertiser presenting false information. However, the advertisement is structured in a way that induces the consumer to draw a specific inference. An advertisement might use hedge words such as “may” or “could.” For example, pain reliever Brand A “may help” prevent heart attacks.


Other advertisements contain elliptical comparatives. In an elliptical comparison, the standard that something is being compared to is intentionally left out. The consumer naturally completes the comparison with the most logical standard. However, the true standard might not be the most logical. For example, cereal Brand A “gives you more.” More what? A logical standard might be “more vitamins.” The true standard might be “more heartburn.” An advertisement might imply causation when in actuality the relationship is correlational. Juxtaposing two imperative statements implies that the first statement leads to, or causes, the next statement: Buy tire Brand A. Drive safely.




Psychological Appeals

Advertisers use psychological appeals directed at basic human needs. Abraham Maslow, the American humanistic psychologist who first proposed a hierarchy of needs in 1943, theorized that all humans have needs that must be met to achieve self-fulfillment. The most basic are the physiological needs, such as food, water, and shelter. People also need to feel secure, to feel that they belong and are loved, and to have self-esteem. The final need is self-actualization, the highest form of self-fulfillment. Advertisements may focus on any one, or a combination, of these needs.


Psychological appeals in advertising influence the emotional component of people’s attitudes. The advertised product is associated with positive emotions such as fun, love, belonging, warmth, excitement, and satisfaction. Advertisements can also be based on fear: The advertisers try to convince consumers that there will be negative consequences if they do not buy their product, focusing on the need for safety. For example, buying any tire other than the one advertised will increase the risk of an automobile accident. Advertisements also appeal to the human need for self-esteem, which is heightened through power and success. An ad may aim to associate the product with the consumer being the best or having the most.




Classical Conditioning

Russian physiologist Ivan Petrovich Pavlov discovered the process of classical conditioning in the early twentieth century. An unconditioned stimulus (US) naturally produces an unconditioned response (UR). For example, the image of a baby may naturally produce pleasant, even maternal or paternal, feelings. In classical conditioning, the unconditioned stimulus is paired with a neutral stimulus, one that does not normally produce the unconditioned response. For example, a can of soda (neutral stimulus) can be paired with a picture of a baby (unconditioned stimulus). After several pairings, the neutral stimulus becomes a conditioned stimulus (CS), eliciting the response even in the absence of the unconditioned stimulus. Once conditioning occurs, the elicited response is called the conditioned response (CR).


Thus, the can of soda becomes the conditioned stimulus when it alone produces pleasant feelings (conditioned response). Advertisers use classical conditioning to associate a product with a stimulus that elicits the desired responses (belief in the product, positive emotions about the product, intent to buy the product) in the consumer. While shopping, a consumer sees the advertised can of soda, associates it with positive feelings, and therefore is more likely to purchase this brand of soda.
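
The gradual build-up of the soda-and-baby association over repeated pairings can be illustrated with the Rescorla-Wagner learning rule, a standard model of classical conditioning that is not discussed in this article; the learning rate and asymptote used below are arbitrary.

```python
# Minimal Rescorla-Wagner sketch of how repeated pairings build an
# association (the text's soda can paired with a baby image). The
# learning rate and asymptote are arbitrary illustrative values.

def rescorla_wagner(pairings, alpha_beta=0.3, lam=1.0):
    """Return the associative strength V of the conditioned stimulus after
    each pairing, using delta-V = alpha * beta * (lambda - V)."""
    v, history = 0.0, []
    for _ in range(pairings):
        v += alpha_beta * (lam - v)
        history.append(round(v, 3))
    return history

# Association grows quickly at first, then levels off near the asymptote.
print(rescorla_wagner(8))
# [0.3, 0.51, 0.657, 0.76, 0.832, 0.882, 0.918, 0.942]
```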




Subliminal Advertising

Stimuli that are subliminal are below the threshold of conscious perception. Consumers are not normally aware of subliminal stimuli unless they consciously look for them. For example, an image on a product package may contain the shape of sexual organs. There is some weak evidence that subliminal messages in advertising may positively affect the emotional quality of consumer attitudes toward a product. However, there is no evidence that subliminal messages affect consumer behavior toward a product.




Bibliography


Benoit, William L., and Pamela J. Benoit. Persuasive Messages: The Process of Influence. Malden: Blackwell, 2008. Print.



Cialdini, Robert B. Influence: Science and Practice. 5th ed. Boston: Pearson Education, 2009. Print.



Cialdini, Robert B. Influence: The Psychology of Persuasion. Rev. ed. New York: Collins, 2007. Print.



Day, Nancy. Advertising: Information or Manipulation? Berkeley Heights: Enslow, 1999. Print.



Harris, Richard J., and Fred W. Sanborn. A Cognitive Psychology of Mass Communication. 6th ed. New York: Routledge, 2014. Print.



Heath, Robert. Seducing the Subconscious: The Psychology of Emotional Influence in Advertising. Chichester: Wiley-Blackwell, 2012. Print.



Hogan, Kevin. The Psychology of Persuasion: How to Persuade Others to Your Way of Thinking. Gretna: Pelican, 1996. Print.



Maddock, Richard C., and Richard L. Fulton. Marketing to the Mind. Westport: Greenwood, 1996. Print.



Mills, Harry A. Artful Persuasion: How to Command Attention, Change Minds, and Influence People. New York: AMACOM, 2000. Print.



Pradeep, A. K. Mind Men: How Neuromarketing Advances Are Transforming Advertising. Hoboken: Wiley, 2014. Print.



Pratkanis, Anthony R., and Elliot Aronson. The Age of Propaganda: The Everyday Use and Abuse of Persuasion. Rev. ed. New York: Freeman, 2007. Print.



Schumann, David W., and Esther Thorson, eds. Advertising and the World Wide Web. Hillsdale: Erlbaum, 1999. Print.



Sugarman, Joseph, Dick Hafer, and Ron Hugher. Triggers: How to Use the Psychological Triggers of Selling to Motivate, Persuade, and Influence. Las Vegas: Delstar, 1999. Print.

Sunday 25 October 2015

What are stem cells? |


Structure and Functions

Stem cells are unspecialized cells that can develop into all the specialized cell types that organize themselves into the tissues, organs, and organ systems making up an entire individual. An egg fertilized by a sperm is called a totipotent stem cell, meaning that this single cell has the capacity to divide repeatedly and ultimately to contribute cells to each specialized body component. For example, from the single cell that is a fertilized human egg, cells must ultimately specialize to become the beating cells of the heart, pancreatic cells that produce insulin, skin cells that cover the body, and bone cells that support the body, among scores of other types of cells.



After fertilization, an egg divides repeatedly to form an embryo. The three- to five-day-old embryo is a hollow ball of cells called a blastocyst. Inside the blastocyst, a group of about thirty cells called the inner cell mass constitutes the stem cells of the embryo. Embryonic stem cells are referred to as pluripotent, because they have the capacity to develop into most, but not all, of the specialized cell types that will form the structures needed for the embryo to develop into an adult. Embryonic stem cells do not form the placenta, the structure that provides the essential connection between mother and embryo during gestation.


Adults also harbor several types of stem cells, although a very small number in each tissue. The major function of adult stem cells is to provide new cells to replenish aging or damaged ones. Many adult stem cells are believed to be sequestered in a specific area of tissue and remain nondividing until activated by tissue disease or injury. Others are required to provide new cells with greater frequency. For example, skin stem cells are constantly differentiating into mature skin cells to replace the large numbers of cells naturally lost each day.


Multipotent hematopoietic (blood) stem cells reside in the bone marrow and are also very active. They regenerate themselves through mitosis but also give rise to the numerous specialized cells found in the blood, including the red blood cells that carry oxygen, the various types of white blood cells involved in body defenses, and the platelets critical to clot formation.


Unlike specialized cells such as heart cells, brain cells, and muscle cells, which do not normally replicate themselves, stem cells may replicate many times, even when isolated from the body and propagated in the laboratory. Because of their capacity to regenerate themselves and their ability to differentiate into specific tissue types, scientists are isolating and studying stem cells in hopes of understanding diseases such as cancer. They are exploring the prospect of using stem cells as therapeutic agents in treating a host of diseases and disorders, including Parkinson’s disease, diabetes mellitus, and some forms of heart disease.


Embryonic stem cells are studied in the laboratory by isolating the inner cell mass from a three- to five-day-old embryo. The embryos are typically donated for research, with informed consent, by individuals who have extra, unneeded embryos created by in vitro fertilization for the treatment of infertility. The cells are added to a culture dish containing a nutrient medium and coated with mouse cells that provide a sticky surface to which the stem cells adhere. Newer methods allow stem cells to grow in the absence of contaminating mouse cells. The stem cells replicate repeatedly and fill the dish, then are divided and added to fresh culture dishes. After six months of repeated growth, division, and transfer to fresh culture dishes, the original thirty stem cells may yield millions of embryonic stem cells. The cells are analyzed at six months of growth, and if they have not differentiated, remain pluripotent, and appear genetically normal, then they are referred to as an embryonic stem-cell line.
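
The expansion described above, roughly thirty inner-cell-mass cells growing into millions over about six months, implies a fairly modest number of population doublings, as the short calculation below shows; the target of one million cells is chosen only as an example.

```python
# How many population doublings does it take for ~30 starting cells to
# exceed one million? The one-million target is an illustrative example.
import math

start_cells = 30
target_cells = 1_000_000

doublings = math.ceil(math.log2(target_cells / start_cells))
print(doublings)                      # 16 doublings
print(start_cells * 2 ** doublings)   # 1,966,080 cells

# Spread over roughly six months (about 180 days), 16 doublings works out
# to about one doubling every 11 days on average.
```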


Adult stem cells have proven to be much more difficult to grow in culture, and doing so has been a major focus of work by scientists. Unlike embryonic stem cells, adult stem cells are generally limited to differentiating into the cell type of their tissue of origin. Some evidence suggests, however, that certain types of adult stem cells may be manipulated in the laboratory to differentiate into a broader range of tissue types.




Medical Applications

There are three major areas of stem cell research, each with potential medical applications. One branch of research seeks to discover and understand the many steps in the complex process of cellular differentiation. Other researchers are exploring the potential uses of stem cells in pharmaceutical development. A third major line of research focuses on the use of stem cells in the treatment of a host of diseases.


Embryonic stem cells are used to study the processes by which undifferentiated stem cells differentiate into specialized cell types. Through this work, scientists will gain a greater understanding of normal cell development. Understanding the mechanisms of normal cell development will provide insights into situations of abnormal growth and development. Scientists already know that turning specific genes on and off at critical times in the differentiation process is what leads to one cell becoming a muscle cell, another a lung cell, and still another a red blood cell, but the signals that influence these genes are only partially understood. Many serious medical conditions, such as cancer and certain birth defects, are the result of abnormal cellular differentiation and division. A better understanding of these processes in normal situations could lead to major insights in the development of such disorders and perhaps point the way to preventive measures or new therapeutic tools.


Established cell lines are often used by pharmaceutical companies when testing potential products. For example, cancer cell lines are used to test antitumor drugs. If human stem cell lines were available, then many drugs could be tested for both beneficial and toxic effects in stem cell cultures in one of two general fashions. In one case, drugs could be tested for their effects, either positive or negative, on the normal differentiation of stem cells into specialized cells. In a second scenario, pluripotent stem cells could be used to create new lines of a variety of differentiated cell types that are not yet available, and drugs specific for that cell type could be tested on these cell cultures. In either case, screening drugs with cell lines derived from human stem cells would have the advantage of testing directly on human cells. Such testing would decrease the number of nonhuman animals used in drug testing and could decrease the number of human clinical trials needed to prove the efficacy and safety of a drug, thus speeding it through the governmental approval process and making it available to the public. To screen drugs effectively, however, the cells must be identical from culture to culture and for each drug being tested. To achieve this, scientists must understand the cellular signals and biochemical pathways that control cellular differentiation into the desired cell type so that the process can be controlled precisely in repeated experiments. Scientists do not yet understand differentiation well enough to initiate drug testing in stem cells, but many are working toward that goal.


Perhaps the most exciting area of stem cell research is the possibility of using pluripotent stem cells to treat disease. Organ and tissue transplantation is commonly used to treat a number of medical conditions; heart and kidney transplants are two examples. These treatments are available only when organs fail and often put the patient at serious risk of death. The donor material often must come from organs donated after the death of another individual. There are serious shortages of transplantable organs, and many patients die before suitable donor organs become available. Even if a transplant can be performed, the body may attack the transplanted organ because it is perceived as foreign, risking destruction and rejection of the organ. Even with powerful drugs to suppress this response, some organs are still rejected, with dire consequences for the recipient.


Pluripotent stem cells, if directed to differentiate into specific cell types, have the potential to provide a renewable source of cells and tissue. For example, it may be possible to generate healthy heart cells from stem cells in the laboratory, then transplant these cells into a damaged heart. The hope is that the transplanted cells would proliferate and grow into healthy, functioning tissue that would rejuvenate the damaged heart and circumvent the need for heart transplantation. Other conditions that could be treated with stem cell therapy are diabetes, Alzheimer’s disease, Parkinson’s disease, stroke, burns, and spinal cord injury. Although a great deal of research is ongoing in this area of regenerative medicine, not all stem cell therapies are experimental. For example, transplantation of blood-forming hematopoietic stem cells found in bone marrow has been in use since the 1960s. More pure preparations of adult hematopoietic stem cells are currently approved for the treatment of leukemia, lymphoma, and several inherited blood disorders.




Perspective and Prospects

In the 1960s, researchers first discovered that bone marrow contains at least two types of stem cells. One type, termed hematopoietic stem cells, was found to form all of the different types of blood cells. The second line, termed stromal cells, generates fat, cartilage, and connective tissue. Also during this time, scientists studying adult rat brains discovered areas that contained undifferentiated cells that divided and differentiated into nerve cells. At that time, scientists did not believe that brain cells could regenerate themselves and discounted the results of this study. In the 1990s, enough evidence had accumulated for scientists to agree that adult brains, including those of humans, contain stem cells that are able to differentiate into the three major types of cells found in the mature brain. The two main neurogenic areas of the adult mammalian brain are now known to be the olfactory bulb, which controls the sense of smell, and the hippocampus, a memory center.


Much of what scientists know about stem cells and their differentiation has come from studies in mice. The first stem cells were isolated from mouse embryos in 1981. Scientists treated these cell lines with various growth factors to stimulate the development of a particular cell type. For example, cells treated with vitamin A derivative differentiated into nerve cells. All types of blood cells and cardiac cells have been generated in similar fashions, and in 2000 scientists from StemCells, Inc., produced mature liver cells from the hematopoietic stem cells of mice. That same year, neuroscientists at Johns Hopkins University announced that they had successfully reversed paralysis in rats and mice by injecting them with embryonic stem cells. The cells migrated to a region of the spinal cord that contains motor nerve cells. Half of the rats regained movement in their hind feet. This success was heralded as a first step toward curing human neurological disorders with stem cells.


While mice are excellent models for research on human biology, they are not human, and ideally research would be conducted on human cells. Human pluripotent stem cells were isolated for the first time by scientists Michael Shamblott and James Thomson, working independently, in 1998. In 2000, scientists succeeded in isolating stem cells from human cadavers and in directing the development of bone marrow stem cells into nerve cells. In the late 1990s and the early twenty-first century, a body of research accumulated indicating that adult stem cells exist in more body tissues than originally believed. This finding has led scientists to explore using adult stem cells, rather than embryonic stem cells, as sources of transplant material. In 2008, the first organ transplant using a patient's own stem cells was successfully performed: a team of doctors in Barcelona, Spain, replaced a thirty-year-old woman's trachea using a donor trachea that had been stripped of living cells and seeded with stem cells from the woman's bone marrow. This area of research took another step forward in 2011, when scientists built an artificial trachea on a synthetic scaffold, seeded it with stem cells from a patient, and, with doctors in Sweden, implanted it into that patient. Using adult stem cells has the advantage that the transplant material comes from the recipient, so it is not rejected by the body as a foreign transplant would be.


Some adult stem cells have been shown, under the right conditions, to differentiate into cell types other than those of the tissue from which they were derived. In April 2003, it was reported that fourteen patients with severe heart disease improved after being injected with stem cells harvested from their own bone marrow. Other studies suggest that stem cells derived from umbilical cord blood could be stored and later used as a source of cells for therapy.


In 2007, American and Japanese scientists created adult human stem cells from differentiated skin cells. The process involved taking adult skin cells and using viruses to introduce several genes known to be highly active in embryonic stem cells but less active in differentiated cells. The resulting cells, called induced pluripotent stem cells (iPS cells), are thereby reprogrammed into an embryonic-like state. In 2009, another group produced iPS cells from adult fat cells. This case is particularly exciting because there is no shortage of fat cells available for reprogramming; essentially, each human carries a supply of potential stem cells. However, the reprogramming process remains inefficient, and there are concerns that some of the genes used to create iPS cells could cause cancer. Intense research continues, and in the future iPS technology could become a therapeutic tool.


Because of the small amounts, scarcity, and lower developmental potential of adult stem cells, scientists believe that they must experiment with cells derived from fetuses and embryos if stem cell research is to progress and fulfill the promise of therapy for a host of dread diseases. Because embryos must be destroyed in order to isolate stem cells, the use of embryonic stem cells is controversial, particularly within the United States. In 2001, President George W. Bush banned the use of federal funds for embryonic stem-cell research except for the sixty-four stem-cell lines in existence at the time. In 2006, Bush vetoed a bill that reversed his previous decision and allowed use of federal funds for embryonic stem-cell research. As of 2006, fewer than one-third of the original sixty-four embryonic stem-cell lines were being studied because most either acquired deleterious mutations or propagated poorly and were discontinued.


Great Britain instituted no such restrictions on stem cell research, and in September 2002, plans were unveiled for the United Kingdom Stem Cell Bank, to be located in Hertfordshire. It was to be the world’s first center for storing and supplying tissue from human embryos and aborted fetuses to be used to repair diseased and damaged tissues. Subsequently, in 2005, the National Stem Cell Bank (NSCB) was established in the United States at WiCell Research Institute in Madison, Wisconsin, to acquire, characterize, and distribute the twenty-one of the original sixty-four human embryonic stem-cell lines that have been approved for federal government funding.


In 2004, the unfavorable attitude toward embryonic stem-cell research started to change, and voters in California passed Proposition 71, the California Stem Cell Research and Cures Initiative, which authorizes the sale of bonds to allocate $3 billion over ten years to stem cell research, with priority given to studies examining human embryonic stem cells. The initiative allowed the formation of the California Institute for Regenerative Medicine (CIRM), a stem cell agency that oversees the direction of research and distributes funding.


In 2009, President Barack Obama overturned the ban on the use of federal funding for embryonic stem-cell research. The accompanying increase in funding has allowed US scientists to compete in the fast-moving field of embryonic stem-cell research, which should open doors to the future use of embryonic stem cells or iPS cells to treat human disease.




Bibliography


Barber, Lionel, and John Rennie, eds. The Future of Stem Cells. Spec. issue of Scientific American and Financial Times July 2005: A3–35. Print.



Board on Life Sciences, National Research Council, and Board on Health Sciences Policy, Institute of Medicine of the National Academies. Guidelines for Human Embryonic Stem Cell Research. Washington: Natl. Acads., 2005. Print.



Committee on the Biological and Biomedical Applications of Stem Cell Research, Commission on Life Sciences, National Research Council. Stem Cells and the Future of Regenerative Medicine. Washington: Natl. Acad., 2002. PDF file.



Holland, Suzanne, Karen Lebacqz, and Laurie Zoloth, eds. The Human Embryonic Stem Cell Debate: Science, Ethics, and Public Policy. Cambridge: MIT P, 2001. Print.



Lanza, Robert, and Anthony Atala, eds. Essentials of Stem Cell Biology. 3rd ed. San Diego: Academic, 2014. Print.



Liu, Yunying, et al. "Generation of Functional Organs from Stem Cells." Cell Regeneration 2.1 (2013): n. pag. Web. 27 Aug. 2014.



Reece, Jane B., et al. Campbell Biology. 10th ed. San Francisco: Cummings, 2014. Print.



"Stem Cell Information." National Institutes of Health. Natl. Insts. of Health, 4 Apr. 2013. Web. 27 Aug. 2014.

How can a 0.5 molal solution be less concentrated than a 0.5 molar solution?

The answer lies in the units being used. "Molar" refers to molarity, a unit of measurement that describes how many moles of a solu...