Thursday 30 November 2017

How does Montag change throughout Ray Bradbury's Fahrenheit 451? How does he feel towards the end of the novel?

At the beginning of the novel, Montag feels numb and indifferent to his meaningless life until he meets Clarisse, whose presence encourages him to analyze himself. Montag becomes disenchanted with his job, his wife, and society in general. Montag begins to search for answers and commits a crime by possessing and reading books. As the novel progresses, Montag's search for meaning brings out his anger, until he finally snaps and kills Captain Beatty during a routine call. Montag officially becomes an enemy of the state but manages to escape by floating down the river and living with a band of hobo intellectuals outside of the city. Initially, Montag feels discouraged about his drastic life decisions until Granger explains to him the essence of life, which is to create, experience, and affect individuals and nature in a positive way. Montag is unsure of his ability to remember The Book of Ecclesiastes but slowly realizes the meaning of his life. By the end of the novel, Montag has transformed from a lost, disillusioned individual to an enthusiastic man with a purpose. He begins to remember significant verses from The Book of Ecclesiastes as he walks towards the recently destroyed city, in hopes of rebuilding a better society.

Wednesday 29 November 2017

What is the valency of sodium?

Valency is determined by an atom's valence electrons, which are the electrons physically localized in the outermost regions of the atom and therefore the highest in energy and the most available for bonding or interaction with other atoms. Valency is a measure of what an atom can do, or is most likely to do, in terms of its chemical properties. 


Valency for the first 18 elements or so is a pretty straightforward consideration. Electrons are only "allowed" to arrange themselves in particular locations and orders around an atom, generally called orbitals and shells. When an inner, lower-energy shell is filled, electrons must go to the next available one, which is farther from the nucleus and higher in energy. Therefore, the valence electrons are all the electrons found in the outermost, partially filled shell, ignoring any electrons in filled inner shells. The innermost shell can only hold 2 electrons, but larger shells become available as the elements get heavier. The easiest way to evaluate valency is to look at the periodic table and simply count how many places the element sits from the left side of the chart. This is a little less accurate for the transition elements, but if you're still learning about valency you can deal with those weird exceptions later. 


Since sodium (symbol Na) is our example, we can see that it's one step from the left side of the chart and therefore has one valence electron. 
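
To make the counting rule concrete, here is a minimal Python sketch of the simple shell-filling model (shells of 2, 8, and 8 electrons), which only holds for the first 18 elements; the function name and error message are illustrative, not from any chemistry library.

    # Simple shell-filling model: electrons occupy shells of capacity
    # 2, 8, 8 in order; whatever lands in the outermost, partially
    # filled shell counts as the valence electrons.
    def valence_electrons(atomic_number):
        if not 1 <= atomic_number <= 18:
            raise ValueError("simple shell model only holds for Z <= 18")
        remaining = atomic_number
        for capacity in (2, 8, 8):
            if remaining <= capacity:
                return remaining      # outermost shell is only partly full
            remaining -= capacity     # this shell is full; move outward

    print(valence_electrons(11))      # sodium (Na) -> 1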

How did the field of genetics develop historically?


Charles Darwin


The prevailing public attitude of the mid-nineteenth century was that all species were the result of a special creation and were immutable; that is, they remained unchanged over time. The work of Charles Darwin challenged that attitude. As a young man, Darwin served as a naturalist on the HMS Beagle, a British ship that mapped the coastline of South America from 1831 to 1836. Darwin’s observations of life-forms and their adaptations, especially those he encountered on the Galápagos Islands, led him to postulate that living species shared common ancestors with extinct species and that the pressures of nature—the availability of food and water, the ratio of predators to prey, and competition—exerted a strong influence over which species were best able to exploit a given habitat. Those best able to take advantage of an environment would survive, reproduce, and, by reproducing, pass their traits on to the next generation. He called this response to the pressures of nature “natural selection”: nature selected which species would be capable of surviving in any given environment and, by so doing, directed the development of species over time.


When Darwin returned to England, he shared his ideas with other eminent scientists but had no intention of publishing his notebooks, since he knew that his ideas would bring him into direct conflict with the society in which he lived. However, in 1858, he received a letter from a young naturalist named Alfred Russel Wallace. Wallace had done the same type of collecting in Malaysia that Darwin had done in South America, had observed the same phenomena, and had drawn the same conclusions. Wallace’s letter forced Darwin to publish his findings, and in 1858, a joint paper by both men on the topic of evolution was presented at the London meeting of the Linnean Society. In 1859, Darwin reluctantly published On the Origin of Species by Means of Natural Selection. The response was immediate and largely negative. While the book became a best seller, Darwin found himself under attack from religious leaders and other prominent scientists. In his subsequent works, he further delineated his proposals on the emergence of species, including man, but was never able to answer the pivotal question that dogged him until his death in 1882: If species are in fact mutable (capable of change over long periods of time), by what mechanism is this change possible?




Gregor Mendel

Ironically, it was only six years later that this question was answered, and nobody noticed. Gregor Mendel is now considered the “father” of genetics, but, in 1865, he was an Augustinian monk in a monastery in Brünn, Austria (now Brno, Czech Republic). From 1856 to 1863, he conducted a series of experiments using the garden pea (Pisum sativum), in which he cultivated more than twenty-eight thousand plants and analyzed seven different physical traits. These traits included the height of the plant, the color of the seed pods and flowers, and the physical appearance of the seeds. He cross-pollinated tall plants with short plants, expecting the next generation of plants to be of medium height. Instead, all the plants produced from this cross, which he called the F1 (first filial) generation, were tall. When he crossed plants of the F1 generation, the next generation of plants (F2) were both tall and short at a 3:1 ratio; that is, 75 percent of the F2 generation of plants were tall, while 25 percent were short. This ratio held true whether he looked at one trait or multiple traits at the same time. He coined two phrases still used in genetics to describe this phenomenon: He called the trait that appeared in the F1 generation “dominant” and the trait that vanished in the F1 generation “recessive.” While he knew absolutely nothing about chromosomes or genes, he postulated that each visible physical trait, or phenotype, was the result of two “factors” and that each parent contributed one factor for a given trait to its offspring. His research led him to formulate several statements that are now called the Mendelian principles of genetics.
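
Mendel's 3:1 ratio falls out directly from enumerating the four equally likely gamete pairings of an F1 x F1 cross. A small Python sketch, using illustrative allele symbols rather than Mendel's own notation:

    # Monohybrid F1 x F1 cross (Tt x Tt): pair every gamete of one
    # parent with every gamete of the other and tally phenotypes.
    # 'T' (tall) is dominant over 't' (short).
    from collections import Counter
    from itertools import product

    def cross(parent1, parent2):
        phenotypes = Counter()
        for a, b in product(parent1, parent2):  # one allele from each parent
            phenotypes["tall" if "T" in (a, b) else "short"] += 1
        return phenotypes

    print(cross("Tt", "Tt"))   # Counter({'tall': 3, 'short': 1})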


Mendel’s first principle is called the principle of segregation. While all body cells contain two copies of a factor (what are now called genes), gametes contain only one copy. The factors are segregated into gametes by meiosis, a specialized type of cell division that produces gametes. The principle of independent assortment states that this segregation is a random event. One factor will segregate into a gamete independently of other factors contained within the dividing cell. (It is now known that there are exceptions to this rule: two genes carried on the same chromosome will not assort independently.)
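
Under these two principles, gamete formation is easy to simulate by enumeration. A minimal Python sketch, again with made-up allele symbols: a dihybrid AaBb parent yields four equally likely gamete types because each factor pair segregates independently.

    # Gamete formation for an AaBb parent under independent assortment:
    # draw one allele from each factor pair independently.
    from itertools import product

    gametes = ["".join(pair) for pair in product("Aa", "Bb")]
    print(gametes)   # ['AB', 'Ab', 'aB', 'ab'], each with probability 1/4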


To make sense of the data he collected from twenty-eight thousand plants, Mendel kept detailed numerical records and subjected his numbers to statistical analysis. In 1865, he presented his work before the Natural Sciences Society. He received polite but indifferent applause. Until Mendel, scientists rarely quantified their findings; as a result, his audience either did not understand his math or was bored by it, and in either case completely overlooked the significance of his findings. Mendel published his work in 1866. Unlike Darwin’s work, it was not a best seller. Darwin himself died unaware of Mendel’s work, in spite of the fact that he had an unopened copy of Mendel’s paper in his possession. Mendel died in 1884, two years after Darwin, with no way of knowing the eventual impact his work was to have on the scientific community. That impact began in 1900, when three botanists, working in different countries with different plants, discovered the same principles as had Mendel. Hugo De Vries, Carl Correns, and Erich Tschermak von Seysenegg rediscovered Mendel’s paper, and all three cited it in their work. Some sixteen years after his death, Mendel’s research was given the respect it deserved, and the science of genetics was born.




Pivotal Research in Genetics

In the late 1870s, Walther Flemming identified threadlike structures in the nuclei of cells that were later named chromosomes; he described the material of which they are composed as “chromatin.” William Bateson introduced the term “genetics” to the scientific vocabulary in 1905, and Wilhelm Johannsen expanded the terminology with the terms “gene,” “genotype,” and “phenotype” in 1909. The year 1901 was an exciting one in the history of genetics: the ABO blood group system was discovered by Karl Landsteiner; the role of the X chromosome in determining sex was described by Clarence McClung; and De Vries introduced the term “mutation” to describe spontaneous changes in the genetic material. A few years later, Reginald Punnett and William Bateson discovered genetic linkage. Walter Sutton suggested a relationship between genes and chromosomes in 1903. Five years later, Archibald Garrod, studying a strange clinical condition in some of his patients, determined that their disorder, called alkaptonuria, was caused by an enzyme deficiency. He introduced the concept of “inborn errors of metabolism” as a cause of certain diseases. That same year, two researchers named Godfrey Hardy and Wilhelm Weinberg published their extrapolations on the principles of population genetics.


From 1910 to 1920, Thomas Hunt Morgan, with his graduate students Alfred Sturtevant, Calvin Bridges, and Hermann Muller, conducted a series of experiments with the fruit fly Drosophila melanogaster that confirmed Mendel’s principles of heredity and also confirmed the link between genes and chromosomes. The mapping of genes to the fruit fly chromosomes was complete by 1920. The use of research organisms such as the fruit fly became standard practice. For an organism to be suitable for this type of research, it must be small and easy to keep alive in a laboratory and must produce a great number of offspring. For this reason, bacteria (such as Escherichia coli), viruses (particularly those that infect bacteria, called bacteriophages), certain fungi (such as Neurospora), and the fruit fly have been used extensively in genetic research.


During the 1920s, Muller found that the rate at which mutations occur is increased by exposure to x-ray radiation. Frederick Griffith described “transformation,” a process by which genetic alterations occur in pneumococci bacteria. In the 1940s, Oswald Avery, Maclyn McCarty, and Colin MacLeod conducted a series of experiments that showed that the transforming agent Griffith had not been able to identify was, in fact, DNA. George Beadle and Edward Tatum proposed the concept of “one gene, one enzyme”; that is, a gene (a region of DNA that carries the information for a gene product) codes for a particular enzyme. This concept was further refined to the “one gene, one protein” hypothesis and then to “one gene, one polypeptide.” (A polypeptide is a string of amino acids, which is the primary structure of all proteins.)


During the 1940s, it was thought that proteins were the genetic material. Chromosomes are made of chromatin; chromatin is 65 percent protein, 30 percent DNA, and 5 percent RNA. It was a logical conclusion that if the chromosomes were the carriers of genetic material, that material would make up the bulk of the chromosome structure. By the 1950s, however, it was fairly clear that DNA was the genetic material. Alfred Hershey and Martha Chase were able to prove in 1952 that DNA is the hereditary material in bacteriophages. From that point, the race was on to discover the structure of DNA.


For DNA or any other substance to be able to carry genetic information, it must be a stable molecule capable of self-replication. It was known that along with a five-carbon sugar and a phosphate group, DNA contains four different nitrogenous bases (adenine, thymine, cytosine, and guanine). Erwin Chargaff described the ratios of the four nitrogenous bases in what is now called Chargaff’s rule: adenine in equal concentrations to thymine, and cytosine in equal concentrations to guanine. What was not known was the manner in which these constituents bonded to each other and the three-dimensional shape of the molecule. Groups of scientists all over the world were working on the DNA puzzle. A group in Cambridge, England, was the first to solve it. James Watson and Francis Crick, supported by the work of Maurice Wilkins and Rosalind Franklin, described the structure of DNA in a landmark paper in Nature in 1953. They described the molecule as a double helix, a kind of spiral ladder in which alternating sugars and phosphate groups make up the backbone and paired nitrogenous bases make up the rungs. The structure of the molecule suggested ways in which it could self-replicate. Arthur Kornberg created the first synthetic DNA in 1956. In 1958, Matthew Meselson and Franklin Stahl proved that DNA replication is semiconservative; that is, each new DNA molecule consists of one template strand and one newly synthesized strand.
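
Chargaff's equalities are a direct consequence of the complementary base pairing that the double helix explained. A small Python sketch with an invented sequence (purely illustrative):

    # Build the complementary strand of a toy sequence and confirm
    # that across the double strand A = T and C = G (Chargaff's rule).
    from collections import Counter

    strand = "ATGCGCTAATCCG"            # invented example sequence
    pair = {"A": "T", "T": "A", "C": "G", "G": "C"}
    duplex = strand + "".join(pair[base] for base in strand)

    counts = Counter(duplex)
    print(counts)
    assert counts["A"] == counts["T"] and counts["C"] == counts["G"]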




The Information Explosion

Throughout the 1950s and 1960s, genetic information grew exponentially. This period saw the description of the role of the Y chromosome in sex determination; the description of birth defects caused by chromosomal aberrations such as trisomy 21 (Down syndrome), trisomy 18 (Edwards syndrome), and trisomy 13 (Patau syndrome); the description of the operon model of gene regulation by François Jacob and Jacques Monod in 1961; and the deciphering of the genetic code by Har Gobind Khorana, Marshall Nirenberg, and Severo Ochoa in 1966.


The discovery of restriction endonucleases (enzymes capable of cutting DNA at specific sites) led to an entirely new field within genetics called biotechnology. Mutations, such as the sickle-cell mutation, could be identified using restriction endonucleases. Use of these enzymes and DNA banding techniques led to the development of DNA fingerprinting. In 1979 human insulin and human growth hormone were synthesized in Escherichia coli. In 1981, the first cloning experiments were successful when the nucleus from one mouse cell was transplanted into an enucleated mouse cell. By 1990, cancer-causing genes called oncogenes had been identified, and the first attempts at human gene therapy had taken place. In 1997, researchers in Scotland successfully cloned a living sheep. As the result of a series of conferences between 1985 and 1987, an international collaboration to map the entire human genome began in 1990. A comprehensive, high-density genetic map was published in 1994. In 2003 the sequencing of the human genome was completed, and the finished sequence was published in 2004.


In the decade or so since the end of the Human Genome Project, genome sequences for several species have been completed. These include sequences for the rat and chicken (2004), dog and chimpanzee (2005), honey bee (2006), rhesus macaque (2007), platypus (2008), and cattle (2009). The first personal genome was sequenced in 2007. Human genome sequences for different geographic and cultural groups, such as the Han Chinese (2007), Yoruba (2008), Korean (2009), and Southern African (2010) have also been completed; the Neanderthal genome sequence was finished in 2010. In 2011 Eric D. Green, Mark S. Guyer, and the National Human Genome Research Institute reported that specific genes for about three thousand Mendelian (monogenic) diseases had been discovered, along with genetic associations between more than nine hundred genomic loci and multigenic traits. Such discoveries have been made possible by the collection of comprehensive catalogs of genetic variation in the human genome.




Impact and Applications

The impact of genetics is immeasurable. In less than one hundred years, humans went from complete ignorance about the existence of genes to the development of gene therapies for certain diseases. Genes have been manipulated in certain organisms for the production of drugs, pesticides, and fungicides. Genetic analysis has identified the causes of many hereditary disorders, and genetic counseling has aided innumerable couples in making difficult decisions about their reproductive lives. DNA analysis has led to clearer understanding of the manner in which all species are linked. Techniques such as DNA fingerprinting have had a tremendous impact on law enforcement.


Advances in genetics have also given rise to a wide range of ethical questions with which humans will be struggling for some time to come. Termination of pregnancies, in vitro fertilization, and cloning are just some of the technologies that carry with them serious philosophical and ethical problems. There are fears that biotechnology will make it possible for humans to “play God” and that the use of biotechnology to manipulate human genes may have unforeseen consequences for humankind. For all the hope that biotechnology offers, it carries with it possible societal changes that are unpredictable and potentially limitless. Humans may be able to direct their own evolution; no other species has ever had that capability. How genetic technology is used and the motives behind its use will be some of the critical issues of the future.




Key Terms




chromosome theory of heredity: the theory put forth by Walter Sutton that genes are carried on cellular structures called chromosomes

Mendelian genetics: genetic theory that arose from experiments conducted by Gregor Mendel in the 1860s, from which he deduced the principles of dominant traits, recessive traits, segregation, and independent assortment

model organisms: organisms, from unicellular to mammals, that are suitable for genetic research because they are small and easy to keep alive in a laboratory, reproduce a great number of offspring, and can produce many generations in a relatively short period of time

one gene-one enzyme hypothesis: the notion that a region of DNA that carries the information for a gene product codes for a particular enzyme, later refined to the “one gene-one protein” hypothesis and then to the “one gene-one polypeptide” principle





Bibliography


Avise, John C. Conceptual Breakthroughs in Evolutionary Genetics: A Brief History of Shifting Paradigms. Amsterdam: Elsevier, 2014. Digital file.



Ayala, Francisco J., and Walter M. Fitch, eds. Genetics and the Origin of Species: From Darwin to Molecular Biology Sixty Years After Dobzhansky. Washington, DC: National Academies P, 1997. Digital file.



Babkov, V. V. The Dawn of Human Genetics. Cold Spring Harbor: CSHLP, 2013. Print.



Carlson, Elof Axel. Mendel’s Legacy: The Origin of Classical Genetics. Cold Spring Harbor: CSHLP, 2004. Print.



Corcos, A., and F. Monaghan. Gregor Mendel’s Experiments on Plant Hybrids: A Guided Study. New Brunswick: Rutgers UP, 1993. Print.



Darwin, Charles. The Variation of Animals and Plants Under Domestication. 1875. New York: NYUP, 2010. Print.



Fujimura, Joan H. Crafting Science: A Sociohistory of the Quest for the Genetics of Cancer. Cambridge: Harvard UP, 1996. Print.



Green, Eric D., Mark S. Guyer, and National Human Genome Research Institute. "Charting a Course for Genomic Medicine from Base Pairs to Bedside." Nature 470 (2011): 206–212. Genome.gov, 23 Mar. 2012. Web. 29 July 2014.



King, Robert C., William D. Stansfield, and Pamela Khipple Mulligan. A Dictionary of Genetics. 8th ed. New York: Oxford UP, 2013. Digital file.



Schwartz, James. In Pursuit of the Gene: From Darwin to DNA. Cambridge: Harvard UP, 2008. Print.



Sturtevant, A. H. A History of Genetics. 1965. Reprint. Cold Spring Harbor: CSHLP, 2001. Print.



Tudge, Colin. The Engineer in the Garden: Genes and Genetics, From the Idea of Heredity to the Creation of Life. New York: Hill, 1995. Print.



Tudge, Colin. In Mendel’s Footnotes: An Introduction to the Science and Technologies of Genes and Genetics from the Nineteenth Century to the Twenty-Second. London: Cape, 2000. Print.



Wailoo, Keith, Alondra Nelson, and Catherine Lee. Genetics and the Unsettled Past: The Collision of DNA, Race, and History. New Brunswick: Rutgers UP, 2012. Digital file.



Watson, James. The Double Helix: A Personal Account of the Discovery of the Structure of DNA. 1968. London: Phoenix, 2011. Print.

What is color blindness?


Causes and Symptoms

The retina of the eye is a thin, fragile membrane that contains millions of photoreceptor cells. They convert light energy into an electrical signal, which is transmitted to the brain via the optic nerve. On a microscopic scale, the structure of the retina is like a carpet with its many fibers sticking upward. There are two types of photoreceptor cells, called rods and cones because of their distinctive shapes. Only the cones are important for color vision. There are three varieties of cones with peak sensitivities for red, green, and blue, respectively. The shades and tints of all other colors are mixtures of these three.



Color blindness involves a deficiency in these photoreceptor cells. A deficiency of green photoreceptor cells is much more common than a deficiency of red photoreceptors. Some people are totally color-blind, which means that they are completely unable to distinguish among red, orange, yellow, and green. Color blindness is quite rare in females (less than 1 percent of the population) but is more prevalent in males (about 8 percent).
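
The large male/female gap follows from the fact that the common red-green forms are inherited as X-linked recessive traits (background not stated above). A back-of-envelope Python sketch, assuming an illustrative allele frequency of 8 percent:

    # Males carry one X chromosome, so one defective copy suffices;
    # females need defective copies on both X chromosomes.
    q = 0.08                  # assumed frequency of the defective allele
    male_rate = q
    female_rate = q ** 2
    print(f"males: {male_rate:.1%}, females: {female_rate:.2%}")
    # males: 8.0%, females: 0.64% -- matching the figures quoted above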


Diagnostic tests are available to determine the extent of color blindness. The Ishihara color test, named after a Japanese ophthalmologist, consists of a mosaic of colored dots containing a letter of the alphabet made up of dots of a different color—for example, yellow dots in a background of green ones. Color-blind individuals would be unable to distinguish the letter because yellow and green look the same to them.


A more precise diagnostic test makes use of the Nagel anomaloscope, which has two colored light sources whose brightness can be adjusted. The patient tries to match a given color by superimposing the two light beams while varying their intensities. For normal eyes, red and green lights of similar intensities can be superimposed to create yellow. However, a patient who requires a considerably larger green component to create yellow evidently has a deficiency of green photoreceptor cells.




Treatment and Therapy

Color blindness is a genetic defect from birth, not a disease. No procedure is known by which it can be corrected. Color-blind people must find ways to counter the effects of their condition. For example, they can obtain driver’s licenses because they learn that stoplights are always red on top, yellow in the middle, and green on the bottom. Color-blind individuals may need help, however, with tasks such as clothing selection. Good color discrimination is required for some occupations, such as interior decorating, graphic design, advertising, or airplane piloting. Fortunately, color blindness is not a deterrent for most jobs.





What is whiplash?


Causes and Symptoms

When a moving car collides with an obstacle, the driver and passengers suddenly feel themselves thrown forward. If an occupant’s head hits the dashboard or windshield, then serious injury can result. Seat belts, a padded dashboard, and air bags can reduce the severity of the impact. Conversely, when a car is hit by another vehicle from behind, the occupants will feel an extra forward push against the trunk of their body while the head snaps backward. This so-called whiplash effect is like the crack of a whip made by the driver of a team of horses, in which the whip handle is rapidly moved forward while the end of the rope snaps backward. In a rear-end automobile collision, if a person’s head flies backward beyond its normal range of motion, then the muscles and ligaments of the neck can be damaged. The person may not feel pain right away, but it can show up after a delay of some days. In severe cases, vertebrae of the spine can be knocked out of alignment or fractured. Most commonly, injury occurs at the junction of the fourth and fifth vertebrae. The upper four vertebrae are flexible and act as the lash, while the lower ones act as the handle of the whip.








Treatment and Therapy

Various treatments for whiplash are available, depending on the severity of the injury. Physically demanding activities such as sports or heavy lifting should be avoided. For pain control, aspirin or other anti-inflammatory drugs can be taken. If muscle spasms occur, then a physician may prescribe physical therapy, which includes heat treatment, massage, and stretching exercises. Wearing a neck collar can be useful to limit the motion of the head so that the muscles and ligaments can heal.




Perspective and Prospects

Most automobiles have a headrest attached to the top of each seatback. Its purpose is to prevent an occupant’s head from snapping backward in a rear-end collision. Whiplash injuries happen frequently in cases such as a multiple-car pileup on an interstate highway. Slower speeds and a greater distance between cars are especially important during foggy driving conditions or on an icy road.


Whiplash injury is not limited to car accidents. In football, a quarterback sometimes is tackled from behind, causing the same effect as a car collision from the rear. On the ski slope, a skier may lose control and crash into someone who has stopped to rest. During the snow season, some mountain towns have a tubing hill where people can slide down on inflated inner tubes, with frequent collisions resulting. Any activity that causes excessive flexion of the neck muscles and ligaments can result in whiplash injury.





Tuesday 28 November 2017

What is toxicology?


Science and Profession

Since its inception, toxicology has gone through many paradigmatic shifts and has developed several subdisciplines, each with its own approaches and techniques but united by the fundamental challenge of understanding and controlling the interaction between toxic agents and physiological processes. The scale of analysis in which toxicological questions are investigated ranges from molecules to ecosystems, and toxicologists study all kinds of organisms, from the smallest viruses to the largest terrestrial and aquatic organisms.




The popular expression “the dose makes the poison” is one of the key principles of toxicology. It refers to the fact that adverse physiological effects can be produced by practically any substance if given at a dose large enough to overwhelm the body’s natural capacity to process it. Extremely toxic chemicals impart their effects at very small doses. A key measure of toxicity is the lethal dose (LD), which is defined as the amount of a toxic substance that kills an organism. Because the individuals in a group of organisms do not exhibit identical responses, the actual quantitative measure is termed “LD-50,” which is the dose that kills 50 percent of the individuals in an exposed population.
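
As a rough illustration, an LD-50 can be read off a dose-response curve. A minimal Python sketch using invented data and simple interpolation on the log-dose scale (real studies use probit or similar regression methods):

    # Interpolate the dose at 50% mortality on a log-dose scale.
    # All doses (mg/kg) and mortality fractions here are invented.
    import math

    doses = [10, 20, 40, 80, 160]
    mortality = [0.05, 0.20, 0.45, 0.70, 0.95]

    for i in range(len(doses) - 1):
        m1, m2 = mortality[i], mortality[i + 1]
        if m1 <= 0.5 <= m2:                      # bracket 50% mortality
            frac = (0.5 - m1) / (m2 - m1)
            log_ld50 = math.log(doses[i]) + frac * (
                math.log(doses[i + 1]) - math.log(doses[i]))
            print(f"estimated LD-50: {math.exp(log_ld50):.0f} mg/kg")
            break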


There are three major branches of toxicology: descriptive, mechanistic, and regulatory toxicology. All three branches contribute to risk assessment, which is the main societal application of toxicological knowledge. Mechanistic toxicology is concerned with elucidating the biochemical mechanisms underpinning the expression of toxic effects of poisons at the cellular and/or molecular levels. Assessing the potential toxicity risks associated with new chemicals depends largely on the work of mechanistic toxicologists, who are able to determine whether toxic effects observed in laboratory species are relevant to human exposure levels and physiological attributes. Mechanistic toxicologists also study dose-response relationships that are important for establishing safety thresholds of exposure for industrial chemicals used in manufacturing products and for pharmaceuticals used to treat diseases.


Regulatory toxicology involves the study of how best to protect people from toxic chemicals through the formulation of regulatory policies that govern the manufacture of commercial products, the use and disposal of potentially toxic chemicals, and the protection of workers from toxic exposures at occupational settings. The final responsibility for rejecting or approving specific chemicals for use in commerce rests with regulatory toxicologists, who are trained to evaluate data generated by mechanistic and descriptive toxicologists in light of federal and regional policies designed to protect public and environmental health. Regulatory toxicologists must make judgments following an evaluation of risks associated with short-term exposures and immediate effects (acute toxicity) as well as longer-term exposures and small doses that may result in symptoms long after the initial exposure occurs (chronic toxicity).


Descriptive toxicology forms the bridge between mechanistic and regulatory toxicology. Descriptive toxicologists are responsible for using toxicity testing for comparative risk assessment. For example, the US Food and Drug Administration (FDA) is charged with protecting public health through rigorous evaluation of toxicity levels of drugs and food additives, and descriptive toxicologists employed by the FDA are experts in selecting the best toxicity tests for that purpose. Descriptive toxicologists in the service of the US Environmental Protection Agency (EPA) or the US Department of Agriculture (USDA) collaborate in the comparative toxicity assessment of pesticides used on crops or to control disease vectors. Industrial toxicologists perform similar roles for chemicals used in manufacturing, to minimize adverse impacts on people and the environment.


Toxicologists are usually trained at graduate-level institutions that award master’s or doctorate degrees following specialization in one or more subdisciplines. Clinical toxicology is typically studied and practiced in the hospital setting to quickly recognize the symptoms of toxic exposure, usually in an uncommunicative patient, and to identify the responsible poison, followed by administration of an antidote or other forms of therapy.


Environmental toxicology is the study of the sources, transportation, transformation, and sinks of toxicants in the environment, and how humans come into contact with, and suffer from, exposure to these toxicants. Ecotoxicology is a subspecialty of environmental toxicology that deals strictly with the study of the effects of toxicants on wildlife and ecosystems.


Forensic toxicology is the study of how poisons kill people and how to measure residual levels of poisons in corpses in order to determine the cause and time of death. The practice of forensic toxicology is essential in cases of suicide or homicide involving poisons.


Molecular toxicology is the study of the effects and metabolism of toxic materials in the body at the level of molecules, typically involving molecular genetic analysis and biochemical enzymology. Molecular toxicologists also study how variability in individual genetic characteristics affects human sensitivity to toxic agents, just as age, gender, and body size can all influence human exposure and sensitivity to toxic substances.


Pharmacotoxicology is the study of the toxic effects of pharmaceutical products intended for human or animal consumption. This discipline is aimed at finding the appropriate dose of a chemical that has a healing effect without overwhelmingly toxic side effects.


Most practicing toxicologists belong to the professional Society of Toxicology, an organization that defines the responsibilities of toxicologists. The first is to develop new and improved ways of determining the potentially harmful effects of chemical and physical agents and the dose that will cause these effects. This responsibility requires a thorough understanding of the molecular, biochemical, and cellular processes responsible for diseases caused by exposure to toxic substances. The second responsibility is to study commercial chemicals and products using carefully designed and controlled empirical analyses and modeling to determine the conditions under which they can be used with minimum or no adverse effects on human health, wildlife, and ecosystems. The third is to conduct toxicological risk assessments, including estimating the probability that specific chemicals or processes pose significant risks to human health and/or the environment. The risk assessments form the basis for establishing rules and regulations that underpin government policies designed to protect public health and the environment.




Diagnostic and Treatment Techniques

The diagnostic and treatment techniques used by toxicologists depend largely on the branch of toxicology in which they practice. For example, clinical toxicologists in the hospital setting must be proficient at rapid diagnostic techniques for implementing emergency response to acute exposure to poisons. According to data published by the American Association of Poison Control Centers (AAPCC), which operates the National Poison Data System, 10,830 calls are made to poison centers in the United States each day; these poison response centers deal with a new poisoning case approximately every thirteen seconds. The AAPCC reported that US poison response centers received more than 3.1 million calls in 2013.


More than half of all poisoning cases occur in children younger than six years of age, although these incidents are rarely fatal. A major challenge for toxicologists is to quickly diagnose poisoning events in children who typically may not have the vocabulary or level of consciousness to describe the exposure event to their caregivers or to the emergency response staff when they are brought to the hospital. Most poisonings occur in the home from domestic items such as cosmetics and personal care products, cleaning fluids, medications, and pest control chemicals. Therefore, the first step in diagnosis is to identify as precisely as possible the specific chemical(s) that caused the poisoning; this can most easily be achieved through perusing the list of ingredients on the suspected container but is not always possible. It is more difficult if the poison is gaseous with a remote source. Therefore, body fluid samples (saliva, urine, or blood) can be tested using rapid techniques to identify major categories of common poisons, their known physiological effects, or biomarkers of exposure.


Application of first aid techniques is the first line of treatment for poisonings after ensuring that the patient is removed completely from the source of exposure. Follow-up treatment of poisoned patients involves three major steps. The first is to facilitate the elimination of ingested or injected poison from the body. Stomach pumping is sometimes effective for ingested poisons if applied within a time frame that occurs before a fatal dose is absorbed in the stomach. Typically, in a gastric lavage process, a siphon tube is inserted into the stomach through the mouth to repeatedly flush and empty the contents.


The second step is the application of effective antidotes that aid the excretion or inactivation of the poison either through natural liver functions or through specific biochemical reactions. Activated charcoal may be given, preferably to conscious patients through the mouth, for the purpose of adsorbing the poison, thereby reducing the biologically available dose. In serious situations in which poisons are injected into the bloodstream or when poisons have been absorbed extensively from the stomach or lungs, hemodialysis may be performed to filter the blood directly through the use of artificial kidneys. Where artificial kidneys are not available, charcoal may be used for blood filtration (hemoperfusion). For poisoned patients exhibiting respiratory distress, breathing support through ventilators is an essential treatment strategy.


The third step is treating symptoms and aiding recovery. Depending on the nature of the poison, treatment may involve controlling seizures, correcting irregular heartbeat, regulating blood pressure, and repairing or replacing damaged organs, including the kidneys and liver.


Posttreatment counseling is recommended to prevent further poison exposures through educational programs, drug rehabilitation, or mental health referrals in cases of suicide attempts. Poisoning cases may also involve substantial legal proceedings for forensic toxicologists or in cases of potential homicide.




Perspective and Prospects

Poisons and their effects on human health have been known since antiquity, but the scientific study of poisons and systematic information on their synthesis and mode of action is a relatively recent development. The Swiss German scientist Philippus Aureolus Theophrastus Bombastus von Hohenheim (1493–1541), popularly known as Paracelsus, is considered by many to be the world’s first authority on and founder of toxicology as a scientific discipline. Among his several notable accomplishments, Paracelsus is credited with introducing the use of mercury and arsenic into medical practice for curative purposes. Furthermore, he is the source of the famously paraphrased maxim “the dose makes the poison.” His exact statement in German translates as, “All things are poison and nothing is without poison; only the dose makes a thing not be poison.”


Toxicology is a rapidly evolving specialty, driven by innovations in chemical manufacturing and the growing number of toxic substances accessible to the general population. Progress in toxicology is also driven by discoveries in human genomics and proteomics. The more that is learned about the variability in the nucleotide sequences of individuals in a population, the better understood are the differences in human response to toxic chemicals. Additional work in mechanistic toxicology and descriptive toxicology remains to be done to understand adequately the interactions among genetics, age, gender, body size, and behavioral traits that mediate human exposure and response to poisons. Furthermore, the human body is exposed to a large number of chemicals on a daily basis. Very little is known about how these chemicals interact to make people more or less vulnerable to the toxic effects of poisons.


It is important to create a seamless strategy for translating laboratory data, including those based on animal or microbial model systems, into regulatory policies designed to protect the most vulnerable members of society. It is also important to create a seamless strategy for understanding the interactions of toxic chemicals in ecosystems and how these interactions influence human vulnerability and sensitivity to toxic exposures at the workplace, on the streets, and at home. Finally, toxicology has been neglected for too long by the engineering professions that create the products upon which society relies. Toxicology must be engaged as much as possible in the product design stage, before large-scale manufacturing of consumer products that end up endangering the public and ecosystems through the expression of toxicity at various stages of the product life cycle.




Bibliography


Amer. Assn. of Poison Control Centers. AAPCC Prevention. AAPCC, n.d. Web. 13 Feb. 2015.




Agency for Toxic Substances and Disease Registry. Centers for Disease Control and Prevention, 1 Aug. 2013.



"Common Toxicology Terms." Society of Toxicology, 2013.



Hodgson, Ernest, and Robert C. Smart, eds. Introduction to Biochemical Toxicology. 3d ed. New York: Wiley Interscience, 2001.



Hoffman, David J., et al. Handbook of Ecotoxicology. Boca Raton, Fla.: CRC Press, 1995.



Klaassen, Curtis D., ed. Casarett and Doull’s Toxicology. 8th ed. New York: McGraw-Hill, 2013.



Landis, Wayne G., Ruth M. Sofield, and Ming-ho Yu. Introduction to Environmental Toxicology. 4th ed. Boca Raton, Fla.: CRC Press, 2011.



Malachowski, M. J., and Arleen F. Goldberg. Health Effects of Toxic Substances. 2d ed. Rockville, Md.: Government Institutes, 1999.



Smart, Robert C., and Ernest Hodgson, eds. Molecular and Biochemical Toxicology. 4th ed. Hoboken, N.J.: Wiley, 2008.

Monday 27 November 2017

How is the changing relationship between humans and nature represented in British literature? How are Beowulf, Paradise Lost, Oroonoko, and Heart...

In British literature, nature was originally regarded as threatening, but by the early modern period it came to be associated with noble simplicity and goodness. By the turn of the 20th century, however, some works of British literature such as Heart of Darkness regarded both nature and civilization as evil. In Beowulf (written down around 1000 CE), nature is regarded as threatening and frightening. Grendel and his mother are creations of nature, and they threaten Hrothgar and his men. Hrothgar and his community only feel safe within the confines of their great hall, called Heorot. The hall is described as "lofty and broad-gabled" (Childs translation), while Grendel is "the fell prowler about the borders of the homes of men, who held the moors, the fens." In other words, the lands beyond the community's homes are held by a monster, and he rules over the moors and the wilds. In this epic, the hall is the center of comfort and civilization, while nature beyond it is terrifying and uncontrollable.

In Paradise Lost (published 1667), the natural world is also threatening in many ways. While the Garden of Eden is nature perfected, it is bound on all sides, and it is still vulnerable to intruders. In Book IV, Satan finds his way into the garden, and in Book IX, he corrupts humankind by enticing Eve to eat from the tree of knowledge. In Paradise Lost, nature, in the form of fruit from a tree, is the source of evil. While the natural world is beautiful at first, it can also be corrupted and become the source of unhappiness. Satan is motivated to bring about humans' fall because the beauty of the Garden of Eden makes him jealous; therefore, nature is beautiful but also a source of corruption.


In Oroonoko (published 1688), nature is a source of innocence and beauty. The people in Surinam live naturally, simply, and innocently like Adam and Eve before their fall, and they are corrupted by civilization. Oroonoko himself is the picture of physical grace, as he comes from this natural world.


Unlike in the earlier works, in Heart of Darkness, evil lurks both in civilization and in nature. In fact, it lurks deep in the hearts of men like Kurtz. Marlow, the narrator, compares the Congo to a "snake," and as he penetrates deeper and deeper into the Congo, nature becomes more and more evil and threatening. However, in Marlow's dark world, civilization is no better. He describes Brussels, Belgium as a "whited sepulchre," or tomb. While Africa might be dark and evil, Brussels is white and evil. By the time Heart of Darkness was written (1899), evil and darkness lurked in both nature and civilization.

What is mucormycosis?


Definition

Mucormycosis is a serious infection caused by a fungus that affects the sinuses, brain, and lungs. The infection occurs most often in people who have a compromised immune system. The prognosis is usually poor, even with treatment.




Causes

The fungus is often found in soil and in decaying plants. It will not make most people sick. People are more likely to get the infection if they have a weakened immune system.




Risk Factors

The factors that increase the chance of developing mucormycosis include having a weakened immune system caused by diabetes, acquired immunodeficiency syndrome, leukemia, or lymphoma; recently receiving an organ transplant; long-term steroid use; treatment with deferoxamine (an antidote to iron poisoning); metabolic acidosis (too much acid in the blood); having a sinus infection or pneumonia; and having mucormycosis of the gastrointestinal tract, skin, or kidneys.




Symptoms

Symptoms of mucormycosis depend on the location of the infection. Symptoms of infections of the sinuses and the brain (rhinocerebral mucormycosis) include acute sinusitis, fever, swollen or protruding eyes, dark nasal scabs, and redness of the skin over the sinuses. Symptoms of infections of the lungs (pulmonary mucormycosis) include fever, cough, coughing up blood, and shortness of breath. Symptoms of infections of the gastrointestinal tract (gastrointestinal mucormycosis) include abdominal pain and vomiting blood. Symptoms of infections in the kidneys (renal mucormycosis) include fever and pain in the side between the upper abdomen and the back.




Screening and Diagnosis

A doctor will ask about symptoms and medical history and will perform a physical exam. Tests might include a magnetic resonance imaging (MRI) scan (a scan that uses radio waves and a powerful magnet to produce detailed computer images), a computed tomography (CT) scan (a detailed X-ray picture that identifies abnormalities of fine tissue structure), and an analysis of a tissue sample.




Treatment and Therapy

Treatment options for mucormycosis include aggressive surgery to remove all the dead or infected tissue; early surgery may improve the prognosis. Another treatment is antifungal therapy, in which IV antifungal medications are used to kill the fungus throughout the body; even with this treatment, however, the prognosis is usually poor.




Prevention and Outcomes

The fungus that causes mucormycosis is found in many places, so avoiding contact with it is difficult. The best prevention is to control or prevent the conditions related to this infection.





What is bilingualism?


Introduction

Bilingualism is generally defined as the state of knowing two languages. This term is now commonly extended to include multilingualism, the state of knowing three or more languages. Bilingualism has long been of interest to psychologists because it raises interesting questions about the nature of linguistic knowledge and the nature of learning. In addition, because language is intimately tied to culture and one’s sense of group identification, bilingual people may have a more complex and multifaceted sense of self and group identity than monolinguals.








To Know a Language

Knowing a language requires, at a minimum, knowledge of vocabulary (words, how they are pronounced, and the concepts to which they refer) and grammar (the rules for combining words into well-formed sentences). Conventionally, knowing a language also means understanding how to read and write it and how to use it (for example, when to use formal or informal language, proper forms of address, and so forth). This last type of knowledge is often called "communicative competence."


Knowledge of one’s native language usually involves all these components. However, knowledge of a second or third language may be limited: for example, a bilingual person may be better at reading and writing in the second language than at listening and speaking, know only a specific vocabulary (such as that related to work), speak with a heavy accent, or produce ungrammatical sentences.




Types of Bilingualism

Bilingualism is considered to be coordinate, compound, or subordinate. In coordinate bilingualism, a person has parallel but separate systems for each language. This type of bilingualism is most common among people who grew up in two-language households and acquired both languages from infancy. In compound bilingualism, the person does not completely separate the two languages. Typically, the person has a unified concept for physical objects or abstract ideas that is expressed by two different words. Subordinate bilingualism arises when the second language is learned after childhood and sometimes in formal settings: in this case, the person is clearly less proficient in the second language than in the first. Also relevant to this discussion is the notion of language dominance. A bilingual person’s native language is usually the dominant one, but there are exceptions. For example, immigrant children who speak their native language at home may be more eloquent and literate in the ambient language, their second language.


Another common distinction is between simultaneous bilingualism, in which two languages are acquired at the same time in early childhood, and sequential bilingualism, in which the second language is learned later in life. Simultaneous bilingual people, sometimes called "early bilinguals," are typically fully proficient in both languages. However, it is also typical for one language to become more dominant than the other, based on the amount of use. Sequential, or late, bilinguals are likely to exhibit characteristics of nonnative speakers (such as foreign accents or errors in sentence construction), which has led to the idea that the age of language acquisition has an effect on the ability to learn language. The critical period hypothesis proposes that there is a critical developmental period for the acquisition of language, after which native proficiency may never be achieved.




Approaches to the Study of Bilingualism

Bilingualism is a complex, multifaceted area of study that can be approached from many perspectives, including linguistic and psycholinguistic, social, and pedagogical.



Linguistic and Psycholinguistic

In 1957, linguist Noam Chomsky proposed that human beings are endowed with an innate capacity to acquire language: all they need is exposure to language, and the acquisition device figures out the grammar. It has been a matter of some debate whether bilinguals, especially sequential bilinguals, are able to acquire their second language in the same fashion as their first or whether they require the use of general learning strategies, such as rote memorization, and the explicit learning of grammar.


It is clear that on the way to becoming bilingual, second-language learners, unlike native speakers, develop an interlanguage. Certain aspects of this interlanguage may be due to transfer of some aspect of a first language to a second: for example, second-language words may be pronounced with a foreign accent or inflections may be omitted. Other aspects may reflect a universal developmental sequence that learners of a first or second language go through. At a given point in time, a second-language learner develops a stable grammar, or set of rules, for the interlanguage.


Psycholinguistic approaches to bilingualism acknowledge a distinction between knowing a second language (as demonstrated in paper-and-pencil tests) and being able to use that knowledge under time constraints. As speakers and listeners, human beings are time bound: by some estimates, the average speaking rate is 180 words per minute, or 3 words per second. Listeners, of course, must be able to process spoken language efficiently or risk lagging behind and missing some portion of the spoken message. Reading rates, interestingly enough, are typically even faster, with proficient readers reading at a rate of 4 words per second. Hence, one focus of psycholinguistic research on bilingualism has been on the extent to which second-language learners are able to accurately extract the meaning conveyed by spoken or written language and whether they do so in the same time frame as native-language users do.


Research on the production of a second language focuses less on timing. Of course, speaking is also time constrained: listeners have trouble attending to very slow speech. However, speakers are certainly able to impose their own internal constraints on the language they produce, pausing, for example, in the face of word-retrieval difficulty. This is even truer of written production—writers may pause indefinitely—and this is one reason that this area has received relatively less attention. Issues of interest include how speakers manage to keep one language suppressed while speaking the other and the degree to which they can shut off the language that is not in use. Many bilinguals who interact regularly with other bilinguals do not do this; rather they routinely switch between languages (this is called "code switching"), sometimes several times per sentence.




Sociolinguistic

These approaches to the study of bilingualism emphasize communicative competence—knowledge of the implicit rules governing interactions with others in the same speech community. These rules include which topics are suitable in given situations, which speech styles are appropriate for different people, and even when to speak or be silent. If they lack communicative competence, even bilinguals with near-native linguistic competence will stand out as nonnative or be received uneasily by monolinguals in a given speech community. For example, American English speakers expect a response of “Fine,” “Great,” or even “Hey” to the question “How are you?” which is functionally a greeting rather than a question. Nonnative speakers may not know this.


Bilinguals’ varying degrees of communicative competence in their multiple speech communities can complicate their sense of identity and their sense of belonging to a specific community. This, combined with other factors, such as the relative social status of their languages, may increase or decrease the likelihood that they will desire (or be able) to belong to a certain speech community. Communicative competence can even vary across different situations, such as interactions with elders versus those with peers. Some bilinguals may report feeling that they do not completely belong in any given community or feeling uncomfortable using their native language because of the limited contextual rules they know for it. Others, however, report appreciating the larger social access they have because of their ability to communicate in more than one language.




Pedagogical

The pedagogical approach examines two major populations of interest: the students who are nonnative speakers of the community language (second-language learners) and the students who are native speakers of the community language and are learning another language (foreign-language learners). In general, these two types of learners acquire a target language under vastly different circumstances.


Many second-language learners are immigrants who are immersed in the new language and must gain communicative and academic competence quickly. In some cases, a student's native language is not used at all to teach the new language. This is particularly true of school-aged students in the United States, who are most likely to be subject to state laws regarding bilingual education. These laws determine whether and how long nonnative English speakers may receive instruction or support in their native or heritage language within American public schools. Many states do not allow any instruction in a student's native language, and students are simply expected to acquire the language, along with the communicative competence to interact in the new language and culture.


Foreign-language learners must gain some communicative competence in the relatively short amount of time they spend in the classroom. Foreign-language teaching methods vary depending on the context and the learners' goals. Large classes and minimal instructor support generally favor the grammar-translation method, which involves little writing, speaking, or interaction and instead focuses on grammar learning. Given the readily available teaching materials developed for this method, lesson preparation may be relatively less time consuming. This method is also commonly used by those who want to learn to read a language for research purposes but do not plan to write or otherwise communicate in it. If listening and speaking skills are the focus, the audiolingual method may be employed. This involves listening to, repeating, and memorizing dialogues, giving a learner practice with vocabulary, word order, and pronunciation. Most basic language programs in American universities favor the communicative language teaching approach, which counts communicative competence as the ultimate learning goal, even if some grammatical accuracy is sacrificed. This approach is ideal for those who plan to travel, study, or work abroad for a limited amount of time but who do not need to be highly proficient in the language. Those who need higher proficiency for work, study, or assimilation purposes typically move on to content-based learning, in which a given field is studied in the foreign language (for example, business in German or literature in Russian).


Foreign-language students who wish to become highly fluent generally need a period of time in an immersion situation, living and interacting with speakers of the target language. Not only does this provide a context for the development of communicative competence, it provides a way for learners to achieve real fluency in the language through sheer practice.





Bibliography


Birdsong, David, ed. Second Language Acquisition and the Critical Period Hypothesis. Mahwah: Erlbaum, 1999. Print.



De Bot, Kees, Wander Lowie, and Marjolijn Verspoor. Second Language Acquisition: An Advanced Resource Book. New York: Routledge, 2005. Print.



Grosjean, François, and Ping Li. The Psycholinguistics of Bilingualism. Malden: Wiley-Blackwell, 2012. Print.



Kroll, Judith F., and Annette M. B. De Groot, eds. Handbook of Bilingualism: Psycholinguistic Approaches. Oxford: Oxford UP, 2009. Print.



Nicol, Janet, ed. One Mind, Two Languages: Bilingual Language Processing. Malden: Blackwell, 2001. Print.



Rosé, Carlos D. "Bilingual Families." KidsHealth.org. Nemours Foundation, Aug. 2011. Web. 18 Feb. 2014.



Sanz, Cristina, ed. Mind and Context in Adult Second Language Acquisition. Washington, DC: Georgetown UP, 2005. Print.



Saville-Troike, Muriel. Introducing Second Language Acquisition. 2nd ed. Cambridge: Cambridge UP, 2012. Print.



Spada, Nina, and Patsy M. Lightbown. How Languages Are Learned. 4th ed. Oxford: Oxford UP, 2013. Print.

What is the incentive theory of motivation?


Introduction

Motivation refers to a group of variables that determine which behavior will occur and how strong and persistent that behavior will be. Motivation is different from learning. Learning variables are the conditions under which a new association is formed. An association is the potential for a certain behavior; however, it does not become behavior until motivation is introduced. Thus, motivation is necessary to convert a behavioral potential into a behavioral manifestation. Motivation turns a behavior on and off.





Incentive motivation is an attracting force, while drive motivation is an expelling force: incentive is said to “pull” and drive to “push” an individual toward a goal. The attracting force originates from the reward object in the goal and is based on the expectation of finding the goal object in certain locations in the environment. The expelling force originates from within the organism as a need, which is related to disturbances in homeostasis in the body. The two forces jointly determine behavior in a familiar environment. In a novel environment, however, there is not yet an expectation, so no incentive motivation has formed, and drive is the only force causing behavior. The organism can be expected to manifest various responses until goal-oriented responses emerge.


Once the organism achieves the goal, the reward stimuli elicit consummatory responses. Before the organism reaches the goal, the stimuli that antedate it elicit responses of their own; these are termed anticipatory goal responses. The anticipatory responses are based on the associational experience among the goal stimuli, the goal responses, and the situational stimuli present prior to reaching the goal. The anticipatory responses and their stimulus consequences provide the force of incentive motivation. Incentive refers to the expected amount of reward, given certain behavior.


Though drive motivation and incentive motivation jointly determine behaviors, the importance of each differs for different behaviors. For example, bar-pressing behavior for drinking water by an animal in a Skinner box normally requires both drive motivation, induced by water deprivation, and the incentive motivation of a past experience of getting water. Under special conditions, however, the animal will press the bar to drink water even without being water deprived. In this case, drinking is no longer related to drive. This type of drinking is called nonhomeostatic drinking. Drinking a sweet solution, such as one containing sugar or saccharin, does not require any deprivation, so the behavior to get sweet solutions is based on incentive motivation alone. Under normal conditions, sexual behavior is elicited by external stimuli, so sexual drive is actually incentive motivation elicited from without.




Behavior and Incentives

Two experiments will illustrate how the concept of incentive motivation may be applied to explain behavior. Carl J. Warden of Columbia University conducted a study that is regarded as a classic. A rat was placed in the start box of a short runway, and a reward (food) was placed in the goal box at the other end. The food-deprived animal had to cross an electrified grid on the runway to reach the goal. Each time the animal reached the goal, it was brought back to the start box, and the number of times it would cross the grid in a twenty-minute period was recorded. It was found that the longer the food deprivation, the more times the animal crossed the grid, up to about three days without food, after which the number decreased. The animal crossed only about two times with no food deprivation; the number increased to about seventeen at three days of food deprivation, then decreased to about seven at eight days without food. When the animal was deprived of water for one day, it crossed the grid about twenty times to reach a goal box containing water. When the reward was an infant rat, a mother rat crossed about twenty times. A male rat crossed about thirteen times to reach a female rat after being sex deprived (without a female companion) for one day, and a female rat in heat crossed thirteen times to reach a male rat. Even with no object in the goal box, the animal crossed about six times; Warden attributed this to a “novelty” reward. The reward variable in this experiment was the goal object, which was manipulated to fit the source of the drive induced by deprivation or hormonal state (as in an estrous female). The rat, placed in the start box, was pulled toward the goal by the incentive.
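

The approximate counts reported in this passage can be tabulated directly. The following Python sketch simply ranks the goal objects by the crossings they induced; the numbers are rounded readings from the text, not Warden's raw data:

    # Approximate grid crossings in twenty minutes, per the passage above.
    crossings = {
        "empty goal box (novelty)": 6,
        "food (3 days deprived)": 17,
        "water (1 day deprived)": 20,
        "rat pup (mother rat)": 20,
        "female rat (male, 1 day deprived)": 13,
        "male rat (estrous female)": 13,
    }

    # Rank the goal objects by the pulling power they exerted.
    for goal, n in sorted(crossings.items(), key=lambda item: -item[1]):
        print(f"{goal}: {n} crossings")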




Crespi Effect

The second study, conducted by Leo P. Crespi, established the concept of incentive motivation as an anticipatory response. He trained rats in a runway with different amounts of food and found that the animals reached different levels of performance. The speed of running was a function of the amount of reward: the more food in the goal box, the faster the animal would run. There were three groups of rats. Group 1 was given 256 food pellets (about a full day's ration) in the goal box; after twenty training trials, these animals would run at slightly over 1 meter (about 3.28 feet) per second. Group 2 was given 16 pellets, and their speed was about 76 centimeters (2.5 feet) per second. Group 3 was given only 1 pellet, and its speed was about 15 centimeters (6 inches) per second.
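

Crespi's central finding, that terminal running speed tracks reward size, can be summarized the same way. In the sketch below, the speeds are approximate values from the passage, with 104 cm/s standing in for "slightly over 1 meter per second":

    # Approximate terminal running speeds by reward size, per the passage above.
    speed_by_pellets = {1: 15, 16: 76, 256: 104}   # pellets -> cm per second

    for pellets, speed in sorted(speed_by_pellets.items()):
        print(f"{pellets:>3} pellets -> about {speed} cm/s")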


When the speed became stable, Crespi shifted the amount of food: the rats in all groups were now given 16 pellets. The postshift speed eventually, but not immediately, settled near that of the group originally given 16 pellets. An interesting transitional effect of so-called incentive contrast was observed. Immediately after the shift from the 256-pellet reward to the 16-pellet reward, the animals' speed was much lower than that of the group continuously given the 16-pellet reward. Following the shift from the 1-pellet to the 16-pellet reward, however, the animals' speed was higher than that of the group continuously given the 16-pellet reward. Crespi called these the depression effect and the elation effect, or the negative contrast effect and the positive contrast effect, respectively. Clark L. Hull and K. W. Spence, two of the most influential theorists of motivation and learning, interpreted the Crespi effect as evidence of anticipatory responses. They theorized that the goal response had become conditioned to the runway stimuli such that fractional goal responses were elicited. Because these responses occurred before the goal responses, they were anticipatory in nature. The fractional goal responses, along with their stimulus consequences, constitute the incentive motivation that energizes a learned associative potential into behavior.




Manipulation of Motivation

Incentive motivation has been manipulated in many other ways: the delay of reward presentation, the quality of the reward, and various partial reinforcement schedules. In relation to the delay variable, the sooner the reward presentation follows the responses, the more effective it is in energizing behavior, although the relationship is not linear. In the case of partial reinforcement, when the subject received a reward only part of the time, behavior was shown to be more resistant to extinction than when reward was delivered every time following a response; that is, following withdrawal of the reward, the behavior lasted longer when the reward was given only part of the time than when the reward was given every time following the response. The quality of the reward variable could be changed by, for example, giving a monkey a banana as a reward after it had been steadily given raisins. In Warden’s experiment, the various objects (water, food, male rat, female rat, or rat pup) placed in the goal box belong to the quality variable of incentive. Another incentive variable is how much effort a subject must exert to obtain a reward, such as climbing a slope to get to the goal versus running a horizontal path.




Intracranial Self-Stimulation

The term “reinforcer” usually indicates any stimulus whose presentation following a response increases the probability or magnitude of that response. When the response has reached its maximum strength, however, a reinforcer can no longer increase it; nevertheless, it has a maintenance effect, and without it the response would soon cease. A reward thus reinforces and maintains a response. It is believed that rewarding effects are mediated by the brain; the mechanism that serves as the substrate of these effects has been studied.


In a breakthrough experiment in this line of study, in 1954, James Olds and Peter Milner reported that a rat would press a bar repeatedly to stimulate certain areas of its brain. (If the bar press resulted in stimulation of certain other areas of the brain, the rat would not repeat the bar press.) Thus, this particular brain electrical stimulation has a rewarding effect. The phenomenon is termed intracranial self-stimulation. The rewarding effect is so powerful that the hungry animal would rather press the bar to stimulate its brain than eat. It has also been shown that animals will press a bar to self-inject cocaine, amphetamine, morphine, and many other drugs. The rewarding effect is so powerful that if rats or monkeys are given access to a bar that allows continuous self-administration of cocaine, they often die of an overdose. It is now known that the neurotransmitter involved in this rewarding effect, as well as in the rewarding effect of food, is dopamine, acting at the nucleus accumbens, a part of the limbic system in the brain. Addictions and drug-directed behaviors can be understood better because of studies related to the brain reward mechanism. This mechanism is defined as the rewarding effect of various stimuli, such as food, cocaine, and intracranial self-stimulation, as related to dopamine activity in the brain. Whether incentive motivation is mediated by the same brain mechanism can also be studied.




Achievement Motivation

In humans, achievement motivation can be measured to predict which tasks a subject will choose among tasks of different difficulty, as well as how persevering the subject will be on encountering failure. Achievement motivation is related to past experiences of rewards and of failures to obtain a reward, so it becomes an incentive motivation anticipating either success or failure. Fear of failure is a negative motivational force; that is, it contributes negatively to achievement motivation. People with a strong fear of failure will choose easy tasks to ensure success, and on encountering failure they will give up quickly.


Unless an individual anticipates or believes that the effort will lead to some desired outcome, the person will not expend much effort. Expectancy theory states that how much effort a person will expend depends on the expected outcome of that effort. If the outcome is perceived to depend on the effort, the person will work as hard as possible. In a classroom setting, effort can be evaluated from a student's attendance, note taking, and discussions with classmates or teachers. The expected outcome would be to earn a particular grade, as well as perhaps to obtain a scholarship, make the dean's list, obtain a certain job, gain admission to graduate school, or gain respect from peers and parents. Unless the effort is perceived to be related to the outcome, little effort will be expended.
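

Expectancy theory is often formalized, following Victor Vroom's version (which the passage does not name), as motivational force = expectancy × instrumentality × valence. A minimal Python sketch under that assumption, with all identifiers ours:

    # Vroom-style expectancy model: one common formalization of expectancy
    # theory, offered as an illustration rather than as this author's own.
    def motivational_force(expectancy, instrumentality, valence):
        # expectancy: perceived chance that effort produces performance (0 to 1)
        # instrumentality: perceived chance that performance yields the outcome (0 to 1)
        # valence: subjective value of the outcome
        return expectancy * instrumentality * valence

    # A student who sees no link between studying and the grade expends no
    # effort, no matter how valuable the grade is:
    print(motivational_force(0.9, 0.0, 10))   # 0.0
    print(motivational_force(0.9, 0.8, 10))   # 7.2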


If one is expecting a big reward, one will work harder than if the reward were small: an Olympic gold medal is worth harder work than a school gold medal. Anyone can affect other people's behavior with the proper incentive; behavior can be manipulated to promote learning in students and productivity in industry. The way incentive is used to promote productivity distinguishes a free enterprise system based on a market economy from a socialist society with a controlled economy that is not based on market forces. In such a controlled economy, one's reward is based not on the amount of one's economic contribution but on the degree of one's socialistic behavior; political background, in terms of family, loyalty to the party, and “political consciousness,” matters most. Under this kind of reward situation, it is difficult or impossible to predict what kinds of activities will be reinforced and maintained. The expected outcome of an individual's effort or behavior is the incentive motivation; teachers and managers must understand it to promote desired learning and production. For example, an employee will be motivated by a pay raise to perform certain tasks well only when he or she perceives the relationship between the effort and the raise. Likewise, a student will be motivated to study only when he or she sees the relationship between the effort and the outcome.




Relationship to Pleasure

The concepts of incentive, reward, and reinforcement originated with the concept of pleasure, or hedonism. The assumption that a major motivation of behavior is the pursuit of pleasure has a long history. Epicurus, a fourth century b.c.e. Greek philosopher, asserted that pleasure is good and wholesome and that human life should maximize it. Later, Christian philosophers asserted that pleasure is bad and that if a behavior leads to pleasure, it is most likely bad as well. John Locke, a seventeenth century British philosopher, asserted that behavior is based on maximizing anticipated pleasure; whether a behavior would indeed lead to pleasure was another matter. Thus, Locke's concept of hedonism became a behavioral principle. Modern incentive motivation, based on anticipation of reward, has the same tone as Locke's behavioral principle: both traditions treat incentive and reinforcement as generators of behavior.


There is a danger of circularity in this line of thought. For example, one may explain behavior in terms of its obtaining a reward and then explain or define the reward in terms of the behavior; no new understanding is gained from such circular reasoning. Fortunately, there is an independent definition of the rewarding effect, in terms of the brain mechanism of reward. If this mechanism is related to pleasure, there could also be a definition of pleasure independent of behavior. Pleasure and reward are the motivating force, and anticipation of them is incentive motivation. Because incentive attracts people toward its sources, behavior can be predictably altered by manipulating those sources.




Bibliography


Bolles, Robert C. Theory of Motivation. 2d ed. New York: Harper, 1975. Print.



Comer, Ronald J., and Elizabeth Gould. Psychology Around Us. Hoboken: Wiley, 2013. Print.



Crespi, Leo P. “Quantitative Variation of Incentive and Performance in the White Rat.” American Journal of Psychology 55.4 (1942): 467–517. Print.



Deckers, Lambert. Motivation: Biological, Psychological, and Environmental. 3d ed. Boston: Allyn, 2009. Print.



Geen, Russell G. Human Motivation: A Social Psychological Approach. Pacific Grove: Brooks, 1995. Print.



Kohn, Alfie. Punished by Rewards: The Trouble with Gold Stars, Incentive Plans, A’s, Praise, and Other Bribes. Boston: Houghton, 1999. Print.



Liebman, Jeffrey M., and Steven J. Cooper, eds. The Neuropharmacological Basis of Reward. Oxford: Oxford UP, 1989. Print.



Logan, Frank A., and Douglas P. Ferraro. Systematic Analyses of Learning and Motivation. New York: Wiley, 1978. Print.



Olds, James, and Peter Milner. “Positive Reinforcement Produced by Electrical Stimulation of Septal Area and Other Regions of Rat Brain.” Journal of Comparative and Physiological Psychology 47 (1954): 419–427. Print.



Ryan, Richard M. The Oxford Handbook of Human Motivation. Oxford: Oxford UP, 2012. Print.



Shah, James Y., and Wendi L. Gardner, eds. Handbook of Motivation Science. New York: Guilford, 2008. Print.



Warden, Carl John. Animal Motivation: Experimental Studies on the Albino Rat. New York: Columbia UP, 1931. Print.

Give two examples of alliteration from the story. Write down the full sentences in which the alliterative elements exist.

Alliteration is a literary device that can be used to make a text sound more poetic. Unlike the end rhymes that modern audiences are more familiar with, where the sounds at the ends of words are the same, alliteration repeats the sounds at the beginnings of words. It was often used in early epic poems like Beowulf in place of end rhyme. 


An example of alliteration would be something like black...

Alliteration is a literary device that can be used to make a text sound more poetic. Unlike the end rhymes that modern audiences are more familiar with, where the sounds at the ends of words are the same, alliteration repeats the sounds at the beginnings of words. It was often used in early epic poems like Beowulf in place of end rhyme. 


An example of alliteration would be something like "black bunnies burrow in the bleak winter" or "the squawking and shrieking of the seagulls skinned my ears."


In "The Necklace," there is quite a bit of alliteration as Madam Loisel imagines the beautiful life that she longs for and describes it in detail. In one example, she imagines dinner:



"When she sat down to dine, before a tablecloth three days old, in front of her husband, who lifted the cover of the tureen, declaring with an air of satisfaction, “Ah, the good pot-au-feu. I don’t know anything better than that,” she was thinking of delicate repasts, with glittering silver, with tapestries peopling the walls with ancient figures and with strange birds in a fairy-like forest; she was thinking of exquisite dishes, served in marvelous platters, of compliment whispered and heard with a sphinx-like smile, while she was eating the rosy flesh of a trout or the wings of a quail."



Still imagining her life, she moves to her effect on others:



"She would so much have liked to please, to be envied, to be seductive and sought after."



Alliteration and other literary devices are used in this part of the story to paint the picture of the beautiful life Madame Loisel craves.

*It's important to note that not all translations of the story are the same. I used the text linked below.*

What's the summary of chapters 7-9 of the book Lyddie by Katherine Paterson?

Let me start very briefly with the ending of chapter 6.  Lyddie was fired from Cutler's tavern and decided to head toward Lowell to become a factory girl.  


Chapter 7 is about Lyddie's journey on the stagecoach to Lowell.  There are other passengers, and for the most part the trip is uneventful.  The stagecoach gets stuck in the mud once, and the men are unable to get it free.  A frustrated Lyddie then takes...

Let me start very briefly with the ending of chapter 6.  Lyddie was fired from Cutler's tavern and decided to head toward Lowell to become a factory girl.  


Chapter 7 is about Lyddie's journey on the stagecoach to Lowell.  There are other passengers, and for the most part the trip is uneventful.  The stagecoach gets stuck in the mud once, and the men are unable to get it free.  A frustrated Lyddie then takes charge of the situation and gets the stagecoach free.  Lyddie's actions impress the coach driver so much that he decides to introduce Lyddie to his sister, Mrs. Bedlow, who owns a boarding house for factory girls. 


In chapter 8 Lyddie meets her roommates at the boarding house.  They are Amelia, Prudence, and Betsy. Mrs. Bedlow gives Lyddie enough money to buy new clothes, and Mrs. Bedlow helps Lyddie get a job in the factory. 


Lyddie begins working at the factory in chapter 9.  She is amazed at how noisy and fast the factory work is.  Lyddie meets Diana, who teaches her how to do the factory work.  Diana also encourages Lyddie to write a letter to her family.  Lyddie's roommates warn her to be careful about how much she trusts Diana, because Diana is a vocal proponent of better working conditions. 

Saturday 25 November 2017

What are DNA and RNA?


Structure and Functions

Each human being is a biologically unique individual. That uniqueness has its basis in one’s cellular makeup. Appearance derives from the arrangement of cells during fetal development, size depends on the cells’ ability to grow and divide, and the function of organs depends on the biochemical function of the individual cells that constitute each organ. The functions of cells depend on the types and amounts of the different proteins that they synthesize. The substance that holds the information that determines the structure of proteins, when they should be produced, and in what amounts is deoxyribonucleic acid (DNA).




DNA is the molecule of heredity, and as a child receives half of his or her DNA from each biological parent, each individual is the product of a mixture of information. Therefore, while children resemble their parents, they are unique. Each cell in an individual’s body (except for the sex cells) has a complete set of genetic information contained in the chromosomes of the cell’s nucleus. Human cells have forty-six chromosomes (twenty-three pairs). Each chromosome is a single piece of DNA associated with many types of proteins. The major function of DNA is to store, in a stable manner, the information that is the “blueprint” for all physiological aspects of an individual. Stability is one of the key attributes of DNA. An information storage molecule is of little use if it can be altered or damaged easily. Another key characteristic of DNA is its ability to be replicated. When a cell divides, the information in the DNA must be replicated so that each of the two new cells can have a complete set.


Stability, the ability to be replicated, and the ability to store vast amounts of coded information have their basis in the structure of DNA. DNA is a long, incredibly thin fiber. The chromosomes in some cells would be as long as a foot or more if they were fully extended. The shape of the DNA molecule can be imagined as a long ladder whose rails are chains of two alternating molecules: deoxyribose (a sugar) and phosphate (an acid containing phosphorus and oxygen). The steps of the ladder are made of pairs of organic bases, of which there are four types: adenine (A), guanine (G), thymine (T), and cytosine (C). Adenine always pairs up with thymine to form a step in the ladder (A-T), and guanine always pairs with cytosine (C-G). This complementarity of base-pairing is the basis for DNA replication and for transferring information from DNA out of the nucleus and into the cytoplasm. Finally, the whole DNA molecule is twisted into a stable right-handed spiral, or helix. Because there is no restriction on the sequence in which the base pairs appear along the molecule, the bases have the potential to be used as a four-letter alphabet that can encode information into “words” of varying lengths, called genes. Each information sequence, or gene, holds the information needed to synthesize a linear chain of amino acids, which are the building blocks of proteins. The information encoded in the base sequences of DNA determines the quantities and composition of all proteins made in the cell.


Under certain conditions, DNA can be separated lengthwise into two halves, or denatured, by breaking the base pairs so that one of each pair remains attached to one sugar-phosphate chain and the other base remains attached to the other sugar-phosphate chain. Because this forms two strands of DNA, whole DNA is usually referred to as being double-stranded. Such separation rarely happens by accident because of the extreme length of DNA. If any area becomes denatured, the rest of the base pairs hold the molecule together. In addition, an area of denaturation will automatically try to renature, since complementary bases have a natural attraction for each other. As stable as these traits make it, DNA must be capable of being duplicated so that each newly divided cell has a complete copy of the stored information. DNA is replicated by breaking the base pairs, separating the DNA into two halves, and building a new half onto each of the old halves. This is possible because the complementarity rule (A pairs with T, and C pairs with G) allows each half of a denatured DNA molecule to hold the information needed to construct a new second half. This is accomplished by special sets of proteins that separate the old DNA as they move along the molecule and build new DNA in their wake.
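

Because each separated strand dictates its new partner, the templating idea fits in a few lines of code. Here is a minimal Python sketch of the complementarity rule described above (the sequence is invented for illustration, and the actual enzymatic machinery is of course far more complicated):

    # Build the complementary strand from a template, using the pairing
    # rules described above (A with T, C with G).
    PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

    def complementary_strand(template):
        return "".join(PAIR[base] for base in template)

    old_strand = "ATGGCATTC"
    new_strand = complementary_strand(old_strand)
    print(new_strand)                                   # TACCGTAAG
    # Complementing twice restores the original, which is why each half
    # of a separated molecule holds the information to rebuild the other:
    assert complementary_strand(new_strand) == old_strand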


All the information needed to produce proteins is located in the DNA within the nucleus of the cell, but all protein synthesis occurs outside the nucleus in the cytoplasm. An information transfer molecule is required to copy or transcribe information from the genes of the DNA and carry it to the cytoplasm, where large globular protein complexes called ribosomes take the information and translate it into the amino acid structure of specific proteins. This information transfer molecule is ribonucleic acid (RNA). Many RNA copies can be made for any single piece of information on the DNA and used as a template to synthesize many proteins. In this way, the information in DNA is also amplified by RNA. RNA also participates in the synthesis of proteins from the genetic information. RNA resembles one half of a DNA molecule and is usually referred to as being single stranded. It consists of a single chain of alternating sugars and phosphates with a single organic base attached to each sugar. The sugar in this case is ribose, similar to deoxyribose, and the bases are identical to those in DNA with the exception of thymine, which is replaced by a very similar base called uracil (U).


There are three major types of RNA involved in protein synthesis. Messenger RNA (mRNA) is responsible for the transfer of information from the DNA sequences in the nucleus to the ribosomes in the cytoplasm. Ribosomal RNA (rRNA) interacts with dozens of proteins to form the ribosome and aids in the interaction between mRNA and the ribosome. Transfer RNA (tRNA) is a group of small RNAs that helps translate the information coded in the mRNA into the structure of specific proteins. The tRNAs carry amino acids to the ribosome and match each amino acid to its corresponding sequence of bases in the mRNA.


The first step in producing a specific protein is the accurate copying or transcription of information in a gene into information on a piece of mRNA. There are specific sets of proteins that separate the double-stranded DNA in the immediate vicinity of a gene into two single-stranded portions and then, using the DNA as a template, build a piece of mRNA that is a complementary copy of the information in the gene. This is possible because RNA also uses organic bases in its structure. The A, C, G, and T of the single-stranded portion of DNA form base pairs with the U, G, C, and A of the mRNA, respectively. The complementary copy of mRNA, when complete, falls away from the DNA and moves to the cytoplasm of the cell.
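

The same pairing logic, with uracil substituting for thymine, gives transcription. A sketch continuing the previous example (the sequence is invented for illustration):

    # Transcribe a template DNA strand into mRNA, using the pairings given
    # above: A, C, G, and T pair with U, G, C, and A, respectively.
    DNA_TO_MRNA = {"A": "U", "C": "G", "G": "C", "T": "A"}

    def transcribe(template_strand):
        return "".join(DNA_TO_MRNA[base] for base in template_strand)

    print(transcribe("TACCGTAAG"))   # AUGGCAUUC -- an RNA copy of the gene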


In the cytoplasm, the mRNA binds to a ribosome. As the ribosome moves down the length of the mRNA, the tRNAs interact with both the ribosome and the mRNA in order to match the proper amino acid (carried by the tRNAs) to the proper sequence of bases in the mRNA. The order of amino acids in the protein is thus determined by the order of bases in the DNA. Achieving the correct order of amino acids is critical for the correct functioning of the protein. The order of amino acids in the chain determines the way in which it interacts with itself and folds into a three-dimensional structure. The function of all proteins depends on their assuming the correct shape for interaction with other molecules. Therefore, the sequence of bases in the DNA ultimately determines the shape and function of proteins.
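

Translation then reads the mRNA three bases at a time. The codon assignments in the sketch below are a tiny excerpt of the standard genetic code; they are standard biochemistry rather than details given in the passage:

    # Translate an mRNA into a chain of amino acids. Only the three codons
    # needed for this example are listed; the full standard code has 64.
    CODON_TABLE = {"AUG": "Met", "GCA": "Ala", "UUC": "Phe"}

    def translate(mrna):
        codons = [mrna[i:i + 3] for i in range(0, len(mrna) - 2, 3)]
        return [CODON_TABLE[codon] for codon in codons]

    print(translate("AUGGCAUUC"))   # ['Met', 'Ala', 'Phe']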


Another class of RNA is involved in the regulation of translation, by a process called RNA interference (RNAi). The two types of RNA in this class are short interfering RNA (siRNA) and micro RNA (miRNA). SiRNA is a double-stranded molecule twenty to twenty-five base pairs in length, whereas miRNA is single stranded and consists of nineteen to twenty-three nucleotides. Both become incorporated in a protein complex known as the RNA-induced silencing complex (RISC). The siRNA-associated RISC targets a specific sequence in its target mRNA and, when bound to the mRNA, causes its destruction. MiRNA-bound RISC binds to the mRNA and inhibits its translation; in this case, however, the mRNA is not destroyed. RNAi plays a role in diverse cellular functions such as cell differentiation, fetal development, cell proliferation, and cell death. It is also involved in pathogenic events such as viral infection and certain cancers.




Disorders and Diseases

When the normal structure of DNA is altered (a process called a mutation), the number of proteins produced and/or the functions of proteins may be affected. At one extreme, a mutation may cause no problem at all to the person involved. At the other extreme, it may cause devastating damage to the person and result in genetic disease or cancer.


Mutations are changes in the normal sequence of bases in the DNA that carry the information to build a protein or that regulate the amount of protein to be produced. There are different types of mutations, such as the alteration of one base into another, the deletion of one or many bases, or the insertion of bases that were not in the sequence previously. Mutations can have many different causes, such as ultraviolet rays, X rays, mutagenic chemicals, invading viruses, or even heat. Sometimes mutations are caused by mistakes made during the process of DNA replication or cell division. Cells have several systems that constantly repair mutations, but occasionally some of these alterations slip by and become permanent.


Mutations may affect protein structure in several ways. The protein may be too short or too long, with amino acids missing or new ones added. It might have new amino acids substituting for the correct ones. Sometimes as small a change as one amino acid can have noticeable effects. In any of these cases, changes in the amino acid sequence of a protein may drastically affect the way the protein interacts with itself and folds itself into a three-dimensional structure. If a protein does not assume the correct three-dimensional structure, its function may be impaired. It is important to note that how severely a protein’s function is affected by a mutation depends on which amino acids are involved. Some amino acids are more important than others in maintaining a protein’s shape and function. A change in amino acid sequence may have virtually no effect on a protein or it may destroy that protein’s ability to function.


If a mutation occurs that affects the regulation of a particular protein, that gene may be perfectly normal and the protein may be fully functional, but it may exist in the cell in an improper amount—too much, too little, or even none at all. It is important to note that the overproduction of a protein, as well as its underproduction or absence, can be harmful to the cell or to the person in general. The genetic disease known as Down syndrome, for example, is the result of the overproduction of many proteins at the same time.


The term “genetic disease” is used for a heritable disease that can be passed from parent to child. The mutation responsible for the disease is contributed by the parents to the affected child via the sperm or the egg or (as is usually the case) both. The parents are, for the most part, quite unaffected. Because all creatures more complex than bacteria have at least two copies of all their genes, a person may carry a mutated gene and be perfectly healthy because the other normal gene compensates by producing adequate amounts of normal protein. If two individuals carrying the same mutated gene produce a child, that child has a chance of obtaining two mutant genes—one from each parent. Every cell in that child’s body carries the error with no normal genes to compensate, and every cell that would normally use that gene must produce an abnormal protein or abnormal amounts of that protein. The medical consequences vary, depending on which gene is affected and which protein is altered. The following are two specific examples of genetic diseases in which the connection between specific mutations and the disease states is well documented.


Sickle cell disease is a genetic disease that results from an error in the gene that carries the information for the protein beta globin. Beta globin is one of the building blocks of hemoglobin, the molecule that binds to and carries oxygen in the red blood cells. The error or mutation is a surprisingly small one and serves to illustrate the fact that the replacement of even a single amino acid can change the chemical nature and function of a protein. Normal beta globin has a glutamic acid as the sixth amino acid in the protein chain. The mutation of a single base in the DNA changes the coded information such that the amino acid valine replaces glutamic acid at the sixth position in the protein chain. This single alteration causes the hemoglobin in the red blood cell to crystallize under conditions of low oxygen concentration. As the crystals grow, they twist and deform the normally flexible and disk-shaped red blood cells into rigid sickle shapes. These affected cells lose their capacity to bind and hold oxygen, thereby causing anemia, and their new structure can cause blockages in small capillaries of the circulatory system, causing pain and widespread organ damage. There is no safe and effective treatment or cure for this condition.
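

The sickle cell substitution can be made concrete in sequence terms. The codon identities below (GAG for glutamic acid, GTG for valine) are standard genetic-code facts rather than details given in the passage:

    # The sickle cell mutation in sequence terms: one base change in codon 6
    # of the beta globin gene swaps glutamic acid for valine.
    normal_codon = "GAG"   # encodes glutamic acid (Glu)
    mutant_codon = "GTG"   # encodes valine (Val)

    # Exactly one of the three bases differs, yet the protein's chemistry changes:
    changed = sum(a != b for a, b in zip(normal_codon, mutant_codon))
    print(changed)   # 1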


Phenylketonuria (PKU) is caused by a mutation in the gene that controls the synthesis of the protein phenylalanine hydroxylase (PAH). There are several mutations of the PAH gene that can reduce PAH activity drastically (to less than 1 percent of normal activity). Some are changes in one base that lead to the replacement of a single amino acid by another. For example, one of the most common mutations in the PAH gene is the alteration of a C to a T that results in amino acid number 408 changing from an arginine to a tryptophan. Some mutations are deletions of whole sequences of bases in the gene; one such deletion removes the tail end of the gene. In any case, the amino acid structure of PAH is altered significantly enough to remove its ability to function. Without this protein, the amino acid phenylalanine cannot be converted into tyrosine, another useful amino acid. The problem is not a shortage of tyrosine, since there is plenty in most foods, but rather an accumulation of undesirable products that form as the unused phenylalanine begins to break down. Since developing brain cells are particularly sensitive to these products, the condition can cause mental retardation unless treated immediately after birth. While there is no cure, the disease is easily diagnosed and treatment is simple: the patient must stay on a diet in which phenylalanine is restricted. Food products that contain the artificial sweetener aspartame (NutraSweet) must have warnings to PKU patients printed on them, since phenylalanine is a major component of aspartame.




Perspective and Prospects

Genetics is a young science whose starting point is traditionally considered to be 1866, the year in which Gregor Mendel published his work on hereditary patterns in pea plants. While he knew nothing of DNA or its structure, Mendel showed mathematically that discrete units of inheritance, which are now called genes, existed as pairs in an organism and that different combinations of these units determined that organism’s characteristics. Unfortunately, Mendel’s work was ahead of its time and thus ignored until rediscovered by several researchers simultaneously in 1900.


DNA itself was discovered in 1869 by Friedrich Miescher, who extracted it from cell nuclei but did not realize its importance as the carrier of hereditary information. Chromosomes were first seen in the 1870’s as threadlike structures in the nucleus, and because of the precise way they are replicated and equally parceled out to newly divided cells, August Weismann and Theodor Boveri, in the 1880’s, postulated that chromosomes were the carriers of inheritance.


In 1900, Hugo de Vries, Karl Correns, and Erich Tschermak von Seysenegg—all plant biologists who were working on patterns of inheritance—independently rediscovered Mendel’s work. De Vries had in the meantime discovered mutation around 1890 as a source of hereditary variation, but he did not postulate a mechanism. Mendel’s theories and the then-current knowledge of chromosomes merged perfectly. Mendel’s units of inheritance were thought somehow to be carried on the chromosomes. Pairs of chromosomes would carry Mendel’s pairs of hereditary units, which, in 1909, were dubbed “genes.”


At that point, genes were still a theoretical concept and had not been proved to be carried on the chromosomes. In 1909, Thomas Hunt Morgan began the work that would provide that proof and allow the mapping of specific genes to specific areas of a chromosome. The nature of a gene, or how it expressed itself, was still a mystery. In 1941, George Beadle and Edward Tatum proved that genes regulated the production of proteins, but the nature of genes was still in debate. There were two candidates for the chemical substance of genes; one was protein and the other was the deceptively simple DNA. In 1944, Oswald Avery proved in experiments with pure DNA that DNA was indeed the molecule of inheritance. In 1953, James D. Watson and Francis Crick, using the work of Rosalind Franklin, elucidated the chemical structure of the double helix, and soon after, Matthew Meselson and Franklin Stahl proved that DNA replicated itself. By the end of the 1950’s, RNA was being implicated in protein synthesis, and much of the mechanism of translation was postulated by Marshall Nirenberg and Johann Matthaei in 1961.


Craig Mello and Andrew Fire were awarded the 2006 Nobel Prize in Physiology or Medicine for their discovery of siRNA and for their research on the RNAi system. In 1993, Victor Ambros was the first person to describe miRNA. Both siRNA and micro RNA have possible therapeutic use. Clinical trials involving siRNA and miRNA are in progress. Examples are the use of siRNA in the treatment of macular degeneration, an age-related eye disorder, and the use of miRNA in the treatment of chronic hepatitis C. The major barriers to the use of these molecules are inefficient delivery to target cells and off-target effects.


The concept of heritable genetic disease is also a relatively recent one. The first direct evidence that a mutation can result in the production of an altered protein came in 1949 with studies on sickle cell disease. Since then, thousands of genetic diseases have been characterized. The advent in the 1970’s of recombinant DNA technology, which allows the direct manipulation of DNA, has greatly increased the knowledge of these diseases, as well as demonstrated the genetic influences in maladies such as cancer and behavioral disorders. This technology has led to vastly improved diagnostic methods and therapies while pointing the way toward potential cures.




Bibliography


Campbell, Neil A., et al. Biology: Concepts and Connections. 6th ed. San Francisco: Pearson/Benjamin Cummings, 2008. This classic introductory textbook provides an excellent discussion of essential biological structures and mechanisms. Its extensive and detailed illustrations help to make even difficult concepts accessible to the nonspecialist. Of particular interest are the chapters constituting the unit titled “The Gene.”



Drlica, Karl. Understanding DNA and Gene Cloning: A Guide for the Curious. 4th ed. Hoboken, N.J.: Wiley, 2004. This book for the uninitiated explains the basic principles of genetic mechanisms without requiring knowledge of chemistry. The first third is especially good on the fundamentals, but the remainder may be too deep for some readers.



Frank-Kamenetskii, Maxim D. Unraveling DNA: The Most Important Molecule of Life. Translated by Lev Liapin. Rev. ed. Reading, Mass.: Addison-Wesley, 1997. This very readable book provides an excellent history of the discovery of DNA. Also describes the nature of DNA and discusses genetic engineering and the ethical questions that surround its use.



Glick, Bernard, Jack J. Pasternak, and Cheryl L. Patten. Molecular Biotechnology: Principles and Applications of Recombinant DNA. 4th ed. Washington, D.C.: ASM Press, 2010. Explores the scientific principles of recombinant DNA technology and its wide-ranging use in industry, agriculture, and the pharmaceutical and biomedical sectors.



Gonick, Larry, and Mark Wheelis. The Cartoon Guide to Genetics. Rev. ed. New York: Collins Reference, 2007. An effective mixture of humor and fact makes this book a nonthreatening reference on genetics. Presented using historical context, it covers DNA and RNA structure and function and much more.



Gribbin, John. In Search of the Double Helix. New York: Bantam Books, 1985. Gribbin is a renowned science writer who is capable of explaining complex subjects in a way that anyone can understand. In this book, he goes from Charles Darwin’s theories to quantum mechanics in his rendition of the history of the discovery of DNA. Very readable.



Hofstadter, Douglas R. “The Genetic Code: Arbitrary?” In Metamagical Themas: Questing for the Essence of Mind and Pattern. New York: Basic Books, 1985. While only a thirty-page chapter in a large book, this piece by Hofstadter is an excellent and thought-provoking explanation of transcription and translation written for the nonscientist.



Micklos, David A., Greg A. Freyer, and David A. Crotty. DNA Science: A First Course. 2d ed. Cold Spring Harbor, N.Y.: Cold Spring Harbor Press, 2003. Text that combines an introductory discussion of the principles of genetics, DNA structure and function, and methods for analyzing DNA with twelve laboratory experiments that illustrate the basic techniques of DNA restriction, transformation, isolation, and analysis.



Nicholl, Desmond S. T. Introduction to Genetic Engineering. 3d ed. New York: Cambridge University Press, 2008. A valuable textbook for the nonspecialist and anyone interested in genetic engineering. It provides an excellent foundation in molecular biology and builds on that foundation to show how organisms can be genetically engineered.



Paddison, Patrick J., and Peter K. Vogt. RNA Interference. New York: Springer, 2008. A comprehensive book about the field of RNA interference that includes detailed and updated mechanistic descriptions of the RNAi process.



Watson, James D., and Andrew Berry. DNA: The Secret of Life. New York: Knopf, 2004. Nobel Prize-winning scientist Watson guides readers through the rapid advances in genetic technology and what these advances mean for modern life. Covers all aspects of the genome in a readable fashion.

How can a 0.5 molal solution be less concentrated than a 0.5 molar solution?

The answer lies in the units being used. "Molar" refers to molarity, a unit of measurement that describes how many moles of a solu...