Cognitive Science
Cognitive science is the interdisciplinary scientific study of the mind and its
processes.[1] It examines what cognition is, what it does and how it works. It includes
research on intelligence and behavior, especially focusing on how information is
represented, processed, and transformed (in faculties such as perception, language,
memory, reasoning, and emotion) within nervous systems (human or other animal) and
machines (e.g. computers). Cognitive science consists of multiple research disciplines,
including psychology, artificial intelligence, philosophy, neuroscience, linguistics, and
anthropology.[2] It spans many levels of analysis, from low-level learning and decision
mechanisms to high-level logic and planning; from neural circuitry to modular brain
organization. The fundamental concept of cognitive science is "that thinking can best
be understood in terms of representational structures in the mind and computational
procedures that operate on those structures."[2]
Contents
1 Principles
1.1 Levels of analysis
1.2 Interdisciplinary nature
1.3 Cognitive science: the term
2 Scope
2.1 Artificial intelligence
2.2 Attention
2.3 Knowledge and processing of language
2.4 Learning and development
2.5 Memory
2.6 Perception and action
3 Research methods
3.1 Behavioral experiments
3.2 Brain imaging
3.3 Computational modeling
3.4 Neurobiological methods
4 Key findings
5 History
6 Notable researchers
Principles
Levels of analysis
A central tenet of cognitive science is that a complete understanding of the mind/brain
cannot be attained by studying only a single level. An example would be the problem of
remembering a phone number and recalling it later. One approach to understanding
this process would be to study behavior through direct observation. A person could be presented with a phone number and asked to recall it after some delay. Then the accuracy
of the response could be measured. Another approach would be to study the firings of
individual neurons while a person is trying to remember the phone number. Neither of
these experiments on its own would fully explain how the process of remembering a
phone number works. Even if the technology to map out every neuron in the brain in
real-time were available, and it were known when each neuron was firing, it would still
be impossible to know how a particular firing of neurons translates into the observed
behavior. Thus an understanding of how these two levels relate to each other is
needed. The Embodied Mind: Cognitive Science and Human Experience says “the new
sciences of the mind need to enlarge their horizon to encompass both lived human
experience and the possibilities for transformation inherent in human experience.”[3]
This can be provided by a functional level account of the process. Studying a particular
phenomenon from multiple levels creates a better understanding of the processes that
occur in the brain to give rise to a particular behavior. Marr[4] gave a famous
description of three levels of analysis:
1. the computational theory, specifying the goals of the computation;
2. representation and algorithm, giving a representation of the input and output and the algorithm which transforms one into the other; and
3. the hardware implementation, how algorithm and representation may be physically realized.
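To make the three levels more concrete, here is a minimal illustrative sketch in Python (an assumed example, not from the cited sources) of the phone-number case: the computational level is stated as the goal in the comments, the algorithmic level is a hypothetical chunk-based representation with store and recall procedures, and the implementational level, which in a person would be neural circuitry, is here simply the computer's memory.

```python
# Illustrative sketch only (assumed example, not Marr's own).
# Computational level: the goal is "store a digit string now, reproduce it later."
# Algorithmic level: one possible representation (chunks of digits) and the
# procedures that transform the input into stored form and back into output.
# Implementational level: in a person, neural circuitry; here, Python data structures.

def encode(number: str, chunk_size: int = 3) -> list[str]:
    """Represent the number as chunks (one hypothetical representational choice)."""
    digits = [d for d in number if d.isdigit()]
    return ["".join(digits[i:i + chunk_size]) for i in range(0, len(digits), chunk_size)]

def recall(chunks: list[str]) -> str:
    """Transform the stored representation back into the output digit string."""
    return "".join(chunks)

stored = encode("555-867-5309")
print(stored)          # ['555', '867', '530', '9']
print(recall(stored))  # '5558675309'
```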
Interdisciplinary nature
Cognitive science is an interdisciplinary field with contributors from various fields,
including psychology, neuroscience, linguistics, philosophy of mind, computer science,
anthropology, sociology, and biology. Cognitive science tends to view the world outside
the mind much as other sciences do. Thus it too has an objective, observer-independent existence. The field is usually seen as compatible with the physical
sciences, and uses the scientific method as well as simulation or modeling, often
comparing the output of models with aspects of human behavior. Some doubt whether
there is a unified cognitive science and prefer to speak of the cognitive sciences in
plural.[5]
Many, but not all, who consider themselves cognitive scientists have a functionalist
view of the mind—the view that mental states are classified functionally, such that any
system that performs the proper function for some mental state is considered to be in
that mental state. According to some versions of functionalism, even non-human
systems, such as other animal species, alien life forms, or advanced computers can, in
principle, have mental states.
Cognitive science: the term
The term "cognitive" in "cognitive science" is "used for any kind of mental operation or
structure that can be studied in precise terms" (Lakoff and Johnson, 1999). This
conceptualization is very broad, and should not be confused with how "cognitive" is
used in some traditions of analytic philosophy, where "cognitive" has to do only with
formal rules and truth conditional semantics.
The earliest entries for the word "cognitive" in the OED take it to mean roughly
pertaining "to the action or process of knowing". The first entry, from 1586, shows the
word was at one time used in the context of discussions of Platonic theories of
knowledge. Most in cognitive science, however, presumably do not believe their field is
the study of anything as certain as the knowledge sought by Plato.
Scope
Cognitive science is a large field, and covers a wide array of topics on cognition.
However, it should be recognized that cognitive science is not equally concerned with
every topic that might bear on the nature and operation of the mind or intelligence.
Social and cultural factors, emotion, consciousness, animal cognition, and comparative and evolutionary approaches are frequently de-emphasized or excluded outright, often
based on key philosophical conflicts. Another important mind-related subject that the
cognitive sciences tend to avoid is the existence of qualia, with discussions over this
issue being sometimes limited to only mentioning qualia as a philosophically open
matter. Some within the cognitive science community, however, consider these to be
vital topics, and advocate the importance of investigating them.[6]
Below are some of the main topics that cognitive science is concerned with. This is not
an exhaustive list, but is meant to cover the wide range of intelligent behaviors. See
List of cognitive science topics for a list of various aspects of the field.
Artificial intelligence
"... One major contribution of AI and cognitive science to psychology has been
the information processing model of human thinking in which the metaphor of
brain-as-computer is taken quite literally. ." AAAI Web pages.
Artificial intelligence (AI) involves the study of cognitive phenomena in machines. One
of the practical goals of AI is to implement aspects of human intelligence in computers.
Computers are also widely used as a tool with which to study cognitive phenomena.
Computational modeling uses simulations to study how human intelligence may be
structured.[7] (See the section on computational modeling in the Research Methods
section.)
There is some debate in the field as to whether the mind is best viewed as a huge
array of small but individually feeble elements (i.e. neurons), or as a collection of
higher-level structures such as symbols, schemas, plans, and rules. The former view
uses connectionism to study the mind, whereas the latter emphasizes symbolic
computations. One way to view the issue is whether it is possible to accurately simulate
a human brain on a computer without accurately simulating the neurons that make up
the human brain.
Attention
Attention is the selection of important information. The human mind is bombarded with millions of stimuli, and it must have a way of deciding which of them to process. Attention is sometimes seen as a spotlight, meaning that only a particular set of information can be attended to at any one time. Experiments that support this metaphor include
the dichotic listening task (Cherry, 1957) and studies of inattentional blindness (Mack
and Rock, 1998). In the dichotic listening task, subjects are bombarded with two
different messages, one in each ear, and told to focus on only one of the messages. At
the end of the experiment, when asked about the content of the unattended message,
subjects cannot report it.
Knowledge and processing of language
The ability to learn and understand
language is an extremely complex
process. Language is acquired within
the first few years of life, and all
humans under normal circumstances
are able to acquire language
proficiently. A major driving force in
the theoretical linguistic field is
discovering the nature that language
must have in the abstract in order to
be learned in such a fashion. Some of
the driving research questions in
studying how the brain itself
processes language include: (1) To
what extent is linguistic knowledge innate or learned?, (2) Why is it more difficult for adults to acquire a second language than it is for infants to acquire their first language?, and (3) How are humans able to understand novel sentences?
The study of language processing ranges from the investigation of the sound patterns
of speech to the meaning of words and whole sentences. Linguistics often divides
language processing into orthography, phonology and phonetics, morphology, syntax,
semantics, and pragmatics. Many aspects of language can be studied from each of
these components and from their interaction.
The study of language processing in cognitive science is closely tied to the field of
linguistics. Linguistics was traditionally studied as a part of the humanities, including
studies of history, art and literature. In the last fifty years or so, more and more
researchers have studied knowledge and use of language as a cognitive phenomenon,
the main problems being how knowledge of language can be acquired and used, and
what precisely it consists of.[8] Linguists have found that, while humans form
sentences in ways apparently governed by very complex systems, they are remarkably
unaware of the rules that govern their own speech. Thus linguists must resort to
indirect methods to determine what those rules might be, if indeed rules as such exist.
In any event, if speech is indeed governed by rules, they appear to be opaque to any
conscious consideration.
Learning and development
Learning and development are the processes by which we acquire knowledge and
information over time. Infants are born with little or no knowledge (depending on how
knowledge is defined), yet they rapidly acquire the ability to use language, walk, and
recognize people and objects. Research in learning and development aims to explain
the mechanisms by which these processes might take place.
A major question in the study of cognitive development is the extent to which certain
abilities are innate or learned. This is often framed in terms of the nature and nurture
debate. The nativist view emphasizes that certain features are innate to an organism
and are determined by its genetic endowment. The empiricist view, on the other hand,
emphasizes that certain abilities are learned from the environment. Although clearly
both genetic and environmental input is needed for a child to develop normally,
considerable debate remains about how genetic information might guide cognitive
development. In the area of language acquisition, for example, some (such as Steven
Pinker)[9] have argued that specific information containing universal grammatical rules
must be contained in the genes, whereas others (such as Jeffrey Elman and
colleagues in Rethinking Innateness) have argued that Pinker's claims are biologically
unrealistic. They argue that genes determine the architecture of a learning system, but
that specific "facts" about how grammar works can only be learned as a result of
experience.
Memory
Memory allows us to store information for later retrieval. Memory is often thought of as consisting of both a long-term and a short-term store. Long-term memory allows us to
store information over prolonged periods (days, weeks, years). We do not yet know the
practical limit of long-term memory capacity. Short-term memory allows us to store
information over short time scales (seconds or minutes).
Memory is also often grouped into declarative and procedural forms. Declarative memory (grouped into subsets of semantic and episodic forms of memory) refers to our memory for facts and specific knowledge, specific meanings, and specific experiences (e.g., "Who was the first president of the U.S.A.?" or "What did I eat for breakfast four days ago?"). Procedural memory allows us to remember actions and motor sequences (e.g., how to ride a bicycle) and is often dubbed implicit knowledge or memory.
Cognitive scientists study memory just as psychologists do, but tend to focus more on how memory bears on cognitive processes, and on the interrelationship between cognition and memory. For example, what mental processes does a person go through to retrieve a long-lost memory? Or, what differentiates the cognitive process of recognition (seeing hints of something before remembering it, or memory in context) from recall (retrieving a memory, as in "fill-in-the-blank")?
Perception and action
Perception is the ability to take in information via the senses and process it in some way. Vision and hearing are two dominant senses that allow us to perceive the environment. Some questions in the study of visual perception, for example, include: (1) How are we able to recognize objects?, and (2) Why do we perceive a continuous visual environment even though we only see small bits of it at any one time? One tool for studying visual perception is to look at how people process optical illusions. The Necker cube is a well-known example of a bistable percept: the cube can be interpreted as being oriented in either of two different directions.
The study of haptic (tactile), olfactory, and gustatory stimuli also falls within the domain of perception.

Action is taken to refer to the output of a system. In humans, this is accomplished through motor responses. Spatial planning and movement, speech production, and complex motor movements are all aspects of action.
Research methods
Many different methodologies are used in cognitive science. As the field
is highly interdisciplinary, research
often cuts across multiple areas of
study, drawing on research methods from psychology, neuroscience, computer science
and systems theory.
Behavioral experiments
In order to have a description of what constitutes intelligent behavior, one must study
behavior itself. This type of research is closely tied to that in cognitive psychology and
psychophysics. By measuring behavioral responses to different stimuli, one can
understand something about how those stimuli are processed. Lewandowski and
Strohmetz (2009) review a collection of innovative uses of behavioral measurement in
psychology including behavioral traces, behavioral observations, and behavioral
choice.[10] Behavioral traces are pieces of evidence that indicate behavior occurred,
but the actor is not present (e.g., litter in a parking lot or readings on an electric meter).
Behavioral observations involve the direct witnessing of the actor engaging in the
behavior (e.g., watching how close a person sits next to another person). Behavioral
choices are when a person selects between two or more options (e.g., voting behavior,
choice of a punishment for another participant).
-Reaction time. The time between the presentation of a stimulus and an appropriate response can indicate differences between two cognitive processes, and can indicate some things about their nature. For example, if in a search task the reaction times vary proportionally with the number of elements, then it is evident that this cognitive process of searching involves serial instead of parallel processing (see the sketch after this list).
-Psychophysical responses. Psychophysical experiments are an old
psychological technique, which has been adopted by cognitive psychology.
They typically involve making judgments of some physical property, e.g. the
loudness of a sound. Correlation of subjective scales between individuals can
show cognitive or sensory biases as compared to actual physical
measurements. Some examples include:
-sameness judgments for colors, tones, textures, etc.
-threshold differences for colors, tones, textures, etc.
-Eye tracking. This methodology is used to study a variety of cognitive
processes, most notably visual perception and language processing. The
fixation point of the eyes is linked to an individual's focus of attention. Thus, by
monitoring eye movements, we can study what information is being processed
at a given time. Eye tracking allows us to study cognitive processes on
extremely short time scales. Eye movements reflect online decision making
during a task, and they provide us with some insight into the ways in which
those decisions may be processed.
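As a rough illustration of the reaction-time logic mentioned above, the following sketch (an invented toy model, with all timing parameters assumed purely for illustration) simulates a serial search, where each additional display element adds a fixed comparison time, and a parallel search, where all elements are checked at once. The serial model's mean reaction time grows roughly linearly with set size while the parallel model's stays roughly flat, which is the pattern described in the reaction-time bullet.

```python
import random

# Invented parameters for illustration: per-item comparison time and fixed overhead.
ITEM_TIME_MS = 40.0
BASE_TIME_MS = 300.0

def serial_search_rt(set_size: int) -> float:
    """Serial model: items are checked one at a time, so RT grows with set size."""
    return BASE_TIME_MS + set_size * ITEM_TIME_MS + random.gauss(0, 20)

def parallel_search_rt(set_size: int) -> float:
    """Parallel model: all items are checked at once, so RT is roughly flat."""
    return BASE_TIME_MS + ITEM_TIME_MS + random.gauss(0, 20)

def mean_rt(model, set_size: int, trials: int = 200) -> float:
    """Average simulated reaction time over many trials."""
    return sum(model(set_size) for _ in range(trials)) / trials

for n in (2, 4, 8, 16):
    print(n, round(mean_rt(serial_search_rt, n)), round(mean_rt(parallel_search_rt, n)))
```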
Brain imaging
Brain imaging involves analyzing activity within the brain while a subject performs various tasks. This allows us
to link behavior and brain function to help
understand how information is processed. Different
types of imaging techniques vary in their temporal
(time-based) and spatial (location-based) resolution.
Brain imaging is often used in cognitive
neuroscience.
-Single photon emission computed tomography and positron emission tomography. SPECT and PET use radioactive isotopes, which are injected into the subject's bloodstream and taken up by the brain. By observing which areas of the brain take up the radioactive isotope, we can see which areas of the brain are more active than other areas. PET has similar spatial resolution to fMRI, but it has extremely poor temporal resolution.
-Electroencephalography. EEG measures the electrical fields generated by
large populations of neurons in the cortex by placing a series of electrodes on
the scalp of the subject. This technique has an extremely high temporal
resolution, but a relatively poor spatial resolution.
-Functional magnetic resonance imaging. fMRI measures the relative amount of
oxygenated blood flowing to different parts of the brain. More oxygenated blood
in a particular region is assumed to correlate with an increase in neural activity
in that part of the brain. This allows us to localize particular functions within
different brain regions. fMRI has moderate spatial and temporal resolution.
-Optical imaging. This technique uses infrared transmitters and receivers to measure the amount of light reflected by blood near different areas of the brain. Since oxygenated and deoxygenated blood reflect light by different amounts, we can study which areas are more active (i.e., those that have more
oxygenated blood). Optical imaging has moderate temporal resolution, but poor
spatial resolution. It also has the advantage that it is extremely safe and can be
used to study infants' brains.
-Magnetoencephalography. MEG measures magnetic fields resulting from
cortical activity. It is similar to EEG, except that it has improved spatial
resolution since the magnetic fields it measures are not as blurred or attenuated
by the scalp, meninges and so forth as the electrical activity measured in EEG
is. MEG uses SQUID sensors to detect tiny magnetic fields.
Computational modeling
Computational models require a mathematically and logically formal representation of a problem. Computer models are used in the simulation and experimental verification of different specific and general properties of intelligence. Computational modeling can help us to understand the functional organization of a particular cognitive phenomenon. There are two basic approaches to cognitive modeling. The first is focused on abstract mental functions of an intelligent mind and operates using symbols, and the second, which follows the neural and associative properties of the human brain, is called subsymbolic.
-Symbolic modeling evolved from the computer science paradigms using the technologies of knowledge-based systems, as well as a philosophical perspective (see, for example, "Good Old-Fashioned Artificial Intelligence" (GOFAI)). Symbolic models were developed by the first cognitive researchers and later used in information engineering for expert systems. Since the early 1990s the approach was generalized in systemics for the investigation of functional human-like intelligence models, such as personoids, and, in parallel, developed as the SOAR environment. Recently, especially in the context of cognitive decision making, symbolic cognitive modeling has been extended to a socio-cognitive approach that includes social and organizational cognition interrelated with a sub-symbolic, non-conscious layer.
-Subsymbolic modeling includes connectionist/neural network models. Connectionism relies on the idea that the mind/brain is composed of simple nodes and that the power of the system comes primarily from the existence and manner of connections between the simple nodes. Neural nets are textbook implementations of this approach. Some critics of this approach feel that while these models approach biological reality as a representation of how the system works, they lack explanatory power because complicated systems of connections with even simple rules are extremely complex and often less interpretable than the system they model (a minimal sketch of such a network follows this list).
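The following is a minimal sketch of the connectionist idea described above: simple nodes, weighted connections, and behavior that emerges from adjusting connection strengths. It is a generic two-layer network trained on a toy problem with NumPy; the network size, learning rate, and task are arbitrary illustrative choices, not any specific model from the literature.

```python
import numpy as np

# Toy connectionist model: a tiny feedforward network learning XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4))   # input -> hidden connection weights
b1 = np.zeros((1, 4))
W2 = rng.normal(0, 1, (4, 1))   # hidden -> output connection weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass: activity flows through the weighted connections.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: nudge connection strengths to reduce the output error.
    delta_out = (out - y) * out * (1 - out)
    delta_h = (delta_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ delta_out
    b2 -= 0.5 * delta_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ delta_h
    b1 -= 0.5 * delta_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # typically close to [[0], [1], [1], [0]] after training
```

The point of the example is the style of explanation rather than the task: nothing in the code states an explicit rule for XOR; the behavior is carried entirely by the learned connection weights, which is also why critics find such models hard to interpret.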
Other approaches gaining in popularity include the use of dynamical systems theory and techniques that put symbolic models and connectionist models into correspondence (neural-symbolic integration). Bayesian models, often drawn from machine learning, are also increasingly used.
All the above approaches tend to be generalized to the form of integrated
computational models of a synthetic/abstract intelligence, in order to be applied to the
explanation and improvement of individual and social/organizational decision-making
and reasoning.
Neurobiological methods
Research methods borrowed directly from neuroscience and neuropsychology can also
help us to understand aspects of intelligence. These methods allow us to understand
how intelligent behavior is implemented in a physical system.
-Single-unit recording
-Direct brain stimulation
-Animal models
-Postmortem studies
Key findings
Cognitive science has given rise to models of human cognitive bias and risk
perception, and has been influential in the development of behavioral finance, part of
economics. It has also given rise to a new theory of the philosophy of mathematics,
and many theories of artificial intelligence, persuasion and coercion. It has made its
presence known in the philosophy of language and epistemology - a modern revival of
rationalism - as well as constituting a substantial wing of modern linguistics. Fields of
cognitive science have been influential in understanding the brain's particular functional
systems (and functional deficits) ranging from speech production to auditory processing
and visual perception. It has made progress in understanding how damage to particular areas of the brain affects cognition, and it has helped to uncover the root causes and results of specific dysfunctions, such as dyslexia, anopia, and hemispatial neglect.
History
Cognitive science has a pre-history traceable back to ancient Greek philosophical texts
(see Plato's Meno and Aristotle's De Anima); and includes writers such as Descartes,
David Hume, Immanuel Kant, Benedict de Spinoza, Nicolas Malebranche, Pierre
Cabanis, Leibniz and John Locke. However, although these early writers contributed
greatly to the philosophical discovery of mind and this would ultimately lead to the
development of psychology, they were working with an entirely different set of tools and
core concepts than those of the cognitive scientist.
The modern culture of cognitive science can be traced back to the early cyberneticists
in the 1930s and 1940s, such as Warren McCulloch and Walter Pitts, who sought to
understand the organizing principles of the mind. McCulloch and Pitts developed the
first variants of what are now known as artificial neural networks, models of
computation inspired by the structure of biological neural networks.
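As an illustration of the kind of unit McCulloch and Pitts proposed (a sketch in modern Python, not their original notation), a single threshold unit sums weighted binary inputs and fires if the sum reaches a threshold; with suitable weights and thresholds, such units realize basic logic gates, which is what made them interesting as models of computation.

```python
# Sketch of a McCulloch-Pitts-style threshold unit: binary inputs, fixed weights,
# output 1 if the weighted sum reaches the threshold, else 0.
def mp_unit(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Particular weight/threshold choices (assumed for illustration) give logic gates.
AND = lambda a, b: mp_unit([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_unit([a, b], [1, 1], threshold=1)
NOT = lambda a:    mp_unit([a],    [-1],   threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
print("NOT 0:", NOT(0), "NOT 1:", NOT(1))
```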
Another precursor was the early development of the theory of computation and the
digital computer in the 1940s and 1950s. Alan Turing and John von Neumann were
instrumental in these developments. The modern computer, or Von Neumann machine,
would play a central role in cognitive science, both as a metaphor for the mind, and as
a tool for investigation.
In 1959, Noam Chomsky published a scathing review of B. F. Skinner's book Verbal
Behavior. At the time, Skinner's behaviorist paradigm dominated psychology: Most
psychologists focused on functional relations between stimulus and response, without
positing internal representations. Chomsky argued that in order to explain language,
we needed a theory like generative grammar, which not only attributed internal
representations but characterized their underlying order.
The term cognitive science was coined by Christopher Longuet-Higgins in his 1973
commentary on the Lighthill report, which concerned the then-current state of Artificial
Intelligence research.[11] In the same decade, the journal Cognitive Science and the
Cognitive Science Society were founded.[12] In 1982, Vassar College became the first
institution in the world to grant an undergraduate degree in Cognitive Science.[13]
In the 1970s and early 1980s, much cognitive science research focused on the
possibility of artificial intelligence. Researchers such as Marvin Minsky would write
computer programs in languages such as LISP to attempt to formally characterize the
steps that human beings went through, for instance, in making decisions and solving
problems, in the hope of better understanding human thought, and also in the hope of
creating artificial minds. This approach is known as "symbolic AI".
Eventually the limits of the symbolic AI research program became apparent. For
instance, it seemed to be unrealistic to comprehensively list human knowledge in a
form usable by a symbolic computer program. The late 80s and 90s saw the rise of
neural networks and connectionism as a research paradigm. Under this point of view,
often attributed to James McClelland and David Rumelhart, the mind could be
characterized as a set of complex associations, represented as a layered network.
Critics argue that there are some phenomena which are better captured by symbolic
models, and that connectionist models are often so complex as to have little
explanatory power. Recently symbolic and connectionist models have been combined,
making it possible to take advantage of both forms of explanation.[14]
Notable researchers
Some of the more recognized names in cognitive science are usually either the most
controversial or the most cited. Within philosophy, familiar names include Daniel Dennett, who writes from a computational systems perspective; John Searle, known for his controversial Chinese room argument; Jerry Fodor, who advocates functionalism; David Chalmers, who advocates dualism and is also known for formulating the hard problem of consciousness; and Douglas Hofstadter, famous for writing Gödel, Escher, Bach, which questions the nature of words and thought. In the realm of linguistics, Noam Chomsky
and George Lakoff have been influential (both have also become notable as political
commentators). In artificial intelligence, Marvin Minsky, Herbert A. Simon, Allen Newell,
and Kevin Warwick are prominent. Popular names in the discipline of psychology
include George A. Miller, James McClelland, Philip Johnson-Laird, and Steven Pinker.
Anthropologists Dan Sperber, Edwin Hutchins, Scott Atran, Pascal Boyer, and Joseph
Henrich have been involved in collaborative projects with cognitive and social
psychologists, political scientists and evolutionary biologists in attempts to develop
general theories of culture formation, religion and political association.
Cognitive Neuroscience
Cognitive neuroscience is an academic field concerned with the scientific study of
biological substrates underlying cognition,[1] with a specific focus on the neural
substrates of mental processes. It addresses the question of how psychological/cognitive functions are produced by the brain. Cognitive neuroscience is
a branch of both psychology and neuroscience, overlapping with disciplines such as
physiological psychology, cognitive psychology and neuropsychology.[2] Cognitive
neuroscience relies upon theories in cognitive science coupled with evidence from
neuropsychology, and computational modeling.[2]
Due to its multidisciplinary nature, cognitive neuroscientists may come from various backgrounds. Other than the associated disciplines just mentioned, these include neurobiology, bioengineering, psychiatry, neurology, physics, computer science, linguistics, philosophy, and mathematics.
Methods employed in cognitive neuroscience include experimental paradigms from
psychophysics and cognitive psychology, functional neuroimaging, electrophysiology,
cognitive genomics and behavioral genetics. Studies of patients with cognitive deficits
due to brain lesions constitute an important aspect of cognitive neuroscience (see
neuropsychology). Theoretical approaches include computational neuroscience and
cognitive psychology.
Contents
1 Historical origins
1.1 Consciousness
1.2 Origins in philosophy
1.3 19th century
1.3.1 Phrenology
1.3.2 Localizationist view
1.3.3 Aggregate field view
1.3.4 Emergence of neuropsychology
1.3.5 Mapping the brain
1.4 20th century
1.4.1 Cognitive revolution
1.4.2 Neuron doctrine
1.5 Mid-late 20th century
1.5.1 Brain mapping
2 Emergence of a new discipline
2.1 Birth of cognitive science
2.2 Combining neuroscience and cognitive science
3 Recent trends
4 Cognitive neuroscience topics
5 Cognitive neuroscience methods
Historical origins
Consciousness
Cognitive neuroscience is an interdisciplinary area of study that has emerged from
many other fields, perhaps most significantly neuroscience, psychology, and computer
science.[3] There were several stages in these disciplines that changed the way
researchers approached their investigations and that led to the field becoming fully
established.
Although the task of cognitive neuroscience is to describe how the brain creates the
mind, historically it has progressed by investigating how a certain area of the brain
supports a given mental faculty. However, early efforts to subdivide the brain proved
problematic. The phrenologist movement failed to supply a scientific basis for its
theories and has since been rejected. The aggregate field view, meaning that all areas
of the brain participated in all behavior,[4] was also rejected as a result of brain
mapping, which began with Hitzig and Fritsch’s experiments [5] and eventually
developed through methods such as positron emission tomography (PET) and
functional magnetic resonance imaging (fMRI).[6] Gestalt theory, neuropsychology, and
the cognitive revolution were major turning points in the creation of cognitive
neuroscience as a field, bringing together ideas and techniques that enabled
researchers to make more links between behavior and its neural substrates.
Origins in philosophy
Philosophers have always been interested in the mind. For example, Aristotle thought
the brain was the body’s cooling system and the capacity for intelligence was located in
the heart. It has been suggested that the first person to believe otherwise was the
Roman physician Galen in the second century AD, who declared that the brain was the source of mental activity,[7] although this has also been attributed to Alcmaeon.[8]
Psychology, a major contributing field to cognitive neuroscience, emerged from
philosophical reasoning about the mind.[9]
19th century
Phrenology
One of the predecessors to cognitive neuroscience was phrenology, a pseudoscientific approach that claimed that
behavior could be determined by the shape
of the scalp. In the early 19th century, Franz
Joseph Gall and J. G. Spurzheim believed
that the human brain was localized into
approximately 35 different sections. In his
book, The Anatomy and Physiology of the
Nervous System in General, and of the Brain
in Particular, Gall claimed that a larger bump
in one of these areas meant that that area of
the brain was used more frequently by that
person. This theory gained significant public
attention, leading to the publication of
phrenology journals and the creation of
phrenometers, which measured the bumps
on a human subject's head. While
phrenology remained a fixture at fairs and
carnivals, it did not enjoy wide acceptance
within the scientific community.[10] The
major criticism of phrenology is that
researchers were not able to test theories
empirically.[3]
Localizationist view
The localizationist view was concerned with mental abilities being localized to specific areas of the brain rather than with what the characteristics of the abilities were and how
to measure them.[3] Studies performed in Europe, such as those of John Hughlings
Jackson, supported this view. Jackson studied patients with brain damage, particularly
those with epilepsy. He discovered that the epileptic patients often made the same
clonic and tonic movements of muscle during their seizures, leading Jackson to believe
that they must be occurring in the same place every time. Jackson proposed that
specific functions were localized to specific areas of the brain,[11] which was critical to
future understanding of the brain lobes.
Aggregate field view
According to the aggregate field view, all areas of the brain participate in every mental
function.[4]
Pierre Flourens, a French experimental physiologist, challenged the localizationist
view by using animal experiments.[3] He discovered that removing the cerebellum in
rabbits and pigeons affected their sense of muscular coordination, and that all cognitive
functions were disrupted in pigeons when the cerebral hemispheres were removed.
From this he concluded that the cerebral cortex, cerebellum, and brainstem functioned
together as a whole.[12] His approach has been criticised on the basis that the tests
were not sensitive enough to notice selective deficits had they been present.[3]
Emergence of neuropsychology
Perhaps the first serious attempts to localize mental functions to specific locations in the brain were made by Broca and Wernicke. This was mostly achieved by studying the effects of injuries to different parts of the brain on psychological functions.[13] In 1861,
French neurologist Paul Broca came across a man who was able to understand
language but unable to speak. The man could only produce the sound "tan". It was
later discovered that the man had damage to an area of his left frontal lobe now known
as Broca's area. Carl Wernicke, a German neurologist, found a patient who could speak fluently but nonsensically. The patient had been the victim of a stroke, and could
not understand spoken or written language. This patient had a lesion in the area where
the left parietal and temporal lobes meet, now known as Wernicke's area. These cases,
which suggested that lesions caused specific behavioral changes, strongly supported
the localizationist view.
Mapping the brain
In 1870, German physicians Eduard Hitzig and Gustav Fritsch published their findings
about the behavior of animals. Hitzig and Fritsch ran an electrical current through the
cerebral cortex of a dog, causing different muscles to contract depending on which
areas of the brain were electrically stimulated. This led to the proposition that individual
functions are localized to specific areas of the brain rather than the cerebrum as a
whole, as the aggregate field view suggests.[5] Brodmann was also an important figure
in brain mapping; his experiments based on Franz Nissl’s tissue staining techniques
divided the brain into fifty-two areas.
20th century
Cognitive revolution
At the start of the 20th century, attitudes in America were characterised by pragmatism,
which led to a preference for behaviorism as the primary approach in psychology. J.B. Watson was a key figure with his stimulus-response approach. By conducting experiments on animals he aimed to be able to predict and control behavior. Behaviorism eventually failed because it could not provide a realistic psychology of human action and thought: it was too grounded in physical concepts to explain phenomena like memory and thought. This led to what is often termed the "cognitive revolution".[14]
Neuron doctrine
In the early 20th century, Santiago Ramón y Cajal and Camillo Golgi began working on
the structure of the neuron. Golgi developed a silver staining method that could entirely
stain several cells in a particular area, leading him to believe that neurons were directly
connected with each other in one cytoplasm. Cajal challenged this view after staining
areas of the brain that had less myelin and discovering that neurons were discrete
cells. Cajal also discovered that cells transmit electrical signals down the neuron in one
direction only. Both Golgi and Cajal were awarded the Nobel Prize in Physiology or Medicine in 1906 for this work on the neuron doctrine.[15]
Mid-late 20th century
Several findings in the 20th century continued to advance the field, such as the
discovery of ocular dominance columns, recording of single nerve cells in animals, and
coordination of eye and head movements. Experimental psychology was also
significant in the foundation of cognitive neuroscience. Some particularly important
results were the demonstration that some tasks are accomplished via discrete
processing stages, the study of attention, and the notion that behavioural data do not
provide enough information by themselves to explain mental processes. As a result,
some experimental psychologists began to investigate neural bases of behaviour.
Wilder Penfield built up maps of primary sensory and motor areas of the brain by
stimulating cortices of patients during surgery. Sperry and Gazzaniga’s work on split
brain patients in the 1950s was also instrumental in the progress of the field.[7]
Brain mapping
New brain mapping technology, particularly fMRI and PET, allowed researchers to
investigate experimental strategies of cognitive psychology by observing brain function.
Although this is often thought of as a new method (most of the technology is relatively
recent), the underlying principle goes back as far as 1878 when blood flow was first
associated with brain function.[6] Angelo Mosso, an Italian psychologist of the 19th
century, had monitored the pulsations of the adult brain through neurosurgically
created bony defects in the skulls of patients. He noted that when the subjects
engaged in tasks such as mathematical calculations the pulsations of the brain
increased locally. Such observations led Mosso to conclude that blood flow of the brain
followed function.[6]
Emergence of a new discipline
Birth of cognitive science
On September 11, 1956, a large-scale meeting of cognitivists took place at the
Massachusetts Institute of Technology. George A. Miller presented his "The Magical
Number Seven, Plus or Minus Two" paper while Noam Chomsky and Newell & Simon
presented their findings on computer science. Ulric Neisser commented on many of the
findings at this meeting in his 1967 book Cognitive Psychology. The term "psychology"
had been waning in the 1950s and 1960s, causing the field to be referred to as
"cognitive science". Behaviorists such as Miller began to focus on the representation of
language rather than general behavior. David Marr concluded that one should
understand any cognitive process at three levels of analysis. These levels include
computational, algorithmic/representational, and physical levels of analysis.[16]
Combining neuroscience and cognitive science
Before the 1980s, interaction between neuroscience and cognitive science was
scarce.[17] The term 'cognitive neuroscience' was coined by George Miller and Michael
Gazzaniga toward the end of the 1970s.[17] Cognitive neuroscience began to integrate the newly laid theoretical ground in cognitive science, which emerged between the 1950s and 1960s, with approaches in experimental psychology, neuropsychology and
neuroscience. (Neuroscience was not established as a unified discipline until
1971[18]). In the very late 20th century new technologies evolved that are now the
mainstay of the methodology of cognitive neuroscience, including TMS (1985) and
fMRI (1991). Earlier methods used in cognitive neuroscience include EEG (human EEG, 1920) and MEG (1968). Occasionally cognitive neuroscientists utilize other brain
imaging methods such as PET and SPECT. An upcoming technique in neuroscience is
NIRS which uses light absorption to calculate changes in oxy- and deoxyhemoglobin in
cortical areas. In some animals Single-unit recording can be used. Other methods
include microneurography, facial EMG, and eye-tracking. Integrative neuroscience
attempts to consolidate data in databases, and form unified descriptive models from
various fields and scales: biology, psychology, anatomy, and clinical practice.[19]
Recent trends
Recently the foci of research have expanded beyond the localization of brain areas for specific functions in the adult brain using a single technology; studies have been diverging in several different directions,[20] such as monitoring REM sleep via polygraphy, a technique capable of recording the electrical activity of a sleeping brain. Advances in non-invasive functional neuroimaging and associated data analysis
methods have also made it possible to use highly naturalistic stimuli and tasks such as
feature films depicting social interactions in cognitive neuroscience studies.[21]
Consciousness
Consciousness is the quality or state of being aware of an external object or something within oneself.[1][2] It has been defined as: sentience, awareness, subjectivity, the ability to experience or to feel, wakefulness, having a sense of selfhood, and the executive control system of the mind.[3] Despite the difficulty in definition, many philosophers believe that there is a broadly shared underlying intuition about what consciousness is.[4] As Max Velmans and Susan Schneider wrote in The Blackwell Companion to Consciousness: "Anything that we are aware of at a given moment forms part of our consciousness, making conscious experience at once the most familiar and most mysterious aspect of our lives."[5]
Philosophers since the time of Descartes
and Locke have struggled to comprehend
the nature of consciousness and pin down
its essential properties. Issues of concern in
the philosophy of consciousness include
whether the concept is fundamentally valid;
whether consciousness can ever be explained mechanistically; whether non-human
consciousness exists and if so how it can be recognized; how consciousness relates to
language; whether consciousness can be understood in a way that does not require a
dualistic distinction between mental and physical states or properties; and whether it
may ever be possible for computing machines like computers or robots to be
conscious.
At one time consciousness was viewed with skepticism by many scientists, but in
recent years it has become a significant topic of research in psychology and
neuroscience. The primary focus is on understanding what it means biologically and
psychologically for information to be present in consciousness—that is, on determining
the neural and psychological correlates of consciousness. The majority of experimental
studies assess consciousness by asking human subjects for a verbal report of their
experiences (e.g., "tell me if you notice anything when I do this"). Issues of interest
include phenomena such as subliminal perception, blindsight, denial of impairment,
and altered states of consciousness produced by psychoactive drugs or spiritual or
meditative techniques.
In medicine, consciousness is assessed by observing a patient's arousal and
responsiveness, and can be seen as a continuum of states ranging from full alertness
and comprehension, through disorientation, delirium, loss of meaningful
communication, and finally loss of movement in response to painful stimuli.[6] Issues of
practical concern include how the presence of consciousness can be assessed in
severely ill, comatose, or anesthetized people, and how to treat conditions in which
consciousness is impaired or disrupted.[7]
Contents
1 Etymology and early history
2 In philosophy
2.1 The validity of the concept
2.2 Types of consciousness
2.3 Mind–body problem
2.4 Problem of other minds
2.5 Animal consciousness
2.6 Artifact consciousness
3 Scientific study
3.1 Measurement
3.2 Neural correlates
3.3 Biological function and evolution
3.4 States of consciousness
3.5 Phenomenology
4 Medical aspects
4.1 Assessment
4.2 Disorders of consciousness
4.3 Anosognosia
5 Stream of consciousness
6 Spiritual approaches
Etymology and early history
The origin of the modern concept of consciousness is often attributed to John Locke's Essay Concerning Human Understanding, published in 1690.[8] Locke defined consciousness as "the perception of what passes in a man's own mind".[9] His essay influenced the 18th-century view of consciousness, and his definition appeared in Samuel Johnson's celebrated Dictionary (1755).[10]
The earliest English language uses of
"conscious" and "consciousness" date back,
however, to the 1500s. The English word
"conscious" originally derived from the Latin
conscius (con- "together" + scio "to know"), but
the Latin word did not have the same meaning
as our word—it meant knowing with, in other
words having joint or common knowledge with
another.[11] There were, however, many
occurrences in Latin writings of the phrase
conscius sibi, which translates literally as "knowing with oneself", or in other words
sharing knowledge with oneself about something. This phrase had the figurative
meaning of knowing that one knows, as the modern English word "conscious" does. In
its earliest uses in the 1500s, the English word "conscious" retained the meaning of the
Latin conscius. For example, Thomas Hobbes in Leviathan wrote: "Where two, or more
men, know of one and the same fact, they are said to be Conscious of it one to
another."[12] The Latin phrase conscius sibi, whose meaning was more closely related
to the current concept of consciousness, was rendered in English as "conscious to
oneself" or "conscious unto oneself". For example, Archbishop Ussher wrote in 1613 of
"being so conscious unto myself of my great weakness".[13] Locke's definition from
1690 illustrates that a gradual shift in meaning had taken place.
A related word was conscientia, which primarily means moral conscience. In the literal
sense, "conscientia" means knowledge-with, that is, shared knowledge. The word first
appears in Latin juridical texts by writers such as Cicero.[14] Here, conscientia is the
knowledge that a witness has of the deed of someone else.[15] René Descartes
(1596–1650) is generally taken to be the first philosopher to use "conscientia" in a way
that does not fit this traditional meaning.[16] Descartes used "conscientia" the way
modern speakers would use "conscience". In Search after Truth he says "conscience
or internal testimony" (conscientia vel interno testimonio).[17]
In philosophy
The philosophy of mind has given rise to many stances regarding consciousness. Any
attempt to impose an organization on them is bound to be somewhat arbitrary. Stuart
Sutherland exemplified the difficulty in the entry he wrote for the 1989 version of the
Macmillan Dictionary of Psychology:
Consciousness—The having of perceptions, thoughts, and feelings;
awareness. The term is impossible to define except in terms that are
unintelligible without a grasp of what consciousness means. Many fall into the
trap of equating consciousness with self-consciousness—to be conscious it is
only necessary to be aware of the external world. Consciousness is a
fascinating but elusive phenomenon: it is impossible to specify what it is, what it
does, or why it has evolved. Nothing worth reading has been written on it.[18]
Most writers on the philosophy of consciousness have been concerned to defend a
particular point of view, and have organized their material accordingly. For surveys, the
most common approach is to follow a historical path by associating stances with the
philosophers who are most strongly associated with them, for example Descartes,
Locke, Kant, etc. An alternative is to organize philosophical stances according to basic
issues.
The validity of the concept
Philosophers and non-philosophers differ in their intuitions about what consciousness
is.[19] While most people have a strong intuition for the existence of what they refer to
as consciousness,[20] skeptics argue that this intuition is false, either because the
concept of consciousness is intrinsically incoherent, or because our intuitions about it
are based in illusions. Gilbert Ryle, for example, argued that traditional understanding
of consciousness depends on a Cartesian dualist outlook that improperly distinguishes
between mind and body, or between mind and world. He proposed that we speak not
of minds, bodies, and the world, but of individuals, or persons, acting in the world.
Thus, by speaking of "consciousness" we end up misleading ourselves by thinking that
there is any sort of thing as consciousness separated from behavioral and linguistic
understandings.[21] More generally, many philosophers and scientists have been
unhappy about the difficulty of producing a definition that does not involve circularity or
fuzziness.[18]
Types of consciousness
Many philosophers have argued that consciousness is a unitary concept that is
understood intuitively by the majority of people in spite of the difficulty in defining it.[20]
Others, though, have argued that the level of disagreement about the meaning of the
word indicates that it either means different things to different people (for instance, the
objective versus subjective aspects of consciousness), or else is an umbrella term
encompassing a variety of distinct meanings with no simple element in common.[22]
Ned Block proposed a distinction between two types of consciousness that he called
phenomenal (P-consciousness) and access (A-consciousness).[23] P-consciousness,
according to Block, is simply raw experience: it is moving, colored forms, sounds,
sensations, emotions and feelings with our bodies and responses at the center. These
experiences, considered independently of any impact on behavior, are called qualia. A-consciousness, on the other hand, is the phenomenon whereby information in our
minds is accessible for verbal report, reasoning, and the control of behavior. So, when
we perceive, information about what we perceive is access conscious; when we
introspect, information about our thoughts is access conscious; when we remember,
information about the past is access conscious, and so on. Although some
philosophers, such as Daniel Dennett, have disputed the validity of this distinction,[24]
others have broadly accepted it. David Chalmers has argued that A-consciousness can
in principle be understood in mechanistic terms, but that understanding P-consciousness is much more challenging: he calls this the hard problem of
consciousness.[25]
Some philosophers believe that Block's two types of consciousness are not the end of
the story. William Lycan, for example, argued in his book Consciousness and
Experience that at least eight clearly distinct types of consciousness can be identified
(organism consciousness; control consciousness; consciousness of; state/event
consciousness; reportability; introspective consciousness; subjective consciousness;
self-consciousness)—and that even this list omits several more obscure forms.[26]
Mind–body problem
The first influential philosopher to discuss this question specifically was Descartes, and the answer he gave is known as Cartesian dualism. Descartes proposed that consciousness resides within an immaterial domain he called res cogitans (the realm of thought), in contrast to the domain of material things, which he called res extensa (the realm of extension).[27] He suggested that the interaction between these two domains occurs inside the brain, perhaps in a small midline structure called the pineal gland.[28] Although it is widely accepted that Descartes explained the problem cogently, few later philosophers have been happy with his solution, and his ideas about the pineal gland have especially been ridiculed.[29] Alternative solutions, however, have been very diverse. They can be divided broadly into two categories: dualist solutions that maintain Descartes' rigid distinction between the realm of consciousness and the realm of matter but give different answers for how the
two realms relate to each other; and monist solutions that maintain that there is really
only one realm of being, of which consciousness and matter are both aspects. Each of
these categories itself contains numerous variants. The two main types of dualism are
substance dualism (which holds that the mind is formed of a distinct type of substance
not governed by the laws of physics) and property dualism (which holds that the laws of
physics are universally valid but cannot be used to explain the mind). The three main
types of monism are physicalism (which holds that the mind consists of matter
organized in a particular way), idealism (which holds that only thought truly exists, and
matter is merely an illusion), and neutral monism (which holds that both mind and
matter are aspects of a distinct essence that is itself identical to neither of them). There
are also, however, a large number of idiosyncratic theories that cannot cleanly be
assigned to any of these camps.[30]
Since the dawn of Newtonian science with its vision of simple mechanical principles
governing the entire universe, some philosophers have been tempted by the idea that
consciousness could be explained in purely physical terms. The first influential writer to
propose such an idea explicitly was Julien Offray de La Mettrie, in his book Man a
Machine (L'homme machine). His arguments, however, were very abstract.[31] The
most influential modern physical theories of consciousness are based on psychology
and neuroscience. Theories proposed by neuroscientists such as Gerald Edelman[32]
and Antonio Damasio,[33] and by philosophers such as Daniel Dennett,[34] seek to
explain consciousness in terms of neural events occurring within the brain. Many other
neuroscientists, such as Christof Koch,[35] have explored the neural basis of
consciousness without attempting to frame all-encompassing global theories. At the
same time, computer scientists working in the field of artificial intelligence have
pursued the goal of creating digital computer programs that can simulate or embody
consciousness.[36]
A few theoretical physicists have argued that classical physics is intrinsically incapable
of explaining the holistic aspects of consciousness, but that quantum theory provides
the missing ingredients. Several theorists have therefore proposed quantum mind (QM)
theories of consciousness.[37] Notable theories falling into this category include the
holonomic brain theory of Karl Pribram and David Bohm, and the Orch-OR theory
formulated by Stuart Hameroff and Roger Penrose. Some of these QM theories offer
descriptions of phenomenal consciousness, as well as QM interpretations of access
consciousness. None of the quantum mechanical theories has been confirmed by
experiment. Recent publications by G. Guerreshi, J. Cia, S. Popescu, and H.
Briegel[38] could falsify proposals such as those of Hameroff, which rely on quantum
entanglement in protein. At the present time many scientists and philosophers consider
the arguments for an important role of quantum phenomena to be unconvincing.[39]
Apart from the general question of the "hard problem" of consciousness, roughly
speaking, the question of how mental experience arises from a physical basis,[40] a
more specialized question is how to square the subjective notion that we are in control
of our decisions (at least in some small measure) with the customary view of causality
that subsequent events are caused by prior events. The topic of free will is the
philosophical and scientific examination of this conundrum.
Problem of other minds
Many philosophers consider experience to be the essence of consciousness, and
believe that experience can only fully be known from the inside, subjectively. But if
consciousness is subjective and not visible from the outside, why do the vast majority
of people believe that other people are conscious, but rocks and trees are not?[41] This
is called the problem of other minds.[42] It is particularly acute for people who believe
in the possibility of philosophical zombies, that is, people who think it is possible in
principle to have an entity that is physically indistinguishable from a human being and
behaves like a human being in every way but nevertheless lacks consciousness.[43]
The most commonly given answer is that we attribute consciousness to other people
because we see that they resemble us in appearance and behavior: we reason that if
they look like us and act like us, they must be like us in other ways, including having
experiences of the sort that we do.[44] There are, however, a variety of problems with
that explanation. For one thing, it seems to violate the principle of parsimony, by
postulating an invisible entity that is not necessary to explain what we observe.[44]
Some philosophers, such as Daniel Dennett in an essay titled The Unimagined
Preposterousness of Zombies, argue that people who give this explanation do not
really understand what they are saying.[45] More broadly, philosophers who do not
accept the possibility of zombies generally believe that consciousness is reflected in
behavior (including verbal behavior), and that we attribute consciousness on the basis
of behavior. A more straightforward way of saying this is that we attribute experiences
to people because of what they can do, including the fact that they can tell us about
their experiences.[46]
Animal consciousness
The topic of animal consciousness is beset by a number of difficulties. It poses the
problem of other minds in an especially severe form, because animals, lacking the
ability to express human language, cannot tell us about their experiences.[47] Also, it is
difficult to reason objectively about the question, because a denial that an animal is
conscious is often taken to imply that it does not feel, its life has no value, and that
harming it is not morally wrong. Descartes, for example, has sometimes been blamed
for mistreatment of animals due to the fact that he believed only humans have a nonphysical mind.[48] Most people have a strong intuition that some animals, such as cats
and dogs, are conscious, while others, such as insects, are not; but the sources of this
intuition are not obvious, and are often based on personal interactions with pets and
other animals they have observed.[47]
Philosophers who consider subjective experience the essence of consciousness also
generally believe, as a correlate, that the existence and nature of animal
consciousness can never rigorously be known. Thomas Nagel spelled out this point of
view in an influential essay titled What Is it Like to Be a Bat?. He said that an organism
is conscious "if and only if there is something that it is like to be that organism —
something it is like for the organism"; and he argued that no matter how much we know
about an animal's brain and behavior, we can never really put ourselves into the mind
of the animal and experience its world in the way it does itself.[49] Other thinkers, such
as Douglas Hofstadter, dismiss this argument as incoherent.[50] Several psychologists
and ethologists have argued for the existence of animal consciousness by describing a
range of behaviors that appear to show animals holding beliefs about things they
cannot directly perceive — Donald Griffin's 2001 book Animal Minds reviews a
substantial portion of the evidence.[51]
Artifact consciousness
The idea of an artifact made conscious is an ancient theme of mythology, appearing for
example in the Greek myth of Pygmalion, who carved a statue that was magically
brought to life, and in medieval Jewish stories of the Golem, a magically animated
homunculus built of clay.[52] However, the possibility of actually constructing a
conscious machine was probably first discussed by Ada Lovelace, in a set of notes
written in 1842 about the Analytical Engine invented by Charles Babbage, a precursor
(never built) to modern electronic computers. Lovelace was essentially dismissive of
the idea that a machine such as the Analytical Engine could think in a humanlike way.
She wrote:
It is desirable to guard against the possibility of exaggerated ideas that might
arise as to the powers of the Analytical Engine. ... The Analytical Engine has no
pretensions whatever to originate anything. It can do whatever we know how to
order it to perform. It can follow analysis; but it has no power of anticipating any
analytical relations or truths. Its province is to assist us in making available what
we are already acquainted with.[53]
One of the most influential contributions to this question was an essay written in 1950
by pioneering computer scientist Alan Turing, titled Computing Machinery and
Intelligence. Turing disavowed any interest in terminology, saying that even "Can
machines think?" is too loaded with spurious connotations to be meaningful; but he
proposed to replace all such questions with a specific operational test, which has
become known as the Turing test.[54] To pass the test, a computer must be able to
imitate a human well enough to fool interrogators. In his essay Turing discussed a
variety of possible objections, and presented a counterargument to each of them. The
Turing test is commonly cited in discussions of artificial intelligence as a proposed
criterion for machine consciousness; it has provoked a great deal of philosophical
debate. For example, Daniel Dennett and Douglas Hofstadter argue that anything
capable of passing the Turing test is necessarily conscious,[55] while David Chalmers
argues that a philosophical zombie could pass the test, yet fail to be conscious.[56]
In a lively exchange over what has come to be referred to as "the Chinese room
argument", John Searle sought to refute the claim of proponents of what he calls
"strong artificial intelligence (AI)" that a computer program can be conscious, though he
does agree with advocates of "weak AI" that computer programs can be formatted to
"simulate" conscious states. His own view is that consciousness has subjective, firstperson causal powers by being essentially intentional due simply to the way human
brains function biologically; conscious persons can perform computations, but
consciousness is not inherently computational the way computer programs are. To
make a Turing machine that speaks Chinese, Searle imagines a room stocked with
computers and algorithms programmed to respond to Chinese questions, i.e., Turing
machines, programmed to correctly answer in Chinese any questions asked in
Chinese. Searle argues that with such a machine, he would be able to process the
inputs to outputs perfectly without having any understanding of Chinese, nor having
any idea what the questions and answers could possibly mean. And this is all a current
computer program would do. If the experiment were done in English, since Searle
knows English, he would be able to take questions and give answers without any
algorithms for English questions, and he would be effectively aware of what was being
said and the purposes it might serve. Searle would pass the Turing test of answering
the questions in both languages, but he is only conscious of what he is doing when he
speaks English. Another way of putting the argument is to say that computational
computer programs can pass the Turing test for processing the syntax of a language,
but that semantics cannot be reduced to syntax in the way strong AI advocates hoped.
Processing semantics is conscious and intentional because we use semantics to
consciously produce meaning by what we say.[57]
In the literature concerning artificial intelligence, Searle's essay has been second only
to Turing's in the volume of debate it has generated.[57] Searle himself was vague
about what extra ingredients it would take to make a machine conscious: all he
proposed was that what was needed was "causal powers" of the sort that the brain has
and that computers lack. But other thinkers sympathetic to his basic argument have
suggested that the necessary (though perhaps still not sufficient) extra conditions may
include the ability to pass not just the verbal version of the Turing test, but the robotic
version,[58] which requires grounding the robot's words in the robot's sensorimotor
capacity to categorize and interact with the things in the world that its words are about,
Turing-indistinguishably from a real person. Turing-scale robotics is an empirical
branch of research on embodied cognition and situated cognition.[59]
Scientific study
For many decades, consciousness as a research topic was avoided by the majority of
mainstream scientists, because of a general feeling that a phenomenon defined in
subjective terms could not properly be studied using objective experimental
methods.[60] In 1975 George Mandler published an influential psychological study
which distinguished between slow, serial, and limited conscious processes and fast,
parallel and extensive unconscious ones.[61] Starting in the 1980s, an expanding
community of neuroscientists and psychologists have associated themselves with a
field called Consciousness Studies, giving rise to a stream of experimental work
published in books,[62] journals such as Consciousness and Cognition, and
methodological work published in journals such as the Journal of Consciousness
Studies, along with regular conferences organized by groups such as the Association
for the Scientific Study of Consciousness.[63]
Modern scientific investigations into consciousness are based on psychological
experiments (including, for example, the investigation of priming effects using
subliminal stimuli), and on case studies of alterations in consciousness produced by
trauma, illness, or drugs. Broadly viewed, scientific approaches are based on two core
concepts. The first identifies the content of consciousness with the experiences that are
reported by human subjects; the second makes use of the concept of consciousness
that has been developed by neurologists and other medical professionals who deal
with patients whose behavior is impaired. In either case, the ultimate goals are to
develop techniques for assessing consciousness objectively in humans as well as
other animals, and to understand the neural and psychological mechanisms that
underlie it.[35]
Measurement
Experimental research on consciousness presents special difficulties, due to the lack of
a universally accepted operational definition. In the majority of experiments that are
specifically about consciousness, the subjects are human, and the criterion that is used
is verbal report: in other words, subjects are asked to describe their experiences, and
their descriptions are treated as observations of the contents of consciousness.[64] For
example, subjects who
stare continuously at a Necker cube usually report
that they experience it "flipping" between two 3D
configurations, even though the stimulus itself
remains the same.[65] The objective is to
understand the relationship between the conscious awareness of stimuli (as indicated
by verbal report) and the effects the stimuli have on brain activity and behavior. In
several paradigms, such as the technique of response priming, the behavior of subjects
is clearly influenced by stimuli for which they report no awareness.[66]
Verbal report is widely considered to be the most reliable indicator of consciousness,
but it raises a number of issues.[67] For one thing, if verbal reports are treated as
observations, akin to observations in other branches of science, then the possibility
arises that they may contain errors—but it is difficult to make sense of the idea that
subjects could be wrong about their own experiences, and even more difficult to see
how such an error could be detected.[68] Daniel Dennett has argued for an approach
he calls heterophenomenology, which means treating verbal reports as stories that
may or may not be true, but his ideas about how to do this have not been widely
adopted.[69] Another issue with verbal report as a criterion is that it restricts the field of
study to humans who have language: this approach cannot be used to study
consciousness in other species, pre-linguistic children, or people with types of brain
damage that impair language. As a third issue, philosophers who dispute the validity of
the Turing test may feel that it is possible, at least in principle, for verbal report to be
dissociated from consciousness entirely: a philosophical zombie may give detailed
verbal reports of awareness in the absence of any genuine awareness.[70]
Although verbal report is in practice the "gold standard" for ascribing consciousness, it
is not the only possible criterion.[67] In medicine, consciousness is assessed as a
combination of verbal behavior, arousal, brain activity and purposeful movement. The
last three of these can be used as indicators of consciousness when verbal behavior is
absent.[71] The scientific literature regarding the neural bases of arousal and
purposeful movement is very extensive. Their reliability as indicators of consciousness
is disputed, however, due to numerous studies showing that alert human subjects can
be induced to behave purposefully in a variety of ways in spite of reporting a complete
lack of awareness.[66] Studies of the neuroscience of free will have also shown that
the experiences that people report when they behave purposefully sometimes do not
correspond to their actual behaviors or to the patterns of electrical activity recorded
from their brains.[72]
Another approach applies specifically to the study of self-awareness, that is, the ability
to distinguish oneself from others. In the 1970s Gordon Gallup developed an
operational test for self-awareness, known as the mirror test. The test examines
whether animals are able to differentiate between seeing themselves in a mirror versus
seeing other animals. The classic example involves placing a spot of coloring on the
skin or fur near the individual's forehead and seeing if they attempt to remove it or at
least touch the spot, thus indicating that they recognize that the individual they are
seeing in the mirror is themselves.[73] Humans (older than 18 months) and other great
apes, bottlenose dolphins, pigeons, and elephants have all been observed to pass this
test.[74]
Neural correlates
A major part of the scientific literature on consciousness consists of studies that
examine the relationship between the experiences reported by subjects and the activity
that simultaneously takes place in their brains—that is, studies of the neural correlates
of consciousness. The hope is to find that activity in a particular part of the brain, or a
particular pattern of global brain activity, will be strongly predictive of conscious
awareness. Several brain imaging techniques, such as EEG and fMRI, have been used
for physical measures of brain activity in these studies.[75]
One idea that has drawn attention for several decades is that consciousness is
associated with high-frequency (gamma band) oscillations in brain activity. This idea
arose from proposals in the 1980s, by Christof von der Malsburg and Wolf Singer, that
gamma oscillations could solve the so-called binding problem, by linking information
represented in different parts of the brain into a unified experience.[76] Rodolfo Llinás,
for example, proposed that consciousness results from recurrent thalamo-cortical
resonance where the specific thalamocortical systems (content) and the non-specific
(centromedial thalamus) thalamocortical systems (context) interact in the gamma band
frequency via synchronous oscillations.[77]
A number of studies have shown that activity in primary sensory areas of the brain is
not sufficient to produce consciousness: it is possible for subjects to report a lack of
awareness even when areas such as the primary visual cortex show clear electrical
responses to a stimulus.[78] Higher brain areas are seen as more promising, especially
the prefrontal cortex, which is involved in a range of higher cognitive functions
collectively known as executive functions. There is substantial evidence that a "top-down" flow of neural activity (i.e., activity propagating from the frontal cortex to sensory
areas) is more predictive of conscious awareness than a "bottom-up" flow of
activity.[79] The prefrontal cortex is not the only candidate area, however: studies by
Nikos Logothetis and his colleagues have shown, for example, that visually responsive
neurons in parts of the temporal lobe reflect the visual perception in the situation when
conflicting visual images are presented to different eyes (i.e., bistable percepts during
binocular rivalry).[80]
In 2011 Graziano and Kastner[81] proposed the "attention schema" theory of
awareness. In that theory specific cortical machinery, notably in the superior temporal
sulcus and the temporo-parietal junction, is used to build the construct of awareness
and attribute it to other people. The same cortical machinery is also used to attribute
awareness to oneself. Damage to this cortical machinery can lead to deficits in
consciousness such as hemispatial neglect. In the attention schema theory, the value
of constructing the feature of awareness and attributing it to a person is to gain a useful
predictive model of that person's attentional processing. Attention is a style of
information processing in which a brain focuses its resources on a limited set of
interrelated signals. Awareness, in this theory, is a useful, simplified schema that
represents attentional state. To be aware of X is to construct a model of one's
attentional focus on X.
Biological function and evolution
Regarding the primary function of conscious processing, a recurring idea in recent
theories is that phenomenal states somehow integrate neural activities and information
processing that would otherwise be independent.[82] This has been called the
integration consensus. Another example, proposed by Gerald Edelman, is the dynamic
core hypothesis, which puts emphasis on reentrant connections that
reciprocally link areas of the brain in a massively parallel manner.[83] These theories of
integrative function present solutions to two classic problems associated with
consciousness: differentiation and unity. They show how our conscious experience can
discriminate between infinitely different possible scenes and details (differentiation)
because it integrates those details from our sensory systems, while the integrative
nature of consciousness in this view easily explains how our experience can seem
unified as one whole despite all of these individual parts. However, it remains
unspecified which kinds of information are integrated in a conscious manner and which
kinds can be integrated without consciousness. Nor is it explained what specific causal
role conscious integration plays, nor why the same functionality cannot be achieved
without consciousness. Obviously not all kinds of information are capable of being
disseminated consciously (e.g., neural activity related to vegetative functions, reflexes,
unconscious motor programs, low-level perceptual analyses, etc.) and many kinds of
information can be disseminated and combined with other kinds without
consciousness, as in intersensory interactions such as the ventriloquism effect.[84]
Hence it remains unclear why any of it is conscious. For a review of the differences
between conscious and unconscious integrations, see the article of E. Morsella.[84]
As noted earlier, even among writers who consider consciousness to be a well-defined
thing, there is widespread dispute about which animals other than humans can be said
to possess it.[85] Thus, any examination of the evolution of consciousness is faced with
great difficulties. Nevertheless, some writers have argued that consciousness can be
viewed from the standpoint of evolutionary biology as an adaptation in the sense of a
trait that increases fitness.[86] In his article "Evolution of consciousness", John Eccles
argued that special anatomical and physical properties of the mammalian cerebral
cortex gave rise to consciousness.[87] Bernard Baars proposed that once in place, this
"recursive" circuitry may have provided a basis for the subsequent development of
many of the functions that consciousness facilitates in higher organisms.[88] Peter
Carruthers has put forth one such potential adaptive advantage gained by conscious
creatures by suggesting that consciousness allows an individual to make distinctions
between appearance and reality.[89] This ability would enable a creature to recognize
the likelihood that their perceptions are deceiving them (e.g. that water in the distance
may be a mirage) and behave accordingly, and it could also facilitate the manipulation
of others by recognizing how things appear to them for both cooperative and devious
ends.
Other philosophers, however, have suggested that consciousness would not be
necessary for any functional advantage in evolutionary processes.[90][91] No one has
given a causal explanation, they argue, of why it would not be possible for a
functionally equivalent non-conscious organism (i.e., a philosophical zombie) to
achieve the very same survival advantages as a conscious organism. If evolutionary
processes are blind to the difference between function F being performed by conscious
organism O and non-conscious organism O*, it is unclear what adaptive advantage
consciousness could provide.[92] As a result, an exaptive explanation of
consciousness has gained favor with some theorists that posit consciousness did not
evolve as an adaptation but was an exaptation arising as a consequence of other
developments such as increases in brain size or cortical rearrangement.[93]
States of consciousness
There are some states in which consciousness seems to be abolished, including sleep,
coma, and death. There are also a variety of circumstances that can change the
relationship between the mind and the world in less drastic ways, producing what are
known as altered states of consciousness. Some altered states occur naturally; others
can be produced by drugs or brain damage.[94] Altered states can be accompanied by
changes in thinking, disturbances in the sense of time, feelings of loss of control,
changes in emotional expression, alternations in body image and changes in meaning
or significance.[95]
The two most widely accepted altered states are sleep and dreaming. Although dream
sleep and non-dream sleep appear very similar to an outside observer, each is
associated with a distinct pattern of brain activity, metabolic activity, and eye
movement; each is also associated with a distinct pattern of experience and cognition.
During ordinary non-dream sleep, people who are awakened report only vague and
sketchy thoughts, and their experiences do not cohere into a continuous narrative.
During dream sleep, in contrast, people who are awakened report rich and detailed
experiences in which events form a continuous progression, which may however be
interrupted by bizarre or fantastic intrusions. Thought processes during the dream state
frequently show a high level of irrationality. Both dream and non-dream states are
associated with severe disruption of memory: it usually disappears in seconds during
the non-dream state, and in minutes after awakening from a dream unless actively
refreshed.[96]
A variety of psychoactive drugs have notable
effects on consciousness. These range from a
simple dulling of awareness produced by
sedatives, to increases in the intensity of
sensory qualities produced by stimulants,
cannabis, or most notably by the class of drugs
known as psychedelics.[94] LSD, mescaline,
psilocybin, and others in this group can
produce major distortions of perception,
including hallucinations; some users even
describe their drug-induced experiences as
mystical or spiritual in quality. The brain
mechanisms underlying these effects are not
well understood, but there is substantial
evidence that alterations in the brain system
that uses the chemical neurotransmitter
serotonin play an essential role.[97]
There has been some research into
physiological changes in yogis and people who
practise various techniques of meditation. Some research with brain waves during
meditation has reported differences between those corresponding to ordinary
relaxation and those corresponding to meditation. It has been disputed, however,
whether there is enough evidence to count these as physiologically distinct states of
consciousness.[98]
The most extensive study of the characteristics of altered states of consciousness was
made by psychologist Charles Tart in the 1960s and 1970s. Tart analyzed a state of
consciousness as made up of a number of component processes, including
exteroception (sensing the external world); interoception (sensing the body); input-processing (seeing meaning); emotions; memory; time sense; sense of identity;
evaluation and cognitive processing; motor output; and interaction with the
environment.[99] Each of these, in his view, could be altered in multiple ways by drugs
or other manipulations. The components that Tart identified have not, however, been
validated by empirical studies. Research in this area has not yet reached firm
conclusions, but a recent questionnaire-based study identified eleven significant factors
contributing to drug-induced states of consciousness: experience of unity; spiritual
experience; blissful state; insightfulness; disembodiment; impaired control and
cognition; anxiety; complex imagery; elementary imagery; audio-visual synesthesia;
and changed meaning of percepts.[100]
Phenomenology
Phenomenology is a method of inquiry that attempts to examine the structure of
consciousness in its own right, putting aside problems regarding the relationship of
consciousness to the physical world. This approach was first proposed by the
philosopher Edmund Husserl, and later elaborated by other philosophers and
scientists.[101] Husserl's original concept gave rise to two distinct lines of inquiry, in
philosophy and psychology. In philosophy, phenomenology has largely been devoted
to fundamental metaphysical questions, such as the nature of intentionality
("aboutness"). In psychology, phenomenology largely has meant attempting to
investigate consciousness using the method of introspection, which means looking into
one's own mind and reporting what one observes. This method fell into disrepute in the
early twentieth century because of grave doubts about its reliability, but has been
rehabilitated to some degree, especially when used in combination with techniques for
examining brain activity.[102]
Introspectively, the world of conscious experience
seems to have considerable structure. Immanuel
Kant asserted that the world as we perceive it is
organized according to a set of fundamental
"intuitions", which include object (we perceive the
world as a set of distinct things); shape; quality
(color, warmth, etc.); space (distance, direction,
and location); and time.[103] Some of these
constructs, such as space and time, correspond
to the way the world is structured by the laws of
physics; for others the correspondence is not as
clear. Understanding the physical basis of
qualities, such as redness or pain, has been
particularly challenging. David Chalmers has called this the hard problem of
consciousness.[25] Some philosophers have
argued that it is intrinsically unsolvable, because
qualities ("qualia") are ineffable; that is, they are
"raw feels", incapable of being analyzed into
component processes.[104] Most psychologists
and neuroscientists reject these arguments —
nevertheless it is clear that the relationship
between a physical entity such as light and a
perceptual quality such as color is extraordinarily
complex and indirect, as demonstrated by a
variety of optical illusions such as neon color
spreading.[105]
In neuroscience, a great deal of effort has gone
into investigating how the perceived world of
conscious awareness is constructed inside the
brain. The process is generally thought to involve two primary mechanisms: (1)
hierarchical processing of sensory inputs, and (2) memory. Signals arising from
sensory organs are transmitted to the brain and then processed in a series of stages,
which extract multiple types of information from the raw input. In the visual system, for
example, sensory signals from the eyes are transmitted to the thalamus and then to the
primary visual cortex; inside the cerebral cortex they are sent to areas that extract
features such as three-dimensional structure, shape, color, and motion.[106] Memory
comes into play in at least two ways. First, it allows sensory information to be evaluated
in the context of previous experience. Second, and even more importantly, working
memory allows information to be integrated over time so that it can generate a stable
representation of the world—Gerald Edelman expressed this point vividly by titling one
of his books about consciousness The Remembered Present.[107]
Despite the large amount of information available, the most important aspects of
perception remain mysterious. A great deal is known about low-level signal processing
in sensory systems, but the ways by which sensory systems interact with each other,
with "executive" systems in the frontal cortex, and with the language system are very
incompletely understood. At a deeper level, there are still basic conceptual issues that
remain unresolved.[106] Many scientists have found it difficult to reconcile the fact that
information is distributed across multiple brain areas with the apparent unity of
consciousness: this is one aspect of the so-called binding problem.[108] There are also
some scientists who have expressed grave reservations about the idea that the brain
forms representations of the outside world at all: influential members of this group
include psychologist J. J. Gibson and roboticist Rodney Brooks, who both argued in
favor of "intelligence without representation".[109]
Medical aspects
The medical approach to consciousness is practically oriented. It derives from a need
to treat people whose brain function has been impaired as a result of disease, brain
damage, toxins, or drugs. In medicine, conceptual distinctions are considered useful to
the degree that they can help to guide treatments. Whereas the philosophical approach
to consciousness focuses on its fundamental nature and its contents, the medical
approach focuses on the amount of consciousness a person has: in medicine,
consciousness is assessed as a "level" ranging from coma and brain death at the low
end, to full alertness and purposeful responsiveness at the high end.[110]
Consciousness is of concern to patients and physicians, especially neurologists and
anesthesiologists. Patients may suffer from disorders of consciousness, or may need to
be anesthetized for a surgical procedure. Physicians may perform consciousness-related interventions such as instructing the patient to sleep, administering general
anesthesia, or inducing medical coma.[110] Also, bioethicists may be concerned with
the ethical implications of consciousness in medical cases of patients such as Karen
Ann Quinlan,[111] while neuroscientists may study patients with impaired
consciousness in hopes of gaining information about how the brain works.[112]
Assessment
In medicine, consciousness is examined using a set of procedures known as
neuropsychological assessment.[71] There are two commonly used methods for
assessing the level of consciousness of a patient: a simple procedure that requires
minimal training, and a more complex procedure that requires substantial expertise.
The simple procedure begins by asking whether the patient is able to move and react
to physical stimuli. If so, the next question is whether the patient can respond in a
meaningful way to questions and commands. If so, the patient is asked for name,
current location, and current day and time. A patient who can answer all of these
questions is said to be "oriented times three" (sometimes denoted "Ox3" on a medical
chart), and is usually considered fully conscious.[113]
The more complex procedure is known as a neurological examination, and is usually
carried out by a neurologist in a hospital setting. A formal neurological examination
runs through a precisely delineated series of tests, beginning with tests for basic
sensorimotor reflexes, and culminating with tests for sophisticated use of language.
The outcome may be summarized using the Glasgow Coma Scale, which yields a
number in the range 3–15, with a score of 3 indicating deep coma or unresponsiveness
(the lowest defined level), and 15 indicating full consciousness. The Glasgow Coma
Scale has three subscales, measuring the best motor response (ranging from "no
motor response" to "obeys commands"), the best eye response (ranging from "no eye
opening" to "eyes opening spontaneously") and the best verbal response (ranging from
"no verbal response" to "fully oriented"). There is also a simpler pediatric version of the
scale, for children too young to be able to use language.[110]
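As a concrete illustration of how the scale just described is tallied, the short sketch
below sums the three subscales. The endpoint descriptors follow the text above; the
intermediate descriptors, dictionary names, and example calls are illustrative
assumptions, not a clinical tool.

import sys

# Minimal sketch of Glasgow Coma Scale scoring. Endpoint labels match the text
# above; intermediate labels are common clinical anchors, included only for
# illustration.
EYE = {"no eye opening": 1, "opens to pain": 2, "opens to speech": 3,
       "eyes opening spontaneously": 4}
VERBAL = {"no verbal response": 1, "incomprehensible sounds": 2,
          "inappropriate words": 3, "confused": 4, "fully oriented": 5}
MOTOR = {"no motor response": 1, "extension to pain": 2, "abnormal flexion": 3,
         "withdrawal from pain": 4, "localizes pain": 5, "obeys commands": 6}

def glasgow_coma_score(eye, verbal, motor):
    """Sum the three subscale scores; totals range from 3 to 15."""
    return EYE[eye] + VERBAL[verbal] + MOTOR[motor]

print(glasgow_coma_score("eyes opening spontaneously", "fully oriented",
                         "obeys commands"))    # 15: full consciousness
print(glasgow_coma_score("no eye opening", "no verbal response",
                         "no motor response"))  # 3: lowest possible score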
In 2013, an experimental procedure was developed to measure degrees of
consciousness: the brain is stimulated with a magnetic pulse, the resulting waves of
electrical activity are recorded, and a consciousness score is computed from the
complexity of that activity.[114]
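The text does not spell out the complexity score, but work of this kind (the
perturbational complexity index of Casali et al.) binarizes the evoked activity and
measures how far it can be compressed, typically with Lempel-Ziv complexity. The
sketch below is a minimal, illustrative version of that idea for a single binary
string; the normalization and the example sequences are assumptions, not the published
algorithm.

import math
import random

def lempel_ziv_complexity(s):
    """Number of phrases in a Lempel-Ziv (1976-style) parsing of s: a new
    phrase ends as soon as it is no longer a substring of what came before."""
    i, count, n = 0, 0, len(s)
    while i < n:
        length = 1
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        count += 1
        i += length
    return count

def normalized_complexity(bits):
    """Crude normalization: ratio of the phrase count to the value expected for
    a random binary sequence of the same length (roughly n / log2(n))."""
    n = len(bits)
    return lempel_ziv_complexity(bits) * math.log2(n) / n

random.seed(0)
regular = "01" * 32
noisy = "".join(random.choice("01") for _ in range(64))
print(normalized_complexity(regular))  # low: the regular pattern compresses well
print(normalized_complexity(noisy))    # higher: the irregular pattern does not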
Disorders of consciousness
Medical conditions that inhibit consciousness are considered disorders of
consciousness.[115] This category generally includes minimally conscious state and
persistent vegetative state, but sometimes also includes the less severe locked-in
syndrome and more severe chronic coma.[115][116] Differential diagnosis of these
disorders is an active area of biomedical research.[117][118][119] Finally, brain death
results in an irreversible disruption of consciousness.[115] While other conditions may
cause a moderate deterioration (e.g., dementia and delirium) or transient interruption
(e.g., grand mal and petit mal seizures) of consciousness, they are not included in this
category.
Locked-in syndrome: The patient has awareness, sleep-wake cycles, and meaningful
behavior (viz., eye-movement), but is isolated due to quadriplegia and pseudobulbar palsy.
Minimally conscious state: The patient has intermittent periods of awareness and
wakefulness and displays some meaningful behavior.
Persistent vegetative state: The patient has sleep-wake cycles, but lacks awareness and
only displays reflexive and non-purposeful behavior.
Chronic coma: The patient lacks awareness and sleep-wake cycles and only displays
reflexive behavior.
Brain death: The patient lacks awareness, sleep-wake cycles, and brain-mediated reflexive
behavior.
Anosognosia
One of the most striking disorders of consciousness goes by the name anosognosia, a
Greek-derived term meaning unawareness of disease. This is a condition in which
patients are disabled in some way, most commonly as a result of a stroke, but either
misunderstand the nature of the problem or deny that there is anything wrong with
them.[120] The most frequently occurring form is seen in people who have experienced
a stroke damaging the parietal lobe in the right hemisphere of the brain, giving rise to a
syndrome known as hemispatial neglect, characterized by an inability to direct action or
attention toward objects located to the left with respect to their bodies. Patients with
hemispatial neglect are often paralyzed on the left side of the body, but sometimes
deny being unable to move. When questioned about the obvious problem, the patient
may avoid giving a direct answer, or may give an explanation that doesn't make sense.
Patients with hemispatial neglect may also fail to recognize paralyzed parts of their
bodies: one frequently mentioned case is of a man who repeatedly tried to throw his
own paralyzed leg out of the bed he was lying in, and when asked what he was
doing, complained that somebody had put a dead leg into the bed with him. An even
more striking type of anosognosia is Anton–Babinski syndrome, a rarely occurring
condition in which patients become blind but claim to be able to see normally, and
persist in this claim in spite of all evidence to the contrary.[121]
Stream of consciousness
William James is usually credited with popularizing the idea that human consciousness
flows like a stream, in his Principles of Psychology of 1890. According to James, the
"stream of thought" is governed by five characteristics: "(1) Every thought tends to be
part of a personal consciousness. (2) Within each personal consciousness thought is
always changing. (3) Within each personal consciousness thought is sensibly
continuous. (4) It always appears to deal with objects independent of itself. (5) It is
interested in some parts of these objects to the exclusion of others".[122] A similar
concept appears in Buddhist philosophy, expressed by the Sanskrit term Citta-saṃtāna, which is usually translated as mindstream or "mental continuum". In the
Buddhist view, though, the "mindstream" is viewed primarily as a source of noise that
distracts attention from a changeless underlying reality.[123]
In the west, the primary impact of the idea has been on literature rather than science:
stream of consciousness as a narrative mode means writing in a way that attempts to
portray the moment-to-moment thoughts and experiences of a character. This
technique perhaps had its beginnings in the monologues of Shakespeare's plays, and
reached its fullest development in the novels of James Joyce and Virginia Woolf,
although it has also been used by many other noted writers.[124]
Here for example is a passage from Joyce's Ulysses about the thoughts of Molly
Bloom:
Yes because he never did a thing like that before as ask to get his breakfast in
bed with a couple of eggs since the City Arms hotel when he used to be
pretending to be laid up with a sick voice doing his highness to make himself
interesting for that old faggot Mrs Riordan that he thought he had a great leg of
and she never left us a farthing all for masses for herself and her soul greatest
miser ever was actually afraid to lay out 4d for her methylated spirit telling me
all her ailments she had too much old chat in her about politics and earthquakes
and the end of the world let us have a bit of fun first God help the world if all the
women were her sort down on bathingsuits and lownecks of course nobody
wanted her to wear them I suppose she was pious because no man would look
at her twice I hope Ill never be like her a wonder she didnt want us to cover our
faces but she was a welleducated woman certainly and her gabby talk about Mr
Riordan here and Mr Riordan there I suppose he was glad to get shut of
her.[125]
Spiritual approaches
To most philosophers, the word "consciousness" connotes the relationship between the
mind and the world. To writers on spiritual or religious topics, it frequently connotes the
relationship between the mind and God, or the relationship between the mind and
deeper truths that are thought to be more fundamental than the physical world. Krishna
consciousness, for example, is a term used to mean an intimate linkage between the
mind of a worshipper and the god Krishna.[126] The mystical psychiatrist Richard
Maurice Bucke distinguished between three types of consciousness: Simple
Consciousness, awareness of the body, possessed by many animals; Self
Consciousness, awareness of being aware, possessed only by humans; and Cosmic
Consciousness, awareness of the life and order of the universe, possessed only by
humans who are enlightened.[127] Many more examples could be given. The most
thorough account of the spiritual approach may be Ken Wilber's book The Spectrum of
Consciousness, a comparison of western and eastern ways of thinking about the mind.
Wilber described consciousness as a spectrum with ordinary awareness at one end,
and more profound types of awareness at higher levels.[128]
Neural correlates of consciousness
The neural correlates of consciousness (NCC) constitute the minimal set of neuronal
events and mechanisms sufficient for a specific conscious percept.[2] Neuroscientists
use empirical approaches to discover neural correlates of subjective phenomena.[3]
The set should be minimal because, if the brain is sufficient to give rise to any given
conscious experience, the question is which of its components is necessary to produce
it.
Contents
1 Neurobiological approach to consciousness
2 Level of arousal and content of consciousness
3 The neuronal basis of perception
4 Global disorders of consciousness
5 Forward versus feedback projections
Neurobiological approach to consciousness
A science of consciousness must explain the exact relationship between subjective
mental states and brain states, the nature of the relationship between the conscious
mind and the electro-chemical interactions in the body. Progress in neurophilosophy
has come from focusing on the body rather than the mind. In this context the neuronal
correlates of consciousness may be viewed as its causes, and consciousness may be
thought of as a state-dependent property of some undefined complex, adaptive, and
highly interconnected biological system.[4]
Discovering and characterizing neural correlates does not offer a theory of
consciousness that can explain how particular systems experience anything at all, or
how they are associated with consciousness, the so-called hard problem of
consciousness,[5] but understanding the NCC may be a step toward such a theory.
Most neurobiologists assume that the variables giving rise to consciousness are to be
found at the neuronal level, governed by classical physics, though a few scholars have
proposed theories of quantum consciousness based on quantum mechanics.[6]
There is great apparent redundancy and parallelism in neural networks, so while
activity in one group of neurons may correlate with a percept in one case, a different
population might mediate a related percept if the former population is lost or
inactivated. It may be that every phenomenal, subjective state has a neural correlate.
Where the NCC can be induced artificially the subject will experience the associated
percept, while perturbing or inactivating the region of correlation for a specific percept
will affect the percept or cause it to disappear, giving a cause-effect relationship from
the neural region to the nature of the percept.
What characterizes the NCC? What are the commonalities between the NCC for
seeing and for hearing? Will the NCC involve all pyramidal neurons in cortex at any
given point in time? Or only a subset of long-range projection cells in frontal lobes that
project to the sensory cortices in the back? Neurons that fire in a rhythmic manner?
Neurons that fire in a synchronous manner? These are some of the proposals that
have been advanced over the years.[7]
The growing ability of neuroscientists to manipulate neurons using methods from
molecular biology in combination with optical tools (e.g., Adamantidis et al. 2007)
depends on the simultaneous development of appropriate behavioral assays and
model organisms amenable to large-scale genomic analysis and manipulation. It is the
combination of such fine-grained neuronal analysis in animals with ever more sensitive
psychophysical and brain imaging techniques in humans, complemented by the
development of a robust theoretical predictive framework, that will hopefully lead to a
rational understanding of consciousness, one of the central mysteries of life.
Level of arousal and content of consciousness
There are two common but distinct dimensions of the term consciousness,[8] one
involving arousal and states of consciousness and the other involving content of
consciousness and conscious states. To be conscious of anything the brain must be in
a relatively high state of arousal (sometimes called vigilance), whether in wakefulness
or REM sleep, vividly experienced in dreams although usually not remembered. Brain
arousal level fluctuates in a circadian rhythm but may be influenced by lack of sleep,
drugs and alcohol, physical exertion, etc. Arousal can be measured behaviorally by the
signal amplitude that triggers some criterion reaction (for instance, the sound level
necessary to evoke an eye movement or a head turn toward the sound source).
Clinicians use scoring systems such as the Glasgow Coma Scale to assess the level of
arousal in patients.
High arousal states are associated with conscious states that have specific content,
seeing, hearing, remembering, planning or fantasizing about something. Different
levels or states of consciousness are associated with different kinds of conscious
experiences. The "awake" state is quite different from the "dreaming" state (for
instance, the latter has little or no self-reflection) and from the state of deep sleep. In all
three cases the basic physiology of the brain is affected, as it also is in altered states of
consciousness, for instance after taking drugs or during meditation when conscious
perception and insight may be enhanced compared to the normal waking state.
Clinicians talk about impaired states of consciousness as in "the comatose state", "the
persistent vegetative state" (PVS), and "the minimally conscious state" (MCS). Here,
"state" refers to different "amounts" of external/physical consciousness, from a total
absence in coma, persistent vegetative state and general anesthesia, to a fluctuating
and limited form of conscious sensation in a minimally conscious state such as sleep
walking or during a complex partial epileptic seizure.[9] The repertoire of conscious
states or experiences accessible to a patient in a minimally conscious state is
comparatively limited. In brain death there is no arousal, but it is unknown whether the
subjectivity of experience has been interrupted, rather than its observable link with the
organism.
The potential richness of conscious experience appears to increase from deep sleep to
drowsiness to full wakefulness, as might be quantified using notions from complexity
theory that incorporate both the dimensionality as well as the granularity of conscious
experience to give an integrated-information-theoretical account of consciousness.[10]
As behavioral arousal increases so does the range and complexity of possible
behavior. Yet in REM sleep there is a characteristic atonia, low motor arousal and the
person is difficult to wake up, but there is still high metabolic and electric brain activity
and vivid perception.
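One concrete instance of the kind of quantity alluded to above is the integration
(multi-information) measure of Tononi, Sporns, and Edelman: the sum of the entropies of
a system's parts minus the entropy of the whole, which is zero for independent elements
and grows with their statistical interdependence. The sketch below, which assumes NumPy
and a small joint distribution over binary variables, is meant purely as an
illustration; it is far simpler than a full integrated-information (Φ) calculation.

import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def integration(joint):
    """Integration I(X) = sum_i H(X_i) - H(X) for a joint distribution over
    binary variables, supplied as an array of shape (2, 2, ..., 2)."""
    joint = np.asarray(joint, dtype=float)
    joint = joint / joint.sum()
    h_whole = entropy(joint.ravel())
    h_parts = sum(
        entropy(joint.sum(axis=tuple(j for j in range(joint.ndim) if j != i)))
        for i in range(joint.ndim)
    )
    return h_parts - h_whole

print(integration(np.full((2, 2), 0.25)))               # 0.0: two independent coins
print(integration(np.array([[0.5, 0.0], [0.0, 0.5]])))  # 1.0: perfectly coupled coins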
Many nuclei with distinct chemical signatures in the thalamus, midbrain and pons must
function for a subject to be in a sufficient state of brain arousal to experience anything
at all. These nuclei therefore belong to the enabling factors for consciousness.
Conversely it is likely that the specific content of any particular conscious sensation is
mediated by particular neurons in cortex and their associated satellite structures,
including the amygdala, thalamus, claustrum and the basal ganglia.
The neuronal basis of perception
The possibility of precisely manipulating visual percepts in time and space has made
vision a preferred modality in the quest for the NCC. Psychologists have perfected a
number of techniques – masking, binocular rivalry, continuous flash suppression,
motion induced blindness, change blindness, inattentional blindness – in which the
seemingly simple and unambiguous relationship between a physical stimulus in the
world and its associated percept in the privacy of the subject's mind is disrupted.[11] In
particular a stimulus can be perceptually suppressed for seconds or even minutes at a
time: the image is projected into one of the observer's eyes but is invisible, not seen. In
this manner the neural mechanisms that respond to the subjective percept rather than
the physical stimulus can be isolated, permitting visual consciousness to be tracked in
the brain. In a perceptual illusion, the physical stimulus remains fixed while the percept
fluctuates. The best known example is the Necker cube whose 12 lines can be
perceived in one of two different ways in depth.
A perceptual illusion that can be
precisely controlled is binocular rivalry.
Here, a small image, e.g., a horizontal
grating, is presented to the left eye,
and another image, e.g., a vertical
grating, is shown to the corresponding
location in the right eye. In spite of the
constant visual stimulus, observers
consciously see the horizontal grating
alternate every few seconds with the
vertical one. The brain does not allow
for the simultaneous perception of
both images.
Logothetis and colleagues[13] recorded a variety of visual cortical
areas in awake macaque monkeys
performing a binocular rivalry task.
Macaque monkeys can be trained to
report whether they see the left or the right image. The distribution of the switching
times and the way in which changing the contrast in one eye affects these leaves little
doubt that monkeys and humans experience the same basic phenomenon. In the
primary visual cortex (V1) only a small fraction of cells weakly modulated their
response as a function of the percept of the monkey while most cells responded to one
or the other retinal stimulus with little regard to what the animal perceived at the time.
But in a high-level cortical area such as the inferior temporal cortex along the ventral
stream almost all neurons responded only to the perceptually dominant stimulus, so
that a "face" cell only fired when the animal indicated that it saw the face and not the
pattern presented to the other eye. This implies that NCC involve neurons active in the
inferior temporal cortex: it is likely that specific reciprocal actions of neurons in the
inferior temporal and parts of the prefrontal cortex are necessary.
A number of fMRI experiments that have exploited binocular rivalry and related illusions
to identify the hemodynamic activity underlying visual consciousness in humans
demonstrate quite conclusively that BOLD activity in the upper stages of the ventral
pathway (e.g., the fusiform face area and the parahippocampal place area) as well as
in early regions, including V1 and the lateral geniculate nucleus (LGN), follow the
percept and not the retinal stimulus.[14] Further, a number of fMRI experiments[15][16]
suggest V1 is necessary but not sufficient for visual consciousness.[17]
In a related perceptual phenomenon, flash suppression, the percept associated with an
image projected into one eye is suppressed by flashing another image into the other
eye while the original image remains. Its methodological advantage over binocular
rivalry is that the timing of the perceptual transition is determined by an external trigger
rather than by an internal event. The majority of cells in the inferior temporal cortex and
the superior temporal sulcus of monkeys trained to report their percept during flash
suppression follow the animal's percept: when the cell's preferred stimulus is
perceived, the cell responds. If the picture is still present on the retina but is
perceptually suppressed, the cell falls silent, even though primary visual cortex neurons
fire.[18][19] Single-neuron recordings in the medial temporal lobe of epilepsy patients
during flash suppression likewise demonstrate abolishment of response when the
preferred stimulus is present but perceptually masked.[20]
Global disorders of consciousness
Given the absence of any accepted criterion of the minimal neuronal correlates
necessary for consciousness, the distinction between
a persistently vegetative patient, who
shows regular sleep-wave transitions
and may be able to move or smile,
and a minimally conscious patient who
can communicate (on occasion) in a
meaningful manner (for instance, by
differential eye movements) and who
shows some signs of consciousness,
is often difficult. In global anesthesia
the patient should not experience
psychological trauma but the level of
arousal should be compatible with
clinical exigencies.
Blood-oxygen-level-dependent fMRI
(BOLD fMRI) has demonstrated normal patterns of brain activity in a patient in a
vegetative state following a severe traumatic brain injury when asked to imagine
playing tennis or visiting rooms in his/her house.[22] Differential brain imaging of
patients with such global disturbances of consciousness (including akinetic mutism)
reveals that dysfunction in a widespread cortical network including medial and lateral
prefrontal and parietal associative areas is associated with a global loss of
awareness.[23] Impaired consciousness in epileptic seizures of the temporal lobe was
likewise accompanied by a decrease in cerebral blood flow in frontal and parietal
association cortex and an increase in midline structures such as the mediodorsal
thalamus.[24]
Relatively local bilateral injuries to midline (paramedian) subcortical structures can also
cause a complete loss of awareness. These structures therefore
enable and control brain arousal (as determined by metabolic or electrical activity) and
are necessary neural correlates. One such example is the heterogeneous collection of
more than two dozen nuclei on each side of the upper brainstem (pons, midbrain and in
the posterior hypothalamus), collectively referred to as the reticular activating system
(RAS). Their axons project widely throughout the brain. These nuclei – three-dimensional
collections of neurons with their own cyto-architecture and neurochemical identity –
release distinct neuromodulators such as acetylcholine, noradrenaline/norepinephrine,
serotonin, histamine and orexin/hypocretin to control the
excitability of the thalamus and forebrain, mediating alternation between wakefulness
and sleep as well as general level of behavioral and brain arousal. After such trauma,
however, eventually the excitability of the thalamus and forebrain can recover and
consciousness can return.[25] Another enabling factor for consciousness are the five or
more intralaminar nuclei (ILN) of the thalamus. These receive input from many
brainstem nuclei and project strongly, directly to the basal ganglia and, in a more
distributed manner, into layer I of much of the neocortex. Comparatively small (1 cm³
or less) bilateral lesions in the thalamic ILN completely knock out all awareness.[26]
Forward versus feedback projections
Many actions in response to sensory inputs are rapid, transient, stereotyped, and
unconscious.[27] They could be thought of as cortical reflexes and are characterized by
rapid and somewhat stereotyped responses that can take the form of rather complex
automated behavior as seen, e.g., in complex partial epileptic seizures. These
automated responses, sometimes called zombie behaviors,[28] could be contrasted with
a slower, all-purpose conscious mode that deals more slowly with broader, less
stereotyped aspects of the sensory inputs (or a reflection of these, as in imagery) and
takes time to decide on appropriate thoughts and responses. Without such a
consciousness mode, a vast number of different zombie modes would be required to
react to unusual events.
A feature that distinguishes humans from most animals is that we are not born with an
extensive repertoire of behavioral programs that would enable us to survive on our own
("physiological prematurity"). To compensate for this, we have an unmatched ability to
learn, i.e., to consciously acquire such programs by imitation or exploration. Once
consciously acquired and sufficiently exercised, these programs can become
automated to the extent that their execution happens beyond the realms of our
awareness. Take, as an example, the incredible fine motor skills exerted in playing a
Beethoven piano sonata or the sensorimotor coordination required to ride a motorcycle
along a curvy mountain road. Such complex behaviors are possible only because a
sufficient number of the subprograms involved can be executed with minimal or even
suspended conscious control. In fact, the conscious system may actually interfere
somewhat with these automated programs.[29]
From an evolutionary standpoint it clearly makes sense to have both automated
behavioral programs that can be executed rapidly in a stereotyped and automated
manner, and a slightly slower system that allows time for thinking and planning more
complex behavior. This latter aspect may be one of the principal functions of
consciousness.
It seems possible that visual zombie modes in the cortex mainly use the dorsal stream
in the parietal region.[27] However, parietal activity can affect consciousness by
producing attentional effects on the ventral stream, at least under some circumstances.
The conscious mode for vision depends largely on the early visual areas (beyond V1)
and especially on the ventral stream.
Seemingly complex visual processing (such as detecting animals in natural, cluttered
scenes) can be accomplished by the human cortex within 130–150 ms,[30][31] far too
brief for eye movements and conscious perception to occur. Furthermore, reflexes
such as the oculovestibular reflex take place at even more rapid time-scales. It is quite
plausible that such behaviors are mediated by a purely feed-forward moving wave of
spiking activity that passes from the retina through V1, into V4, IT and prefrontal cortex,
until it affects motor neurons in the spinal cord that control the finger press (as in a
typical laboratory experiment). The hypothesis that the basic processing of information
is feedforward is supported most directly by the short times (approx. 100 ms) required
for a selective response to appear in IT cells.
Conversely, conscious perception is believed to require more sustained, reverberatory
neural activity, most likely via global feedback from frontal regions of neocortex back to
sensory cortical areas[17] that builds up over time until it exceeds a critical threshold.
At this point, the sustained neural activity rapidly propagates to parietal, prefrontal and
anterior cingulate cortical regions, thalamus, claustrum and related structures that
support short-term memory, multi-modality integration, planning, speech, and other
processes intimately related to consciousness. Competition prevents more than one or a very small number of percepts from being simultaneously and actively represented. This is
the core hypothesis of the global workspace theory of consciousness.[32][33]
In brief, while rapid but transient neural activity in the thalamo-cortical system can
mediate complex behavior without conscious sensation, it is surmised that
consciousness requires sustained but well-organized neural activity dependent on
long-range cortico-cortical feedback.
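As a rough illustration of this build-up-to-threshold idea, the toy sketch below (in Python) accumulates input over time and reports when a critical threshold is crossed; the function name, parameter values and dynamics are invented for illustration and are not taken from the cited global workspace models.

    def workspace_ignition(inputs, gain=0.3, decay=0.85, threshold=1.0):
        """Toy leaky accumulator: sustained input builds activity until it
        crosses a threshold, standing in for 'ignition' and broadcast of a
        single percept; transient input decays before reaching it.
        All parameter values are illustrative only."""
        activity = 0.0
        for t, x in enumerate(inputs):
            activity = decay * activity + gain * x  # leaky integration
            if activity > threshold:
                return t  # time step at which sustained activity "ignites"
        return None  # activity decayed away without reaching threshold

    print(workspace_ignition([1, 1, 0, 0, 0, 0, 0, 0]))        # None (transient input)
    print(workspace_ignition([1, 1, 1, 1, 1, 1, 1, 1, 1, 1]))  # 4 (sustained input ignites)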
Altered level of consciousness
An altered level of consciousness is any
measure of arousal other than normal. Level
of consciousness (LOC) is a measurement of a person's arousability and responsiveness to stimuli from the
environment.[1] A mildly depressed level of
consciousness or alertness may be classed
as lethargy; someone in this state can be
aroused with little difficulty.[1] People who
are obtunded have a more depressed level
of consciousness and cannot be fully
aroused.[1][2] Those who are not able to be
aroused from a sleep-like state are said to
be stuporous.[1][2] Coma is the inability to
make any purposeful response.[1][2] Scales
such as the Glasgow coma scale have been
designed to measure the level of
consciousness.
An altered level of consciousness can result
from a variety of factors, including
alterations in the chemical environment of
the brain (e.g. exposure to poisons or
intoxicants), insufficient oxygen or blood flow in the brain, and excessive pressure
within the skull. Prolonged unconsciousness is understood to be a sign of a medical
emergency.[3] A deficit in the level of consciousness suggests that both of the cerebral
hemispheres or the reticular activating system have been injured.[4] A decreased level
of consciousness correlates to increased morbidity (disability) and mortality (death).[5]
Thus it is a valuable measure of a patient's medical and neurological status. In fact,
some sources consider level of consciousness to be one of the vital signs.[3][6]
Contents
1 Definition
1.1 Glasgow Coma Scale
1.2 Others
2 Differential diagnosis
3 Pathophysiology
4 Diagnostic approach
5 Treatment
Definition
Scales and terms to classify the levels of consciousness differ, but in general, reduction
in response to stimuli indicates an altered level of consciousness:
Levels of consciousness (level, summary after Kruse,[2] and description):
-Conscious (normal): Assessment of LOC involves checking orientation: people who are able promptly and spontaneously to state their name, location, and the date or time are said to be oriented to self, place, and time, or "oriented X3".[7] A normal sleep stage from which a person is easily awakened is also considered a normal level of consciousness.[8] "Clouding of consciousness" is a term for a mild alteration of consciousness with alterations in attention and wakefulness.[8]
-Confused (disoriented; impaired thinking and responses): People who do not respond quickly with information about their name, location, and the time are considered "obtuse" or "confused".[7] A confused person may be bewildered, disoriented, and have difficulty following instructions.[8] The person may have slow thinking and possible memory loss. This could be caused by sleep deprivation, malnutrition, allergies, environmental pollution, drugs (prescription and nonprescription), and infection.
-Delirious (disoriented; restlessness, hallucinations, sometimes delusions): Some scales place "delirious" below the confused level; a person at this level may be restless or agitated and exhibit a marked deficit in attention.[2]
-Somnolent (sleepy): A somnolent person shows excessive drowsiness and responds to stimuli only with incoherent mumbles or disorganized movements.[7]
-Obtunded (decreased alertness; slowed psychomotor responses): In obtundation, a person has a decreased interest in their surroundings, slowed responses, and sleepiness.[8]
-Stuporous (sleep-like state, not unconscious; little or no spontaneous activity): People with an even lower level of consciousness, stupor, only respond by grimacing or drawing away from painful stimuli.[7]
-Comatose (cannot be aroused; no response to stimuli): Comatose people do not even make this response to stimuli, have no corneal or gag reflex, and may have no pupillary response to light.[7]
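Since the list above describes an ordered scale of decreasing responsiveness, it can be summarized, purely for illustration, as an ordered enumeration; the sketch below (Python) uses the level names from the list and is not drawn from any clinical source.

    from enum import IntEnum

    class ConsciousnessLevel(IntEnum):
        """Levels from the list above, ordered from least to most responsive."""
        COMATOSE = 0
        STUPOROUS = 1
        OBTUNDED = 2
        SOMNOLENT = 3
        DELIRIOUS = 4
        CONFUSED = 5
        CONSCIOUS = 6

    def is_altered(level: ConsciousnessLevel) -> bool:
        # Any reduction in responsiveness relative to normal counts as an altered LOC.
        return level < ConsciousnessLevel.CONSCIOUS

    print(is_altered(ConsciousnessLevel.OBTUNDED))   # True
    print(is_altered(ConsciousnessLevel.CONSCIOUS))  # False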
Glasgow Coma Scale
The most commonly used tool for measuring LOC objectively is the Glasgow Coma
Scale (GCS). It has come into almost universal use for assessing people with brain
injury,[2] or an altered level of consciousness. Verbal, motor, and eye-opening
responses to stimuli are measured, scored, and added into a final score on a scale of
3–15, with a lower score being a more decreased level of consciousness.
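As a minimal sketch of the scoring just described, the Python function below sums the three component scores into a 3–15 total; the standard component ranges (eye 1–4, verbal 1–5, motor 1–6) are assumed here rather than stated in the text, and the function name is invented.

    def glasgow_coma_score(eye: int, verbal: int, motor: int) -> int:
        """Sum the eye-opening, verbal, and motor scores into a GCS total (3-15).
        Component ranges (eye 1-4, verbal 1-5, motor 1-6) are assumed, not from the text."""
        if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
            raise ValueError("component score out of range")
        return eye + verbal + motor

    # A fully alert person scores the maximum of 15; lower totals indicate
    # a more decreased level of consciousness.
    print(glasgow_coma_score(eye=4, verbal=5, motor=6))  # 15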
Others
The AVPU scale is another means of measuring LOC: people are assessed to
determine whether they are alert, responsive to verbal stimuli, responsive to painful
stimuli, or unresponsive.[3][6] To determine responsiveness to voice, a caregiver
speaks to, or, failing that, yells at the person.[3] Responsiveness to pain is determined
with a mild painful stimulus such as a pinch; moaning or withdrawal from the stimulus is
considered a response to pain.[3] The ACDU scale, like AVPU, is easier to use than
the GCS and produces similarly accurate results.[9] Using ACDU, a patient is assessed
for alertness, confusion, drowsiness, and unresponsiveness.[9]
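The AVPU assessment described above is an ordered check, which the following Python sketch makes explicit; the function and argument names are illustrative and not from any clinical library.

    def assess_avpu(is_alert: bool, responds_to_voice: bool, responds_to_pain: bool) -> str:
        """Return the first AVPU category that applies, checked in order:
        alert, then response to voice, then response to a mild painful
        stimulus such as a pinch, otherwise unresponsive."""
        if is_alert:
            return "alert"
        if responds_to_voice:
            return "responds to verbal stimuli"
        if responds_to_pain:
            return "responds to painful stimuli"
        return "unresponsive"

    print(assess_avpu(is_alert=False, responds_to_voice=False, responds_to_pain=True))
    # responds to painful stimuli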
The Grady Coma Scale grades people from I to V along a scale of confusion, stupor, deep stupor, abnormal posturing, and coma.[8]
Differential diagnosis
A lowered level of consciousness indicates a deficit in brain function.[4] Level of
consciousness can be lowered when the brain receives insufficient oxygen (as occurs
in hypoxia); insufficient blood (as occurs in shock); or has an alteration in the brain's
chemistry.[3] Metabolic disorders such as diabetes mellitus and uremia can alter
consciousness.[10] Hypo- or hypernatremia (decreased and elevated levels of sodium,
respectively) as well as dehydration can also produce an altered LOC.[11] A pH
outside of the range the brain can tolerate will also alter LOC.[8] Exposure to drugs
(e.g. alcohol) or toxins may also lower LOC,[3] as may a core temperature that is too
high or too low (hyperthermia or hypothermia). Increases in intracranial pressure (the
pressure within the skull) can also cause altered LOC. It can result from traumatic brain
injury such as concussion.[10] Stroke and intracranial hemorrhage are other
causes.[10] Infections of the central nervous system may also be associated with
decreased LOC; for example, an altered LOC is the most common symptom of
encephalitis.[12] Neoplasms within the intracranial cavity can also affect
consciousness,[10] as can epilepsy and post-seizure states.[8] A decreased LOC can
also result from a combination of factors.[10] A concussion, which is a mild traumatic brain injury (MTBI), may result in decreased LOC.
Pathophysiology
Although the neural science behind alertness, wakefulness, and arousal is not fully known, the reticular formation is known to play a role in these.[8] The ascending reticular activating system is a postulated group of neural connections that receives sensory input and projects to the cerebral cortex through the midbrain and thalamus from the reticular formation.[8] Since this system is thought to modulate wakefulness
and sleep, interference with it, such as injury, illness, or metabolic disturbances, could
alter the level of consciousness.[8]
Normally, stupor and coma are produced by interference with the brain stem, such as
can be caused by a lesion or indirect effects, such as brain herniation.[8] Mass lesions
in the brain stem normally cause coma due to their effects on the reticular
formation.[13] Mass lesions that occur above the tentorium cerebelli normally
do not significantly alter the level of consciousness unless they are very large or affect
both cerebral hemispheres.[8]
Diagnostic approach
Assessing LOC involves determining an individual's response to external stimuli.[10]
Speed and accuracy of responses to questions and reactions to stimuli such as touch
and pain are noted.[10] Reflexes, such as the cough and gag reflexes, are also means
of judging LOC.[10] Once the level of consciousness is determined, clinicians seek
clues for the cause of any alteration.[8] Usually the first tests in the emergency department are pulse oximetry to check for hypoxia and serum glucose to rule out hypoglycemia. A urine drug screen may be sent. A head CT is important to rule out bleeding. If meningitis is suspected, a lumbar puncture must be performed. Serum TSH is an important test to order, and vitamin B12 levels should be considered in select groups. Checking serum ammonia is not advised.
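Purely as an illustration of the ordering just described (not clinical guidance), the initial work-up can be written out as a simple checklist; all names in this sketch are invented.

    def initial_workup(meningitis_suspected: bool = False, b12_risk_group: bool = False):
        """Return the initial tests in roughly the order described above; illustrative only."""
        tests = [
            "pulse oximetry (check for hypoxia)",
            "serum glucose (rule out hypoglycemia)",
            "urine drug screen",
            "head CT (rule out bleeding)",
        ]
        if meningitis_suspected:
            tests.append("lumbar puncture")
        tests.append("serum TSH")
        if b12_risk_group:
            tests.append("vitamin B12 level")
        # Note: checking serum ammonia is not advised.
        return tests

    print(initial_workup(meningitis_suspected=True))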
Treatment
Treatment depends on the degree of decrease in consciousness and its underlying
cause. Initial treatment often involves the administration of dextrose if the blood sugar
is low as well as the administration of naloxone and thiamine.
Mind
A mind is the set of cognitive faculties that
enables consciousness, perception, thinking,
judgement, and memory—a characteristic of
humans, but which also may apply to other life
forms.[3][4]
A lengthy tradition of inquiries in philosophy,
religion, psychology and cognitive science has
sought to develop an understanding of what a
mind is and what its distinguishing properties
are. The main question regarding the nature of mind is its relation to the physical brain and nervous system – a question which is often framed as the mind-body problem, which
considers whether mind is somehow separate
from physical existence (dualism and
idealism[5]), deriving from and reducible to
physical phenomena such as neurological
processes (physicalism), or whether the mind
is identical with the brain or some activity of
the brain.[6] Another question concerns which
types of beings are capable of having minds, for example whether mind is exclusive to
humans, possessed also by some or all animals, by all living things, or whether mind
can also be a property of some types of man-made machines.
Whatever its relation to the physical body, it is generally agreed that mind is that which
enables a being to have subjective awareness and intentionality towards their
environment, to perceive and respond to stimuli with some kind of agency, and to have
consciousness, including thinking and feeling.[3][7]
Important philosophers of mind include Plato, Descartes, Leibniz, Kant, Martin
Heidegger, John Searle, Daniel Dennett and many others. The description and definition of mind is also a part of psychology, where psychologists such as Sigmund Freud and
William James have developed influential theories about the nature of the human mind.
In the late 20th and early 21st centuries the field of cognitive science emerged and
developed many varied approaches to the description of mind and its related
phenomena. The possibility of non-human minds is also explored in the field of artificial
intelligence, which works closely in relation with cybernetics and information theory to
understand the ways in which human mental phenomena can be replicated by
machines.
The concept of mind is understood in many different ways by many different cultural
and religious traditions. Some see mind as a property exclusive to humans whereas
others ascribe properties of mind to non-living entities (e.g. panpsychism and animism),
to animals and to deities. Some of the earliest recorded speculations linked mind
(sometimes described as identical with soul or spirit) to theories concerning both life
after death, and cosmological and natural order, for example in the doctrines of
Zoroaster, the Buddha, Plato, Aristotle, and other ancient Greek, Indian and, later,
Islamic and medieval European philosophers.
Contents
1 Etymology
2 Definitions
3 Mental faculties
4 Mental content
4.1 Memetics
5 Relation to the brain
6 Evolutionary history of the human mind
7 Philosophy of mind
7.1 Mind/body perspectives
8 Scientific study
8.1 Neuroscience
8.2 Cognitive Science
8.3 Psychology
9 Mental health
10 Non-human minds
10.1 Animal intelligence
10.2 Artificial intelligence
11 In religion
11.1 Buddhism
11.2 Mortality of the mind
12 In pseudoscience
12.1 Parapsychology
Etymology
The original meaning of Old English gemynd was the faculty of memory, not of thought
in general. Hence call to mind, come to mind, keep in mind, to have mind of, etc. Old
English had other words to express "mind", such as hyge "mind, spirit".
The meaning of "memory" is shared with Old Norse, which has munr. The word is
originally from a PIE verbal root *men-, meaning "to think, remember", whence also
Latin mens "mind", Sanskrit manas "mind" and Greek μένος "mind, courage, anger".
The generalization of mind to include all mental faculties, thought, volition, feeling and
memory, gradually develops over the 14th and 15th centuries.[8]
Definitions
Which attributes make up the mind is much debated. Some psychologists argue that
only the "higher" intellectual functions constitute mind, particularly reason and memory.
In this view the emotions—love, hate, fear, joy—are more primitive or subjective in
nature and should be seen as different from the mind as such. Others argue that
various rational and emotional states cannot be so separated, that they are of the same
nature and origin, and should therefore be considered all part of what we call the mind.
In popular usage mind is frequently synonymous with thought: the private conversation
with ourselves that we carry on "inside our heads." Thus we "make up our minds,"
"change our minds" or are "of two minds" about something. One of the key attributes of
the mind in this sense is that it is a private sphere to which no one but the owner has
access. No one else can "know our mind." They can only interpret what we consciously
or unconsciously communicate.
Mental faculties
Broadly speaking, mental faculties are the
various functions of the mind, or things the
mind can "do".
Thought is a mental act that allows humans
to make sense of things in the world, and to
represent and interpret them in ways that
are significant, or which accord with their
needs, attachments, goals, commitments,
plans, ends, desires, etc. Thinking involves
the symbolic or semiotic mediation of ideas
or data, as when we form concepts, engage
in problem solving, reasoning and making
decisions. Words that refer to similar concepts and processes include deliberation, cognition, ideation, discourse and imagination.
Thinking is sometimes described as a
"higher" cognitive function and the analysis
of thinking processes is a part of cognitive
psychology. It is also deeply connected with
our capacity to make and use tools; to
understand cause and effect; to recognize
patterns of significance; to comprehend and
disclose unique contexts of experience or activity; and to respond to the world in a
meaningful way.
Memory is the ability to preserve, retain, and subsequently recall, knowledge,
information or experience. Although memory has traditionally been a persistent theme
in philosophy, the late nineteenth and early twentieth centuries also saw the study of
memory emerge as a subject of inquiry within the paradigms of cognitive psychology.
In recent decades, it has become one of the pillars of a new branch of science called
cognitive neuroscience, a marriage between cognitive psychology and neuroscience.
Imagination is the activity of generating or evoking novel situations, images, ideas or
other qualia in the mind. It is a characteristically subjective activity, rather than a direct
or passive experience. The term is technically used in psychology for the process of
reviving in the mind percepts of objects formerly given in sense perception. Since this
use of the term conflicts with that of ordinary language, some psychologists have
preferred to describe this process as "imaging" or "imagery" or to speak of it as
"reproductive" as opposed to "productive" or "constructive" imagination. Things that are
imagined are said to be seen in the "mind's eye". Among the many practical functions
of imagination are the ability to project possible futures (or histories), to "see" things
from another's perspective, and to change the way something is perceived, including to
make decisions to respond to, or enact, what is imagined.
Consciousness in mammals (this includes humans) is an aspect of the mind generally
thought to comprise qualities such as subjectivity, sentience, and the ability to perceive
the relationship between oneself and one's environment. It is a subject of much
research in philosophy of mind, psychology, neuroscience, and cognitive science.
Some philosophers divide consciousness into phenomenal consciousness, which is
subjective experience itself, and access consciousness, which refers to the global
availability of information to processing systems in the brain.[9] Phenomenal
consciousness has many different experienced qualities, often referred to as qualia.
Phenomenal consciousness is usually consciousness of something or about
something, a property known as intentionality in philosophy of mind.
Mental content
Mental contents are those items that are thought of as being "in" the mind, and capable
of being formed and manipulated by mental processes and faculties. Examples include
thoughts, concepts, memories, emotions, percepts and intentions. Philosophical
theories of mental content include internalism, externalism, representationalism and
intentionality.
Memetics
Memetics is a theory of mental content based on an analogy with Darwinian evolution,
which was originated by Richard Dawkins and Douglas Hofstadter in the 1980s. It is an
evolutionary model of cultural information transfer. A meme, analogous to a gene, is an
idea, belief, pattern of behaviour (etc.) which is "hosted" in one or more individual
minds, and which can reproduce itself from mind to mind. Thus what would otherwise
be regarded as one individual influencing another to adopt a belief is seen memetically
as a meme reproducing itself. As with genetics, particularly under Dawkins's
interpretation, a meme's success may be due to its contribution to the effectiveness of its
host (i.e., the meme is a useful, beneficial idea), or may be "selfish", in which case it
could be considered a "virus of the mind".
Relation to the brain
In animals, the brain, or encephalon (Greek for "in the head"), is the control center of
the central nervous system, responsible for thought. In most animals, the brain is
located in the head, protected by the skull and close to the primary sensory apparatus
of vision, hearing, equilibrioception, taste and olfaction. While all vertebrates have a
brain, most invertebrates have either a centralized brain or collections of individual
ganglia. Primitive animals such as sponges do not have a brain at all. Brains can be
extremely complex. For example, the human brain contains more than 100 billion
neurons, each linked to as many as 10,000 others.[10][11]
Understanding the relationship between the brain and the mind – the mind-body problem, one of the central issues in the history of philosophy – is a challenging problem both
philosophically and scientifically.[12] There are three major philosophical schools of
thought concerning the answer: dualism, materialism, and idealism. Dualism holds that
the mind exists independently of the brain;[13] materialism holds that mental
phenomena are identical to neuronal phenomena;[14] and idealism holds that only
mental phenomena exist.[14]
Through most of history many philosophers found it inconceivable that cognition could
be implemented by a physical substance such as brain tissue (that is, neurons and
synapses).[15] Descartes, who thought extensively about mind-brain relationships,
found it possible to explain reflexes and other simple behaviors in mechanistic terms,
although he did not believe that complex thought, and language in particular, could be
explained by reference to the physical brain alone.[16]
The most straightforward scientific evidence that there is a strong relationship between
the physical brain matter and the mind is the impact physical alterations to the brain
have on the mind, such as with traumatic brain injury and psychoactive drug use.[17]
Philosopher Patricia Churchland notes that this drug-mind interaction indicates an
intimate connection between the brain and the mind.[18]
In addition to the philosophical questions, the relationship between mind and brain
involves a number of scientific questions, including understanding the relationship
between mental activity and brain activity, the exact mechanisms by which drugs
influence cognition, and the neural correlates of consciousness.
Evolutionary history of the human mind
The evolution of human intelligence refers to a set of theories that attempt to explain
how human intelligence has evolved. The question is closely tied to the evolution of the
human brain, and to the emergence of human language.
The timeline of human evolution spans some 7 million years, from the separation of the
Pan genus until the emergence of behavioral modernity by 50,000 years ago. Of this
timeline, the first 3 million years concern Sahelanthropus, the following 2 million
concern Australopithecus, while the final 2 million span the history of actual human
species (the Paleolithic).
Many traits of human intelligence, such as empathy, theory of mind, mourning, ritual,
and the use of symbols and tools, are already apparent in great apes, although with less sophistication than in humans.
There is a debate between supporters of the idea of a sudden emergence of
intelligence, or "Great leap forward" and those of a gradual or continuum hypothesis.
Theories of the evolution of intelligence include:
-Robin Dunbar's social brain hypothesis[19]
-Geoffrey Miller's sexual selection hypothesis[20]
-The ecological dominance-social competition (EDSC)[21] explained by Mark V.
Flinn, David C. Geary and Carol V. Ward based mainly on work by Richard D.
Alexander.
-The idea of intelligence as a signal of good health and resistance to disease.
-The Group selection theory contends that organism characteristics that provide
benefits to a group (clan, tribe, or larger population) can evolve despite
individual disadvantages such as those cited above.
-The idea that intelligence is connected with nutrition, and thereby with status.[22] A higher IQ could be a signal that an individual comes from and lives
in a physical and social environment where nutrition levels are high, and vice
versa.
Philosophy of mind
Philosophy of mind is the branch of philosophy that studies the nature of the mind,
mental events, mental functions, mental properties, consciousness and their
relationship to the physical body. The mind-body problem, i.e. the relationship of the
mind to the body, is commonly seen as the central issue in philosophy of mind,
although there are other issues concerning the nature of the mind that do not involve its
relation to the physical body.[23] José Manuel Rodriguez Delgado writes, "In present
popular usage, soul and mind are not clearly differentiated and some people, more or
less consciously, still feel that the soul, and perhaps the mind, may enter or leave the
body as independent entities."[24]
Dualism and monism are the two major schools of thought that attempt to resolve the
mind-body problem. Dualism is the position that mind and body are in some way
separate from each other. It can be traced back to Plato,[25] Aristotle[26][27][28] and
the Samkhya and Yoga schools of Hindu philosophy,[29] but it was most precisely
formulated by René Descartes in the 17th century.[30] Substance dualists argue that
the mind is an independently existing substance, whereas property dualists maintain
that the mind is a group of independent properties that emerge from and cannot be
reduced to the brain, but that it is not a distinct substance.[31]
The 20th century philosopher Martin Heidegger suggested that subjective experience
and activity (i.e. the "mind") cannot be made sense of in terms of Cartesian
"substances" that bear "properties" at all (whether the mind itself is thought of as a
distinct, separate kind of substance or not). This is because the nature of subjective,
qualitative experience is incoherent in terms of – or semantically incommensurable with
the concept of – substances that bear properties. This is a fundamentally ontological
argument.[32]
The philosopher of cognitive science Daniel Dennett, for example, argues that there is
no such thing as a narrative center called the "mind", but that instead there is simply a
collection of sensory inputs and outputs: different kinds of "software" running in
parallel.[33] Psychologist B.F. Skinner argued that the mind is an explanatory fiction
that diverts attention from environmental causes of behavior;[34] he considered the
mind a "black box" and thought that mental processes may be better conceived of as
forms of covert verbal behavior.[35][36]
Mind/body perspectives
Monism is the position that mind and body are not physiologically and ontologically
distinct kinds of entities. This view was first advocated in Western Philosophy by
Parmenides in the 5th Century BC and was later espoused by the 17th Century
rationalist Baruch Spinoza.[37] According to Spinoza's dual-aspect theory, mind and
body are two aspects of an underlying reality which he variously described as "Nature"
or "God".
-Physicalists argue that only the entities postulated by physical theory exist, and
that the mind will eventually be explained in terms of these entities as physical
theory continues to evolve.
-Idealists maintain that the mind is all that exists and that the external world is
either mental itself, or an illusion created by the mind.
-Neutral monists adhere to the position that perceived things in the world can be
regarded as either physical or mental depending on whether one is interested in
their relationship to other things in the world or their relationship to the
perceiver. For example, a red spot on a wall is physical in its dependence on
the wall and the pigment of which it is made, but it is mental in so far as its
perceived redness depends on the workings of the visual system. Unlike dual-aspect theory, neutral monism does not posit a more fundamental substance of
which mind and body are aspects.
The most common monisms in the 20th and 21st centuries have all been variations of
physicalism; these positions include behaviorism, the type identity theory, anomalous
monism and functionalism.[38]
Many modern philosophers of mind adopt either a reductive or non-reductive
physicalist position, maintaining in their different ways that the mind is not something
separate from the body.[38] These approaches have been particularly influential in the
sciences, e.g. in the fields of sociobiology, computer science, evolutionary psychology
and the various neurosciences.[39][40][41][42] Other philosophers, however, adopt a
non-physicalist position which challenges the notion that the mind is a purely physical
construct.
-Reductive physicalists assert that all mental states and properties will eventually be
explained by scientific accounts of physiological processes and states.[43][44][45]
-Non-reductive physicalists argue that although the brain is all there is to the mind, the
predicates and vocabulary used in mental descriptions and explanations are
indispensable, and cannot be reduced to the language and lower-level explanations of
physical science.[46][47]
Continued progress in neuroscience has helped to clarify many of these issues, and its
findings strongly support physicalists' assertions.[48][49] Nevertheless, our knowledge
is incomplete, and modern philosophers of mind continue to discuss how subjective
qualia and the intentional mental states can be naturally explained.[50][51]
Scientific study
Neuroscience
Neuroscience studies the nervous system, the physical basis of the mind. At the systems level, neuroscientists investigate how biological neural networks form and physiologically interact to produce mental functions and content such as reflexes, multisensory integration, motor coordination, circadian rhythms, emotional responses, learning, and memory. At a larger scale, efforts in computational neuroscience have developed large-scale models that simulate simple, functioning brains.[52] As of 2012, such models include the thalamus, basal ganglia, prefrontal cortex, motor cortex, and occipital cortex, and consequentially simulated brains can learn, respond to visual stimuli, coordinate motor responses, form short-term memories, and learn to respond
to patterns. Currently, researchers aim to program the hippocampus and limbic system,
hypothetically imbuing the simulated mind with long-term memory and crude
emotions.[53]
By contrast, affective neuroscience studies the neural mechanisms of personality,
emotion, and mood primarily through experimental tasks.
Cognitive Science
Cognitive science examines the mental functions that give rise to information
processing, termed cognition. These include attention, memory, producing and
understanding language, learning, reasoning, problem solving, and decision making.
Cognitive science seeks to understand thinking "in terms of representational structures
in the mind and computational procedures that operate on those structures".[54]
Psychology
Psychology is the scientific study of human behavior, mental functioning, and
experience. As both an academic and applied discipline, psychology involves the scientific study of mental processes such as perception, cognition, emotion, and personality, as well as environmental influences, such as social and cultural influences,
and interpersonal relationships, in order to devise theories of human behavior.
Psychology also refers to the application of such knowledge to various spheres of
human activity, including problems of individuals' daily lives and the treatment of mental
health problems.
Psychology differs from the other social sciences (e.g., anthropology, economics,
political science, and sociology) due to its focus on experimentation at the scale of the
individual, or individuals in small groups as opposed to large groups, institutions or
societies. Historically, psychology differed from biology and neuroscience in that it was
primarily concerned with mind rather than brain. Modern psychological science
incorporates physiological and neurological processes into its conceptions of
perception, cognition, behaviour, and mental disorders.
Mental health
By analogy with the health of the body, one can speak metaphorically of a state of
health of the mind, or mental health. Merriam-Webster defines mental health as "A
state of emotional and psychological well-being in which an individual is able to use his
or her cognitive and emotional capabilities, function in society, and meet the ordinary
demands of everyday life." According to the World Health Organization (WHO), there is
no one "official" definition of mental health. Cultural differences, subjective
assessments, and competing professional theories all affect how "mental health" is
defined. In general, most experts agree that "mental health" and "mental illness" are
not opposites. In other words, the absence of a recognized mental disorder is not
necessarily an indicator of mental health.
One way to think about mental health is by looking at how effectively and successfully
a person functions. Feeling capable and competent; being able to handle normal levels
of stress, maintaining satisfying relationships, and leading an independent life; and
being able to "bounce back," or recover from difficult situations, are all signs of mental
health.
Psychotherapy is an interpersonal, relational intervention used by trained
psychotherapists to aid clients in problems of living. This usually includes increasing
individual sense of well-being and reducing subjective discomforting experience.
Psychotherapists employ a range of techniques based on experiential relationship building, dialogue, communication and behavior change that are designed to improve the mental health of a client or patient, or to improve group relationships (such
as in a family). Most forms of psychotherapy use only spoken conversation, though
some also use various other forms of communication such as the written word, art,
drama, narrative story, or therapeutic touch. Psychotherapy occurs within a structured
encounter between a trained therapist and client(s). Purposeful, theoretically based
psychotherapy began in the 19th century with psychoanalysis; since then, scores of
other approaches have been developed and continue to be created.
Non-human minds
Animal intelligence
Animal cognition, or cognitive ethology, is the title given to a modern approach to the
mental capacities of animals. It has developed out of comparative psychology, but has
also been strongly influenced by the approach of ethology, behavioral ecology, and
evolutionary psychology. Much of what used to be considered under the title of "animal
intelligence" is now thought of under this heading. Animal language acquisition,
attempting to discern or understand the degree to which animal cognition can be
revealed by linguistics-related study, has been controversial among cognitive linguists.
Artificial intelligence
In 1950 Alan M. Turing published "Computing Machinery and Intelligence" in Mind, in
which he proposed that machines could be tested for intelligence using questions and
answers. This process is now named the Turing Test. The term Artificial Intelligence
(AI) was first used by John McCarthy who considered it to mean "the science and
engineering of making intelligent machines".[56] It can also refer to intelligence as
exhibited by an artificial (man-made, non-natural, manufactured) entity. AI is studied in
overlapping fields of computer science, psychology, neuroscience and engineering,
dealing with intelligent behavior, learning and adaptation and usually developed using
customized machines or computers.
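Turing's question-and-answer test can be sketched as a minimal imitation-game loop; the interfaces and the naive judge below are invented for illustration and are not drawn from Turing's paper.

    import random

    def imitation_game(questions, human_answers, machine_answers, judge):
        """Minimal sketch of a question-and-answer test: the judge sees only
        the two transcripts and must guess which respondent ('A' or 'B') is
        the machine. Returns True if the machine is correctly identified."""
        transcript_a = list(zip(questions, human_answers))    # human is 'A'
        transcript_b = list(zip(questions, machine_answers))  # machine is 'B'
        return judge(transcript_a, transcript_b) == "B"

    # Example with a judge that guesses at random, so it is right about half the time.
    print(imitation_game(
        ["What is 2 + 2?", "Describe a sunset."],
        ["4", "Warm colours fading over the horizon."],
        ["4", "A gradual decrease in solar elevation angle."],
        judge=lambda a, b: random.choice(["A", "B"]),
    ))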
Research in AI is concerned with producing machines to automate tasks requiring intelligent behavior. Examples include control, planning and scheduling, the ability to answer diagnostic and consumer questions, handwriting, natural language, speech and facial recognition. As such, the study of AI has also become an engineering discipline, focused on providing solutions to real life problems, knowledge mining, software applications, strategy games like computer chess and other video games. One of the biggest limitations of AI is in the domain of actual machine comprehension. Consequentially, natural language understanding and connectionism (where behavior of neural networks is investigated) are areas of active research and development.
The debate about the nature of the mind is relevant to the development of artificial
intelligence. If the mind is indeed a thing separate from or higher than the functioning of
the brain, then hypothetically it would be much more difficult to recreate within a
machine, if it were possible at all. If, on the other hand, the mind is no more than the
aggregated functions of the brain, then it will be possible to create a machine with a
recognisable mind (though possibly only with computers much different from today's),
by simple virtue of the fact that such a machine already exists in the form of the human
brain.
In religion
Many religions associate spiritual qualities with the human mind. These are often tightly connected to their mythology and ideas of the afterlife.
The Indian philosopher-sage Sri Aurobindo attempted to unite the Eastern and Western
psychological traditions with his integral psychology, as have many philosophers and
New religious movements. Judaism teaches that "moach shalit al halev", the mind rules
the heart. Humans can approach the Divine intellectually, through learning and
behaving according to the Divine Will as enclothed in the Torah, and use that deep
logical understanding to elicit and guide emotional arousal during prayer. Christianity
has tended to see the mind as distinct from the soul (Greek nous) and sometimes
further distinguished from the spirit. Western esoteric traditions sometimes refer to a
mental body that exists on a plane other than the physical. Hinduism's various
philosophical schools have debated whether the human soul (Sanskrit atman) is
distinct from, or identical to, Brahman, the divine reality. Taoism sees the human being
as contiguous with natural forces, and the mind as not separate from the body.
Confucianism sees the mind, like the body, as inherently perfectible.
Buddhism
According to Buddhist philosopher Dharmakirti,
the mind has two fundamental qualities: "clarity
and knowing". If something is not those two
qualities, it cannot validly be called mind. "Clarity"
refers to the fact that mind has no color, shape,
size, location, weight, or any other physical
characteristic, and that it gives rise to the
contents of experience. "Knowing" refers to the
fact that mind is aware of the contents of
experience, and that, in order to exist, mind must
be cognizing an object. You cannot have a mind (whose function is to cognize an object) existing without cognizing an object. For this reason, mind
is often described in Buddhism as "that which has
contents".[57]
Mind, in Buddhism, is also described as being
"space-like" and "illusion-like". Mind is space-like
in the sense that it is not physically obstructive. It
has no qualities which would prevent it from
existing. Mind is illusion-like in the sense that it is
empty of inherent existence. This does not mean
it does not exist, it means that it exists in a
manner that is counter to our ordinary way of misperceiving how phenomena exist,
according to Buddhism. When the mind is itself cognized properly, without
misperceiving its mode of existence, it appears to exist like an illusion. There is a big difference, however, between being "space and illusion" and being "space-like" and
"illusion-like". Mind is not composed of space, it just shares some descriptive
similarities to space. Mind is not an illusion, it just shares some descriptive qualities
with illusions.
Buddhism posits that there is no inherent, unchanging identity (Inherent I, Inherent Me)
or phenomena (Ultimate self, inherent self, Atman, Soul, Self-essence, Jiva, Ishvara,
humanness essence, etc.) which is the experiencer of our experiences and the agent
of our actions. In other words, human beings consist of merely a body and a mind, and
nothing extra. Within the body there is no part or set of parts which is, by itself or themselves, the person. Similarly, within the mind there is no part or set of parts which
are themselves "the person". A human being merely consists of five aggregates, or
skandhas and nothing else (please see Valid Designation).
In the same way, "mind" is what can be validly conceptually labelled onto our mere
experience of clarity and knowing. There is not something separate and apart from
clarity and knowing which is "mind", in Buddhism. "Mind" is that part of experience
which can be validly referred to as mind by the concept-term "mind". There is also not
"objects out there, mind in here, and experience somewhere in-between". There is not
a third thing called "experience" which exists between the contents of mind and what
mind cognizes. There is only the clarity (arising of mere experience: shapes, colors, the
components of smell, components of taste, components of sound, components of
touch) and nothing else; this means, expressly, that there is not a third thing called
"experience" and not a third thing called "experiencer who has the experience". This is
deeply related to "no-self".
Clearly, the experience arises and is known by mind, but there is not a third thing which
sits apart from that which is the "real experiencer of the experience". This is the claim
of Buddhism, with regards to mind and the ultimate nature of minds (and persons).
Mortality of the mind
Due to the mind-body problem, much interest and debate surround the question of
what happens to one's conscious mind as one's body dies. According to
neuropsychology, all brain function halts permanently upon brain death, and the mind
fails to survive brain death and ceases to exist. This permanent loss of consciousness
after death is often called "eternal oblivion". The belief that some spiritual or immaterial
component exists and is preserved after death is described by the term "afterlife".
In pseudoscience
Parapsychology
Parapsychology is the scientific study of certain types of paranormal phenomena, or of
phenomena which appear to be paranormal,[58] for instance precognition, telekinesis
and telepathy. The term is based on the Greek para (beside/beyond), psyche
(soul/mind), and logos (account/explanation) and was coined by psychologist Max
Dessoir in or before 1889.[59] J. B. Rhine later popularized "parapsychology" as a
replacement for the earlier term "psychical research", during a shift in methodologies
which brought experimental methods to the study of psychic phenomena.[59]
Parapsychology is controversial, with many scientists believing that psychic abilities
have not been demonstrated to exist.[60][61][62][63][64] The status of parapsychology
as a science has also been disputed,[65] with many scientists regarding the discipline
as pseudoscience.[66][67][68]
Awareness
Awareness is the state or ability to perceive, to feel, or to be conscious of events,
objects, or sensory patterns. In this level of consciousness, sense data can be
confirmed by an observer without necessarily implying understanding. More broadly, it
is the state or quality of being aware of something. In biological psychology, awareness
is defined as a human's or an animal's perception and cognitive reaction to a condition
or event.
Contents
1 Concept
2 Self-awareness
3 Neuroscience
3.1 Basic awareness
3.2 Basic interests
3.3 Changes in awareness
4 Living systems view
5 Communications and information systems
6 Covert awareness
7 Other uses
Concept
Awareness is a relative concept. An animal may be partially aware, may be
subconsciously aware, or may be acutely unaware of an event. Awareness may be
focused on an internal state, such as a visceral feeling, or on external events by way of
sensory perception. Awareness provides the raw material from which animals develop
qualia, or subjective ideas about their experience.
Self-awareness
Popular ideas about consciousness suggest the phenomenon describes a condition of being aware of one's awareness, or self-awareness. Efforts to describe consciousness
in neurological terms have focused on describing networks in the brain that develop
awareness of the qualia developed by other networks.[1]
Neuroscience
Neural systems that regulate attention serve to attenuate awareness among complex
animals whose central and peripheral nervous system provides more information than
cognitive areas of the brain can assimilate. Within an attenuated system of awareness,
a mind might be aware of much more than is being contemplated in a focused
extended consciousness.
Basic awareness
Basic awareness of one's internal and external world depends on the brain stem. Bjorn
Merker,[2] an independent neuroscientist in Stockholm, Sweden, argues that the brain
stem supports an elementary form of conscious thought in infants with
hydranencephaly. "Higher" forms of awareness including self-awareness require
cortical contributions, but "primary consciousness" or "basic awareness" as an ability to
integrate sensations from the environment with one's immediate goals and feelings in
order to guide behavior, springs from the brain stem which human beings share with
most of the vertebrates. Psychologist Carroll Izard emphasizes that this form of primary
consciousness consists of the capacity to generate emotions and an awareness of
one's surroundings, but not an ability to talk about what one has experienced. In the
same way, people can become conscious of a feeling that they can't label or describe,
a phenomenon that's especially common in pre-verbal infants.
Due to this discovery, medical definitions of brain death as a lack of cortical activity face
a serious challenge.
Basic interests
Down the brain stem lie interconnected regions that regulate the direction of eye gaze
and organize decisions about what to do next, such as reaching for a piece of food or
pursuing a potential mate.
Changes in awareness
The ability to consciously detect an image presented at near-threshold stimulus strength varies across presentations. One factor is "baseline shifts" due to top-down attention that modulates ongoing brain activity in sensory cortex areas, which affects the neural processing of subsequent perceptual judgments.[3] Such top-down biasing can occur through two distinct processes: an attention-driven baseline shift in the alpha waves,
and a decision bias reflected in gamma waves.[4]
Living systems view
Outside of neuroscience, the biologists Humberto Maturana and Francisco Varela contributed their Santiago theory of cognition, in which they wrote:
Living systems are cognitive systems, and living as a process is a process of
cognition. This statement is valid for all organisms, with or without a nervous
system.[5]
This theory contributes the perspective that cognition is a process present at organic levels that we do not usually consider to be aware. Given the possible relationship between awareness, cognition, and consciousness, this theory offers an interesting perspective in the philosophical and scientific dialogue about awareness and living systems theory.
Communications and information systems
Awareness is also a concept used in Computer-Supported Cooperative Work (CSCW), although its definition in this general sense has not yet reached consensus in the scientific community.
However, context awareness and location awareness are concepts of great importance, especially for AAA (authentication, authorization, accounting) applications.
The compound term location awareness is still gaining momentum with the growth of ubiquitous computing. First defined for networked work positions (network location awareness), it has been extended to mobile phones and other mobile communicating entities. The term covers a common interest in the whereabouts of remote entities, especially individuals and their cohesion in operation.
The compound term context awareness is a superset that includes the concept of location awareness. It extends awareness to contextual features of the operational target as well as to the context of the operational area.
Covert awareness
Covert awareness is the knowledge of something without knowing it. Some patients
with specific brain damage are, for example, unable to tell if a pencil is horizontal or vertical.[citation needed] They are, however, able to grab the pencil, using the correct orientation of the hand and wrist. This condition implies that some of the knowledge the mind possesses is delivered through channels other than conscious intent.[original research?]
Other uses
Awareness forms a basic concept of the theory and practice of Gestalt therapy.
In general, "awareness" may also refer to public or common knowledge or
understanding about a social, scientific, or political issue, and hence many movements
try to foster "awareness" of a given subject, that is, "raising awareness". Examples
include AIDS awareness and Multicultural awareness.
Awareness may also refer to anesthesia awareness.
Self-awareness
Self-awareness is the capacity for introspection and the ability to recognize oneself as
an individual separate from the environment and other individuals.
Contents
1 The basis of personal identity
1.1 A philosophical view
1.2 Self-Awareness Development
1.2.1 Self-Awareness Theory
2 In theater
3 In animals
4 In Schizophrenia
5 In science fiction
6 In psychology
7 In Adolescent
8 Self-awareness in Autism Spectrum Disorders
The basis of personal identity
A philosophical view
"I think, therefore I exist, as a thing that thinks."
"...And as I observed that this truth 'I think, therefore I am' (Cogito ergo sum)
was so certain and of such evidence ...I concluded that I might, without scruple, accept it as the first principle of the Philosophy of which I was in search."
"...In the statement 'I think, therefore I am' ... I see very clearly that to think it is
necessary to be, I concluded that I might take, as a general rule, the principle,
that all the things which we very clearly and distinctly conceive are true..."[1][2]
While reading Descartes, Locke began to relish the great ideas of philosophy and the
scientific method. On one occasion, while in a meeting with friends, the question of the
"limits of human understanding" arose. He spent almost twenty years of his life on the
subject until the publication of An Essay Concerning Human Understanding, a great
chapter in the History of Philosophy.[3]
John Locke's chapter XXVII "On Identity and Diversity" in An Essay Concerning Human
Understanding (1689) has been said to be one of the first modern conceptualizations of
consciousness as the repeated self-identification of oneself, through which moral
responsibility could be attributed to the subject—and therefore punishment and
guiltiness justified, as critics such as Nietzsche would point out, affirming "...the
psychology of conscience is not 'the voice of God in man'; it is the instinct of cruelty ...
expressed, for the first time, as one of the oldest and most indispensable elements in
the foundation of culture."[4][5][6] John Locke does not use the terms self-awareness
or self-consciousness though.[7]
According to Locke, personal identity (the self) "depends on consciousness, not on
substance" nor on the soul. We are the same person to the extent that we are
conscious of our past and future thoughts and actions in the same way as we are
conscious of our present thoughts and actions. If consciousness is this "thought" which
doubles all thoughts, then personal identity is only founded on the repeated act of
consciousness: "This may show us wherein personal identity consists: not in the
identity of substance, but ... in the identity of consciousness." For example, one may
claim to be a reincarnation of Plato, therefore having the same soul. However, one
would be the same person as Plato only if one had the same consciousness of Plato's
thoughts and actions that he himself did. Therefore, self-identity is not based on the
soul. One soul may have various personalities.
Self-identity is not founded either on the body or the substance, argues Locke, as the
substance may change while the person remains the same: "animal identity is
preserved in identity of life, and not of substance", as the body of the animal grows and
changes during its life. Take for example a prince's soul which enters the body of a
cobbler: to all exterior eyes, the cobbler would remain a cobbler. But to the prince
himself, the cobbler would be himself, as he would be conscious of the prince's
thoughts and acts, and not of the cobbler's life. A prince's consciousness in a cobbler
body: thus the cobbler is, in fact, a prince. But this interesting border-case leads to this
problematic thought that since personal identity is based on consciousness, and that
only oneself can be aware of his consciousness, exterior human judges may never
know if they really are judging—and punishing—the same person, or simply the same
body. In other words, Locke argues that you may be judged only for the acts of your
body, as this is what is apparent to all but God; however, you are in truth only
responsible for the acts of which you are conscious. This forms the basis of the insanity defense: one cannot be held accountable for acts committed while unconsciously irrational or mentally ill,[8] and it therefore leads to interesting philosophical questions:
[...] personal identity consists [not in the identity of substance] but in the identity
of consciousness, wherein if Socrates and the present mayor of Queenborough
agree, they are the same person: if the same Socrates waking and sleeping do
not partake of the same consciousness, Socrates waking and sleeping is not
the same person. And to punish Socrates waking for what sleeping Socrates
thought, and waking Socrates was never conscious of, would be no more right,
than to punish one twin for what his brother-twin did, whereof he knew nothing,
because their outsides were so like, that they could not be distinguished; for
such twins have been seen.[3]
Or again:
PERSON, as I take it, is the name for this self. Wherever a man finds what he
calls himself, there, I think, another may say is the same person. It is a forensic
term, appropriating actions and their merit; and so belong only to intelligent
agents, capable of a law, and happiness, and misery. This personality extends itself beyond present existence to what is past, only by consciousness, whereby it becomes concerned and accountable; owns and imputes to itself
past actions, just upon the same ground and for the same reason as it does the
present. All which is founded in a concern for happiness, the unavoidable
concomitant of consciousness; that which is conscious of pleasure and pain,
desiring that that self that is conscious should be happy. And therefore
whatever past actions it cannot reconcile or APPROPRIATE to that present self
by consciousness, it can be no more concerned in it than if they had never been
done: and to receive pleasure or pain, i.e. reward or punishment, on the
account of any such action, is all one as to be made happy or miserable in its
first being, without any demerit at all. For, supposing a MAN punished now for
what he had done in another life, whereof he could be made to have no
consciousness at all, what difference is there between that punishment and
being CREATED miserable? And therefore, conformable to this, the apostle
tells us, that, at the great day, when every one shall "receive according to his
doings, the secrets of all hearts shall be laid open". The sentence shall be
justified by the consciousness all person shall have, that THEY THEMSELVES,
in what bodies soever they appear, or what substances soever that
consciousness adheres to, are the SAME that committed those actions, and
deserve that punishment for them.[4]
Henceforth, Locke's conception of personal identity founds it not on the substance or the body, but in the "same continued consciousness", which is also distinct from the soul. He creates a third term between the soul and the body, and Locke's thought may certainly be meditated upon by those who, following a scientistic ideology, would too quickly identify the brain with consciousness. For the brain, as the body and as any substance, may change, while consciousness remains the same. Therefore personal identity is not in the brain, but in consciousness. However, Locke's theory also reveals his debt to theology and to that Apocalyptic "great day", which excuses in advance any failings of human justice and therefore humanity's miserable state.
Self-Awareness Development
Individuals become conscious of themselves through the development of self-awareness.[9] This particular type of self-development pertains to becoming conscious of one's own body and mental state of mind, including thoughts, actions, ideas, feelings and interactions with others.[10] "Self-awareness does not occur suddenly through one particular behavior; it develops gradually through a succession of different behaviors all of which relate to the self."[11] It is developed through an early sense of non-self
components using sensory and memory sources. In developing self –awareness
557
through self-exploration and social experiences one can broaden their social world and
become more familiar with the self.
Several ideas about the development of self-awareness have been researched. In babies, self-awareness emerges in predictable stages.[10] Ulric Neisser (1988, cited in [10]) states that self-awareness is built upon different sources of information, comprising ecological, interpersonal, extended, private, and conceptual aspects of the self. The ecological self, seen in early infancy, is the self in relation to the surrounding environment; it is considered low-level self-awareness, based only on awareness of the surrounding space. The interpersonal self also emerges in early infancy and involves interpersonal interaction with the environment even when that environment is unresponsive, for instance when a baby coos. Even though the social world is not responding, the infant is able to discover more about itself. This leads to the extended self, in which one is able to reflect on oneself and generate thoughts of past and future. The private self pertains to internal thoughts, feelings, and intentions. Finally, the conceptual self (Neisser's self-concept) consists of the beliefs we hold based on representations of human nature and the self. This level of self is essential because it enables an individual to portray who they are.[10]
According to Emory University's Philippe Rochat,[9] there are five levels of self-awareness that unfold in early development, spanning six stages of increasing complexity from "Level 0" (no self-awareness) to "Level 5" (explicit self-awareness).
-Level 0: Confusion
At this level the individual has no self-awareness. The person is unaware of the mirror reflection or of the mirror itself; they perceive the mirror as an extension of their environment. Level 0 can also be displayed when an adult is momentarily startled by their own reflection, mistaking it for another person.
-Level 1: Differentiation
The individual realizes that the mirror is able to reflect things. They see that what is in the mirror is different from what surrounds them. At this level one can differentiate between one's own movement in the mirror and the movement of the surrounding environment.
-Level 2: Situation
At this point an individual can link the movements seen in the mirror to what is perceived within their own body. This is the first hint of self-exploration on a projected surface, where what is visualized in the mirror is specific to the self.
-Level 3: Identification
Recognition now takes effect: the individual can see that what is in the mirror is not another person but actually themselves. This is seen when a child refers to themselves while looking in the mirror, rather than treating the reflection as someone else. They have now identified the self.
-Level 4: Permanence
Once an individual reaches this level, they can identify the self beyond the present mirror image; they are able to identify themselves in earlier pictures in which they look different or younger. A "permanent self" is now experienced.
-Level 5: Self-consciousness or "meta" self-awareness
At this level the self is seen not only from a first-person view but also, the individual realizes, from a third-person view. They begin to understand that they can exist in the minds of others, for instance in how they are seen from a public standpoint.[9]
Related to the research described above, by the time an average toddler reaches 18 months, they will discover themselves and recognize their own reflection in the mirror. By the age of 24 months, the toddler will observe and relate their own actions to the actions of other people and the surrounding environment.[12] As infants grow familiar with their surrounding environment, a child will provide a self-description first in terms of actions and later in terms of qualities and traits.
Around school age, a child's awareness of personal memory transitions into a sense of one's own self. At this stage, a child begins to develop interests along with likes and dislikes. This transition enables the awareness of an individual's past, present, and future to grow as conscious experiences are remembered more often.[12]
As a child's self-awareness increases, they tend to separate and become their own
person. Their cognitive and social development allows “the taking of another's
perspective and the accepting of inconsistencies.”[13] By adolescence, a coherent and
integrated self-perception normally emerges. This very personal emerging perspective
continues to direct and advance an individual’s self-awareness throughout their adult
life.
“A further and deeper development in self-awareness allows a person to become
increasingly wise and coherent in the understanding of self.” The increase in
awareness can ultimately lead to high levels of consciousness. This has been
supported by research on enhanced self-actualization, increased attention associated with an expanding self-concept, and a higher level of internal control and maintenance of self during stressful conditions.[14]
Self-Awareness Theory
Duval and Robert Wicklund's (1972) landmark theory of self-awareness states that when we focus our attention on ourselves, we evaluate and compare our current behavior to our internal standards and values. We become self-conscious as objective evaluators of ourselves. However, self-awareness is not to be confused with self-consciousness.[15] Various emotional states are intensified by self-awareness, yet some people may seek to increase their self-awareness through these outlets. People are more likely to align their behavior with their standards when made self-aware, and they will be negatively affected if they do not live up to their personal standards. Various environmental cues and situations induce awareness of the self, such as mirrors, an audience, or being videotaped or recorded. These cues also increase the accuracy of personal memory.[16] In Demetriou's theory, one of the neo-Piagetian theories of cognitive development, self-awareness develops systematically from birth through the life span and is a major factor in the development of general inferential processes.[17] Moreover, a series of recent studies showed that self-awareness about cognitive processes participates in general intelligence on a par with processing efficiency functions such as working memory, processing speed, and reasoning.[18]
In theater
Theater also concerns itself with awareness other than self-awareness. There is a possible correlation between the experience of the theater audience and individual self-awareness. Just as actors and audiences must not "break" the fourth wall in order to maintain context, so individuals must not become aware of the artificial, constructed perception of their reality. This suggests that both self-awareness and the social constructs applied to others are artificial continuums, just as theater is. Theatrical efforts
such as Six Characters in Search of an Author, or The Wonderful Wizard of Oz,
construct yet another layer of the fourth wall, but they do not destroy the primary
illusion. Refer to Erving Goffman's Frame Analysis: An Essay on the Organization of
Experience.
In animals
There is an ongoing debate as to whether animals have consciousness; the question here is whether animals have self-awareness. Like human minds and brains, animal minds and brains are concealed and subjective. An individual is said to have self-awareness when it can identify, process, and store information about the self and has knowledge of its own mental states.[19] Knowing that one remains the same individual across time and is separate from others and from the environment is also a component of self-awareness.[19] Gordon Gallup, a professor of psychology at the State University of New York at Albany, says that "self-awareness provides the ability to contemplate the past, to project into the future, and to speculate on what others are thinking".[20] Studies testing for self-awareness have been done mainly on primates; apes, chimpanzees, monkeys, elephants, and dolphins are studied most frequently. To date, the most relevant studies demonstrating self-awareness in animals have been done on chimpanzees, dolphins, and magpies.
The 'Red Spot Technique', created and tested by Gordon Gallup,[21] studies self-awareness in animals (primates). In this technique, an odorless red spot is placed on an anesthetized primate's forehead, positioned so that it can only be seen in a mirror. Once the individual awakens, researchers observe whether it makes independent movements toward the spot after seeing its reflection in a mirror. During the Red Spot Technique, after looking in the mirror, chimpanzees used their fingers to touch the red dot on their forehead and, after touching it, would even smell their fingertips.[22] "Animals that can recognize themselves in mirrors can conceive of themselves," says Gallup, which would mean that the chimpanzees possess self-awareness. Note that the chimpanzees had experience with a mirror before the Red Spot Technique was performed. That prior experience with a mirror reflects the past, independent movement while looking in the mirror reflects the present, and touching the red dot reflects speculation about what others are thinking, which relates closely to Gallup's statement quoted earlier in this section.[20] Chimpanzees, the most studied species, compare most closely to humans and have yielded the most convincing and straightforward evidence of self-awareness in animals so far.[23]
Dolphins were put to a similar test and achieved the same results. Diana Reiss, a psycho-biologist at the New York Aquarium, discovered that bottlenose dolphins can recognize themselves in mirrors. In her experiment,[20] Reiss and her colleagues drew with temporary black ink on parts of some of the aquarium dolphins' bodies that they could only see in a mirror. A gigantic mirror was placed inside the dolphins' tank. The dolphins that were not drawn on ignored the mirror, but the dolphins that were drawn on "made a bee-line to see where they'd been marked," according to Reiss. After the experiment, the dolphins that had been drawn on once returned to the mirror to inspect themselves even when they were 'drawn' on again with clear water. The dolphins recognized the feeling and remembered the action from when they were drawn on, which relates to a factor of self-awareness.
Magpies are part of the crow family. Recently, using a Mark Test similar to the Red Spot Technique,[21] researchers studied magpies' self-awareness. In this study, Prior and colleagues[23] ran eight sessions per magpie with five magpies, each tested twice using two different colors, yellow and red. Each bird was marked with either a yellow or red mark or a black imitation mark (the black mark is an imitation because magpies have black feathers). The magpies were tested in four conditions: a mirror and a colored mark, a mirror and a black mark, no mirror and a colored mark, and no mirror and a black mark.[23] The sessions were each twenty minutes long, and each color (red or yellow) was used once. The black (imitation) mark placed on the magpies is comparable to the dolphins in Reiss's study[20] being 'drawn' on with clear water: if the imitation marks (or the clear-water drawings) are recognized, this shows that no anesthesia is needed and that remembering the action represents self-awareness.[23] The difference between the Red Spot Technique[21] and Reiss's dolphin study[20] on the one hand and the Mark Test[23] on the other is that in the Red Spot Technique the primates were anesthetized and had prior experience with a mirror, whereas in the Mark Test the magpies were neither anesthetized nor experienced with a mirror.
Most birds are blind to the area below the beak near the throat region because it lies outside their visual field; this is where the color marks were placed during the Mark Test,[23] alternating between yellow and red. In the Mark Test,[23] a mirror was presented with the reflective side facing the magpie, so the mirror was the only way for the bird to see the marked spot on its body. During the trials with a mirror and a colored mark, three out of the five magpies showed at least one example of self-directed behavior. The magpies explored the mirror by moving toward it and looking behind it. One of the magpies, Harvey, would during several trials pick up objects, pose, and do some wing-flapping, all in front of the mirror with the objects in his beak. This represents a sense of self-awareness: knowing what is going on within himself and in the present. In all of the trials with the mirror and the marks, the birds never pecked at the reflection of the mark in the mirror itself. All of the behaviors were directed toward their own bodies, but they were heightened only when a mirror was present and the mark was colored. Behavior toward their own bodies ended in the trials once the bird removed the mark. For example, Gerti and Goldie, two of the magpies studied, removed their marks after a few minutes in their trials with a colored mark and a mirror; after the mark was removed, there were no further behaviors toward their own bodies.[23]
A few slight occurrences of behavior toward the magpies' own bodies happened in the trials with the black mark and the mirror. The study assumes[23] that the black mark may have been slightly visible on the black feathers. Prior and colleagues[23] stated, "This is an indirect support for the interpretation that the behavior towards the mark region was elicited by seeing the own body in the mirror in conjunction with an unusual spot on the body."
The magpies' behavior contrasted clearly when no mirror was present. In the no-mirror trials, a non-reflective gray plate of the same size and in the same position as the mirror was used instead, and there were no mark-directed self-behaviors, whether the mark was colored or black.[23] Prior and colleagues'[23] data quantitatively match the findings in chimpanzees. In summary, the Mark Test[23] results show that magpies understand that a mirror image represents their own body; magpies appear to have self-awareness.
In conclusion, the fact that primates and magpies spot the markings on their bodies and examine themselves means, according to the theory, that they recognize themselves and are therefore self-aware.[24] According to the definition stated earlier in this section, if an individual can process, identify, and store information (memory), and recognize differences, they are self-aware. The chimpanzees, dolphins, and magpies have all demonstrated these factors in the experiments described above.
In schizophrenia
Schizophrenia is a chronic psychiatric illness characterized by excessive dopamine activity in the mesolimbic and mesocortical tracts, leading to symptoms of psychosis along with poor social cognition. Under the DSM-5, people with schizophrenia have a combination of positive, negative, and psychomotor symptoms. These cognitive disturbances involve unusual beliefs and/or thoughts of a distorted reality that create an abnormal pattern of functioning for the patient. Multiple studies have investigated this issue. Although schizophrenia has been shown to be heritable, most patients who inherit a predisposition to it are not self-aware of their disorder, regardless of their family history. The level of self-awareness among patients with schizophrenia is a heavily studied topic.
Schizophrenia as a disease state is characterized by severe cognitive dysfunction, and it is uncertain to what extent patients are aware of this deficit. In a study published in Schizophrenia Research by Medalia and Lim (2004),[25] researchers investigated patients' awareness of their cognitive deficits in the areas of attention, nonverbal memory, and verbal memory. Results from this study (N=185) revealed a large discrepancy between patients' assessments of their cognitive functioning and the assessments of their clinicians. Though it is impossible to access another person's consciousness and truly understand what someone with schizophrenia believes, in this study patients were nevertheless unaware of their dysfunctional cognitive reasoning. Under the DSM-5, a diagnosis of schizophrenia requires two or more of the following symptoms over the duration of one month: delusions*, hallucinations*, disorganized speech*, grossly disorganized or catatonic behavior, and negative symptoms (*at least one of the first three starred symptoms must be present for a correct diagnosis). Sometimes these symptoms are very prominent and are treated with a combination of antipsychotics (e.g. haloperidol, loxapine), atypical antipsychotics (such as clozapine and risperidone), and psychosocial therapies that include family interventions and social skills training. When a patient is undergoing treatment and recovering from the disorder, little memory of their prior behavior remains; thus, self-awareness of a schizophrenia diagnosis is rare after treatment, just as it is around onset and during the course of the illness.
The above findings are further supported by a study published in The American Journal of Psychiatry in 1993 by Amador et al. (N=43).[26] The study suggests a correlation between patient insight, compliance, and disease progression. Insight into illness was assessed with the Scale to Assess Unawareness of Mental Disorder, together with ratings of psychopathology, course of illness, and compliance with treatment in a sample of 43 patients. Patients with poor insight were less likely to be compliant with treatment and more likely to have a poorer prognosis. Patients with hallucinations sometimes experience positive symptoms, which can include delusions of reference, thought insertion or withdrawal, thought broadcasting, delusions of persecution, grandiosity, and many more. These psychotic symptoms skew the patient's perspective on reality in ways they truly believe are really happening. For instance, a patient experiencing delusions of reference may believe, while watching the weather forecast, that when the weatherman says it will rain he is really sending the patient a message in which rain symbolizes a specific warning completely irrelevant to the weather. Another example is thought broadcasting, in which a patient believes that everyone can hear their thoughts. These positive symptoms are sometimes so severe that the person believes something is crawling on them or smells something that is not there in reality. Such hallucinations are intense, and it is difficult to convince the patient that they do not exist outside of their own mind, making it extremely difficult for a patient to understand and become self-aware that what they are experiencing is in fact not there.
Furthermore, a study by Bedford and Davis (2013)[27] looked at the association between denial versus acceptance of multiple facets of schizophrenia (self-reflection, self-perception, and insight) and its effect on self-reflection (N=26). The results suggest that patients with greater disease denial have poorer recollection of their self-evaluated mental illness. Disease denial largely makes recovery harder for patients because their feelings and sensations remain intensely salient. As this and the studies above imply, a large proportion of people with schizophrenia lack self-awareness of their illness, owing to many factors and to the severity of their condition.
In science fiction
In science fiction, self-awareness describes an essential human property that often
(depending on the circumstances of the story) bestows "personhood" onto a nonhuman. If a computer, alien or other object is described as "self-aware", the reader may
assume that it will be treated as a completely human character, with similar rights,
capabilities and desires to a normal human being.[28] The words "sentience",
"sapience" and "consciousness" are used in similar ways in science fiction.
In psychology
In psychology, the concept of "self-awareness" is used in different ways:
-As a form of intelligence, self-awareness can be an understanding of one's own
knowledge, attitudes, and opinions. Alfred Binet's first attempts to create an
intelligence test included items for "auto-critique" – a critical understanding of
oneself.[29] Surprisingly, we do not have direct, privileged access to our own opinions and knowledge. For instance, if we try to enumerate all the members of any conceptual category we know, our production falls far short of our recognition of members of that category.
-Albert Bandura[30] has created a category called self-efficacy that builds on
our varying degrees of self-awareness.
-Our general inaccuracy about our own abilities, knowledge, and opinions has
created many popular phenomena for research such as the better than average
effect. For instance, 90% of drivers may believe that they are "better than
average" (Swenson, 1981).[31] Their inaccuracy comes from the absence of a
clear definable measure of driving ability and their own limited self-awareness;
and this of course underlines the importance of objective standards to inform
our subjective self-awareness in all domains. Inaccuracy about our own opinions seems particularly disturbing, for what is more personal than opinions? Yet inconsistency in our opinions is as pronounced as in our knowledge of facts. For
instance, people who call themselves opposite extremes in political views often
hold not just overlapping political views, but views that are an essential
component of the opposite extreme. Reconciling such differences proves
difficult and gave rise to Leon Festinger's theory of cognitive dissonance.[32]
In adolescents
Before discussing self-awareness in adolescents, a few related terms should be defined: self-esteem, self-concept, and self-representation. Self-esteem is used in psychology to refer to a person's overall emotional evaluation of his or her own worth; it is a judgment of oneself as well as an attitude toward oneself (Wikipedia). Self-concept is the idea one has about the kind of person one is, the mental image one has of oneself (Webster). Self-representation is how one presents oneself in public and around others. Finally, an adolescent (teenager) is a person between the ages of 13 and 19 (Free Dictionary).[33]
Self-awareness in adolescents is the process of becoming conscious of their emotions. Most children by the age of two are aware of emotions such as shame, guilt, pride, and embarrassment (Zeanah, 84),[34] but they do not fully understand how those emotions affect their lives. By the time they reach thirteen, adolescents begin to understand the impact these emotions have on their lives, and as they go through puberty they become more in touch with them. The emotions present when they make decisions are not always constructive, which is part of what leads adults to believe that adolescents are confused, although they generally are not. Harter conducted a study of adolescents which found that teenagers feel happy and like themselves when around their friends, but around their parents they feel sad, angry, and discouraged, and when it comes to pleasing their parents they often feel hopeless, as if they can do nothing right. It also showed that at school and around their teachers they felt intelligent and creative, but around people they do not know they are shy, uncomfortable, and nervous (60).[35] This illustrates how adolescents understand and display their emotions. It may also contribute to outsiders believing that adolescents are confused, when in fact they are simply dealing with a great deal at that stage; the way they respond in a situation reflects how they truly felt at that moment. What needs to be recognized is that they are still learning to bring their emotions under control, something adults have already learned, so adults should be more understanding of what adolescents are going through. Adolescents also tend to adopt two roles, a responsible one and an irresponsible one, and which role they take depends on who they are with. Around teachers, parents, and sometimes strangers they play the responsible role, but around the opposite sex, friends, and rivals they play the irresponsible role (Harter, 89). In the responsible role they are dependable, make good decisions on their own, act reliably, and do things that make others proud of them. The irresponsible role is the opposite: they are not trustworthy, do what they are told not to do, and should not be left on their own because they may get into trouble.
An important point is that as adolescents grow, the people around them should encourage them to feel good about who they are, which helps build a strong sense of self-awareness. Without it, they can develop low self-esteem, which can lead to problems such as body image issues and, in turn, eating disorders like bulimia, along with many of the other difficulties that accompany low self-esteem. Some girls with low self-esteem will do things just to fit in with the crowd: they may become sexually active before they are ready, or have many partners, just so others will like them; they may begin using drugs or harm themselves because they feel that no one cares. Low self-esteem shows itself through signs such as low grades in school, difficulty making friends, and reluctance to try new things.
When girls are comfortable with their self-awareness, they tend to have high self-esteem. They do not simply follow others and are more often leaders, and when they do follow, they will only go so far; they will not do things that get them into trouble or jeopardize their future, because they care about what their family, friends, and teachers think of them. In other words, a girl with high self-esteem will not do things she is not comfortable with just to please others; she will do what she believes is right and be at peace with her decisions. This is why self-awareness is very important in adolescent development, and the surrounding adults should begin building it as early as around two years of age, so that once the stronger emotions arrive, a good foundation is already in place and can continue to be built upon. Such adolescents will be independent and able to handle that independence in the right way. For most girls with high self-esteem, it shows in their grades at school, in their involvement in sports and other activities, and in how easily they make friends. Girls with high self-esteem are willing to try new things and, above all, want to make something of themselves.
Self-awareness in Autism Spectrum Disorders
Autism spectrum disorders (ASDs) are a group of neurodevelopmental disabilities that can cause significant social, communication, and behavioral challenges (Understanding Autism, 2003). ASDs can also cause imaginative abnormalities and can range from mild to severe, especially in sensory-motor, perceptual, and affective dimensions. Children with ASD may struggle with self-awareness and self-acceptance; their different thinking patterns and brain processing functions in the area of social thinking and action may compromise their ability to understand themselves and their social connections to others (Autism Asperger's Digest, 2010). About 75% of those diagnosed with autism are intellectually disabled in some general way, while the other 25%, diagnosed with Asperger's syndrome, show average to good cognitive functioning (McGeer, 2004). When we compare our own behavior to the morals and values we were taught, we focus attention on ourselves, increasing self-awareness. Understanding the many effects of autism spectrum disorders on those affected has led many scientists to theorize about what level and degree of self-awareness occurs.
It is well known that children with varying degrees of autism struggle in social situations, and scientists have produced evidence that self-awareness is a major problem for people with ASD. Researchers used functional magnetic resonance imaging (fMRI) to measure brain activity in 66 male volunteers, half of whom had been diagnosed with an autism spectrum disorder. The volunteers were monitored while being asked to make judgments about their own thoughts, opinions, and preferences as well as about someone else's. By scanning the volunteers' brains as they responded to these questions, the researchers were able to see differences in brain activity between those with and without autism. One area of the brain closely examined was the ventromedial prefrontal cortex (vMPFC), which is known to be active when people think about themselves (Dyslexia United, 2009). The study showed that, unlike their typically developing counterparts, those with ASD showed little difference in brain activity when thinking about themselves as opposed to someone else. This research suggests that the autistic brain struggles to process information about the self. Self-awareness requires being able to keep track of the relationship one has with oneself and to understand what makes one similar to or different from others. "This research has shown that children with autism may also have difficulty understanding their own thoughts and feelings and the brain mechanisms underlying this, thus leading to deficits in self-awareness" (Dyslexia United, 2009).
A study from Stanford University has tried to map brain circuits involved in self-awareness in autism spectrum disorders. It suggests that self-awareness is primarily lacking in social situations, but that in private, individuals with ASD are more self-aware and present. It is in the company of others, while engaging in interpersonal interaction, that the self-awareness mechanism seems to fail. Higher-functioning individuals on the ASD spectrum have reported that they are more self-aware when alone, unless they are in sensory overload or immediately following social exposure (Progress in Autism, 2011). Self-awareness dissipates when a person with autism is faced with a demanding social situation. This theory suggests that this happens because of the behavioral inhibitory system, which is responsible for self-preservation; it is the system that prevents humans from self-harm, such as jumping out of a speeding bus or putting a hand on a hot stove. Once a dangerous situation is perceived, the behavioral inhibitory system kicks in and restrains our activities. "For individuals with ASD, this inhibitory mechanism is so powerful, it operates on the least possible trigger and shows an over sensitivity to impending danger and possible threats" (Progress in Autism, 2011). Some of these perceived dangers may be the presence of strangers or a loud noise from a radio. In these situations self-awareness can be compromised by the drive for self-preservation, which trumps social composure and proper interaction.
The Hobson hypothesis proposes that autism begins in infancy due to a lack of cognitive and linguistic engagement, which in turn results in impaired reflective self-awareness. In one study, ten children with Asperger's syndrome were examined using the Self-understanding Interview, created by Damon and Hart, which focuses on seven core areas or schemas that measure the capacity to think at increasingly sophisticated levels and estimates the level of self-understanding present. "The study showed that the Asperger group demonstrated impairment in the 'self-as-object' and 'self-as-subject' domains of the Self-understanding Interview, which supported Hobson's concept of an impaired capacity for self-awareness and self-reflection in people with ASD" (Jackson, Skirrow, Hare, 2012). Self-understanding is a self-description encompassing an individual's past, present, and future; without self-understanding, it is reported, self-awareness is lacking in people with ASD.
Joint attention (JA) was developed as a teaching strategy to help increase positive self-awareness in those with autism spectrum disorders (Wehmeyer and Shogren, 2008). JA strategies were first used to teach directly about reflected mirror images and how they relate to one's own reflection. Mirror Self-Awareness Development (MSAD) activities were used as a four-step framework for measuring increases in self-awareness in those with ASD. Self-awareness and self-knowledge are not something that can simply be taught through direct instruction; instead, students acquire this knowledge by interacting with their environment (Wehmeyer and Shogren, 2008). Mirror understanding and its relation to the development of self lead to measurable increases in self-awareness in those with ASD, and it also proves to be a highly engaging and highly preferred tool for understanding the developmental stages of self-awareness.
There have been many theories and studies on the degree of self-awareness displayed among people with autism spectrum disorders. Scientists have researched the various parts of the brain associated with understanding the self and self-awareness, and studies have shown evidence of brain areas impacted by ASD. Other work suggests that helping individuals learn more about themselves through joint activities, such as Mirror Self-Awareness Development, may help teach positive self-awareness and growth. Building self-awareness can in turn build self-esteem and self-acceptance, which can help individuals with ASD relate better to their environment and have better social interactions with others.
Sleep
Contents
1 Physiology
1.1 Stages
1.2 NREM sleep
1.3 REM sleep
1.4 Timing
1.5 Optimal amount in humans
2 Naps
3 Sleep debt
4 Genetics
5 Functions
5.1 Restoration
5.2 Ontogenesis
5.3 Memory processing
5.4 Preservation
6 Dreaming
7 Evolution
8 Insomnia
9 Obstructive sleep apnea
10 Other sleep disorders
11 Effect of food and drink on sleep
11.1 Hypnotics
11.2 Stimulants
12 Anthropology of sleep
13 Sleep in other animals
In animals, sleep is a naturally recurring state characterized by altered consciousness,
relatively inhibited sensory activity, and inhibition of nearly all voluntary muscles.[1] It is
distinguished from wakefulness by a decreased ability to react to stimuli, and it is more
easily reversible than being in hibernation or a coma.
During sleep, most systems in an animal are in a heightened anabolic state,
accentuating the growth and rejuvenation of, e.g., the immune, nervous, skeletal and
muscular systems. It is observed in mammals, birds, reptiles, amphibians and fish, and
in some form also in insects and even simpler animals such as nematodes (see the
related article Sleep (non-human)), suggesting that sleep is universal in the animal
kingdom.
The purposes and mechanisms of sleep are only partially clear and the subject of
substantial ongoing research.[2] Sleep is sometimes thought to help conserve energy,
though this theory is not fully adequate as it only decreases metabolism by about 5–10%.[3][4] Additionally, it is observed that mammals require sleep even during the hypometabolic state of hibernation, in which circumstance it is actually a net loss of energy as the animal returns from hypothermia to euthermia in order to sleep.[5]
Humans may suffer from a number of sleep disorders. These include such dyssomnias as insomnia, hypersomnia, and sleep apnea; such parasomnias as sleepwalking and REM behavior disorder; and the circadian rhythm sleep disorders.
Physiology
In mammals and birds, sleep is divided into two broad types: rapid eye movement
(REM sleep) and non-rapid eye movement (NREM or non-REM sleep). Each type has
a distinct set of associated physiological and neurological features. REM sleep is
associated with the capability of dreaming.[6] The American Academy of Sleep
Medicine (AASM) divides NREM into three stages: N1, N2, and N3, the last of which is
also called delta sleep or slow-wave sleep.[7]
Stages
-NREM stage 1: This is a stage between sleep and wakefulness. The muscles
are active, and the eyes roll slowly, opening and closing moderately.
-NREM stage 2: This stage is characterized by theta activity. It gradually becomes harder to awaken the sleeper; the alpha waves of the previous stage are interrupted by abrupt activity called sleep spindles and K-complexes.[8]
-NREM stage 3: Formerly divided into stages 3 and 4, this stage is called slow-wave sleep (SWS). SWS is initiated in the preoptic area and consists of delta activity, high-amplitude waves at less than 3.5 Hz. The sleeper is less responsive to the environment; many environmental stimuli no longer produce any reactions.
-REM: The sleeper now enters rapid eye movement (REM) where most
muscles are paralyzed. REM sleep is turned on by acetylcholine secretion and
is inhibited by neurons that secrete serotonin. This level is also referred to as
paradoxical sleep because the sleeper, although exhibiting EEG waves similar
to a waking state, is harder to arouse than at any other sleep stage. Vital signs
indicate arousal and oxygen consumption by the brain is higher than when the
sleeper is awake.[9] An adult reaches REM approximately every 90 minutes,
with the latter half of sleep being more dominated by this stage. REM sleep
occurs as a person returns to stage 1 from a deep sleep.[6] The function of
REM sleep is uncertain but a lack of it will impair the ability to learn complex
tasks. One approach to understanding the role of sleep is to study the
deprivation of it.[10] During this period, the EEG pattern returns to high-frequency waves which look similar to the waves produced while the person is awake.[8]
Sleep proceeds in cycles of REM and NREM, usually four or five of them per night, the order normally being N1 → N2 → N3 → N2 → REM. There is a greater amount of deep sleep (stage N3) earlier in the night, while the proportion of REM sleep increases in the two cycles just before natural awakening.
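
To make the cycle ordering above concrete, the following minimal sketch in Python generates a schematic night from that N1 → N2 → N3 → N2 → REM pattern. The function name and the per-stage minutes are illustrative assumptions chosen only to reflect the qualitative trend described here (more N3 early, more REM late); they are not measured values.

    # Minimal sketch of a night's sleep architecture based on the cycle order
    # quoted above (N1 -> N2 -> N3 -> N2 -> REM, four or five cycles per night,
    # more N3 early and more REM late). Durations are illustrative assumptions.

    def sketch_night(n_cycles: int = 5, cycle_minutes: int = 90) -> list:
        """Return (stage, minutes) pairs for a schematic night of sleep."""
        night = []
        for i in range(n_cycles):
            late = i / max(n_cycles - 1, 1)      # 0.0 first cycle, 1.0 last
            rem = int(10 + 30 * late)            # REM share grows late in the night
            n3 = int(40 * (1 - late))            # deep sleep shrinks late in the night
            n1 = 5
            n2 = cycle_minutes - (n1 + n3 + rem) # N2 fills the remainder of the cycle
            night += [("N1", n1), ("N2", n2 // 2), ("N3", n3),
                      ("N2", n2 - n2 // 2), ("REM", rem)]
        return night

    if __name__ == "__main__":
        for stage, minutes in sketch_night():
            print(f"{stage}: {minutes} min")

Running the sketch simply prints a stage-by-stage timeline in which the N3 blocks shrink and the REM blocks grow across successive cycles, mirroring the description above.
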
The stages of sleep were first described in 1937 by Alfred Lee Loomis and his coworkers, who separated the different electroencephalography (EEG) features of sleep into five levels (A to E), which represented the spectrum from wakefulness to deep sleep.[11] In 1953, REM sleep was discovered as distinct, and thus William Dement and Nathaniel Kleitman reclassified sleep into four NREM stages and REM.[12] The staging criteria were standardized in 1968 by Allan Rechtschaffen and Anthony Kales in the "R&K sleep scoring manual."[13] In the R&K standard, NREM sleep was divided into four stages, with slow-wave sleep comprising stages 3 and 4. In stage 3, delta waves made up less than 50% of the total wave patterns, while they made up more than 50% in stage 4. Furthermore, REM sleep was sometimes referred to as stage 5.
In 2004, the AASM commissioned the AASM Visual Scoring Task Force to review the
R&K scoring system. The review resulted in several changes, the most significant
being the combination of stages 3 and 4 into Stage N3. The revised scoring was
published in 2007 as The AASM Manual for the Scoring of Sleep and Associated
Events.[14] Arousals and respiratory, cardiac, and movement events were also
added.[15][16] Sleep stages and other characteristics of sleep are commonly
assessed by polysomnography in a specialized sleep laboratory. Measurements taken
include EEG of brain waves, electrooculography (EOG) of eye movements, and
electromyography (EMG) of skeletal muscle activity. In humans, the average length of
the first sleep cycle is approximately 90 minutes and 100 to 120 minutes from the
second to the fourth cycle, which is usually the last one.[17] Each stage may have a
distinct physiological function and this can result in sleep that exhibits loss of
consciousness but does not fulfill its physiological functions (i.e., one may still feel tired
after apparently sufficient sleep).
Scientific studies on sleep have shown that sleep stage at awakening is an important
factor in amplifying sleep inertia. Alarm clocks involving sleep stage monitoring
appeared on the market in 2005.[18] Using sensing technologies such as EEG
electrodes or accelerometers, these alarm clocks are supposed to wake people only
from light sleep.
NREM sleep
According to 2007 AASM standards, NREM consists of three stages. There is relatively
little dreaming in NREM.
Stage N1 refers to the transition of the brain from alpha waves having a frequency of
8–13 Hz (common in the awake state) to theta waves having a frequency of 4–7 Hz.
This stage is sometimes referred to as somnolence or drowsy sleep. Sudden twitches
and hypnic jerks, also known as positive myoclonus, may be associated with the onset
of sleep during N1. Some people may also experience hypnagogic hallucinations
during this stage. During N1, the subject loses some muscle tone and most conscious
awareness of the external environment.
Stage N2 is characterized by sleep spindles ranging from 11 to 16 Hz (most commonly
12–14 Hz) and K-complexes. During this stage, muscular activity as measured by EMG
decreases, and conscious awareness of the external environment disappears. This
stage occupies 45–55% of total sleep in adults.
Stage N3 (deep or slow-wave sleep) is characterized by the presence of a minimum of
20% delta waves ranging from 0.5–2 Hz and having a peak-to-peak amplitude >75 μV.
(EEG standards define delta waves to be from 0 to 4 Hz, but sleep standards in both
the original R&K, as well as the new 2007 AASM guidelines have a range of 0.5–2 Hz.)
This is the stage in which parasomnias such as night terrors, nocturnal enuresis,
sleepwalking, and somniloquy occur. Many illustrations and descriptions still show a
stage N3 with 20–50% delta waves and a stage N4 with greater than 50% delta waves;
these have been combined as stage N3.
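
The stage descriptions above reduce to frequency bands and an amplitude threshold. As a rough illustration only (real AASM scoring works on 30-second epochs and also uses spindles, K-complexes, EOG, and EMG, none of which are modeled here), a minimal Python sketch of that band logic might look like the following; the function name and the simplified decision order are assumptions made for this example.

    # Minimal sketch: map a dominant EEG frequency (Hz) and delta content to a
    # rough stage label, using the band values quoted above (alpha 8-13 Hz,
    # theta 4-7 Hz, spindles 11-16 Hz, delta 0.5-2 Hz with >75 uV amplitude).
    # This illustrates the numeric criteria only; it is not a scoring tool.

    def rough_nrem_label(dominant_hz: float,
                         delta_fraction: float,
                         delta_amplitude_uv: float) -> str:
        """Return a rough stage label for a single epoch (illustrative only)."""
        # N3: at least 20% of the epoch is delta activity (0.5-2 Hz, >75 uV p-p).
        if delta_fraction >= 0.20 and delta_amplitude_uv > 75:
            return "N3 (slow-wave sleep)"
        # N2 territory: sleep-spindle range (11-16 Hz); overlaps alpha, checked first.
        if 11 <= dominant_hz <= 16:
            return "N2 (sleep spindles / K-complexes expected)"
        # N1: theta (4-7 Hz) replacing waking alpha (8-13 Hz).
        if 4 <= dominant_hz <= 7:
            return "N1 (drowsy, theta)"
        if 8 <= dominant_hz <= 13:
            return "wake (alpha)"
        return "unscored"

    if __name__ == "__main__":
        print(rough_nrem_label(6.0, 0.05, 40))   # -> N1 (drowsy, theta)
        print(rough_nrem_label(13.0, 0.25, 90))  # -> N3 (slow-wave sleep)
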
REM sleep
Rapid eye movement sleep, or REM sleep (also known as paradoxical sleep),[19]
accounts for 20–25% of total sleep time in most human adults. The criteria for REM
sleep include rapid eye movements as well as a rapid low-voltage EEG. During REM
sleep, EEG patterns return to higher-frequency saw-tooth waves. Most memorable
dreaming occurs in this stage. At least in mammals, a descending muscular atonia is
seen. Such paralysis may be necessary to protect organisms from self-damage
through physically acting out scenes from the often-vivid dreams that occur during this
stage.
Timing
Sleep timing is controlled by the circadian clock, sleep-wake homeostasis, and in
humans, within certain bounds, willed behavior. The circadian clock—an inner
timekeeping, temperature-fluctuating, enzyme-controlling device—works in tandem
with adenosine, a neurotransmitter that inhibits many of the bodily processes
associated with wakefulness. Adenosine is created over the course of the day; high
levels of adenosine lead to sleepiness.[20]
In diurnal animals, sleepiness occurs as the circadian element causes the release of
the hormone melatonin and a gradual decrease in core body temperature. The timing is
affected by one's chronotype. It is the circadian rhythm that determines the ideal timing
of a correctly structured and restorative sleep episode.[21]
Homeostatic sleep propensity (the need for sleep as a function of the amount of time
elapsed since the last adequate sleep episode) must be balanced against the circadian
element for satisfactory sleep.[22] Along with corresponding messages from the
circadian clock, this tells the body it needs to sleep.[23] Sleep offset (awakening) is
primarily determined by circadian rhythm. A person who regularly awakens at an early
hour will generally not be able to sleep much later than his or her normal waking time,
even if moderately sleep-deprived[citation needed].
Sleep duration is affected by the gene DEC2. Some people have a mutation of this
gene; they sleep two hours less than normal. Neurology professor Ying-Hui Fu and her
colleagues bred mice that carried the DEC2 mutation and slept less than normal
mice.[24][25]
Optimal amount in humans
Adult
The optimal amount of sleep is not a meaningful concept unless the timing of that sleep
is seen in relation to an individual's circadian rhythms. A person's major sleep episode
is relatively inefficient and inadequate when it occurs at the "wrong" time of day; one
should be asleep at least six hours before the lowest body temperature.[27] The timing
is correct when the following two circadian markers occur after the middle of the sleep
episode and before awakening:[28] maximum concentration of the hormone melatonin,
and minimum core body temperature.
Human sleep needs can vary by age and among individuals, and sleep is considered to
be adequate when there is no daytime sleepiness or dysfunction. Moreover, self-reported sleep duration is only moderately correlated with actual sleep time as measured by actigraphy,[29] and those affected by sleep state misperception may typically report having slept only four hours despite having slept a full eight hours.[30]
A University of California, San Diego psychiatry study of more than one million adults
found that people who live the longest self-report sleeping for six to seven hours each
night.[31] Another study of sleep duration and mortality risk in women showed similar
results.[32] Other studies show that "sleeping more than 7 to 8 hours per day has been
consistently associated with increased mortality," though this study suggests the cause
is probably other factors such as depression and socioeconomic status, which would
correlate statistically.[33] It has been suggested that the correlation between lower
sleep hours and reduced morbidity only occurs with those who wake naturally, rather
than those who use an alarm.
Researchers at the University of Warwick and University College London have found
that lack of sleep can more than double the risk of death from cardiovascular disease,
but that too much sleep can also be associated with a doubling of the risk of death,
though not primarily from cardiovascular disease.[34][35]
Professor Francesco Cappuccio said, "Short sleep has been shown to be a risk factor
for weight gain, hypertension, and Type 2 diabetes, sometimes leading to mortality; but
in contrast to the short sleep-mortality association, it appears that no potential
mechanisms by which long sleep could be associated with increased mortality have yet
been investigated. Some candidate causes for this include depression, low
socioeconomic status, and cancer-related fatigue... In terms of prevention, our findings
indicate that consistently sleeping around seven hours per night is optimal for health,
and a sustained reduction may predispose to ill health."
Furthermore, sleep difficulties are closely associated with psychiatric disorders such as
depression, alcoholism, and bipolar disorder.[36] Up to 90% of adults with depression
are found to have sleep difficulties. Dysregulation found on EEG includes disturbances
in sleep continuity, decreased delta sleep and altered REM patterns with regard to
latency, distribution across the night and density of eye movements.[37]
Hours required by age
Children need more sleep per day in order to develop and function properly: up to 18
hours for newborn babies, with a declining rate as a child ages.[23] A newborn baby
spends almost 9 hours a day in REM sleep. By the age of five or so, only slightly over
two hours is spent in REM. Studies say that school age children need about 10 to 11
hours of sleep.[38]
Age and condition: Sleep needs
-Newborns (0–2 months): 12 to 18 hours[39]
-Infants (3–11 months): 14 to 15 hours[39]
-Toddlers (1–3 years): 12 to 14 hours[39]
-Preschoolers (3–5 years): 11 to 13 hours[39]
-School-age children (5–10 years): 10 to 11 hours[39]
-Adolescents (10–17 years): 8.5 to 9.25 hours[39][40]
-Adults, including elderly: 7 to 9 hours[39]
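
The list above is effectively a lookup from age group to a recommended range of hours. A minimal Python sketch of such a lookup, with the ranges copied from the list and the age boundaries treated as approximate assumptions (the original groups overlap at their edges), is shown below; the function name is chosen here for illustration.

    # Minimal sketch: recommended sleep ranges by age in years, taken from the
    # list above. Boundaries between adjacent groups are approximate.

    SLEEP_NEEDS_HOURS = [
        (0.17, (12.0, 18.0)),   # newborns, 0-2 months
        (1.0,  (14.0, 15.0)),   # infants, 3-11 months
        (3.0,  (12.0, 14.0)),   # toddlers, 1-3 years
        (5.0,  (11.0, 13.0)),   # preschoolers, 3-5 years
        (10.0, (10.0, 11.0)),   # school-age children, 5-10 years
        (17.0, (8.5, 9.25)),    # adolescents, 10-17 years
    ]
    ADULT_RANGE = (7.0, 9.0)    # adults, including elderly

    def recommended_sleep(age_years: float) -> tuple:
        """Return the (min, max) recommended hours of sleep for a given age."""
        for upper_age, hours in SLEEP_NEEDS_HOURS:
            if age_years <= upper_age:
                return hours
        return ADULT_RANGE

    if __name__ == "__main__":
        print(recommended_sleep(0.5))   # -> (14.0, 15.0)
        print(recommended_sleep(30))    # -> (7.0, 9.0)
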
Naps
The siesta habit has recently been associated with a 37% reduction in coronary mortality, possibly due to reduced cardiovascular stress mediated by daytime sleep.[41] Nevertheless, epidemiological studies on the relations between cardiovascular health and siestas have led to conflicting conclusions, possibly because of poor control of moderator variables, such as physical activity. It is possible that people who take siestas have different physical activity habits, e.g., waking earlier and scheduling more activity during the morning. Such differences in physical activity may mediate different 24-hour profiles in cardiovascular function. Even if such effects of physical activity can be discounted for explaining the relationship between siestas and cardiovascular health, it is still unknown whether it is the daytime nap itself, a supine posture, or the expectancy of a nap that is the most important factor. It was recently suggested that a short nap can reduce stress and blood pressure (BP), with the main changes in BP occurring between the time of lights off and the onset of stage 1.[42][43]
Dr. Zaregarizi and his team have concluded that the acute time of falling asleep is when beneficial cardiovascular changes take place. The study indicated that a large decline in BP occurs during the daytime sleep-onset period only when sleep is expected; when subjects rest in a supine position, the same reduction in BP is not observed. This BP reduction may be associated with the lower coronary mortality rates seen in Mediterranean and Latin American populations in which siestas are common. Dr. Zaregarizi assessed cardiovascular function (BP, heart rate, and measurements of blood vessel dilation) while nine healthy volunteers, 34 years of age on average, spent an hour standing quietly, reclining at rest but not sleeping, or reclining to nap. All participants were restricted to 4 hours of sleep on the night prior to each of the sleep laboratory tests. During the three phases of daytime sleep, the researchers noted significant reductions in BP and heart rate. By contrast, they did not observe changes in cardiovascular function while the participants were standing or reclining at rest. These findings also show that the greatest decline in BP occurs between lights-off and the onset of daytime sleep itself. During this sleep period, which lasted 9.7 minutes on average, BP decreased, while blood vessel dilation increased by more than 9 percent. "There is little change in blood pressure once a subject is actually asleep," Dr. Zaregarizi noted, and he found only minor changes in blood vessel dilation during sleep.[42][43]
Kaul et al. found that sleep duration in long-term experienced meditators was lower
than in non-meditators and general population norms, with no apparent decrements in
vigilance.[44]
Sleep debt
Sleep debt is the effect of not getting enough sleep; a large debt causes mental,
emotional and physical fatigue.
Sleep debt results in diminished abilities to perform high-level cognitive functions.
Neurophysiological and functional imaging studies have demonstrated that frontal
regions of the brain are particularly responsive to homeostatic sleep pressure.[45]
Scientists do not agree on how much sleep debt it is possible to accumulate; whether it
is accumulated against an individual's average sleep or some other benchmark; nor on
whether the prevalence of sleep debt among adults has changed appreciably in the
industrialized world in recent decades. It is likely that children are sleeping less than
previously in Western societies.[46]
Genetics
It is hypothesized that a considerable amount of sleep-related behavior, such as when
and how long a person needs to sleep, is regulated by genetics. Researchers have
discovered some evidence that seems to support this assumption.[47] ABCC9 is one gene that has been found to influence the duration of human sleep.[48]
Functions
The multiple hypotheses proposed to explain the function of sleep reflect the
incomplete understanding of the subject. (When asked, after 50 years of research,
what he knew about the reason people sleep, William Dement, founder of Stanford
University's Sleep Research Center, answered, "As far as I know, the only reason we
need to sleep that is really, really solid is because we get sleepy.")[49] It is likely that
sleep evolved to fulfill some primeval function and took on multiple functions over
time[citation needed] (analogous to the larynx, which controls the passage of food and
air, but descended over time to develop speech capabilities).
If sleep were not essential, one would expect to find:
-Animal species that do not sleep at all
-Animals that do not need recovery sleep after staying awake longer than usual
-Animals that suffer no serious consequences as a result of lack of sleep
Outside of a few basal animals that have no brain or a very simple one, no animals
have been found to date that satisfy any of these criteria.[50] While some varieties of
shark, such as great whites and hammerheads, must remain in motion at all times to
move oxygenated water over their gills, it is possible they still sleep one cerebral
hemisphere at a time as marine mammals do. However it remains to be shown
definitively whether any fish is capable of unihemispheric sleep.
Some of the many proposed functions of sleep are as follows:
Restoration
Wound healing has been shown to be affected by sleep. A study conducted by
Gumustekin et al.[51] in 2004 shows sleep deprivation hindering the healing of burns
on rats.
It has been shown that sleep deprivation affects the immune system. In a study by
Zager et al. in 2007,[52] rats were deprived of sleep for 24 hours. When compared with
a control group, the sleep-deprived rats' blood tests indicated a 20% decrease in white
blood cell count, a significant change in the immune system. It is now possible to state
that "sleep loss impairs immune function and immune challenge alters sleep," and it
has been suggested that mammalian species which invest in longer sleep times are
investing in the immune system, as species with the longer sleep times have higher
white blood cell counts.[53] Sleep has also been theorized to effectively combat the
accumulation of free radicals in the brain, by increasing the efficiency of endogenous
antioxidant mechanisms.[54]
The effect of sleep duration on somatic growth is not completely known. One study by
Jenni et al.[55] in 2007 recorded growth, height, and weight, as correlated to parent-reported time in bed in 305 children over a period of nine years (age 1–10). It was
found that "the variation of sleep duration among children does not seem to have an
effect on growth." It has been shown that sleep—more specifically, slow-wave sleep
(SWS)—does affect growth hormone levels in adult men. During eight hours' sleep,
Van Cauter, Leproult, and Plat[56] found that the men with a high percentage of SWS
(average 24%) also had high growth hormone secretion, while subjects with a low
percentage of SWS (average 9%) had low growth hormone secretion.
There is some supporting evidence of the restorative function of sleep. The sleeping
brain has been shown to remove metabolic waste products at a faster rate than during
an awake state.[57] While awake, metabolism generates reactive oxygen species,
which are damaging to cells. In sleep, metabolic rates decrease and reactive oxygen
species generation is reduced allowing restorative processes to take over. It is
theorized that sleep helps facilitate the synthesis of molecules that help repair and
protect the brain from these harmful elements generated during waking.[58] The
metabolic phase during sleep is anabolic; anabolic hormones such as growth
hormones (as mentioned above) are secreted preferentially during sleep. The duration
of sleep among species is, broadly speaking, inversely related to animal size[citation
needed] and directly related to basal metabolic rate. Rats, which have a high basal
metabolic rate, sleep for up to 14 hours a day, whereas elephants and giraffes, which
have lower BMRs, sleep only 3–4 hours per day.
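The claimed direction of this relationship can be illustrated with a toy calculation. In the sketch below, only the rat (about 14 hours) and elephant/giraffe (3–4 hours) sleep figures come from the text above; the other species and all of the mass-specific BMR values are hypothetical placeholders chosen purely for illustration.

```python
# Toy illustration of the claimed trend: species with higher mass-specific
# basal metabolic rate (BMR) tend to sleep longer. Only the rat (14 h) and
# elephant/giraffe (3-4 h) figures come from the text; the BMR values and
# the remaining species are hypothetical placeholders.
import numpy as np

species = ["rat", "cat", "human", "giraffe", "elephant"]
bmr_w_per_kg = np.array([7.5, 3.0, 1.2, 0.6, 0.5])   # hypothetical mass-specific BMR
sleep_hours = np.array([14.0, 12.5, 8.0, 4.0, 3.5])  # rat and large herbivores per text

r = np.corrcoef(bmr_w_per_kg, sleep_hours)[0, 1]
print(f"Pearson r between mass-specific BMR and daily sleep: {r:.2f}")
# A strongly positive r reproduces the direction described above: sleep duration
# directly related to BMR and inversely related to body size.
```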
Energy conservation could just as well be accomplished by resting quietly without shutting the organism off from its environment, which is a potentially dangerous state; a sedentary, non-sleeping animal is more likely to survive predators while still conserving energy. Sleep, therefore, seems to serve some purpose, or purposes, other than simply conserving energy; for example, animals waking up from hibernation go into rebound sleep because of the lack of sleep during the hibernation period. They are definitely well rested and conserving energy during hibernation, but need sleep for something else.[5] Rats kept awake indefinitely develop skin lesions,
hyperphagia, loss of body mass, hypothermia, and, eventually, fatal sepsis.[59]
Ontogenesis
According to the ontogenetic hypothesis of REM sleep, the activity occurring during
neonatal REM sleep (or active sleep) seems to be particularly important to the
developing organism (Marks et al., 1995). Studies investigating the effects of
deprivation of active sleep have shown that deprivation early in life can result in
behavioral problems, permanent sleep disruption, decreased brain mass (Mirmiran et
al., 1983), and an abnormal amount of neuronal cell death.[60]
REM sleep appears to be important for development of the brain. REM sleep occupies the majority of sleep time in infants, who spend most of their time sleeping. Among
different species, the more immature the baby is born, the more time it spends in REM
sleep. Proponents also suggest that REM-induced muscle inhibition in the presence of
brain activation exists to allow for brain development by activating the synapses, yet
without any motor consequences that may get the infant in trouble. Additionally, REM
deprivation results in developmental abnormalities later in life.
However, this does not explain why older adults still need REM sleep. Aquatic mammal
infants do not have REM sleep in infancy;[61] REM sleep in those animals increases as
they age.
Memory processing
Scientists have shown numerous ways in which sleep is related to memory. In a study
conducted by Turner, Drummond, Salamat, and Brown (2007),[62] working memory
was shown to be affected by sleep deprivation. Working memory is important because
it keeps information active for further processing and supports higher-level cognitive
functions such as decision making, reasoning, and episodic memory. The study
allowed 18 women and 22 men to sleep only 26 minutes per night over a four-day
period. Subjects were given initial cognitive tests while well-rested, and then were
tested again twice a day during the four days of sleep deprivation. On the final test, the
average working memory span of the sleep-deprived group had dropped by 38% in
comparison to the control group.
The relation between working memory and sleep can also be explored by testing how
working memory works during sleep. Daltrozzo, Claude, Tillmann, Bastuji, and Perrin,[63] using event-related potentials to the perception of sentences during sleep, showed that working memory for linguistic information is partially preserved during sleep, with a smaller capacity compared to wakefulness.
Memory seems to be affected differently by certain stages of sleep such as REM and
slow-wave sleep (SWS). In one study,[64] multiple groups of human subjects were
used: wake control groups and sleep test groups. Sleep and wake groups were taught
a task and were then tested on it, both on early and late nights, with the order of nights
balanced across participants. When the subjects' brains were scanned during sleep,
hypnograms revealed that SWS was the dominant sleep stage during the early night,
representing around 23% on average for sleep stage activity.
The early-night test group performed 16% better on the declarative memory test than
the control group. During late-night sleep, REM became the most active sleep stage at
about 24%, and the late-night test group performed 25% better on the procedural
memory test than the control group. This indicates that procedural memory benefits
from late, REM-rich sleep, whereas declarative memory benefits from early, slow-wave-rich sleep.
A study conducted by Datta[65] indirectly supports these results. The subjects chosen
were 22 male rats. A box was constructed wherein a single rat could move freely from
one end to the other. The bottom of the box was made of a steel grate. A light would
shine in the box accompanied by a sound. After a five-second delay, an electrical
shock would be applied. Once the shock commenced, the rat could move to the other
end of the box, ending the shock immediately. The rat could also use the five-second
delay to move to the other end of the box and avoid the shock entirely.
The length of the shock never exceeded five seconds. This was repeated 30 times for
half the rats. The other half, the control group, was placed in the same trial, but the rats
were shocked regardless of their reaction. After each of the training sessions, the rat
would be placed in a recording cage for six hours of polygraphic recordings.
This process was repeated for three consecutive days. This study found that during the post-trial sleep recording session, rats spent 25.47% more time in REM sleep after
learning trials than after control trials. These trials support the results of the Born et al.
study, indicating an obvious correlation between REM sleep and procedural
knowledge.
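The avoidance trial just described (a light-and-sound cue, a five-second delay before shock, escape by crossing the box, and a control group shocked regardless of behavior) can be sketched as a small simulation. This is only an illustration of the trial logic; the crossing times are random placeholders, not data from Datta's study.

```python
# Minimal sketch of the two-way avoidance trial logic described above. The 5 s
# cue-to-shock delay and the 5 s maximum shock come from the text; the simulated
# crossing times are random placeholders, not data from the study.
import random

CUE_TO_SHOCK_S = 5.0   # delay between light/sound cue and shock onset
MAX_SHOCK_S = 5.0      # the shock never exceeds five seconds

def run_trial(learning_group: bool) -> str:
    """Return the outcome of one trial for a single rat."""
    crossing_time = random.uniform(0.0, 12.0)   # time to cross to the other end
    if not learning_group:
        # Control rats are shocked regardless of their reaction.
        return "shocked (control)"
    if crossing_time <= CUE_TO_SHOCK_S:
        return "avoided"                         # crossed during the delay
    if crossing_time <= CUE_TO_SHOCK_S + MAX_SHOCK_S:
        return "escaped"                         # crossing terminates the shock
    return "shocked (full 5 s)"                  # endured the maximum shock

random.seed(0)
for group, is_learning in [("learning", True), ("control", False)]:
    outcomes = [run_trial(is_learning) for _ in range(30)]   # 30 trials per session
    print(group, {o: outcomes.count(o) for o in set(outcomes)})
```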
An observation of the Datta study is that the learning group spent 180% more time in
SWS than did the control group during the post-trial sleep-recording session. This
phenomenon is supported by a study performed by Kudrimoti, Barnes, and
McNaughton.[66] This study showed that after spatial exploration activity, patterns of hippocampal place-cell activity are reactivated during the SWS that follows the experiment. In this study, seven rats were run along a linear track with rewards on
either end. The rats would then be placed in the track for 30 minutes to allow them to
adjust (PRE), then they ran the track with reward-based training for 30 minutes (RUN),
and then they were allowed to rest for 30 minutes.
During each of these three periods, EEG data were collected for information on the
rats' sleep stages. Kudrimoti et al. computed the mean firing rates of hippocampal
place cells during prebehavior SWS (PRE) and three ten-minute intervals in
postbehavior SWS (POST) by averaging across 22 track-running sessions from seven
rats. The results showed that ten minutes after the trial RUN session, there was a 12%
increase in the mean firing rate of hippocampal place cells from the PRE level. After 20
minutes, the mean firing rate returned rapidly toward the PRE level. The elevated firing
of hippocampal place cells during SWS after spatial exploration could explain why
there were elevated levels of slow-wave sleep in Datta's study, as it also dealt with a
form of spatial exploration.
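The averaging described above can be sketched as follows. The structure (22 sessions from seven rats, one pre-behavior SWS interval and three ten-minute post-behavior SWS intervals, with a transient increase of roughly 12% that fades after about 20 minutes) follows the text; the firing rates themselves are synthetic values generated only to reproduce that shape.

```python
# Sketch of the averaging described above: mean place-cell firing rates in
# pre-behavior SWS (PRE) versus three 10-minute post-behavior SWS intervals
# (POST), averaged over 22 track-running sessions. The rates are synthetic,
# shaped to mimic the reported ~12% transient increase.
import numpy as np

rng = np.random.default_rng(42)
n_sessions = 22

pre = rng.normal(1.00, 0.10, n_sessions)        # mean rate in PRE (Hz), synthetic
post = np.stack([
    pre * rng.normal(1.12, 0.05, n_sessions),   # 0-10 min after RUN: elevated
    pre * rng.normal(1.02, 0.05, n_sessions),   # 10-20 min: returning toward PRE
    pre * rng.normal(1.00, 0.05, n_sessions),   # 20-30 min: back near baseline
])

pre_mean = pre.mean()
for i, interval in enumerate(post, start=1):
    change = 100.0 * (interval.mean() - pre_mean) / pre_mean
    print(f"POST interval {i}: {change:+.1f}% vs PRE")
```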
A study has also been done involving direct current stimulation of the prefrontal cortex to increase the amount of slow oscillations during SWS. The direct current stimulation
greatly enhanced word-pair retention the following day, giving evidence that SWS plays
a large role in the consolidation of episodic memories.[67]
The different studies all suggest that there is a correlation between sleep and the
complex functions of memory. Harvard sleep researchers Saper[68] and Stickgold[69]
point out that an essential part of memory and learning consists of nerve cell dendrites sending information to the cell body to be organized into new neuronal connections. This process demands that no external information be presented to these dendrites, and
it is suggested that this may be why it is during sleep that memories and knowledge are
solidified and organized.
Recent studies examining gene expression and evolutionary increases in brain size offer complementary support for the role of sleep in the mammalian memory
consolidation theory. Evolutionary increases in the size of the mammalian amygdala, a
brain structure active during sleep and involved in memory processing, are associated
with increases in NREM sleep durations.[70] Likewise, nighttime gene expression
differs from daytime expression and specifically targets genes thought to be involved in
memory consolidation and brain plasticity.[71]
Preservation
The "Preservation and Protection" theory holds that sleep serves an adaptive function.
It protects the animal during that portion of the 24-hour day in which being awake, and
hence roaming around, would place the individual at greatest risk.[72] Organisms do
not require 24 hours to feed themselves and meet other necessities. From this
perspective of adaptation, organisms are safer by staying out of harm's way, where
potentially they could be prey to other, stronger organisms. They sleep at times that
maximize their safety, given their physical capacities and their habitats.
This theory fails to explain why the brain disengages from the external environment
during normal sleep. However, the brain consumes a large proportion of the body's energy at any one time, and conserving that energy may only be possible by limiting its sensory inputs. Another argument against the theory is that sleep is not simply a
passive consequence of removing the animal from the environment, but is a "drive";
animals alter their behaviors in order to obtain sleep.
Therefore, circadian regulation is more than sufficient to explain periods of activity and
quiescence that are adaptive to an organism, but the more peculiar specializations of
sleep probably serve different and unknown functions. Moreover, the preservation
theory needs to explain why carnivores like lions, which are on top of the food chain
and thus have little to fear, sleep the most. It has been suggested that they need to
minimize energy expenditure when not hunting.
Preservation also does not explain why aquatic mammals sleep while moving.
Quiescence during these vulnerable hours would serve the same purpose and would be more advantageous, because the animal would still be able to respond to environmental challenges such as predators. The sleep rebound that occurs after a sleepless night would be maladaptive under this view, yet it clearly occurs for a reason. A zebra falling asleep the day after it spent its usual sleeping time running from a lion is more, not less, vulnerable to predation.
Dreaming
Dreaming is the perceived experience of sensory images and sounds during sleep, in a sequence which the dreamer usually perceives more as an apparent participant than as an observer. Dreaming is stimulated by the pons and mostly occurs during the REM phase of sleep.
In his research, Dement found that people need REM, or dreaming, sleep. He conducted a sleep and dream research project in which the results from his first eight participants, all male, were published. For a maximum span of seven days, he selectively deprived the participants of REM sleep alone by waking them each time they started to enter that stage, which he monitored with small electrodes attached to their scalps and temples. As the study went on, he noticed that the more he deprived the men of REM sleep, the more often he had to wake them.[73]
Dreams can also be suppressed or encouraged; taking anti-depressants,
acetaminophen, ibuprofen, or alcohol is thought to potentially suppress dreams,
whereas melatonin may have the ability to encourage them.[74]
People have proposed many hypotheses about the functions of dreaming. Sigmund
Freud postulated that dreams are the symbolic expression of frustrated desires that
have been relegated to the unconscious mind, and he used dream interpretation in the
form of psychoanalysis to uncover these desires. See Freud: The Interpretation of
Dreams.
While penile erections during sleep are commonly believed to indicate dreams with
sexual content, they are not more frequent during sexual dreams than they are during
nonsexual dreams.[75] The parasympathetic nervous system experiences increased
activity during REM sleep, which may cause erection of the penis or clitoris. In males, erections accompany REM sleep 80% to 95% of the time, while only about 12% of men's dreams contain sexual content.[9]
Freud's work concerns the psychological role of dreams, which does not exclude any
physiological role they may have. Recent research[76] claims that sleep has the overall
role of consolidation and organization of synaptic connections formed during learning
and experience. As such, Freud's work is not ruled out. Nevertheless, Freud's research
has been expanded on, especially with regard to the organization and consolidation of
recent memory.
Certain processes in the cerebral cortex have been studied by John Allan Hobson and
Robert McCarley. In their activation synthesis theory, for example, they propose that
dreams are caused by the random firing of neurons in the cerebral cortex during the
REM period. This theory neatly helps explain the irrationality of the mind during REM periods, as, according to it, the forebrain then creates a story in an attempt to reconcile and make sense of the nonsensical sensory information presented to it,[77] hence the odd nature of many dreams.
Evolution
According to Tsoukalas (2012) REM sleep is an evolutionary transformation of a well-known defensive mechanism, the tonic immobility reflex. This reflex, also known as
animal hypnosis or death feigning, functions as the last line of defense against an
attacking predator and consists of the total immobilization of the animal: the animal
appears dead (cf. “playing possum”). The neurophysiology and phenomenology of this
reaction shows striking similarities to REM sleep, a fact which betrays a deep
evolutionary kinship. For example, both reactions exhibit brainstem control, paralysis,
sympathetic activation, and thermoregulatory changes. This theory integrates many earlier findings into a unified and evolutionarily well-informed framework.[78][79]
Mammals, birds and reptiles evolved from amniotic ancestors, the first vertebrates with
life cycles independent of water. The fact that birds and mammals are the only known
animals to exhibit REM and NREM sleep indicates a common trait before
divergence.[80] Reptiles are therefore the most logical group to investigate the origins
of sleep. A new study proposes that the reptilian active state was transformed into
mammalian sleep. Daytime activity in reptiles alternates between basking and short
bouts of active behavior, which has significant neurological and physiological
similarities to sleep states in mammals. It is proposed that REM sleep evolved from
short bouts of motor activity in reptiles while SWS evolved from their basking state
which shows similar slow wave EEG patterns.[81]
Early mammals engaged in polyphasic sleep, dividing sleep into multiple bouts per day.
What then explains the monophasic sleep behavior widely observed in mammals today? The higher daily sleep quotas and shorter sleep cycles of polyphasic species, as compared to monophasic species, suggest that polyphasic sleep may be a less efficient means of attaining sleep's benefits. Small species with higher BMR may therefore have less efficient sleep patterns. It follows that the evolution of monophasic sleep may be a hitherto unknown advantage of evolving larger mammalian body sizes and therefore lower BMR.[82]
Insomnia
Insomnia, a dyssomnia, is a general term describing difficulty falling asleep and staying
asleep. Insomnia can have many different causes, including psychological stress, a
poor sleep environment, an inconsistent sleep schedule, or excessive mental or
physical stimulation in the hours before bedtime. Insomnia is often treated through
behavioral changes like keeping a regular sleep schedule, avoiding stimulating or
stressful activities before bedtime, and cutting down on stimulants such as caffeine.
The sleep environment may be improved by installing heavy drapes to shut out all
sunlight, and keeping computers, televisions and work materials out of the sleeping
area.
A 2010 review of published scientific research suggested that exercise generally
improves sleep for most people, and helps sleep disorders such as insomnia. The
optimum time to exercise may be 4 to 8 hours before bedtime, though exercise at any
time of day is beneficial, with the exception of heavy exercise taken shortly before
bedtime, which may disturb sleep. However there is insufficient evidence to draw
detailed conclusions about the relationship between exercise and sleep.[83]
Sleeping medications such as Ambien and Lunesta are an increasingly popular
treatment for insomnia, and have become a major source of revenue for drug
companies. Although these nonbenzodiazepine medications are generally believed to
be better and safer than earlier generations of sedatives, they have still generated
some controversy and discussion regarding side-effects.
White noise appears to be a promising treatment for insomnia.[84]
Obstructive sleep apnea
Obstructive sleep apnea is a condition in which major pauses in breathing occur during
sleep, disrupting the normal progression of sleep and often causing other more severe
health problems. Apneas occur when the muscles around the patient's airway relax
during sleep, causing the airway to collapse and block the intake of oxygen. As oxygen
levels in the blood drop, the patient then comes out of deep sleep in order to resume
breathing. When several of these episodes occur per hour, sleep apnea rises to a level
of seriousness that may require treatment.
Diagnosing sleep apnea usually requires a professional sleep study performed in a
sleep clinic, because the episodes of wakefulness caused by the disorder are
extremely brief and patients usually do not remember experiencing them. Instead,
many patients simply feel tired after getting several hours of sleep and have no idea
why. Major risk factors for sleep apnea include chronic fatigue, old age, obesity and
snoring.
Other sleep disorders
Sleep disorders include narcolepsy, periodic limb movement disorder (PLMD), restless
leg syndrome (RLS), and the circadian rhythm sleep disorders. Fatal familial insomnia, or FFI, an extremely rare genetic disease with no known treatment or cure, is characterized by increasing insomnia as one of its symptoms; ultimately, sufferers of the disease stop sleeping entirely before dying of it.[49]
Somnambulism, known as sleepwalking, is also a common sleep disorder, especially among children. In somnambulism the individual gets up from sleep and wanders around while still sleeping.[85]
Older people may be more easily awakened by disturbances in the environment[86]
and may to some degree lose the ability to consolidate sleep.
Effect of food and drink on sleep
Hypnotics
-Nonbenzodiazepine hypnotics such as eszopiclone (Lunesta), zaleplon
(Sonata), and zolpidem (Ambien) are commonly used as sleep aids prescribed
by doctors to treat forms of insomnia. Nonbenzodiazepines are the most commonly used prescription and over-the-counter (OTC) sleep aids worldwide, and their use has grown greatly since the 1990s. They target the GABA-A receptor.
-Benzodiazepines also target the GABA-A receptor and, as such, are commonly used sleep aids as well, though benzodiazepines have been found to decrease REM sleep.[87]
-Antihistamines, such as diphenhydramine (Benadryl) and doxylamine (found in
various OTC medicines, such as NyQuil)
-Alcohol – Often, people start drinking alcohol in order to get to sleep (alcohol is
initially a sedative and will cause somnolence, encouraging sleep).[88]
However, being addicted to alcohol can lead to disrupted sleep, because
alcohol has a rebound effect later in the night. As a result, there is strong
evidence linking alcoholism and forms of insomnia.[89] Alcohol also reduces
REM sleep.[87]
-Barbiturates cause drowsiness and have actions similar to alcohol in that they have a rebound effect and inhibit REM sleep, so they are not used as a long-term sleep aid.[90]
-Melatonin is a naturally occurring hormone that regulates sleepiness. It is made
in the brain, where tryptophan is converted into serotonin and then into
melatonin, which is released at night by the pineal gland to induce and maintain
sleep. Melatonin supplementation may be used as a sleep aid, both as a
hypnotic and as a chronobiotic (see phase response curve, PRC).
-Siesta and the "post-lunch dip" – Many people have a temporary drop in
alertness in the early afternoon, commonly known as the "post-lunch dip." While
a large meal can make a person feel sleepy, the post-lunch dip is mostly an
effect of the biological clock. People naturally feel most sleepy (have the
greatest "drive for sleep") at two times of the day about 12 hours apart—for
example, at 2:00 a.m. and 2:00 p.m. At those two times, the body clock "kicks
in." At about 2 p.m. (14:00), it overrides the homeostatic buildup of sleep debt,
allowing several more hours of wakefulness. At about 2 a.m. (02:00), with the daily sleep debt paid off, it "kicks in" again to ensure a few more hours of sleep. (A rough sketch of this clock-versus-sleep-debt interaction appears after this list.)
-Tryptophan – The amino acid tryptophan is a building block of proteins. It has
been claimed to contribute to sleepiness, since it is a precursor of the
neurotransmitter serotonin, involved in sleep regulation. However, no solid data
have ever linked modest dietary changes in tryptophan to changes in sleep.
-Marijuana – Some people use marijuana to induce sleepiness. Users often
report relaxation and drowsiness. It has been shown that tetrahydrocannabinol (THC), the principal psychoactive constituent in marijuana, reduces the amount of REM
sleep.[91] Frequent users often report being unable to recall their dreams.
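The clock-versus-sleep-debt interaction described in the post-lunch-dip entry above can be sketched as two competing drives. Everything concrete below (the functional forms, the constants, the assumed wake and bed times, and the use of a 12-hour cosine for the clock-driven sleepiness) is an illustrative assumption, not an established model taken from the text.

```python
# Illustrative sketch of the interaction described in the post-lunch-dip entry:
# a homeostatic sleep debt that builds across the waking day plus a clock-driven
# sleepiness signal peaking roughly 12 hours apart (about 02:00 and 14:00).
# All constants and functional forms are assumptions chosen only to show the
# shape of the interaction.
import math

def homeostatic_debt(clock_hour: float, wake_hour: float = 7.0, bed_hour: float = 23.0) -> float:
    """Sleep debt: builds while awake, paid off during the night (arbitrary units)."""
    if wake_hour <= clock_hour < bed_hour:
        return 0.05 * (clock_hour - wake_hour)        # builds across the waking day
    hours_asleep = (clock_hour - bed_hour) % 24
    return max(0.0, 0.8 - 0.1 * hours_asleep)         # discharged overnight

def circadian_sleep_drive(clock_hour: float) -> float:
    """Clock-driven sleepiness, peaking near 02:00 and 14:00 as described above
    (a 12-hour cosine is a deliberate simplification of the real 24-hour rhythm)."""
    return 0.4 * math.cos(2 * math.pi * (clock_hour - 2.0) / 12.0)

for hour in range(0, 24, 2):
    total = homeostatic_debt(hour) + circadian_sleep_drive(hour)
    note = {2: "  <- nighttime sleepiness peak", 14: "  <- post-lunch dip"}.get(hour, "")
    print(f"{hour:02d}:00  sleep drive {total:+.2f}{note}")
```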
Stimulants
-Amphetamines (dextroamphetamine and the related, slightly more powerful drug methamphetamine, among others) are used to treat narcolepsy. Their most common
effects are anxiety, insomnia, stimulation, increased alertness, and decreased
hunger.
-Caffeine is a stimulant that works by blocking the action of adenosine, a sleep-promoting neuromodulator in the brain, acting as an antagonist at adenosine receptors. The effective dosage is individual, in part dependent on prior
usage. It can cause a rapid reduction in alertness as it wears off.
-Cocaine and crack cocaine – Studies on cocaine have shown its effects to be
mediated through the circadian rhythm system.[92] This may be related to the hypersomnia (oversleeping) seen in "cocaine-induced sleep disorder."[93]
-MDMA, commonly known as "ecstasy," and similar drugs such as MDA, MMDA, or bk-MDMA belong to the class of drugs called empathogen-entactogens, which keep users awake with intense euphoria.
-Methylphenidate – Commonly known by the brand names Ritalin and
Concerta, methylphenidate is similar in action to amphetamine and cocaine; its
chemical composition more closely resembles that of cocaine.
-Tobacco – Tobacco has been found not only to disrupt sleep but also to reduce total sleep time. In studies, users have reported more daytime drowsiness than nonsmokers.[94]
-Other analeptic drugs, such as modafinil and armodafinil, are prescribed to treat narcolepsy, idiopathic hypersomnia, shift work sleep disorder, and other conditions causing excessive daytime sleepiness. The precise mechanism of
these CNS stimulants is not known, but they have been shown to increase both
the release of monoamines and levels of hypothalamic histamine, thereby
promoting wakefulness.
Anthropology of sleep
Research suggests that sleep patterns vary significantly across cultures.[95][96] The most striking differences are between societies that have plentiful sources of artificial light and ones that do not.[95] The primary difference appears to be that pre-light cultures have more broken-up sleep patterns.[95] For example, people might go to sleep far sooner after the sun sets, but then wake up several times throughout the night, punctuating their sleep with periods of wakefulness, perhaps lasting several hours.[95]
The boundaries between sleeping and waking are blurred in these societies.[95] Some
observers believe that nighttime sleep in these societies is most often split into two
main periods, the first characterized primarily by deep sleep and the second by REM
sleep.[95]
Some societies display a fragmented sleep pattern in which people sleep at all times of
the day and night for shorter periods. In many nomadic or hunter-gatherer societies,
people will sleep on and off throughout the day or night depending on what is
happening.[95] Plentiful artificial light has been available in the industrialized West
since at least the mid-19th century, and sleep patterns have changed significantly
everywhere that lighting has been introduced.[95] In general, people sleep in a more
concentrated burst through the night, going to sleep much later, although this is not
always true.[95]
Historian A. Roger Ekirch argues that the traditional pattern of "segmented sleep," as it is called, began to disappear among the urban upper class in Europe in the late 17th century and that the change spread over the next 200 years; by the 1920s "the idea of a first and second sleep had receded entirely from our social consciousness."[97] Ekirch
attributes the change to increases in "street lighting, domestic lighting and a surge in
coffee houses," which slowly made nighttime a legitimate time for activity, decreasing
the time available for rest.[97]
In some societies, people generally sleep with at least one other person (sometimes
many) or with animals. In other cultures, people rarely sleep with anyone but a most
intimate relation, such as a spouse. In almost all societies, sleeping partners are
strongly regulated by social standards. For example, people might only sleep with their
immediate family, extended family, spouses, their children, children of a certain age,
children of specific gender, peers of a certain gender, friends, peers of equal social
rank, or with no one at all. Sleep may be an actively social time, depending on the
sleep groupings, with no constraints on noise or activity.[95]
People sleep in a variety of locations. Some sleep directly on the ground; others on a
skin or blanket; others sleep on platforms or beds. Some sleep with blankets, some
with pillows, some with simple headrests, some with no head support. These choices
are shaped by a variety of factors, such as climate, protection from predators, housing
type, technology, personal preference, and the incidence of pests.[95]
Sleep in other animals
Neurological sleep states can be
difficult to detect in some animals. In
these cases, sleep may be defined
using behavioral characteristics such
as minimal movement, postures typical
for the species, and reduced
responsiveness to external stimulation.
Sleep is quickly reversible, as opposed
to hibernation or coma, and sleep
deprivation is followed by longer or
deeper rebound sleep. Herbivores,
who require a long waking period to
gather and consume their diet,
typically sleep less each day than
similarly sized carnivores, who might well consume several days' supply of meat in a
sitting.
Horses and other herbivorous ungulates can sleep while standing, but must necessarily
lie down for REM sleep (which causes muscular atony) for short periods. Giraffes, for
example, only need to lie down for REM sleep for a few minutes at a time. Bats sleep
while hanging upside down. Some aquatic mammals and some birds can sleep with
one half of the brain while the other half is awake, so-called unihemispheric slow-wave
sleep.[98] Birds and mammals have cycles of non-REM and REM sleep (as described
above for humans), though birds' cycles are much shorter and they do not lose muscle
tone (go limp) to the extent that most mammals do.
Many mammals sleep for a large proportion of each 24-hour period when they are very
young.[99] However, killer whales and some other dolphins do not sleep during the first
month of life.[100] Instead, young dolphins and whales frequently rest by pressing their bodies against their mother's while she swims; her swimming keeps the offspring afloat and prevents them from drowning. This allows young dolphins and whales to rest, which helps keep their immune systems healthy and, in turn, protects them from illness.[101] During this period, mothers often sacrifice sleep to protect their young from predators. However, unlike other mammals, adult
dolphins and whales are able to go without sleep for a month.[101][102]
Also unlike terrestrial mammals, dolphins, whales, and pinnipeds (seals) cannot go into
a deep sleep. The consequences of falling into a deep sleep for marine mammal species are suffocation and drowning, or becoming easy prey for predators. Thus, dolphins, whales, and seals engage in unihemispheric sleep, which allows one brain hemisphere to remain fully functional while the other sleeps. The sleeping hemisphere alternates, so that both hemispheres can be fully rested.[101][103]
Just like terrestrial mammals, pinnipeds that sleep on land fall into a deep sleep and
both hemispheres of their brain shut down and are in full sleep mode.[104][105]
Dream
Dreams are successions of images, ideas, emotions, and sensations that occur involuntarily in the mind during certain stages of sleep.[1] The content and purpose of dreams are not definitively understood, though they have been a topic of scientific speculation and a subject of philosophical and religious interest throughout recorded history. The scientific study of dreams is called oneirology. Scientists think that all mammals dream, but whether this is true of other animals, such as birds or reptiles, is uncertain.[2]
Dreams mainly occur in the rapid-eye movement (REM) stage of sleep—when brain
activity is high and resembles that of being awake. REM sleep is revealed by
continuous movements of the eyes during sleep. At times, dreams may occur during
other stages of sleep. However, these dreams tend to be much less vivid or
memorable.[3]
Dreams can last for a few seconds, or as long as 20 minutes. People are more likely to
remember the dream if they are awakened during the REM phase. The average person
has three to five dreams per night, but some may have up to seven dreams in one
night. The dreams tend to last longer as the night progresses. During a full eight-hour night's sleep, most dreams occur in the typical two hours of REM.[4]
In modern times, dreams have been seen as a connection to the unconscious mind.
They range from normal and ordinary to overly surreal and bizarre. Dreams can have
varying natures, such as frightening, exciting, magical, melancholic, adventurous, or
sexual. The events in dreams are generally outside the control of the dreamer, with the
exception of lucid dreaming, where the dreamer is self-aware. Dreams can at times
make a creative thought occur to the person or give a sense of inspiration.[5]
Opinions about the meaning of dreams have varied and shifted through time and
culture. Dream interpretations date back to 5000–4000 BC. The earliest recorded
dreams were acquired from materials dating back approximately 5,000 years, in
Mesopotamia, where they were documented on clay tablets.[6] In the Greek and Roman periods, people believed that dreams were direct messages from one or more deities or from deceased persons, and that they predicted the future.
Some cultures practiced dream incubation with the intention of cultivating dreams that
are of prophecy.[7]
Sigmund Freud, who developed the discipline of psychoanalysis, wrote extensively
about dream theories and their interpretations. He explained dreams as manifestations
of our deepest desires and anxieties, often relating to repressed childhood memories or
obsessions. In The Interpretation of Dreams, Freud developed a psychological
technique to interpret dreams and devised a series of guidelines to understand the
symbols and motifs that appear in our dreams.
Contents
1 Cultural meaning
1.1 Ancient history
1.2 Classical history
1.3 In Abrahamic religions
1.4 Dreams and philosophical realism
1.5 Postclassical and medieval history
1.6 In art
1.7 In literature
1.8 In popular culture
2 Dynamic psychiatry
2.1 Freudian view of dreams
2.2 Jungian and other views of dreams
3 The neurobiology of dreaming
4 Dreams in animals
5 Neurological theories of dreams
5.1 Activation synthesis theory
5.2 Continual-activation theory
5.3 Defensive immobilization: the precursor of dreams
5.4 Dreams as excitations of long-term memory
5.5 Dreams for strengthening of semantic memories
5.6 Dreams for removing excess sensory information
6 Psychological theories of dreams
6.1 Dreams for testing and selecting mental schemas
6.2 Evolutionary psychology theories of dreams
6.3 Psychosomatic theory of dreams
6.4 Expectation fulfilment theory of dreams
6.5 Other hypotheses on dreaming
7 Dream content
7.1 Visuals
7.2 Emotions
7.3 Sexual themes
7.4 Color vs. black and white
8 Dream interpretations
8.1 Relationship with medical conditions
9 Other associated phenomena
9.1 Incorporation of reality
9.2 Apparent precognition of real events
9.3 Lucid dreaming
9.3.1 Communication through lucid dreaming
9.3.2 Lucid dreaming as a path to enlightenment
9.4 Dreams of absent-minded transgression
9.5 Recalling dreams
9.5.1 Individual differences
9.6 Déjà vu
9.7 Sleepwalking
9.8 Daydreaming
9.9 Hallucination
9.10 Nightmares
9.11 Night terrors
Cultural meaning
Ancient history
The Dreaming is a common term within the animist creation narrative of indigenous
Australians for a personal, or group, creation and for what may be understood as the
"timeless time" of formative creation and perpetual creating.[8]
The Sumerians in Mesopotamia left evidence of dreams dating back to 3100 BC.
According to these early recorded stories, gods and kings, like the 7th century BC
scholar-king Assurbanipal, paid close attention to dreams. In his archive of clay tablets, portions of the story of the legendary king Gilgamesh were found.[9]
The Mesopotamians believed that the soul, or some part of it, moves out from the body
of the sleeping person and actually visits the places and persons the dreamer sees in
their sleep. Sometimes the god of dreams is said to carry the dreamer.[10] Babylonians
and Assyrians divided dreams into "good," which were sent by the gods, and "bad," sent by demons. They also believed that their dreams were omens and
prophecies.[11]
In ancient Egypt, as far back as 2000 BC, the Egyptians wrote down their dreams on
papyrus. People with vivid and significant dreams were thought blessed and were
considered special.[12] Ancient Egyptians believed that dreams were like oracles,
bringing messages from the gods. They thought that the best way to receive divine
revelation was through dreaming and thus they would induce (or "incubate") dreams.
Egyptians would go to sanctuaries and sleep on special "dream beds" in hope of
receiving advice, comfort, or healing from the gods.[13]
Classical history
In Chinese history, people wrote of two vital aspects of the soul, one of which is freed from the body during slumber to journey in a dream realm while the other remains in the body,[14] although this belief and dream interpretation had been questioned since early times, such as by the philosopher Wang Chong (27–97).[14] The Indian text the Upanishads, written between 900 and 500 BC, emphasizes two meanings of dreams: the first, that dreams are merely expressions of inner desires; the second, the belief that the soul leaves the body and is guided until awakened.
The Greeks shared their beliefs with the Egyptians on how to interpret good and bad
dreams, and the idea of incubating dreams. Morpheus, the Greek god of dreams, also sent warnings and prophecies to those who slept at shrines and temples. The earliest Greek belief about dreams was that the gods physically visited the dreamers, entering through a keyhole and exiting the same way after the divine message was given.
Antiphon wrote the first known Greek book on dreams in the 5th century BC. In that
century, other cultures influenced Greeks to develop the belief that souls left the
sleeping body.[15] Hippocrates (c. 460–370 BC) had a simple dream theory: during the day, the soul receives images; during the night, it produces images. The Greek philosopher Aristotle (384–322 BC) believed dreams caused physiological activity. He thought
dreams could analyze illness and predict diseases. Marcus Tullius Cicero, for his part,
believed that all dreams are produced by thoughts and conversations a dreamer had
during the preceding days.[16]
In Abrahamic religions
In Judaism, dreams are considered
part of the experience of the world that
can be interpreted and from which
lessons can be garnered. Dreams are discussed in the Talmud, Tractate Berachot 55–60.
The ancient Hebrews connected their dreams heavily with their religion, though the Hebrews were monotheistic and believed that dreams were the voice of one god alone. Hebrews also differentiated between good dreams (from God) and bad dreams (from evil spirits). The Hebrews, like many other ancient cultures, incubated dreams in order to receive divine revelation. For example, the Hebrew prophet Samuel would "lie down and sleep in the temple at Shiloh before the Ark and receive the word of the Lord." Most of the dreams in the Bible are in the Book of Genesis.[17]
Christians mostly shared their beliefs with the Hebrews and thought that dreams were
of a supernatural character because the Old Testament includes frequent stories of dreams with divine inspiration. The most famous of these dream stories was Jacob's dream of
a ladder that stretched from Earth to Heaven. Many Christians preach that God can
speak to his people through their dreams.
Iain R. Edgar has researched the role of dreams in Islam.[18] He has argued that
dreams play an important role in the history of Islam and the lives of Muslims. Dream
interpretation is the only way that Muslims can receive revelations from God after the death of the last Prophet, Mohammed.[19]
Dreams and philosophical realism
Some philosophers have concluded that what we think of as the "real world" could be
or is an illusion (an idea known as the skeptical hypothesis about ontology).
The first recorded mention of the idea was by Zhuangzi, and it is also discussed in
Hinduism, which makes extensive use of the argument in its writings.[20] It was
formally introduced to Western philosophy by Descartes in the 17th century in his
Meditations on First Philosophy. A stimulus, usually an auditory one, may become part of a dream, eventually awakening the dreamer.
Postclassical and medieval history
Some Indigenous American tribes and Mexican civilizations believe that dreams are a
way of visiting and having contact with their ancestors.[21] Some Native American
tribes used vision quests as a rite of passage, fasting and praying until an anticipated
guiding dream was received, to be shared with the rest of the tribe upon their
return.[22][23]
The Middle Ages brought a harsh interpretation of dreams. They were seen as evil, and
the images as temptations from the devil. Many believed that during sleep, the devil
could fill the human mind with corrupting and harmful thoughts. Martin Luther, founder
of Protestantism, believed dreams were the work of the Devil. However, Catholics such
as St. Augustine and St. Jerome claimed that the direction of their lives was heavily influenced by their dreams.
In art
Dreams and dark imaginings are the
theme of Goya's etching The Sleep of
Reason Produces Monsters. There is
a painting by Salvador Dalí that
depicts this concept, titled Dream
Caused by the Flight of a Bee around
a Pomegranate a Second Before
Awakening (1944). Rousseau's last
painting was The Dream. Le Rêve
("The Dream") is a 1932 painting by
Pablo Picasso.
In literature
Dream frames were frequently used in medieval allegory to justify the narrative; The Book of the Duchess[24] and The Vision Concerning Piers Plowman[25] are two such dream visions. Even before them, in antiquity, the same device had been used by Cicero and Lucian of Samosata.
They have also featured in fantasy and speculative fiction since the 19th century. One of the best-known dream worlds is Wonderland from Lewis Carroll's Alice's Adventures in Wonderland, as well as Looking-Glass Land from its sequel, Through the Looking-Glass. Unlike many dream worlds, Carroll's logic is like that of actual dreams, with transitions and flexible causality.
Other fictional dream worlds include the Dreamlands of H. P. Lovecraft's Dream Cycle[26] and The Neverending Story's[27] world of Fantasia, which includes places like the Desert of Lost Dreams, the Sea of Possibilities and the Swamps of Sadness. Dreamworlds, shared hallucinations and other alternate realities feature in a number of works by Philip K. Dick, such as The Three Stigmata of Palmer Eldritch and Ubik. Similar themes were explored by Jorge Luis Borges, for instance in The Circular Ruins.
In popular culture
Modern popular culture often conceives of dreams, like Freud, as expressions of the
dreamer's deepest fears and desires.[28] In films such as Spellbound (1945), The
Manchurian Candidate (1962), Field of Dreams (1989), and Inception (2010), the
protagonists must extract vital clues from surreal dreams.[29]
Most dreams in popular culture are, however, not symbolic, but straightforward and
realistic depictions of their dreamer's fears and desires.[29] Dream scenes may be
indistinguishable from those set in the dreamer's real world, a narrative device that
undermines the dreamer's and the audience's sense of security[29] and allows horror
film protagonists, such as those of Carrie (1976), Friday the 13th (1980) or
An American Werewolf in London (1981) to be suddenly attacked by dark forces while
resting in seemingly safe places.[29] Dreams also play a major role in video games.
The Nintendo 3DS game Mario & Luigi: Dream Team follows the adventure of the
Mario Bros. traveling through Luigi's dreams.
In speculative fiction, the line between dreams and reality may be blurred even more in
the service of the story.[29] Dreams may be psychically invaded or manipulated
(Dreamscape, 1984; the Nightmare on Elm Street films, 1984–2010; Inception, 2010)
or even come literally true (as in The Lathe of Heaven, 1971). In Ursula K. Le Guin's
book, The Lathe of Heaven (1971), the protagonist finds that his "effective" dreams can
retroactively change reality. Peter Weir's 1977 Australian film The Last Wave makes a
simple and straightforward postulate about the premonitory nature of dreams (from one
of his Aboriginal characters) that "... dreams are the shadow of something real". Such
stories play to audiences' experiences with their own dreams, which feel just as real to them.[29]
Dynamic psychiatry
Freudian view of dreams
In the late 19th century, psychotherapist Sigmund Freud developed a theory that the
content of dreams is driven by unconscious wish fulfillment. Freud called dreams the
"royal road to the unconscious."[30] He theorized that the content of dreams reflects
the dreamer's unconscious mind and specifically that dream content is shaped by
unconscious wish fulfillment. He argued that important unconscious desires often relate
to early childhood memories and experiences. Freud's theory describes dreams as
having both manifest and latent content. Latent content relates to deep unconscious
wishes or fantasies while manifest content is superficial and meaningless. Manifest
content often masks or obscures latent content.
Freud has two main influential works about dreams in relation to psychoanalysis.
Dream Psychology and The Interpretation of Dreams both had profound impacts on
dream analysis and psychoanalysis. Dream Psychology focused mainly on the amateur psychoanalyst, in an attempt to teach beginners the basics of dream analysis. The book
discusses desires in dreams, particularly sex in dreams, and illustrates Freud's
tendency to focus on the appearance of latent sexual desires.
The Interpretation of Dreams is one of Freud's most well-known works, and focuses on
the content of dreams as well as their relation to the individual's conscious state.
Freud's early work argued that the vast majority of latent dream content is sexual in
nature, but he later shied away from this categorical position. In Beyond the Pleasure
Principle he considered how trauma or aggression could influence dream content. He
also discussed supernatural origins in Dreams and Occultism, a lecture published in
New Introductory Lectures on Psychoanalysis.[31]
Jungian and other views of dreams
Carl Jung rejected many of Freud's theories. Jung expanded on Freud's idea that
dream content relates to the dreamer's unconscious desires. He described dreams as
messages to the dreamer and argued that dreamers should pay attention for their own
good. He came to believe that dreams present the dreamer with revelations that can
uncover and help to resolve emotional or religious problems and fears.[32]
Jung wrote that recurring dreams show up repeatedly to demand attention, suggesting
that the dreamer is neglecting an issue related to the dream. He believed that many of
the symbols or images from these dreams return with each dream. Jung believed that
memories formed throughout the day also play a role in dreaming. These memories
leave impressions for the unconscious to deal with when the ego is at rest. The
unconscious mind re-enacts these glimpses of the past in the form of a dream. Jung
called this a day residue.[33] Jung also argued that dreaming is not a purely individual
concern, that all dreams are part of "one great web of psychological factors."
Fritz Perls presented his theory of dreams as part of the holistic nature of Gestalt
therapy. Dreams are seen as projections of parts of the self that have been ignored,
rejected, or suppressed.[34] Jung argued that one could consider every person in the
dream to represent an aspect of the dreamer, which he called the subjective approach
to dreams. Perls expanded this point of view to say that even inanimate objects in the
dream may represent aspects of the dreamer. The dreamer may, therefore, be asked
to imagine being an object in the dream and to describe it, in order to bring into
awareness the characteristics of the object that correspond with the dreamer's
personality.
The neurobiology of dreaming
Accumulated observation has shown that dreams are strongly associated with rapid eye movement sleep, during which an electroencephalogram (EEG) shows brain activity that, among sleep states, is most like wakefulness. Participant-remembered dreams during NREM sleep are normally more mundane in comparison.[35] During a typical lifespan, a person spends a total of about six years dreaming[36] (which is about two hours each night).[37] Most dreams only last 5 to 20 minutes.[36] It is unknown where in the brain dreams originate, if there is a single origin for dreams or if multiple portions of the brain are involved, or what the purpose of dreaming is for the body or mind.
During REM sleep, the release of the neurotransmitters norepinephrine, serotonin and
histamine is completely suppressed.[38][39][40]
During most dreams, the person dreaming is not aware that they are dreaming, no
matter how absurd or eccentric the dream is. The reason for this is that the prefrontal cortex, the region of the brain responsible for logic and planning, exhibits decreased
activity during dreams. This allows the dreamer to more actively interact with the dream
without thinking about what might happen, as things that would normally stand out in
reality blend in with the dream scenery.[41]
When REM sleep episodes were timed for their duration and subjects woken to make
reports before major editing or forgetting could take place, subjects accurately reported
the length of time they had been dreaming in an REM sleep state. Some researchers
have speculated that "time dilation" effects only seem to be taking place upon reflection
and do not truly occur within dreams.[42] This close correlation of REM sleep and
dream experience was the basis of the first series of reports describing the nature of
dreaming: that it is a regular nightly, rather than occasional, phenomenon, and a high-frequency activity within each sleep period occurring at predictable intervals of
approximately every 60–90 minutes in all humans throughout the life span.
REM sleep episodes and the dreams that accompany them lengthen progressively
across the night, with the first episode being shortest, of approximately 10–12 minutes
duration, and the second and third episodes increasing to 15–20 minutes. Dreams at
the end of the night may last as long as 15 minutes, although these may be
experienced as several distinct stories due to momentary arousals interrupting sleep as
the night ends. Dreams can be reported by normal subjects on 50% of occasions when an awakening is made prior to the end of the first REM period. This rate of retrieval increases to about 99% when awakenings are made from the last REM
period of the night. This increase in the ability to recall appears related to intensification
across the night in the vividness of dream imagery, colors, and emotions.[43]
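The figures quoted above lend themselves to a rough back-of-the-envelope estimate of total nightly dreaming time. The per-episode REM durations below are assumptions picked from within the 10–12 and 15–20 minute ranges given in the text, and the 90-minute cycle is taken from the upper end of the quoted 60–90 minute interval.

```python
# Rough cross-check of the figures quoted above: REM episodes recur roughly
# every 90 minutes and lengthen across the night. The per-episode values below
# are assumptions chosen from within the quoted ranges, not measured data.
sleep_minutes = 8 * 60                    # a full night's sleep
cycle_minutes = 90                        # REM recurs about every 60-90 minutes
episodes = sleep_minutes // cycle_minutes # ~5 REM episodes per night

rem_minutes_per_episode = [11, 13, 16, 18, 20]   # assumed, lengthening across the night
total_rem = sum(rem_minutes_per_episode[:episodes])

print(f"Estimated REM episodes per night: {episodes}")
print(f"Estimated total REM (dreaming) time: {total_rem} min (~{total_rem / 60:.1f} h)")
# The total (a bit over an hour) is in the same ballpark as the "about two hours
# each night" of dreaming cited earlier; the gap mainly reflects how coarse
# these per-episode assumptions are.
```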
Dreams in animals
REM sleep and the ability to dream seem to be embedded in the biology of many
animals that live on Earth. All mammals experience REM. The range of REM can be
seen across species: dolphins experience minimum REM, while humans remain in the
middle and the opossum and the armadillo are among the most prolific dreamers.[44]
Studies have observed dreaming in mammals such as monkeys, dogs, cats, rats,
elephants and shrews. There have also been signs of dreaming in birds and
reptiles.[45] Sleeping and dreaming are intertwined. Scientific research results
regarding the function of dreaming in animals remain disputable; however, the function
of sleeping in living organisms is increasingly clear. For example, recent sleep
deprivation experiments conducted on rats and other animals have resulted in the
deterioration of physiological functioning and actual tissue damage of the animals.[46]
Some scientists argue that humans dream for the same reason other amniotes do.
From a Darwinian perspective dreams would have to fulfill some kind of biological
requirement, provide some benefit for natural selection to take place, or at least have no negative impact on fitness. In 2000 Antti Revonsuo, a professor at the University of
Turku in Finland, claimed that centuries ago dreams would prepare humans for
recognizing and avoiding danger by presenting a simulation of threatening events. The
theory has therefore been called the threat-simulation theory.[47] According to
Tsoukalas (2012) dreaming is related to the reactive patterns elicited by predatorial
encounters, a fact that is still evident in the control mechanisms of REM sleep (see
below).[48][49]
Neurological theories of dreams
Activation synthesis theory
In 1976 J. Allan Hobson and Robert McCarley proposed a new theory that changed
dream research, challenging the previously held Freudian view of dreams as
unconscious wishes to be interpreted. They assume that the same structures that
induce REM sleep also generate sensory information. Hobson's 1976 research
suggested that the signals interpreted as dreams originated in the brain stem during
REM sleep. However, research by Mark Solms suggests that dreams are generated in
the forebrain, and that REM sleep and dreaming are not directly related.[50]
While working in the neurosurgery department at hospitals in Johannesburg and
London, Solms had access to patients with various brain injuries. He began to question
patients about their dreams and confirmed that patients with damage to the parietal
lobe stopped dreaming; this finding was in line with Hobson's 1977 theory. However,
Solms did not encounter cases of loss of dreaming with patients having brain stem
damage. This observation forced him to question Hobson's prevailing theory, which
marked the brain stem as the source of the signals interpreted as dreams.
Continual-activation theory
Combining Hobson's activation synthesis hypothesis with Solms' findings, the
continual-activation theory of dreaming presented by Jie Zhang proposes that
dreaming is a result of brain activation and synthesis; at the same time, dreaming and
REM sleep are controlled by different brain mechanisms. Zhang hypothesizes that the
function of sleep is to process, encode and transfer the data from the short-term
memory to the long-term memory, though there is not much evidence backing up this
so-called "consolidation." NREM sleep processes the conscious-related memory
(declarative memory), and REM sleep processes the unconscious related memory
(procedural memory).
Zhang assumes that during REM sleep the unconscious part of a brain is busy
processing the procedural memory; meanwhile, the level of activation in the conscious
part of the brain descends to a very low level as the inputs from the sensory systems are basically disconnected. This triggers the "continual-activation" mechanism to generate
a data stream from the memory stores to flow through the conscious part of the brain.
Zhang suggests that this pulse-like brain activation is the inducer of each dream. He
proposes that, with the involvement of the brain associative thinking system, dreaming
is, thereafter, self-maintained with the dreamer's own thinking until the next pulse of
memory insertion. This explains why dreams have both characteristics of continuity
(within a dream) and sudden changes (between two dreams).[51][52]
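A toy sketch of the pulse-and-association dynamic described above is given below. Everything concrete in it (the contents of the memory store, the pulse spacing, and the association rule) is an illustrative invention, not part of Zhang's formal proposal; the point is only to show how continuity within a dream and abrupt shifts between dreams can both arise from alternating pulses and associative continuation.

```python
# Toy sketch of the dynamic described above: a periodic "pulse" inserts an item
# from a memory store into the conscious stream, and associative thinking then
# self-maintains that theme until the next pulse. All concrete details here are
# illustrative inventions.
import random

random.seed(1)
memory_store = ["an exam", "a beach", "an old friend", "a locked door"]

def associate(theme: str) -> str:
    """Associative thinking: loosely continue the current theme."""
    return f"...something loosely related to {theme}"

STEPS_PER_PULSE = 3   # arbitrary: how long a theme self-maintains before the next pulse
dream_log = []
theme = None
for step in range(9):
    if step % STEPS_PER_PULSE == 0:        # a new pulse of memory insertion
        theme = random.choice(memory_store)
        dream_log.append(f"[sudden change] the dream is now about {theme}")
    else:                                  # associative thinking continues the theme
        dream_log.append(associate(theme))

print("\n".join(dream_log))
# Continuity within a dream (the association lines) and abrupt shifts between
# dreams (the pulse lines) both fall out of this simple alternation.
```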
Defensive immobilization: the precursor of dreams
According to Tsoukalas (2012) REM sleep is an evolutionary transformation of a well-known defensive mechanism, the tonic immobility reflex. This reflex, also known as
animal hypnosis or death feigning, functions as the last line of defense against an
attacking predator and consists of the total immobilization of the animal: the animal
appears dead (cf. "playing possum"). Tsoukalas claims the neurophysiology and
phenomenology of this reaction shows striking similarities to REM sleep, a fact which
betrays a deep evolutionary kinship. For example, both reactions exhibit brainstem
control, paralysis, sympathetic activation, and thermoregulatory changes. The author
claims this theory integrates many earlier findings into a unified framework.[48][49]
Dreams as excitations of long-term memory
Eugen Tarnow suggests that dreams are ever-present excitations of long-term memory, even during waking life. The strangeness of dreams is due to the format of long-term memory, reminiscent of Penfield & Rasmussen's findings that electrical excitations of the cortex give rise to experiences similar to dreams. During waking life an executive function interprets long-term memory consistent with reality checking. Tarnow's theory is a reworking of Freud's theory of dreams in which Freud's unconscious is replaced with the long-term memory system and Freud's "Dream Work" describes the structure of long-term memory.[53]
Dreams for strengthening of semantic memories
A 2001 study showed evidence that illogical locations, characters, and dream flow may
help the brain strengthen the linking and consolidation of semantic memories.[54]
These conditions may occur because, during REM sleep, the flow of information
between the hippocampus and neocortex is reduced.[55]
Increasing levels of the stress hormone cortisol late in sleep (often during REM sleep)
cause this decreased communication. One stage of memory consolidation is the linking
of distant but related memories. Payne and Nadel hypothesize that these memories are
then consolidated into a smooth narrative, similar to a process that happens when
memories are created under stress.[56]
Dreams for removing excess sensory information
Robert (1886),[57] a physician from Hamburg, was the first to suggest that dreams are a need and that their function is to erase (a) sensory impressions that were
not fully worked up, and (b) ideas that were not fully developed during the day. By the
dream work, incomplete material is either removed (suppressed) or deepened and
included into memory. Robert's ideas were cited repeatedly by Freud in his Die
Traumdeutung. Hughlings Jackson (1911) viewed that sleep serves to sweep away
unnecessary memories and connections from the day.
This was revised in 1983 by Crick and Mitchison's "reverse learning" theory, which
states that dreams are like the cleaning-up operations of computers when they are offline, removing (suppressing) parasitic nodes and other "junk" from the mind during
sleep.[58][59] However, the opposite view that dreaming has an information handling,
memory-consolidating function (Hennevin and Leconte, 1971) is also common.
Psychological theories of dreams
Dreams for testing and selecting mental schemas
Coutts[60] describes dreams as playing a central role in a two-phase sleep process
that improves the mind's ability to meet human needs during wakefulness. During the
accommodation phase, mental schemas self-modify by incorporating dream themes.
During the emotional selection phase, dreams test prior schema accommodations.
Those that appear adaptive are retained, while those that appear maladaptive are
culled. The cycle maps to the sleep cycle, repeating several times during a typical
night's sleep. Alfred Adler suggested that dreams are often emotional preparations for
solving problems, intoxicating an individual away from common sense toward private
logic. The residual dream feelings may either reinforce or inhibit contemplated action.
Evolutionary psychology theories of dreams
Numerous theories state that dreaming is a random by-product of REM sleep
physiology and that it does not serve any natural purpose.[61] Flanagan claims that
"dreams are evolutionary epiphenomena" and they have no adaptive function.
"Dreaming came along as a free ride on a system designed to think and to sleep.[62] "
Hobson, for different reasons, also considers dreams epiphenomena. He believes that
the substance of dreams has no significant influence on waking actions, and most
people go about their daily lives perfectly well without remembering their dreams.[63]
Hobson proposed the activation-synthesis theory, which states that "there is a
randomness of dream imagery and the randomness synthesizes dream-generated
images to fit the patterns of internally generated stimulations".[64] This theory is based
on the physiology of REM sleep, and Hobson believes dreams are the outcome of the
forebrain reacting to random activity beginning at the brainstem. The activation-synthesis theory hypothesizes that the peculiar nature of dreams is attributable to certain
parts of the brain trying to piece together a story out of what is essentially bizarre
information.[65]
However, evolutionary psychologists believe dreams serve some adaptive function for
survival. Deirdre Barrett describes dreaming as simply "thinking in a different biochemical
state" and believes people continue to work on all the same problems—personal and
objective—in that state.[66] Her research finds that anything—math, musical
composition, business dilemmas—may get solved during dreaming.[67][68] In a related
theory, which Mark Blechner terms "Oneiric Darwinism," dreams are seen as creating
new ideas through the generation of random thought mutations. Some of these may be
rejected by the mind as useless, while others may be seen as valuable and
retained.[69]
Finnish psychologist Antti Revonsuo posits that dreams have evolved for "threat
simulation" exclusively. According to the Threat Simulation Theory he proposes, during
much of human evolution physical and interpersonal threats were serious, giving
reproductive advantage to those who survived them. Therefore, dreaming evolved to replicate these threats and continually practice dealing with them. In support of this theory, Revonsuo shows that contemporary dreams contain many more threatening events than people encounter in daily non-dream life, and that the dreamer usually engages appropriately with them.[70] This theory suggests that dreams allow the rehearsal of threatening scenarios in order to better prepare an individual for real-life threats.
According to Tsoukalas (2012) the biology of dreaming is related to the reactive
patterns elicited by predatorial encounters (especially the tonic immobility reflex), a fact
that lends support to evolutionary theories claiming that dreams specialize in threat
avoidance and/or emotional processing.[48]
Psychosomatic theory of dreams
In 1995, Y.D. Tsai developed a three-hypothesis theory[71] that is claimed to provide a mechanism for mind-body interaction and to explain many dream-related phenomena, including hypnosis, meridians in Chinese medicine, the increase in heart rate and breathing rate during REM sleep, the longer REM sleep of babies, lucid dreams, etc.
Dreams are a product of "dissociated imagination," which is dissociated from the
conscious self and draws material from sensory memory for simulation, with feedback
resulting in hallucination. By simulating the sensory signals to drive the autonomic nerves, dreams can affect mind-body interaction. In the brain and spine, the autonomic "repair nerves," which can expand the blood vessels, connect with
compression and pain nerves. Repair nerves are grouped into many chains called
meridians in Chinese medicine. When some repair nerves are prodded by compression
or pain to send out their repair signals, a chain reaction spreads out to set other repair
nerves in the same meridian into action. While dreaming, the body also employs the
meridians to repair the body and help it grow and develop by simulating very intensive
movement-compression signals to expand the blood vessels when levels of growth enzymes increase.
Expectation fulfillment theory of dreams
In 1997,[72] Joe Griffin published a new theory to explain dreams, which later became
known as the expectation fulfilment theory of dreams. After years of research on his
own dreams and those of others, he found that dreaming serves to discharge the
emotional arousals (however minor) that haven't been expressed during the day, thus
freeing up space in the brain to deal with the emotional arousals of the next day and
allowing instinctive urges to stay intact. In effect, the expectation is fulfilled, i.e. the
action is 'completed', in the dream but in a metaphorical form, so that a false memory is
not created. Griffin argues that the theory explains why dreams are usually forgotten immediately afterwards: far from being "the cesspit of the unconscious", as Freud proclaimed, dreaming is the equivalent of a flushed toilet.
Other hypotheses on dreaming
There are many other hypotheses about the function of dreams, including:[73]
-Dreams allow the repressed parts of the mind to be satisfied through fantasy
while keeping the conscious mind from thoughts that would suddenly cause one
to awaken from shock.[74]
-Freud suggested that bad dreams let the brain learn to gain control over
emotions resulting from distressing experiences.[73]
-Jung suggested that dreams may compensate for one-sided attitudes held in
waking consciousness.[75]
-Ferenczi[76] proposed that the dream, when told, may communicate something
that is not being said outright.
-Dreams regulate mood.[77]
-Hartmann[78] says dreams may function like psychotherapy, by "making
connections in a safe place" and allowing the dreamer to integrate thoughts that
may be dissociated during waking life.
-LaBerge and DeGracia[79] have suggested that dreams may function, in part,
to recombine unconscious elements within consciousness on a temporary basis
by a process they term "mental recombination", in analogy with genetic
recombination of DNA. From a bio-computational viewpoint, mental
recombination may contribute to maintaining an optimal information processing
flexibility in brain information networks.
Dream content
From the 1940s to 1985, Calvin S. Hall collected more than 50,000 dream reports at
Western Reserve University. In 1966 Hall and Van De Castle published The Content
Analysis of Dreams, in which they outlined a coding system to study 1,000 dream
reports from college students.[80] It was found that people all over the world dream of
mostly the same things. Hall's complete dream reports were made publicly available in the mid-1990s by his protégé William Domhoff, allowing further analysis.
Personal experiences from the last day or week are frequently incorporated into
dreams.[81]
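The quantitative spirit of such a coding system can be illustrated with a short sketch. The Python example below is only a rough illustration under invented assumptions: the content categories, keyword lists, and sample reports are hypothetical and are not the actual Hall/Van de Castle coding rules, which rely on trained human raters rather than keyword matching.

# Illustrative sketch of quantitative dream-content coding.
# Categories, keywords, and reports are invented for illustration;
# they are not the actual Hall/Van de Castle coding rules.
from collections import Counter

CATEGORIES = {
    "aggression": {"chase", "attack", "fight", "argue"},
    "friendliness": {"help", "hug", "greet"},
    "misfortune": {"fall", "lost", "late", "crash"},
}

def code_report(report: str) -> set[str]:
    # Return the set of categories whose keywords appear in a report.
    words = set(report.lower().split())
    return {cat for cat, keywords in CATEGORIES.items() if words & keywords}

def category_percentages(reports: list[str]) -> dict[str, float]:
    # Percentage of reports in which each category occurs at least once.
    counts = Counter(cat for r in reports for cat in code_report(r))
    return {cat: 100.0 * counts[cat] / len(reports) for cat in CATEGORIES}

reports = [
    "I was late for an exam and got lost in the building",
    "a stranger started to chase me and I tried to fight back",
    "a friend came to help me carry some boxes",
]
print(category_percentages(reports))  # each invented category appears in one of the three reports

In an analysis of this kind, the percentage of reports containing a given content category (characters, emotions, aggression, and so on) can then be compared across groups of dreamers, which is how norms such as those cited below are typically expressed.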
Visuals
The visual nature of dreams is generally highly phantasmagoric; that is, different
locations and objects continuously blend into each other. The visuals (including
locations, characters/people, objects/artifacts) are generally reflective of a person's
memories and experiences, but often take on highly exaggerated and bizarre forms.
People who are blind from birth do not have visual dreams. Their dream contents are
related to other senses like auditory, touch, smell and taste, whichever are present
since birth.[82]
Emotions
The most common emotion experienced in dreams is anxiety. Other emotions include
abandonment, anger, fear, joy, and happiness. Negative emotions are much more
common than positive ones.[80]
Sexual themes
The Hall data analysis shows that sexual dreams occur no more than 10% of the time
and are more prevalent in young to mid-teens.[80] Another study showed that 8% of
men's and women's dreams have sexual content.[83] In some cases, sexual dreams
may result in orgasms or nocturnal emissions. These are colloquially known as wet
dreams.[84]
Sigmund Freud argued that even dreams which may seem innocuous on the surface
ultimately have underlying sexual meanings. He would interpret dreams in hypersexualized ways. In Dream Psychology, Freud states, "We have already asserted elsewhere that dreams which are conspicuously innocent invariably embody coarse erotic wishes, and we might confirm this by means of numerous fresh examples. But many dreams which appear indifferent, and which would never be suspected of any particular significance, can be traced back, after analysis, to unmistakably sexual wish-feelings, which are often of an unexpected nature".[85]
Color vs. black and white
A small minority of people say that they dream only in black and white.[86] A 2008
study by a researcher at the University of Dundee found that people who were only
exposed to black and white television and film in childhood reported dreaming in black
and white about 25% of the time.[87]
Dream interpretations
Dream interpretation can be a result of subjective ideas and experiences. A study published in the Journal of Personality and Social Psychology concluded that most people believe that "their dreams reveal meaningful hidden truths". The study was conducted in the United States, South Korea and India; 74% of Indians, 65% of South Koreans and 56% of Americans surveyed believed in Freud's dream theories.[88]
According to this series of studies, people are irrational about dreams in the same way they are irrational in their everyday decisions. In their search for meaning, humans can turn to dreams to find answers and explanations. The studies also find that dreams reflect the human trait of optimistic thinking, since the results show that people tend to focus more on dreams in which good things happen.
Relationship with medical conditions
There is evidence that certain medical conditions (normally only neurological
conditions) can impact dreams. For instance, some people with synesthesia have
never reported entirely black-and-white dreaming, and often have a difficult time
imagining the idea of dreaming in only black and white.[89]
Therapy for recurring nightmares (often associated with posttraumatic stress disorder)
can include imagining alternative scenarios that could begin at each step of the
dream.[90]
Other associated phenomena
Incorporation of reality
During the night, many external stimuli may bombard the senses, but the brain often
interprets the stimulus and makes it a part of a dream to ensure continued sleep.[91]
Dream incorporation is a phenomenon whereby an actual sensation, such as
environmental sounds, is incorporated into dreams, such as hearing a phone ringing in
a dream while it is ringing in reality or dreaming of urination while wetting the bed. The
mind can, however, awaken an individual if they are in danger or if trained to respond
to certain sounds, such as a baby crying.
The term "dream incorporation" is also used in research examining the degree to which
preceding daytime events become elements of dreams. Recent studies suggest that
events in the day immediately preceding, and those about a week before, have the
most influence.[81]
Apparent precognition of real events
According to surveys, it is common for people to feel their dreams are predicting
subsequent life events.[92] Psychologists have explained these experiences in terms of
memory biases, namely a selective memory for accurate predictions and distorted
memory so that dreams are retrospectively fitted onto life experiences.[92] The multifaceted nature of dreams makes it easy to find connections between dream content
and real events.[93]
In one experiment, subjects were asked to write down their dreams in a diary. This
prevented the selective memory effect, and the dreams no longer seemed accurate
about the future.[94] Another experiment gave subjects a fake diary of a student with
apparently precognitive dreams. This diary described events from the person's life, as
well as some predictive dreams and some non-predictive dreams. When subjects were
asked to recall the dreams they had read, they remembered more of the successful
predictions than unsuccessful ones.[95]
Lucid dreaming
Lucid dreaming is the conscious perception of one's state while dreaming. In this state
the dreamer may often (but not always) have some degree of control over their own
actions within the dream or even the characters and the environment of the dream.
Dream control has been reported to improve with practiced deliberate lucid dreaming,
but the ability to control aspects of the dream is not necessary for a dream to qualify as
"lucid" — a lucid dream is any dream during which the dreamer knows they are
dreaming.[96] The occurrence of lucid dreaming has been scientifically verified.[97]
Oneironaut is a term sometimes used for those who lucidly dream.
Communication through lucid dreaming
In 1975, parapsychologist Keith Hearne successfully communicated with a subject experiencing a lucid dream. On April 12, 1975, after being instructed to move his eyes left and right upon becoming lucid, the subject had a lucid dream, and the first signals from within a lucid dream were recorded.[98]
Years later, psychophysiologist Stephen LaBerge conducted similar work, including:
-Using eye signals to map the subjective sense of time in dreams
-Comparing the electrical activity of the brain while singing awake and while
dreaming.
-Studies comparing in-dream sex, arousal, and orgasm[99]
Lucid dreaming as a path to enlightenment
Many Tibetan Buddhist monks aim to use lucid dreaming as a tool to complete
otherwise impossible tasks, such as
-Practice a spiritual discipline called Sadhana
-Receive initiations, empowerments and transmissions
-Visit different locations, realities and lokas (worlds)
-Communicate with a Yidam (an enlightened being)
-Meet other sentient beings
-Fly and change shape into other creatures
The ultimate goal is to "apprehend the dream": by doing so, one is able to attain complete conscious awareness and dissolve the dream state. They believe the purest form of conscious awareness can be observed once one has stripped away the body's physical stimuli and the dreaming mind's conceptual stimuli.[100]
Dreams of absent-minded transgression
Dreams of absent-minded transgression (DAMT) are dreams wherein the dreamer
absentmindedly performs an action that he or she has been trying to stop (one classic
example is of a quitting smoker having dreams of lighting a cigarette). Subjects who
have had DAMT have reported waking with intense feelings of guilt. One study found a
positive association between having these dreams and successfully stopping the
behavior.[101]
Recalling dreams
The recall of dreams is extremely unreliable, though it is a skill that can be trained.
Dreams can usually be recalled if a person is awakened while dreaming.[90] Women
tend to have more frequent dream recall than men.[90] Dreams that are difficult to
recall may be characterized by relatively little affect, and factors such as salience,
arousal, and interference play a role in dream recall. Often, a dream may be recalled
upon viewing or hearing a random trigger or stimulus. The salience hypothesis
proposes that dream content that is salient, that is, novel, intense, or unusual, is more
easily remembered. There is considerable evidence that vivid, intense, or unusual
dream content is more frequently recalled.[102] A dream journal can be used to assist
dream recall, for personal interest or psychotherapy purposes.
For some people, sensations from the previous night's dreams are sometimes spontaneously experienced while falling asleep. However, they are usually too slight and fleeting to allow dream recall. At least 95% of all dreams are not remembered. Certain brain chemicals necessary for converting short-term memories into long-term ones are suppressed during REM sleep. Unless a dream is particularly vivid and one wakes during or immediately after it, the content of the dream is not remembered.[103]
Individual differences
In line with the salience hypothesis, there is considerable evidence that people who
have more vivid, intense or unusual dreams show better recall. There is evidence that
continuity of consciousness is related to recall. Specifically, people who have vivid and
unusual experiences during the day tend to have more memorable dream content and
hence better dream recall. People who score high on measures of personality traits
associated with creativity, imagination, and fantasy, such as openness to experience,
daydreaming, fantasy proneness, absorption, and hypnotic susceptibility, tend to show
more frequent dream recall.[102] There is also evidence for continuity between the
bizarre aspects of dreaming and waking experience. That is, people who report more
bizarre experiences during the day, such as people high in schizotypy (psychosis
proneness), have more frequent dream recall and also report more frequent
nightmares.[102]
Déjà vu
One theory of déjà vu attributes the feeling of having previously seen or experienced
something to having dreamt about a similar situation or place, and forgetting about it
until one seems to be mysteriously reminded of the situation or the place while
awake.[104]
Sleepwalking
Sleepwalking was once thought of as "acting out a dream", but that theory has fallen
out of favor.
Daydreaming
A daydream is a visionary fantasy, especially one of happy, pleasant thoughts, hopes
or ambitions, imagined as coming to pass, and experienced while awake.[105] There
are many different types of daydreams, and there is no consistent definition amongst
psychologists.[105] The general public also uses the term for a broad variety of
experiences. Research by Harvard psychologist Deirdre Barrett has found that people
who experience vivid dream-like mental images reserve the word for these, whereas
many other people refer to milder imagery, realistic future planning, review of past
memories or just "spacing out"—i.e. one's mind going relatively blank—when they talk
about "daydreaming."[106]
While daydreaming has long been derided as a lazy, non-productive pastime, it is now
commonly acknowledged that daydreaming can be constructive in some contexts.[107]
There are numerous examples of people in creative or artistic careers, such as
composers, novelists and filmmakers, developing new ideas through daydreaming.
Similarly, research scientists, mathematicians and physicists have developed new
ideas by daydreaming about their subject areas.
Hallucination
A hallucination, in the broadest sense of the word, is a perception in the absence of a
stimulus. In a stricter sense, hallucinations are perceptions in a conscious and awake
state, in the absence of external stimuli, and have qualities of real perception, in that
they are vivid, substantial, and located in external objective space. The latter definition
distinguishes hallucinations from the related phenomenon of dreaming, which does not
involve wakefulness.
Nightmares
A nightmare is an unpleasant dream that can cause a strong negative emotional
response from the mind, typically fear and/or horror, but also despair, anxiety and great
sadness. The dream may contain situations of danger, discomfort, psychological or
physical terror. Sufferers usually awaken in a state of distress and may be unable to
return to sleep for a prolonged period of time.[108]
Night terrors
A night terror, also known as a sleep terror or pavor nocturnus, is a parasomnia
disorder that predominantly affects children, causing feelings of terror or dread. Night
terrors should not be confused with nightmares, which are bad dreams that cause the
feeling of horror or fear.
Emotion
Contents
1 Etymology, definitions, and differentiation
2 Components of emotion
3 Classification
3.1 Basic emotions
3.2 Multidimensional analysis of emotions
4 Theories on the experience of emotions
4.1 Ancient Greece and Middle Ages
4.2 Evolutionary theories
4.3 Somatic theories
4.4 Cognitive theories
4.5 Situated perspective on emotion
5 Neurocircuitry
5.1 Prefrontal cortex
5.2 Homeostatic/primordial emotion
6 Disciplinary approaches
6.1 History
6.2 Sociology
6.3 Psychotherapy and regulation of emotion
6.4 Computer science
7 Notable theorists
In psychology and philosophy, emotion is a subjective, conscious experience characterized primarily by psychophysiological expressions, biological reactions, and mental states. Emotion is often associated and considered reciprocally influential with mood, temperament, personality, disposition, and motivation.[1] It also is influenced by hormones and neurotransmitters such as dopamine, noradrenaline, serotonin, oxytocin, cortisol and GABA. Emotion is often the driving force behind motivation, positive or negative.[2] An alternative definition of emotion is a "positive or negative experience that is associated with a particular pattern of physiological activity."[3]
The physiology of emotion is closely linked to arousal of the nervous system with
various states and strengths of arousal relating, apparently, to particular emotions.
Emotions are a complex state of feeling that results in physical and psychological
changes that influence our behaviour. Although those acting primarily on emotion may seem as if they are not thinking, cognition is an important aspect of emotion, particularly the interpretation of events. For example, the experience of fear usually occurs in response
to a threat. The cognition of danger and subsequent arousal of the nervous system
(e.g. rapid heartbeat and breathing, sweating, muscle tension) is an integral component
to the subsequent interpretation and labeling of that arousal as an emotional state.
Emotion is also linked to behavioral tendency. Extroverted people are more likely to be
social and express their emotions, while introverted people are more likely to be more
socially withdrawn and conceal their emotions.
Research on emotion has increased significantly over the past two decades with many
fields contributing including psychology, neuroscience, endocrinology, medicine,
history, sociology, and even computer science. The numerous theories that attempt to
explain the origin, neurobiology, experience, and function of emotions have only
fostered more intense research on this topic. Current areas of research in the concept
of emotion include the development of materials that stimulate and elicit emotion. In
addition PET scans and fMRI scans help study the affective processes in the brain.[4]
Etymology, definitions, and differentiation
The word "emotion" dates back to 1579, when it was adapted from the French word
émouvoir, which means "to stir up". However, the earliest precursors of the word likely date back to the very origins of language.[5]
Emotions have been described as discrete and consistent responses to internal or
external events which have a particular significance for the organism. Emotions are
brief in duration and consist of a coordinated set of responses, which may include
verbal, physiological, behavioural, and neural mechanisms.[6] Emotions have also
been described as biologically given and a result of evolution because they provided
good solutions to ancient and recurring problems that faced our ancestors.[7]
Emotion can be differentiated from a number of similar constructs within the field of
affective neuroscience:[6]
-Feelings are best understood as a subjective representation of emotions,
private to the individual experiencing them.
-Moods are diffuse affective states that generally last for much longer durations
than emotions and are also usually less intense than emotions.
-Affect is an encompassing term, used to describe the topics of emotion,
feelings, and moods together, even though it is commonly used interchangeably
with emotion.
In addition, relationships exist between emotions, such as having positive or negative
influences, with direct opposites existing. These concepts are described in contrasting
and categorization of emotions.
Components of emotion
In Scherer's component process model of emotion,[8] five crucial elements of emotion are said to exist. From the component process perspective, emotional experience is said to require that all of these processes become coordinated and synchronized for a short period of time, driven by appraisal processes. Although the inclusion of cognitive appraisal as one of the elements is slightly controversial, since some theorists assume that emotion and cognition are separate but interacting systems, the component process model provides a sequence of events that effectively describes the coordination involved during an emotional episode (a minimal code sketch of how the five components fit together follows the list below).
-Cognitive appraisal: provides an evaluation of events and objects
-Bodily symptoms: the physiological component of emotional experience
-Action tendencies: a motivational component for the preparation and direction
of motor responses.
-Expression: facial and vocal expression almost always accompanies an
emotional state to communicate reaction and intention of actions
-Feelings: the subjective experience of emotional state once it has occurred
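As a rough way to visualize how these five components belong to a single episode, the sketch below bundles them into one record. The field names and example values are illustrative assumptions, not part of Scherer's formal model.

# Minimal sketch: one emotional episode represented as a bundle of the
# five components listed above (example values are invented).
from dataclasses import dataclass

@dataclass
class EmotionalEpisode:
    cognitive_appraisal: str   # evaluation of the triggering event or object
    bodily_symptoms: str       # physiological component of the experience
    action_tendency: str       # motivational preparation of motor responses
    expression: str            # facial and vocal communication of the reaction
    feeling: str               # subjective experience once it has occurred

episode = EmotionalEpisode(
    cognitive_appraisal="sudden loud noise appraised as threatening",
    bodily_symptoms="heart rate and breathing increase",
    action_tendency="readiness to flee",
    expression="startled face, sharp vocal exclamation",
    feeling="fear",
)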
Classification
A distinction can be made between emotional episodes and emotional dispositions.
Emotional dispositions are also comparable to character traits, where someone may be
said to be generally disposed to experience certain emotions. For example, an irritable
person is generally disposed to feel irritation more easily or quickly than others do.
Finally, some theorists place emotions within a more general category of "affective
states" where affective states can also include emotion-related phenomena such as
pleasure and pain, motivational states (for example, hunger or curiosity), moods,
dispositions and traits.[9]
The classification of emotions has mainly been researched from two fundamental
viewpoints. The first viewpoint is that emotions are discrete and fundamentally different
constructs while the second viewpoint asserts that emotions can be characterized on a
dimensional basis in groupings.
Basic emotions
For more than 40 years, Paul Ekman has supported the view that emotions are discrete, measurable, and physiologically distinct. Ekman's most influential work revolved around the finding that certain emotions appeared to be universally recognized, even in cultures that were preliterate and could not have learned associations for facial expressions through media. Another classic study found that when participants contorted their facial muscles into distinct facial expressions (e.g. disgust), they reported subjective and physiological experiences that matched the distinct facial expressions. His research findings led him to classify six emotions as basic: anger, disgust, fear, happiness, sadness and surprise.[10]
Robert Plutchik agreed with Ekman's
biologically driven perspective but developed the "wheel of emotions", suggesting eight
primary emotions grouped on a positive or negative basis: joy versus sadness; anger
versus fear; trust versus disgust; and surprise versus anticipation.[10] Some basic
emotions can be modified to form complex emotions. The complex emotions could
arise from cultural conditioning or association combined with the basic emotions.
Alternatively, similar to the way primary colors combine, primary emotions could blend
to form the full spectrum of human emotional experience. For example, interpersonal
anger and disgust could blend to form contempt. Relationships exist between basic
emotions, resulting in positive or negative influences.[11]
Multidimensional analysis of emotions
Through the use of multidimensional scaling, psychologists can map out similar emotional experiences, which allows a visual depiction of the "emotional distance" between experiences. A further step can be taken by looking at the dimensions of the map of emotional experiences. The emotional experiences are divided into two dimensions known as valence (how negative or positive the experience was) and arousal (the extent of the reaction to stimuli). These two dimensions can be depicted on a 2D coordinate map.[12]
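As a concrete, if simplified, illustration of this dimensional representation, the sketch below places a few emotional experiences as (valence, arousal) coordinates and computes the "emotional distance" between them as straight-line distance on the 2D map. The coordinate values are illustrative assumptions, not the output of an actual multidimensional scaling study.

# Illustrative sketch: emotions as points in a 2D (valence, arousal) space.
# Coordinates are invented for illustration, not empirical scaling results.
from math import dist

# valence: -1 (very negative) to +1 (very positive); arousal: 0 (calm) to 1 (intense)
emotion_map = {
    "contentment": (0.8, 0.2),
    "excitement": (0.8, 0.9),
    "sadness": (-0.7, 0.2),
    "fear": (-0.8, 0.9),
}

def emotional_distance(a: str, b: str) -> float:
    # Euclidean distance between two experiences on the 2D map.
    return dist(emotion_map[a], emotion_map[b])

print(emotional_distance("excitement", "fear"))         # far apart mainly in valence
print(emotional_distance("contentment", "excitement"))  # differ mainly in arousal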
Theories on the experience of emotions
Ancient Greece and Middle Ages
Theories about emotions stretch back at least as far as the Stoics of ancient Greece and to ancient China. In the latter, it was believed that excess emotion caused damage to qi, which in turn damages the vital organs.[13] The four humours theory made popular
by Hippocrates contributed to the study of emotion in the same way that it did for
medicine.
Western philosophy regarded emotion in varying ways. In stoic theories it was seen as
a hindrance to reason and therefore a hindrance to virtue. Aristotle believed that
emotions were an essential component of virtue.[14] In the Aristotelian view, all
emotions (called passions) corresponded to an appetite or capacity. During the Middle
Ages, the Aristotelian view was adopted and further developed by scholasticism and
Thomas Aquinas[15] in particular. There are also theories in the works of philosophers
such as René Descartes, Niccolò Machiavelli, Baruch Spinoza[16] and David Hume. In
the 19th century emotions were considered adaptive and were studied more frequently
from an empiricist psychiatric perspective.
Evolutionary theories
19th Century
Perspectives on emotions from evolutionary theory were initiated in the late 19th century with Charles Darwin's book The Expression of the Emotions in Man and Animals.[17] Darwin argued that emotions actually served a purpose for humans, in communication and also in aiding their survival. Darwin, therefore, argued that emotions evolved via natural selection and therefore have universal cross-cultural counterparts. Darwin also detailed the virtues of experiencing emotions and the parallel experiences that occur in animals (see emotion in animals). This led the way for animal research on emotions and the eventual determination of the neural underpinnings of emotion.
Contemporary
More contemporary views along the evolutionary psychology spectrum posit that both
basic emotions and social emotions evolved to motivate (social) behaviors that were
adaptive in the ancestral environment.[2] Current research suggests
that emotion is an essential part of any human decision-making and planning, and the
famous distinction made between reason and emotion is not as clear as it seems. Paul
D. MacLean claims that emotion competes with even more instinctive responses, on the one hand, and more abstract reasoning, on the other. The increased potential
in neuroimaging has also allowed investigation into evolutionarily ancient parts of the
brain. Important neurological advances were derived from these perspectives in the
1990s by Joseph E. LeDoux and António Damásio.
Research on social emotion also focuses on the physical displays of emotion including
body language of animals and humans (see affect display). For example, spite seems
to work against the individual but it can establish an individual's reputation as someone
to be feared.[2] Shame and pride can motivate behaviors that help one maintain one's
standing in a community, and self-esteem is one's estimate of one's status.[2][18]
Somatic theories
Somatic theories of emotion claim that bodily responses, rather than cognitive
interpretations, are essential to emotions. The first modern version of such theories
came from William James in the 1880s. The theory lost favor in the 20th century, but
has regained popularity more recently due largely to theorists such as John
Cacioppo,[19] António Damásio,[20] Joseph E. LeDoux[21] and Robert Zajonc[22] who
are able to appeal to neurological evidence.
James–Lange theory
In his 1884 article[23] William James argued that feelings and emotions were
secondary to physiological phenomena. In his theory, James proposed that the
perception of what he called an "exciting fact" led directly to a physiological response,
known as "emotion." To account for different types of emotional experiences, James
proposed that stimuli trigger activity in the autonomic nervous system, which in turn
produces an emotional experience in the brain. The Danish psychologist Carl Lange
also proposed a similar theory at around the same time, and therefore this theory
became known as the James–Lange theory. As James wrote, "the perception of bodily
changes, as they occur, is the emotion." James further claims that "we feel sad
because we cry, angry because we strike, afraid because we tremble, and neither we
cry, strike, nor tremble because we are sorry, angry, or fearful, as the case may
be."[23]
An example of this theory in action would be as follows: An emotion-evoking stimulus
(snake) triggers a pattern of physiological response (increased heart rate, faster
breathing, etc.), which is interpreted as a particular emotion (fear). This theory is
supported by experiments in which manipulating the bodily state induces a desired emotional state.[24] Some people may believe that emotions give rise to emotion-specific actions, e.g. "I'm crying because I'm sad," or "I ran away because I was
scared." The issue with the James–Lange theory is that of causation (bodily states
causing emotions and being a priori), not that of the bodily influences on emotional
experience (which can be argued and is still quite prevalent today in biofeedback
studies and embodiment theory).[25]
Although mostly abandoned in its original form, Tim Dalgleish argues that most
contemporary neuroscientists have embraced the components of the James-Lange
theory of emotions.[26]
"The James–Lange theory has remained influential. Its main contribution is the emphasis it places on the embodiment of emotions, especially the argument that changes in the bodily concomitants of emotions can alter their experienced intensity. Most contemporary neuroscientists would endorse a modified James–Lange view in which bodily feedback modulates the experience of emotion." (p. 583)
Cannon–Bard theory
Walter Bradford Cannon agreed that physiological responses played a crucial role in
emotions, but did not believe that physiological responses alone could explain
subjective emotional experiences. He argued that physiological responses were too
slow and often imperceptible, and that this could not account for the relatively rapid and
intense subjective awareness of emotion. He also believed that the richness, variety,
and temporal course of emotional experiences could not stem from physiological reactions, which reflected fairly undifferentiated fight-or-flight responses.[27][28] An
example of this theory in action is as follows: An emotion-evoking event (snake)
triggers simultaneously both a physiological response and a conscious experience of
an emotion.
Phillip Bard contributed to the theory with his work on animals. Bard found that
sensory, motor, and physiological information all had to pass through the diencephalon
(particularly the thalamus), before being subjected to any further processing. Therefore,
Cannon also argued that it was not anatomically possible for sensory events to trigger
a physiological response prior to triggering conscious awareness, and that emotional stimuli
had to trigger both physiological and experiential aspects of emotion
simultaneously.[27]
Two-factor theory
Stanley Schachter formulated his theory on the earlier work of a Spanish physician,
Gregorio Maranon, who injected patients with epinephrine and subsequently asked
them how they felt. Interestingly, Maranon found that most of these patients felt
something but in the absence of an actual emotion-evoking stimulus, the patients were
unable to interpret their physiological arousal as an experienced emotion. Schachter
did agree that physiological reactions played a big role in emotions. He suggested that
physiological reactions contributed to emotional experience by facilitating a focused
cognitive appraisal of a given physiologically arousing event and that this appraisal was
what defined the subjective emotional experience. Emotions were thus the result of a two-stage process: general physiological arousal, followed by the experience of emotion. For example, a pounding heart arises in response to an evoking stimulus, such as the sight of a bear in the kitchen. The brain then quickly scans the area to explain the pounding and notices the bear. Consequently, the brain interprets the pounding heart as being the result of fearing the bear.[29] With his student, Jerome Singer, Schachter
demonstrated that subjects can have different emotional reactions despite being placed
into the same physiological state with an injection of epinephrine. Subjects were
observed to express either anger or amusement depending on whether another person
in the situation (a confederate) displayed that emotion. Hence, the combination of the
appraisal of the situation (cognitive) and the participants' reception of adrenaline or a
placebo together determined the response. This experiment has been criticized in
Jesse Prinz's (2004) Gut Reactions.
Cognitive theories
With the two-factor theory now incorporating cognition, several theories began to argue
that cognitive activity in the form of judgments, evaluations, or thoughts was entirely
necessary for an emotion to occur. One of the main proponents of this view was
Richard Lazarus who argued that emotions must have some cognitive intentionality.
The cognitive activity involved in the interpretation of an emotional context may be
conscious or unconscious and may or may not take the form of conceptual processing.
Lazarus' theory is very influential; according to it, emotion is a disturbance that occurs in the following
order:
1-Cognitive appraisal—The individual assesses the event cognitively, which
cues the emotion.
2-Physiological changes—The cognitive reaction starts biological changes such
as increased heart rate or pituitary adrenal response.
3-Action—The individual feels the emotion and chooses how to react.
For example: Jenny sees a snake.
1-Jenny cognitively assesses the snake in her presence. Cognition allows her to
understand it as a danger.
2-Her brain activates the adrenal glands, which pump adrenaline into her bloodstream, resulting in an increased heart rate.
3-Jenny screams and runs away.
Lazarus stressed that the quality and intensity of emotions are controlled through
cognitive processes. These processes underlie coping strategies that form the
emotional reaction by altering the relationship between the person and the
environment.
George Mandler provided an extensive theoretical and empirical discussion of emotion
as influenced by cognition, consciousness, and the autonomic nervous system in two
books (Mind and Emotion, 1975, and Mind and Body: Psychology of Emotion and
Stress, 1984).
There are some theories on emotions arguing that cognitive activity in the form of
judgements, evaluations, or thoughts is necessary in order for an emotion to occur. A
prominent philosophical exponent is Robert C. Solomon (for example, The Passions,
Emotions and the Meaning of Life, 1993). Solomon claims that emotions are
judgements. He has put forward a more nuanced view which responds to what he has
called the ‘standard objection’ to cognitivism, the idea that a judgement that something
is fearsome can occur with or without emotion, so judgement cannot be identified with
emotion. The theory proposed by Nico Frijda where appraisal leads to action
tendencies is another example.
It has also been suggested that emotions (affect heuristics, feelings and gut-feeling
reactions) are often used as shortcuts to process information and influence
behavior.[30] The affect infusion model (AIM) is a theoretical model developed by
Joseph Forgas in the early 1990s that attempts to explain how emotion and mood
interact with one's ability to process information.
Perceptual theory
Theories dealing with perception use either one or multiple perceptions in order to find an emotion (Goldie, 2007). A recent hybrid of the somatic and cognitive theories of
emotion is the perceptual theory. This theory is neo-Jamesian in arguing that bodily
responses are central to emotions, yet it emphasizes the meaningfulness of emotions
or the idea that emotions are about something, as is recognized by cognitive theories.
The novel claim of this theory is that conceptually-based cognition is unnecessary for
such meaning. Rather, the bodily changes themselves perceive the meaningful content
of the emotion because of being causally triggered by certain situations. In this respect,
emotions are held to be analogous to faculties such as vision or touch, which provide
information about the relation between the subject and the world in various ways. A
sophisticated defense of this view is found in philosopher Jesse Prinz's book Gut
Reactions and psychologist James Laird's book Feelings.
Affective events theory
This is a communication-based theory developed by Howard M. Weiss and Russell
Cropanzano (1996), that looks at the causes, structures, and consequences of
emotional experience (especially in work contexts). This theory suggests that emotions
are influenced and caused by events which in turn influence attitudes and behaviors.
This theoretical frame also emphasizes time in that human beings experience what
they call emotion episodes, a "series of emotional states extended over time and
organized around an underlying theme." This theory has been utilized by numerous
researchers to better understand emotion from a communicative lens, and was
reviewed further by Howard M. Weiss and Daniel J. Beal in their article, "Reflections on
Affective Events Theory" published in Research on Emotion in Organizations in 2005.
Situated perspective on emotion
A situated perspective on emotion, developed by Paul E. Griffiths and Andrea
Scarantino, emphasizes the importance of external factors in the development and
communication of emotion, drawing upon the situationism approach in psychology.[31]
This theory is markedly different from both cognitivist and neo-Jamesian theories of
emotion, both of which see emotion as a purely internal process, with the environment
only acting as a stimulus to the emotion. In contrast, a situationist perspective on
emotion views emotion as the product of an organism investigating its environment,
and observing the responses of other organisms. Emotion stimulates the evolution of
social relationships, acting as a signal to mediate the behavior of other organisms. In
some contexts, the expression of emotion (both voluntary and involuntary) could be
seen as strategic moves in the transactions between different organisms. The situated
perspective on emotion states that conceptual thought is not an inherent part of
emotion, since emotion is an action-oriented form of skillful engagement with the world.
Griffiths and Scarantino suggested that this perspective on emotion could be helpful in
understanding phobias, as well as the emotions of infants and animals.
Neurocircuitry
Based on discoveries made through neural mapping of the limbic system, the
neurobiological explanation of human emotion is that emotion is a pleasant or
unpleasant mental state organized in the limbic system of the mammalian brain. If
distinguished from reactive responses of reptiles, emotions would then be mammalian
elaborations of general vertebrate arousal patterns, in which neurochemicals (for
example, dopamine, noradrenaline, and serotonin) step-up or step-down the brain's
activity level, as visible in body movements, gestures, and postures.
For example, the emotion of love is proposed to be the expression of paleocircuits of
the mammalian brain (specifically, modules of the cingulate gyrus) which facilitate the
care, feeding, and grooming of offspring. Paleocircuits are neural platforms for bodily
expression configured before the advent of cortical circuits for speech. They consist of
pre-configured pathways or networks of nerve cells in the forebrain, brain stem and
spinal cord.
The motor centers of reptiles react to sensory cues of vision, sound, touch, chemical,
gravity, and motion with pre-set body movements and programmed postures. With the
arrival of night-active mammals, smell replaced vision as the dominant sense, and a
different way of responding arose from the olfactory sense, which is proposed to have
developed into mammalian emotion and emotional memory. The mammalian brain
invested heavily in olfaction to succeed at night as reptiles slept—one explanation for
why olfactory lobes in mammalian brains are proportionally larger than those of reptiles.
These odor pathways gradually formed the neural blueprint for what was later to
become our limbic brain.
Emotions are thought to be related to certain activities in brain areas that direct our
attention, motivate our behavior, and determine the significance of what is going on
around us. Pioneering work by Broca (1878), Papez (1937), and MacLean (1952)
suggested that emotion is related to a group of structures in the center of the brain
called the limbic system, which includes the hypothalamus, cingulate cortex,
hippocampi, and other structures. More recent research has shown that some of these
limbic structures are not as directly related to emotion as others are, while some non-limbic structures have been found to be of greater emotional relevance.
In 2011, Lövheim proposed a direct relation between specific combinations of the
levels of the signal substances dopamine, noradrenaline and serotonin and eight basic
emotions. A model was presented where the signal substances form the axes of a
coordinate system, and the eight basic emotions according to Silvan Tomkins are
placed in the eight corners. According to the model, anger, for example, is produced by the combination of low serotonin, high dopamine and high noradrenaline.[32]
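The coordinate-system idea can be made concrete by treating each of the three monoamines as a binary low/high axis, so that the eight corners of the cube are indexed by a triple. In the sketch below, only the anger corner described above is filled in; the remaining corner labels are deliberately left unassigned rather than guessed, so this is a partial illustration of the model's structure, not a reproduction of it.

# Sketch of a Lövheim-style cube: axes are (serotonin, dopamine, noradrenaline),
# each coded 0 = low or 1 = high, giving eight corners. Only the anger corner
# mentioned in the text is filled in; the rest are placeholders.
Corner = tuple[int, int, int]  # (serotonin, dopamine, noradrenaline)

cube: dict[Corner, str] = {
    (0, 1, 1): "anger",  # low serotonin, high dopamine, high noradrenaline
}

def emotion_at(serotonin: int, dopamine: int, noradrenaline: int) -> str:
    return cube.get((serotonin, dopamine, noradrenaline), "unassigned in this sketch")

print(emotion_at(0, 1, 1))  # -> anger
print(emotion_at(1, 0, 0))  # -> unassigned in this sketch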
Prefrontal cortex
There is ample evidence that the left prefrontal
cortex is activated by stimuli that cause
positive approach.[33] If attractive stimuli can
selectively activate a region of the brain, then
logically the converse should hold, that
selective activation of that region of the brain
should cause a stimulus to be judged more
positively. This was demonstrated for
moderately attractive visual stimuli[34] and
replicated and extended to include negative
stimuli.[35]
Two neurobiological models of emotion in the
prefrontal cortex made opposing predictions. The Valence Model predicted that anger,
a negative emotion, would activate the right prefrontal cortex. The Direction Model
predicted that anger, an approach emotion, would activate the left prefrontal cortex.
The second model was supported.[36]
This still left open the question of whether the opposite of approach in the prefrontal
cortex is better described as moving away (Direction Model), as unmoving but with
strength and resistance (Movement Model), or as unmoving with passive yielding
(Action Tendency Model). Support for the Action Tendency Model (passivity related to
right prefrontal activity) comes from research on shyness[37] and research on
behavioral inhibition.[38] Research that tested the competing hypotheses generated by
all four models also supported the Action Tendency Model.[39][40]
Homeostatic/primordial emotion
Another neurological approach distinguishes two classes of emotion: "classical"
emotions such as love, anger and fear that are evoked by environmental stimuli, and
"primordial" or "homeostatic emotions" – attention-demanding feelings evoked by body
states, such as pain, hunger and fatigue, that motivate behavior (withdrawal, eating or
resting in these examples) aimed at maintaining the body's internal milieu at its ideal
state.[41]
Derek Denton defines the latter as "the subjective element of the instincts, which are
the genetically programmed behaviour patterns which contrive homeostasis. They
include thirst, hunger for air, hunger for food, pain and hunger for specific minerals etc.
There are two constituents of a primordial emotion--the specific sensation which when
severe may be imperious, and the compelling intention for gratification by a
consummatory act."[42]
Disciplinary approaches
Many different disciplines have produced work on the emotions. Human sciences study
the role of emotions in mental processes, disorders, and neural mechanisms. In
psychiatry, emotions are examined as part of the discipline's study and treatment of
mental disorders in humans. Nursing studies emotions as part of its approach to the
provision of holistic health care to humans. Psychology examines emotions from a
scientific perspective by treating them as mental processes and behavior, and explores the underlying physiological and neurological processes. In neuroscience subfields such as social neuroscience and affective neuroscience, scientists study the
neural mechanisms of emotion by combining neuroscience with the psychological
study of personality, emotion, and mood. In linguistics, the expression of emotion may change the meaning of sounds. In education, the role of emotions in relation to
learning is examined.
Social sciences often examine emotion for the role that it plays in human culture and
social interactions. In sociology, emotions are examined for the role they play in human
society, social patterns and interactions, and culture. In anthropology, the study of
humanity, scholars use ethnography to undertake contextual analyses and cross-cultural comparisons of a range of human activities. Some anthropological studies
examine the role of emotions in human activities. In the field of communication
sciences, critical organizational scholars have examined the role of emotions in
organizations, from the perspectives of managers, employees, and even customers. A
focus on emotions in organizations can be credited to Arlie Russell Hochschild's
concept of emotional labor. The University of Queensland hosts EmoNet,[43] an e-mail
distribution list representing a network of academics that facilitates scholarly discussion
of all matters relating to the study of emotion in organizational settings. The list was
established in January 1997 and has over 700 members from across the globe.
In economics, the social science that studies the production, distribution, and
consumption of goods and services, emotions are analyzed in some sub-fields of
microeconomics, in order to assess the role of emotions on purchase decision-making
and risk perception. In criminology, a social science approach to the study of crime,
scholars often draw on behavioral sciences, sociology, and psychology; emotions are
examined in criminology issues such as anomie theory and studies of "toughness,"
aggressive behavior, and hooliganism. In law, which underpins civil obedience, politics,
economics and society, evidence about people's emotions is often raised in tort law
claims for compensation and in criminal law prosecutions against alleged lawbreakers
(as evidence of the defendant's state of mind during trials, sentencing, and parole
hearings). In political science, emotions are examined in a number of sub-fields, such
as the analysis of voter decision-making.
In philosophy, emotions are studied in sub-fields such as ethics, the philosophy of art
(for example, sensory–emotional values, and matters of taste and sentimentality), and
the philosophy of music (see also Music and emotion). In history, scholars examine
documents and other sources to interpret and analyze past activities; speculation on
the emotional state of the authors of historical documents is one of the tools of
interpretation. In literature and film-making, the expression of emotion is the
cornerstone of genres such as drama, melodrama, and romance. In communication
studies, scholars study the role that emotion plays in the dissemination of ideas and
messages. Emotion is also studied in non-human animals in ethology, a branch of
zoology which focuses on the scientific study of animal behavior. Ethology is a
combination of laboratory and field science, with strong ties to ecology and evolution.
Ethologists often study one type of behavior (for example, aggression) in a number of
unrelated animals.
History
The history of emotions has become an increasingly popular topic recently, with some
scholars arguing that it is an essential category of analysis, not unlike class, race, or
gender. Historians, like other social scientists, assume that emotions, feelings and their
expressions are regulated in different ways by both different cultures and different
historical times, and the constructivist school of history even claims that some sentiments and meta-emotions, for example Schadenfreude, are learned and not only regulated by
culture. Historians of emotion trace and analyse the changing norms and rules of
feeling, while examining emotional regimes, codes, and lexicons from social, cultural or
political history perspectives. Others focus on the history of medicine, science or
psychology. What somebody can and may feel (and show) in a given situation, towards
certain people or things, depends on social norms and rules. It is thus historically
variable and open to change.[44] Several research centers have sprung up in the past few years in different countries, including Germany, England, Spain,[45] Sweden and Australia.
Furthermore, research on historical trauma suggests that some traumatic emotions can be passed on from parents to offspring, down to the second and even third generation, as examples of transgenerational trauma.
Sociology
Attempts are frequently made to regulate emotion according to the conventions of the
society and the situation based on many (sometimes conflicting) demands and
expectations which originate from various entities. The emotion of anger is in many
cultures discouraged in girls and women, while fear is discouraged in boys and men.
Expectations attached to social roles, such as "acting as a man" and not as a woman,
and the accompanying "feeling rules" contribute to the differences in expression of
certain emotions. Some cultures encourage or discourage happiness, sadness, or
jealousy, and the free expression of the emotion of disgust is considered socially
unacceptable in most cultures. Some social institutions are seen as based on certain
emotion, such as love in the case of contemporary institution of marriage. In
advertising, such as health campaigns and political messages, emotional appeals are
commonly found. Recent examples include no-smoking health campaigns and political
campaign advertising emphasizing the fear of terrorism.
Psychotherapy and regulation of emotion
Emotion regulation refers to the cognitive and behavioral strategies people use to
influence their own emotional experience.[46] For example, one behavioral strategy is to avoid a situation in order to avoid unwanted emotions (e.g., trying not to think about the situation, doing distracting activities, etc.).[47] Depending on the particular school's
general emphasis on either cognitive components of emotion, physical energy
discharging, or on symbolic movement and facial expression components of
emotion,[48] different schools of psychotherapy approach the regulation of emotion
differently. Cognitively oriented schools approach emotions via their cognitive components, as in rational emotive behavior therapy. Yet others approach emotions via symbolic
movement and facial expression components (like in contemporary Gestalt
therapy).[49]
Computer science
In the 2000s, research in computer science, engineering, psychology and neuroscience
has been aimed at developing devices that recognize human affect display and model
emotions.[50] In computer science, affective computing is a branch of the study and
development of artificial intelligence that deals with the design of systems and devices
that can recognize, interpret, and process human emotions. It is an interdisciplinary
field spanning computer sciences, psychology, and cognitive science.[51] While the
origins of the field may be traced as far back as to early philosophical enquiries into
emotion,[23] the more modern branch of computer science originated with Rosalind
Picard's 1995 paper[52] on affective computing.[53][54] Detecting emotional
information begins with passive sensors which capture data about the user's physical
state or behavior without interpreting the input. The data gathered is analogous to the
cues humans use to perceive emotions in others. Another area within affective computing is the design of computational devices that either exhibit innate emotional capabilities or are capable of convincingly simulating emotions. Emotional speech processing recognizes the user's emotional state by analyzing speech patterns. The detection and processing of facial expressions or body gestures are achieved through detectors and sensors.
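This pipeline (capture a signal, extract cues, and map them to an emotional label) can be illustrated with a deliberately minimal sketch. The features chosen here (short-time energy and zero-crossing rate) and the nearest-centroid classifier are illustrative assumptions, not the method of any particular affective-computing system; real emotional speech processing relies on much richer acoustic features and models.

import numpy as np

def frame_features(signal, frame_len=400):
    """Split a mono audio signal into frames and compute two crude
    prosodic cues per frame: short-time energy and zero-crossing rate."""
    n_frames = len(signal) // frame_len
    feats = []
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        energy = float(np.mean(frame ** 2))
        zcr = float(np.mean(np.abs(np.diff(np.sign(frame))) > 0))
        feats.append((energy, zcr))
    return np.array(feats)

def utterance_vector(signal):
    """Summarise an utterance by the mean and spread of its frame features."""
    f = frame_features(signal)
    return np.concatenate([f.mean(axis=0), f.std(axis=0)])

class NearestCentroidEmotion:
    """Toy classifier: label an utterance with the emotion whose training
    centroid lies closest in feature space."""
    def fit(self, vectors, labels):
        self.centroids = {lab: np.mean([v for v, l in zip(vectors, labels) if l == lab], axis=0)
                          for lab in set(labels)}
        return self

    def predict(self, vector):
        return min(self.centroids, key=lambda lab: np.linalg.norm(vector - self.centroids[lab]))

In use, labelled recordings (hypothetical training data) would be converted with utterance_vector, passed to fit, and new recordings classified with predict; the same nearest-centroid scheme extends to other passive-sensor channels such as heart rate or skin conductance.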
Notable theorists
In the late 19th century, the most influential theorists were
William James (1842–1910) and Carl Lange (1834–1900).
James was an American psychologist and philosopher
who wrote about educational psychology, psychology of
religious experience/mysticism, and the philosophy of
pragmatism. Lange was a Danish physician and
psychologist. Working independently, they developed the
James–Lange theory, a hypothesis on the origin and
nature of emotions. The theory states that within human
beings, as a response to experiences in the world, the
autonomic nervous system creates physiological events
such as muscular tension, a rise in heart rate, perspiration,
and dryness of the mouth. Emotions, then, are feelings
which come about as a result of these physiological
changes, rather than being their cause.[55]
Silvan Tomkins (1911–1991) developed affect theory and script theory. Affect theory introduced the concept of basic emotions and was based on the idea that emotion, which he called the affect system, was the dominant motivating force in human life.[56]
Some of the most influential theorists on emotion from the 20th century have died in
the last decade. They include Magda B. Arnold (1903–2002), an American psychologist
who developed the appraisal theory of emotions;[57] Richard Lazarus (1922–2002), an
American psychologist who specialized in emotion and stress, especially in relation to
cognition; Herbert A. Simon (1916–2001), who included emotions into decision making
and artificial intelligence; Robert Plutchik (1928–2006), an American psychologist who
developed a psychoevolutionary theory of emotion;[58] and Robert Zajonc (1923–2008), a Polish–American social psychologist who specialized in social and cognitive processes such as social facilitation. An American philosopher, Robert C. Solomon (1942–2007), contributed to the theories on the philosophy of emotions with books such as What Is An Emotion?: Classic and Contemporary Readings (Oxford, 2003). Peter Goldie (1946–2011) was a British philosopher who specialized in ethics, aesthetics, emotion, mood, and character.
Behavioral Neuroscience
Behavioral neuroscience, also known as biological psychology,[1] biopsychology, or psychobiology,[2] is the application of the principles of biology (in particular neurobiology) to the study of physiological, genetic, and developmental mechanisms
of behavior in human and non-human animals. It typically investigates at the level of
nerves, neurotransmitters, brain circuitry and the basic biological processes that
underlie normal and abnormal behavior. Most typically, experiments in behavioral
neuroscience involve non-human animal models (such as rats, mice, and non-human primates), which have implications for a better understanding of human pathology and therefore contribute to evidence-based practice.
Contents
1 History
2 Relationship to other fields of psychology and biology
3 Research methods
3.1 Disabling or decreasing neural function
3.2 Enhancing neural function
3.3 Measuring neural activity
3.4 Genetic manipulations
3.5 Limitations and advantages
4 Topic areas in behavioral neuroscience
History
Behavioral neuroscience as a scientific discipline emerged from a variety of scientific
and philosophical traditions in the 18th and 19th centuries. In philosophy, people like
René Descartes proposed physical models to explain animal and human behavior.
Descartes, for example, suggested that the pineal gland, a midline unpaired structure
in the brain of many organisms, was the point of contact between mind and body.
Descartes also elaborated on a theory in which the pneumatics of bodily fluids could
explain reflexes and other motor behavior. This theory was inspired by moving statues
in a garden in Paris.[3]
Other philosophers also helped give birth to psychology. One of the earliest textbooks
in the new field, The Principles of Psychology by William James (1890), argues that the
scientific study of psychology should be grounded in an understanding of biology:
“ Bodily experiences, therefore, and more particularly brain-experiences, must
take a place amongst those conditions of the mental life of which Psychology
need take account. The spiritualist and the associationist must both be
'cerebralists,' to the extent at least of admitting that certain peculiarities in the
way of working of their own favorite principles are explicable only by the fact
that the brain laws are a codeterminant of their result.
Our first conclusion, then, is that a certain amount of brain-physiology must be
presupposed or included in Psychology.[4] ”
James, like many early psychologists, had considerable training in physiology. The
emergence of both psychology and behavioral neuroscience as legitimate sciences can
be traced from the emergence of physiology from anatomy, particularly neuroanatomy.
Physiologists conducted experiments on living organisms, a practice that was
distrusted by the dominant anatomists of the 18th and 19th centuries.[5] The influential
work of Claude Bernard, Charles Bell, and William Harvey helped to convince the
scientific community that reliable data could be obtained from living subjects.
The term "psychobiology" has been used in a variety of contexts,emphasizing the
importance of biology, which is the discipline that studies organic, neural and cellular
modifications in behavior, plasticity in neuroscience, and biological deceases in all
aspects, in addition, biology focuses and analyzes behavior and all the subjects it is
concerned about, from a scientific point of view. In this context, psychology helps as a
complementary, but important discipline in the neurobiological sciences. The role of
psychology in this questions is that of a social tool that backs up the main or strongest
biological science. The term "psychobiology" was first used in its modern sense by
Knight Dunlap in his book An Outline of Psychobiology (1914).[6] Dunlap also was the
founder and editor-in-chief of the journal Psychobiology. In the announcement of that
journal, Dunlap writes that the journal will publish research "...bearing on the
interconnection of mental and physiological functions", which describes the field of
behavioral neuroscience even in its modern sense.[6]
Relationship to other fields of psychology and biology
In many cases, humans may serve as experimental subjects in behavioral
neuroscience experiments; however, a great deal of the experimental literature in
behavioral neuroscience comes from the study of non-human species, most frequently
rats, mice, and monkeys. As a result, a critical assumption in behavioral neuroscience
is that organisms share biological and behavioral similarities, enough to permit
extrapolations across species. This allies behavioral neuroscience closely with
comparative psychology, evolutionary psychology, evolutionary biology, and
neurobiology. Behavioral neuroscience also has paradigmatic and methodological
similarities to neuropsychology, which relies heavily on the study of the behavior of
humans with nervous system dysfunction (i.e., a non-experimentally based biological
manipulation).
Synonyms for behavioral neuroscience include biopsychology and psychobiology.[7]
Physiological psychology is another term often used synonymously with behavioral neuroscience, though some authors treat physiological psychology as a subfield of behavioral neuroscience with a correspondingly narrower definition.
Research methods
The distinguishing characteristic of a behavioral neuroscience experiment is that either
the independent variable of the experiment is biological, or some dependent variable is
biological. In other words, the nervous system of the organism under study is
permanently or temporarily altered, or some aspect of the nervous system is measured
(usually to be related to a behavioral variable).
Disabling or decreasing neural function
-Lesions - A classic method in which a brain-region of interest is naturally or
intentionally destroyed to observe any resulting changes such as degraded or
enhanced performance on some behavioral measure. Lesions can be placed
with relatively high accuracy thanks to a variety of brain 'atlases' which provide
a map of brain regions in 3-dimensional stereotactic coordinates.
-Surgical lesions - Neural tissue is destroyed by removing it surgically.
-Electrolytic lesions - Neural tissue is destroyed through the application
of electrical shock trauma.
-Chemical lesions - Neural tissue is destroyed by the infusion of a
neurotoxin.
-Temporary lesions - Neural tissue is temporarily disabled by cooling or
by the use of anesthetics such as tetrodotoxin.
-Transcranial magnetic stimulation - A new technique usually used with human
subjects in which a magnetic coil applied to the scalp causes unsystematic
electrical activity in nearby cortical neurons which can be experimentally
analyzed as a functional lesion.
-Psychopharmacological manipulations - A chemical receptor antagonist decreases neural activity by interfering with neurotransmission. Antagonists can
be delivered systemically (such as by intravenous injection) or locally
(intracerebrally) during a surgical procedure into the ventricles or into specific
brain structures. For example, NMDA antagonist AP5 has been shown to inhibit
the initiation of long term potentiation of excitatory synaptic transmission (in
rodent fear conditioning) which is believed to be a vital mechanism in learning
and memory.[8]
-Optogenetic inhibition - A light activated inhibitory protein is expressed in cells
of interest. Powerful millisecond timescale neuronal inhibition is instigated upon
stimulation by the appropriate frequency of light delivered via fiber optics or
implanted LEDs in the case of vertebrates,[9] or via external illumination for
small, sufficiently translucent invertebrates.[10] Bacterial halorhodopsins and proton pumps are the two classes of proteins used for inhibitory optogenetics, achieving inhibition by increasing cytoplasmic levels of halides (Cl−) or decreasing the cytoplasmic concentration of protons, respectively.[11][12]
Enhancing neural function
-Electrical stimulation - A classic method in which neural activity is enhanced by
application of a small electrical current (too small to cause significant cell
death).
-Psychopharmacological manipulations - A chemical receptor agonist facilitates
neural activity by enhancing or replacing endogenous neurotransmitters.
Agonists can be delivered systemically (such as by intravenous injection) or
locally (intracerebrally) during a surgical procedure.
-Transcranial magnetic stimulation - In some cases (for example, studies of
motor cortex), this technique can be analyzed as having a stimulatory effect
(rather than as a functional lesion).
-Optogenetic excitation - A light activated excitatory protein is expressed in
select cells. Channelrhodopsin-2 (ChR2), a light activated cation channel, was
the first bacterial opsin shown to excite neurons in response to light,[13] though
a number of new excitatory optogenetic tools have now been generated by
improving and imparting novel properties to ChR2.[14]
Measuring neural activity
-Optical techniques - Optical methods for recording neuronal activity rely on
methods that modify the optical properties of neurons in response to the cellular
events associated with action potentials or neurotransmitter release.
-Voltage sensitive dyes (VSDs) were among the earliest methods for optically detecting action potentials. VSDs commonly become
fluorescent in response to a neuron's change in voltage, rendering
individual action potentials detectable.[15] Genetically encoded voltage
sensitive fluorescent proteins have also been developed.[16]
-Calcium imaging relies on dyes[17] or genetically encoded proteins[18] that fluoresce upon binding to the calcium that is transiently present during an action potential; a minimal ΔF/F analysis sketch is given after this list.
-Synapto-pHluorin is a technique that relies on a fusion protein that
combines a synaptic vesicle membrane protein and a pH sensitive
fluorescent protein. Upon synaptic vesicle release, the chimeric protein
is exposed to the higher pH of the synaptic cleft, causing a measurable
change in fluorescence.[19]
-Single-unit recording - A method whereby an electrode is introduced into the
brain of a living animal to detect electrical activity that is generated by the
neurons adjacent to the electrode tip. Normally this is performed with sedated
animals but sometimes it is performed on awake animals engaged in a
behavioral event, such as a thirsty rat whisking a particular sandpaper grade
previously paired with water in order to measure the corresponding patterns of
neuronal firing at the decision point.[20]
-Multielectrode recording - The use of a bundle of fine electrodes to record the
simultaneous activity of up to hundreds of neurons.
-fMRI - Functional magnetic resonance imaging, a technique most frequently
applied on human subjects, in which changes in cerebral blood flow can be
detected in an MRI apparatus and are taken to indicate relative activity of larger
scale brain regions (i.e., on the order of hundreds of thousands of neurons).
-Electroencephalography (EEG) - and the derivative technique of event-related potentials, in which scalp electrodes monitor the average activity of neurons in the cortex (again, used most frequently with human subjects).
-Functional neuroanatomy - A more complex counterpart of phrenology. The
expression of some anatomical marker is taken to reflect neural activity. For
example, the expression of immediate early genes is thought to be caused by
vigorous neural activity. Likewise, the injection of 2-deoxyglucose prior to some
behavioral task can be followed by anatomical localization of that chemical; it is
taken up by neurons that are electrically active.
-MEG - Magnetoencephalography shows the functioning of the human brain
through the measurement of electromagnetic activity. Measuring the magnetic
fields created by the electric current flowing within the neurons identifies brain
activity associated with various human functions in real time, with millimeter
spatial accuracy. Clinicians can noninvasively obtain data to help them assess
neurological disorders and plan surgical treatments.
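As noted in the calcium-imaging item above, optical recordings are commonly summarised as a relative fluorescence change, ΔF/F = (F − F0)/F0, where F0 is an estimate of baseline fluorescence. The following is a minimal sketch, assuming the baseline is taken as a low percentile of the trace and that transients are flagged by a simple threshold; it is not the analysis pipeline of any particular imaging system.

import numpy as np

def delta_f_over_f(trace, baseline_percentile=10.0):
    """Convert a raw fluorescence trace (1-D array, one value per imaging
    frame) into dF/F, using a low percentile of the trace as the baseline F0."""
    f0 = np.percentile(trace, baseline_percentile)
    return (trace - f0) / f0

def candidate_events(dff, threshold=0.5):
    """Return frame indices where dF/F first crosses the threshold,
    a crude stand-in for detecting calcium transients tied to spiking."""
    above = dff > threshold
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    return onsets

# Example with synthetic data: a noisy baseline plus two simulated transients.
rng = np.random.default_rng(0)
trace = 100 + rng.normal(0, 1, 1000)
trace[200:230] += 80 * np.exp(-np.arange(30) / 10.0)
trace[600:630] += 60 * np.exp(-np.arange(30) / 10.0)
dff = delta_f_over_f(trace)
print(candidate_events(dff))  # expected to report onsets near frames 200 and 600

On real data the baseline is usually estimated in a rolling window and event detection is far more sophisticated, but the ΔF/F normalisation step is the common starting point.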
Genetic manipulations
-QTL mapping - The influence of a gene in some behavior can be statistically
inferred by studying inbred strains of some species, most commonly mice. The
recent sequencing of the genome of many species, most notably mice, has
facilitated this technique.
-Selective breeding - Organisms, often mice, may be bred selectively among
inbred strains to create a recombinant congenic strain. This might be done to
isolate an experimentally interesting stretch of DNA derived from one strain on
the background genome of another strain to allow stronger inferences about the
role of that stretch of DNA.
-Genetic engineering - The genome may also be experimentally manipulated; for example, knockout mice can be engineered to lack a particular gene, or a gene may be expressed in a strain which does not normally do so (a 'transgenic'). Advanced techniques may also permit the expression or suppression of a gene to occur by injection of some regulating chemical.
Limitations and advantages
Different manipulations have advantages and limitations. Destruction of neural tissue by surgery, electric shock, or neurotoxin is a permanent manipulation and therefore limits follow-up investigation.[21] Most genetic manipulation techniques are also considered permanent.[21] Temporary lesions can be achieved with advances in genetic manipulation; for example, certain genes can now be switched on and off with diet.[21] Pharmacological manipulations also allow certain neurotransmitters to be blocked temporarily, as function returns to its previous state after the drug has been metabolized.[21]
Topic areas in behavioral neuroscience
In general, behavioral neuroscientists study similar themes and issues as academic
psychologists, though limited by the need to use nonhuman animals. As a result, the
bulk of literature in behavioral neuroscience deals with mental processes and
behaviors that are shared across different animal models such as:
-Sensation and perception
-Motivated behavior (hunger, thirst, sex)
-Control of movement
-Learning and memory
-Sleep and biological rhythms
-Emotion
However, with increasing technical sophistication and with the development of more
precise noninvasive methods that can be applied to human subjects, behavioral
neuroscientists are beginning to contribute to other classical topic areas of psychology,
philosophy, and linguistics, such as:
-Language
-Reasoning and decision making
-Consciousness
Behavioral neuroscience has also had a strong history of contributing to the
understanding of medical disorders, including those that fall under the purview of
clinical psychology and biological psychopathology (also known as abnormal
psychology). Although animal models do not exist for all mental illnesses, the field has
contributed important therapeutic data on a variety of conditions, including:
-Parkinson's Disease, a degenerative disorder of the central nervous system
that often impairs the sufferer's motor skills and speech.
-Huntington's Disease, a rare inherited neurological disorder whose most
obvious symptoms are abnormal body movements and a lack of coordination. It
also affects a number of mental abilities and some aspects of personality.
-Alzheimer's Disease, a neurodegenerative disease that, in its most common
form, is found in people over the age of 65 and is characterized by progressive
cognitive deterioration, together with declining activities of daily living and by
neuropsychiatric symptoms or behavioral changes.
-Clinical depression, a common psychiatric disorder, characterized by a
persistent lowering of mood, loss of interest in usual activities and diminished
ability to experience pleasure.
-Schizophrenia, a psychiatric diagnosis that describes a mental illness
characterized by impairments in the perception or expression of reality, most
commonly manifesting as auditory hallucinations, paranoid or bizarre delusions
or disorganized speech and thinking in the context of significant social or
occupational dysfunction.
-Autism, a brain development disorder that impairs social interaction and
communication, and causes restricted and repetitive behavior, all starting before
a child is three years old.
-Anxiety, a physiological state characterized by cognitive, somatic, emotional,
and behavioral components. These components combine to create the feelings
that are typically recognized as fear, apprehension, or worry.
-Drug abuse, including alcoholism.
Ethology
Ethology (from Greek: ἦθος, ethos, "character"; and -λογία, -logia, "the study of") is the scientific and objective study of animal behaviour, and is a sub-topic of zoology. The focus of ethology is on animal behaviour under natural conditions,[1] as opposed to behaviourism, which focuses on behavioural response studies in a laboratory setting.
Many naturalists have studied aspects of animal behaviour throughout history. The modern discipline of ethology is generally considered to have begun during the 1930s with the work of Dutch biologist Nikolaas Tinbergen and Austrian biologists Konrad Lorenz and Karl von Frisch, joint winners of the 1973 Nobel Prize in Physiology or Medicine.[2] Ethology is a combination of laboratory and field science, with a strong relation to some other disciplines such as neuroanatomy, ecology, and evolution. Ethologists are typically interested in a behavioural process rather than in a particular animal group, and often study one type of behaviour, such as aggression, in a number of unrelated animals.
The desire to understand animals has made ethology a rapidly growing field. Since the
turn of the 21st century, many aspects of animal communication, animal emotions,
animal culture, learning, and even sexual conduct that experts long thought they
understood, have been re-examined, and new conclusions reached. New fields have
developed, such as neuroethology.
Understanding ethology or animal behavior can be important in animal training.
Considering the natural behaviours of different species or breeds enables the trainer to
select the individuals best suited to perform the required task. It also enables the
trainer to encourage the performance of naturally occurring behaviours and to discourage undesirable behaviours.[3]
Contents
1 Etymology
2 Relationship with comparative psychology
3 Scala naturae and Lamarck's theories
4 Theory of evolution by natural selection and the beginnings of
ethology
5 Fixed action patterns, animal communication and modal action
patterns
6 Instinct
7 Learning
7.1 Habituation
7.2 Associative learning
7.3 Imprinting
7.4 Observational learning
7.4.1 Imitation
7.4.2 Stimulus enhancement
7.4.3 Social transmission
7.5 Teaching
8 Mating and the fight for supremacy
9 Living in groups
10 Social ethology and recent developments
11 Tinbergen's four questions for ethologists
12 Growth of the field
Etymology
The term ethology derives from the Greek word ethos (ἦθος), meaning character.
Other words that derive from ethos include ethics[4] and ethical. The term was first
popularized by American myrmecologist William Morton Wheeler in 1902.[5] An earlier,
slightly different sense of the term was proposed by John Stuart Mill in his 1843
System of Logic.[6] He recommended the development of a new science, "ethology,"
the purpose of which would be explanation of individual and national differences in
character, on the basis of associationistic psychology. This use of the word was never
adopted.
Relationship with comparative psychology
Comparative psychology also studies animal behaviour, but, as opposed to ethology, is
construed as a sub-topic of psychology rather than as one of biology. Historically,
where comparative psychology researches animal behaviour in the context of what is
known about human psychology, ethology researches animal behaviour in the context
of what is known about animal anatomy, physiology, neurobiology, and phylogenetic
history. Furthermore, early comparative psychologists concentrated on the study of
learning and tended to research behaviour in artificial situations, whereas early
ethologists concentrated on behaviour in natural situations, tending to describe it as
instinctive. The two approaches are complementary rather than competitive, but they
do result in different perspectives and, sometimes, conflicts of opinion about matters of
substance. In addition, for most of the twentieth century, comparative psychology
developed most strongly in North America, while ethology was stronger in Europe. A
practical difference is that early comparative psychologists concentrated on gaining
extensive knowledge of the behaviour of very few species. Ethologists were more
interested in understanding behaviour in a wide range of species to facilitate principled
comparisons across taxonomic groups. Ethologists have made much more use of a truly comparative method than comparative psychologists have.
Scala naturae and Lamarck's theories
Until the 19th century, the most common
theory among scientists was still the concept
of scala naturae, proposed by Aristotle.
According to this theory, living beings were
classified on an ideal pyramid that represented
the simplest animals on the lower levels, with
complexity increasing progressively toward the
top, occupied by human beings. In the
Western world of the time, people believed
animal species were eternal and immutable,
created with a specific purpose, as this
seemed the only possible explanation for the
incredible variety of living beings and their
surprising adaptation to their habitats.[5]
Jean-Baptiste Lamarck (1744 - 1829) was the
first biologist to describe a complex theory of
evolution. His theory substantially comprised two statements: first, that animal organs and behaviour can change according to the way they are used; and second, that those characteristics can be transmitted from one generation to the next (the example of the giraffe whose neck becomes longer while trying to reach the upper leaves of a tree is well known). A further claim was that every living organism, humans included, tends toward a greater level of perfection. When Charles
Darwin went to the Galapagos Islands, he was
well aware of Lamarck's theories and was
influenced by them.
Theory of evolution by natural selection
and the beginnings of ethology
Because ethology is considered a topic of
biology, ethologists have been concerned
particularly with the evolution of behaviour and
the understanding of behaviour in terms of the
theory of natural selection. In one sense, the
first modern ethologist was Charles Darwin,
whose book, The Expression of the Emotions
in Man and Animals, influenced many
ethologists. He pursued his interest in
behaviour by encouraging his protégé George
Romanes, who investigated animal learning and intelligence using an anthropomorphic
method, anecdotal cognitivism, that did not gain scientific support.
Other early ethologists, such as Oskar Heinroth and Julian Huxley, instead
concentrated on behaviours that can be called instinctive, or natural, in that they occur
in all members of a species under specified circumstances. Their starting point for studying the behaviour of a new species was to construct an ethogram (a description of
the main types of natural behaviour with their frequencies of occurrence).[5] This
provided an objective, cumulative base of data about behaviour, which subsequent
researchers could check and supplement.
Fixed action patterns, animal communication and modal action patterns
An important development, associated with the name of Konrad Lorenz though
probably due more to his teacher, Oskar Heinroth, was the identification of fixed action
patterns (FAPs). Lorenz popularized FAPs as instinctive responses that would occur
reliably in the presence of identifiable stimuli (called sign stimuli or releasing stimuli).
These FAPs could then be compared across species, and the similarities and
differences between behaviour could be easily compared with the similarities and
differences in morphology. An important and much quoted study of the Anatidae (ducks
and geese) by Heinroth used this technique. Ethologists noted that the stimuli that
released FAPs were commonly features of the appearance or behaviour of other
members of the animal's own species, and they were able to prove how important
forms of animal communication could be mediated by a few simple FAPs. The most
sophisticated investigation of this kind was the study by Karl von Frisch of the so-called
"dance language" related to bee communication.[7] Lorenz developed an interesting
theory of the evolution of animal communication based on his observations of the
nature of fixed action patterns and the circumstances in which animals emit them.
Instinct
The Merriam-Webster dictionary defines instinct as a largely inheritable and unalterable tendency of an organism to make a complex and specific response to environmental stimuli without involving reason.[8] For ethologists, instinct means a series of predictable behaviours organized as fixed action patterns. Such schemes are only enacted when a precise stimulating signal is present. When such signals act as communication among members of the same species, they are known as releasers. A notable example of a releaser is the beak movement of newly hatched chicks in many bird species, which stimulates the mother's regurgitation to feed her offspring.[9] Another well-known case is the classic experiments by Tinbergen on the Graylag Goose. Like
similar waterfowl, the goose rolls a displaced egg near its nest back to the others with
its beak. The sight of the displaced egg triggers this mechanism. If the egg is taken
away, the animal continues with the behaviour, pulling its head back as if an imaginary
egg is still being manoeuvred by the underside of its beak.[10] However, it also
attempts to move other egg-shaped objects, such as a giant plaster egg, door knob, or
even a volleyball back into the nest. Such objects, when they exaggerate the releasers
found in natural objects, can elicit a stronger version of the behavior than the natural
object, so that the goose ignores its own displaced egg in favour of the giant dummy
egg. These exaggerated releasers for instincts were named supernormal stimuli by
Tinbergen.[11] Tinbergen found he could produce supernormal stimuli for most
instincts in animals—such as cardboard butterflies that male butterflies preferred to
mate with if they had darker stripes than a real female, or dummy fish that a territorial
male stickleback fish fought more violently than a real invading male if the dummy had
a brighter-coloured underside. Harvard psychologist Deirdre Barrett wrote a book about
how easily humans respond to supernormal stimuli for sexual, nurturing, feeding, and
social instincts.[12] However, a behaviour only made of fixed action patterns would be
particularly rigid and inefficient, reducing the probability of survival and reproduction, so
the learning process has great importance, as does the ability to change the
individual's responses based on its experience. It can be argued that the more complex the brain and the longer the life of the individual, the more its behaviour is "intelligent" (in the sense of being guided by experience rather than stereotyped FAPs).
Learning
Habituation
Learning occurs in many ways, one of the most elementary being habituation.[13] This
process is a decrease in an elicited behaviour resulting from the repeated presentation
of an eliciting stimulus.[14] In effect, the animal learns to stop responding to irrelevant
stimuli. An example of learning by habituation is observed in squirrels: when one of them feels threatened, the others hear its signal and go to the nearest refuge.
However, if the signal comes from an individual that has caused many false alarms, the
other squirrels ignore the signal.
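This kind of response decrement can be given a minimal quantitative gloss: if each presentation of an inconsequential stimulus weakens the elicited response by a fixed fraction, the response decays geometrically across presentations. The sketch below is purely illustrative; the decay rate is an assumed parameter, not a value from the ethological literature.

# Minimal habituation sketch: the elicited response weakens by a fixed
# fraction with each repeated, inconsequential presentation of a stimulus.
def habituate(initial_response=1.0, decay=0.3, presentations=10):
    responses = []
    r = initial_response
    for _ in range(presentations):
        responses.append(r)
        r *= (1.0 - decay)  # each exposure reduces the response strength
    return responses

# e.g. a squirrel's flight response to a caller that keeps giving false alarms:
print([round(r, 3) for r in habituate()])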
Associative learning
Another common way of learning is by association, in which a stimulus is, through experience, linked to another that may have nothing to do with it. The first studies of associative learning were made by the Russian physiologist Ivan Pavlov.[15] Examples of associative behaviour include a common goldfish approaching the water surface whenever a human is about to feed it, or the excitement of a dog whenever it sees a collar as a prelude to a walk.
Imprinting
Being able to discriminate the members of one's own species is also of fundamental importance for reproductive success. Such discrimination can be based on a number of factors. However, this important type of learning only takes place in a very limited period of time. This kind of learning is called imprinting,[16] and was a second important finding of Lorenz. Lorenz observed that the young of birds such as geese and chickens followed their mothers spontaneously from almost the first day after they were hatched, and he discovered that this response could be
imitated by an arbitrary stimulus if the eggs were incubated artificially and the stimulus
were presented during a critical period that continued for a few days after hatching.
Observational learning
Imitation
Imitation is an advanced behaviour whereby an animal observes and exactly replicates
the behaviour of another. The National Institutes of Health reported that capuchin
monkeys preferred the company of researchers who imitated them to that of
researchers who did not. The monkeys not only spent more time with their imitators but
also preferred to engage in a simple task with them even when provided with the option
of performing the same task with a non-imitator.[17]
Stimulus enhancement
There are various ways animals can learn using observational learning but without the
process of imitation. One of these is stimulus enhancement in which individuals
become interested in an object as the result of observing others interacting with the
object.[18] Increased interest in an object can result in object manipulation which
allows for new object-related behaviours by trial-and-error learning. Haggerty (1909)
devised an experiment in which a monkey climbed up the side of a cage, placed its arm
into a wooden chute, and pulled a rope in the chute to release food. Another monkey
was provided an opportunity to obtain the food after watching a monkey go through this
process on four separate occasions. The second monkey used a different method and finally succeeded by trial and error.[19] Another example familiar to some cat and
dog owners is the ability of their animals to open doors. The action of humans
operating the handle to open the door results in the animals becoming interested in the
handle and then by trial-and-error, they learn to operate the handle and open the door.
Social transmission
A well-documented example of social transmission of a behaviour occurred in a group
of macaques on Hachijojima Island, Japan. The macaques lived in the inland forest
until the 1960s, when a group of researchers started giving them potatoes on the
beach: soon, they started venturing onto the beach, picking the potatoes from the sand,
and cleaning and eating them.[20] About one year later, an individual was observed
bringing a potato to the sea, putting it into the water with one hand, and cleaning it with
the other. This behaviour was soon expressed by the individuals living in contact with
her; when they gave birth, this behaviour was also expressed by their young - a form of
social transmission.[21]
Teaching
Teaching is a highly specialised aspect of learning in which the "teacher"
(demonstrator) adjusts its behaviour to increase the probability of the "pupil" (observer)
achieving the desired end-result of the behaviour. Killer whales are known to
intentionally beach themselves to catch and eat pinnipeds.[22] Mother killer whales
teach their young to catch pinnipeds by pushing them onto the shore and encouraging
them to attack and eat the prey. Because the mother killer whale is altering her
behaviour to help her offspring learn to catch prey, this is evidence of teaching.[22]
Teaching is not limited to mammals. Many insects have been observed demonstrating various forms of teaching to obtain food. Ants, for example, will guide
each other to food sources through a process called "tandem running," in which an ant
will guide a companion ant to a source of food.[23] It has been suggested that the pupil
ant is able to learn this route to obtain food in the future or teach the route to other
ants.
Mating and the fight for supremacy
Individual reproduction is the most important phase in the proliferation of individuals or genes within a species: for this reason, complex mating rituals exist, which, even though they can be very elaborate, are often regarded as fixed action patterns (FAPs). The
Stickleback's complex mating ritual was studied by Niko Tinbergen and is regarded as
a notable example of a FAP.
Often in social life, animals fight for the right to reproduce, as well as for social supremacy. A common example of fighting for social and sexual supremacy is the so-called pecking order among poultry. Whenever a group of poultry cohabit for a certain length of time, they establish a pecking order. In these groups, one chicken dominates the others and can peck without being pecked. A second chicken can peck all the others except the first, and so on. Higher-ranking chickens are easily distinguished from lower-ranking ones by their well-groomed appearance. While the pecking order is being established, frequent and violent fights can happen, but once established, it is broken only when other individuals enter the group, in which case the pecking order is re-established from scratch.
Living in groups
Several animal species, including humans, tend to live in groups. Group size is a major
aspect of their social environment. Social life is probably a complex and effective
survival strategy. It may be regarded as a sort of symbiosis among individuals of the
same species: a society is composed of a group of individuals belonging to the same
species living within well-defined rules on food management, role assignments and
reciprocal dependence.
When biologists interested in evolution theory first started examining social behaviour,
some apparently unanswerable questions arose, such as how the existence of sterile castes, as in bees, could be explained through an evolutionary mechanism that emphasizes the reproductive success of as many individuals as possible, or why,
amongst animals living in small groups like squirrels, an individual would risk its own
life to save the rest of the group. These behaviours may be examples of altruism.[24]
Of course, not all behaviours are altruistic. For example, revengeful behaviour was at one point claimed to have been observed
exclusively in Homo sapiens. However, other species have been reported to be
vengeful, including reports of vengeful camels[25] and chimpanzees.[26]
The existence of egoism through natural selection poses no problem for evolutionary theory and is, on the contrary, fully predicted by it, as is cooperative
behaviour. It is more difficult to understand the mechanism through which altruistic
behaviour initially developed.
Social ethology and recent developments
In 1970, the English ethologist John H. Crook published an important paper in which he distinguished comparative ethology from social ethology, and argued that much of the ethology that had existed so far was really comparative ethology—examining animals as individuals—whereas, in the future, ethologists would need to concentrate on the behaviour of social groups of animals and the social structure within them.
Also in 1970, Robert Ardrey's book The Social Contract: A Personal Inquiry into the
Evolutionary Sources of Order and Disorder was published.[27] The book and study
investigated animal behaviour and then compared human behaviour to it as a similar
phenomenon.
E. O. Wilson's book Sociobiology: The New Synthesis appeared in 1975, and since that
time, the study of behaviour has been much more concerned with social aspects. It has
also been driven by the stronger, but more sophisticated, Darwinism associated with
Wilson, Robert Trivers, and William Hamilton. The related development of behavioural
ecology has also helped transform ethology. Furthermore, a substantial rapprochement with comparative psychology has occurred, so the modern scientific study of behaviour offers a more or less seamless spectrum of approaches: from animal cognition to more traditional comparative psychology, ethology, sociobiology, and behavioural ecology. Sociobiology has more recently developed into evolutionary psychology.
Tinbergen's four questions for ethologists
Lorenz's collaborator, Niko Tinbergen, argued that ethology always needed to include
four kinds of explanation in any instance of behaviour:
-Function – How does the behavior affect the animal's chances of survival and
reproduction? Why does the animal respond that way instead of some other
way?
-Causation – What are the stimuli that elicit the response, and how has it been
modified by recent learning?
-Development – How does the behavior change with age, and what early
experiences are necessary for the animal to display the behavior?
-Evolutionary history – How does the behavior compare with similar behavior in
related species, and how might it have begun through the process of
phylogeny?
These explanations are complementary rather than mutually exclusive—all instances of
behaviour require an explanation at each of these four levels. For example, the function
of eating is to acquire nutrients (which ultimately aids survival and reproduction), but
the immediate cause of eating is hunger (causation). Hunger and eating are
evolutionarily ancient and are found in many species (evolutionary history), and
develop early within an organism's lifespan (development). It is easy to confuse such
questions—for example, to argue that people eat because they're hungry and not to acquire nutrients—without realizing that the reason people experience hunger is that hunger causes them to acquire nutrients.[28]
Growth of the field
Due to the work of Lorenz and Tinbergen, ethology developed strongly in continental
Europe during the years prior to World War II.[5] After the war, Tinbergen moved to the
University of Oxford, and ethology became stronger in the UK, with the additional
influence of William Thorpe, Robert Hinde, and Patrick Bateson at the Sub-department
of Animal Behaviour of the University of Cambridge, located in the village of
Madingley.[29] In this period, too, ethology began to develop strongly in North America.
Lorenz, Tinbergen, and von Frisch were jointly awarded the Nobel Prize in Physiology or Medicine in 1973 for their work in developing ethology.[30]
Ethology is now a well-recognised scientific discipline, and has a number of journals
covering developments in the subject, such as the Ethology Journal. In 1972, the
International Society for Human Ethology was founded to promote exchange of
knowledge and opinions concerning human behaviour gained by applying ethological principles and methods, and publishes its journal, The Human Ethology Bulletin. In
2008, in a paper published in the journal Behaviour, ethologist Peter Verbeek
introduced the term "Peace Ethology" as a sub-discipline of Human Ethology that is
concerned with issues of human conflict, conflict resolution, reconciliation, war,
peacemaking, and peacekeeping behaviour.[31]
Today, along with actual ethologists, many biologists, zoologists, primatologists,
anthropologists, veterinarians, and physicians study ethology and other related fields
such as animal psychology, the study of animal social groups, and animal cognition.
Some research has begun to study atypical or disordered animal behaviour. Most
researchers in the field have some sort of advanced degree and specialty and
subspecialty training in the aforementioned fields.
Neuropsychology
Neuropsychology studies the structure and function of the brain as they relate to
specific psychological processes and behaviors. It is seen as a clinical and
experimental field of psychology that aims to study, assess, understand and treat
behaviors directly related to brain functioning. The term neuropsychology has been
applied to lesion studies in humans and animals. It has also been applied to efforts to
record electrical activity from individual cells (or groups of cells) in higher primates
(including some studies of human patients).[1] It is scientific in its approach, making
use of neuroscience, and shares an information processing view of the mind with
cognitive psychology and cognitive science.
In practice neuropsychologists tend to work in research settings (universities,
laboratories or research institutions), clinical settings (involved in assessing or treating
patients with neuropsychological problems), forensic settings or industry (often as
consultants where neuropsychological knowledge is applied to product design or in the
management of pharmaceutical clinical-trials research for drugs that might have a
potential impact on CNS functioning).
Contents
1 History
1.1 Imhotep
1.2 Hippocrates
1.3 René Descartes
1.4 Thomas Willis
1.5 Franz Joseph Gall
1.6 Jean-Baptiste Bouillaud
1.7 Paul Broca
1.8 Karl Spencer Lashley
1.9 From then to now
2 Approaches
3 Methods and tools
History
Neuropsychology is a relatively new discipline within the field of psychology; however,
the history of its discovery can be traced all the way back to the Third Dynasty in
ancient Egypt – perhaps even earlier.[2] There is much debate as to when people started seriously looking at the functions of different organs, but it has been
determined that for many centuries, the brain was looked upon as a useless organ and
was generally discarded during burial processes and autopsies. As the field of
medicine developed in understanding human anatomy and physiology, people often
developed different theories as to why the human body functioned the way it did. Many
times, functions of the body were observed from a religious point of view and any
abnormalities were blamed on bad spirits and the gods. The brain has not always been
looked upon as the center for the functioning body as we know it to be now. Rather, the
brain has been the center of much discussion for many centuries. It has taken hundreds of years, and hundreds of great minds committed to discovering the way our bodies work and function both normally and abnormally, to develop our understanding of the brain and how it directly affects our behaviors.
Imhotep
The study of the brain can be linked all the way back to around 3500 B.C. Imhotep, a
highly regarded priest and one of the first physicians recorded in history, can be seen
as one of the major pioneers in the history of understanding the brain.[2] Imhotep took
a more scientific, rather than magical, approach to medicine and disease. His writings
contain intricate information on different forms of trauma, abnormalities, and remedies
of the time to serve as reference to future physicians, as well as a very detailed
account of the brain and the rest of the body. Despite this detailed information,
Egyptians did not see the brain as the locus of control, nor as a glorious or noteworthy organ within the body at all; they preferred to look at the heart as the ‘seat of the soul’.
Hippocrates
The Greeks, however, looked upon the brain as the seat of the soul. Hippocrates drew
a connection between the brain and behaviors of the body saying “The brain exercises
the greatest power in the man”.[3] Apart from moving the focus from the heart as the
“seat of the soul” to the brain, Hippocrates did not go into much detail about its actual
functioning. However, by switching the attention of the medical community to the brain,
the doors were opened to a more scientific discovery of the organ responsible for our
behaviors. For years to come, scientists were inspired to explore the functions of the
body and to find concrete explanations for both normal and abnormal behaviors.
Scientific discovery led them to believe that there were natural and organically
occurring reasons to explain various functions of the body, and it could all be traced
back to the brain. Over the years, science would continue to expand and the mysteries
of the world would begin to make sense, or at least be looked at in a different way.
Hippocrates introduced man to the concept of the mind – which was widely seen as a
separate function apart from the actual brain organ.
René Descartes
Philosopher René Descartes expanded upon this idea and is most widely known for his work on the mind-body problem. Often, Descartes' ideas were looked upon as overly
philosophical and lacking in sufficient scientific background. Descartes focused much
of his anatomical experimentation on the brain, paying specific attention to the pineal
gland – which he argued was the actual “seat of the soul”. Still deeply rooted in a
spiritual outlook towards the scientific world, the body was said to be mortal, and the
soul immortal. The pineal gland was then thought to be the very place at which the
mind would interact with the mortal and machine-like body. At the time, Descartes was
convinced the mind had control over the behaviors of the body (controlling the man) –
but also that the body could have influence over the mind, which is referred to as
dualism.[4] This idea that the mind essentially had control over the body, but man’s
body could resist or even influence other behaviors was a major turning point in the
way many physiologists would look at the brain. The capabilities of the mind were
observed to do much more than simply react, but also to be rational and function in
organized, thoughtful ways – much more complex than he thought the animal world to
be. These ideas, although disregarded by many and cast aside for years, led the medical community to expand its own ideas of the brain and begin to understand in new ways just how intricate the workings of the brain really were, the full effects it had on daily life, and which treatments would be most beneficial to people living with a dysfunctional mind. The mind-body problem,
spurred by René Descartes, continues to this day with many philosophical arguments
both for and against his ideas. However controversial they were and remain today, the
fresh and well thought out perspective Descartes presented has had long lasting
effects on the various disciplines of medicine, psychology and much more, especially in
putting an emphasis on separating the mind from the body in order to explain
observable behaviors.
Thomas Willis
It was during the mid 17th Century that
another major contributor to both the field of
psychology and neurology emerged. Thomas
Willis studied at Oxford University and took a
more physiological approach to the brain and
behavior. It was Willis who coined the words
‘hemisphere’ and ‘lobe’ when referring to the
brain. He also is known to be one of the
earliest to use the words neurology as well as
psychology. Without him, these disciplines
would not be as they are to this day. With a
more physiological approach to the brain and a
rejection of the idea that humans were the only beings capable of rational thought (which was central to Descartes' theory), Willis looked at
specialized structures of the brain. He
hypothesized and experimented within the
theory that higher structures within the brain
accounted for the more complex functions of
the body whereas the lower structures of the brain were responsible for functions
similar to animals, consisting mostly of reactions and automatic responses. Throughout
his career, he tested this hypothesis out on both animals and human brains. Most of
Willis’ attention seemed to be focused on localized area of the brain that were designed
specifically to carry out certain functions – both voluntary and involuntary. He was
particularly interested in looking at both the behaviors as well as the brains of people
who suffered from manic disorders and hysteria. This is one of the first times that
psychiatry and neurology came together to study the individual. Through his in-depth
study of the brain and behavior, Willis concluded that within the lower region of the
brain, automated responses such as breathing, heartbeats and other various motor
activities were carried out. Although much of his work has been proven to be insufficient, and some of it even false, the presentation of localized regions of function
within the brain presented a new idea that the brain was more complex than previously
imagined. This led the way for future pioneers to understand and develop upon his
theories, especially when it came to looking at disorders and dysfunctions of the brain.
The development and expansion upon the theories presented by the minds of the past
are ultimately the driving force behind the ideas of the future in terms of the birth of
neuropsychology.
Franz Joseph Gall
With new theories developing on localization of functioning, neuroanatomist /
physiologist Franz Joseph Gall made some major progress in the way both neurology
and psychology understood the brain. Gall concentrated his career on developing his
theories that personality was directly related to features and structures within the brain.
However, Gall’s major contribution within the field of neuroscience is his invention of
phrenology. This new discipline looked at the brain as an organ of the mind, where the
shape of the skull could ultimately determine one's intelligence and personality.[5] This
theory was not unlike many circulating at the time, as many scientists were taking into
account physical features of the face and body as well as head size and structure to
explain personality as well as levels of intelligence, but Gall looked primarily at the brain. There was much debate over the validity of Gall's claims, however, because he
was often found to be very wrong in his observations. He was sent a cast of the skull of the philosopher and great thinker René Descartes and, through his method of phrenology, claimed that Descartes had a very limited capacity for reasoning and higher cognition.[6] As controversial and often false as many of Gall's claims were in regard
to phrenology, his contributions to understanding cortical regions of the brain and
localized activity continued to further develop understanding of the brain and
personality as well as behavior. His work can be considered crucial to laying a firm
foundation in the field of neuropsychology which would develop immensely within the
next few decades.
Jean-Baptiste Bouillaud
Towards the late 19th century, the belief that the size of one's skull could determine one's level of intelligence was discarded as science
and medicine moved forward. A physician by
the name of Jean-Baptiste Bouillaud expanded
upon the ideas of Gall and took a closer look
at the idea of distinct cortical regions of the
brain each having their own independent
function. Bouillaud was specifically interested
in speech and wrote many publications on the
anterior region of the brain being responsible
for carrying out the act of ones speech, a
discovery that had stemmed from the research
of Gall. He was also one of the first to use
larger samples for research although it took
many years for that method to be accepted. By
looking at over a hundred different case
studies, Bouillaud came to discover that it was
through different areas of the brain that
speech is completed and understood. By
observing people with brain damage, his
theory was made more concrete. Bouillaud, along with many other pioneers of the time,
made great advances within the field of neurology, especially when it came to
localization of function. There are many arguable debates as to who deserves the most
credit for such discoveries,[7] and often, people remain unmentioned, but Paul Broca is
perhaps one of the most famous and well known contributors to neuropsychology –
often referred to as “the father” of the discipline.
Paul Broca
Inspired by the advances being made in the area of localized function within the brain,
Paul Broca committed much of his study to the phenomena of how speech is
understood and produced. Through his studies, it was discovered and later expanded upon that we articulate speech via the left hemisphere. Broca's observations and methods are widely
considered to be where neuropsychology really takes form as a recognizable and
respected discipline. Armed with the understanding that specific, independent areas of
the brain are responsible for the articulation and understanding of speech, the brain was finally being acknowledged as the complex and highly intricate organ that it is. Broca was essentially the first to fully break away from the ideas of phrenology
and delve deeper into a more scientific and psychological view of the brain.[8]
Karl Spencer Lashley
Karl Lashley (1890-1958) attended the University of West Virginia where he was
introduced to zoology and eventually decided to study the behavior of organisms. He
got his Master’s Degree in Bacteriology from the University of Pittsburgh, and then his
PhD in Genetics from Johns Hopkins University where he minored in psychology under
John B. Watson, whom he continued to work closely with after receiving his PhD. It
was during this time that Lashley worked with Franz and was introduced to his
training/ablation method. Lashley worked at the University of Minnesota for a time and
then at the Institute for Juvenile Research in Chicago before becoming a professor at
the University of Chicago. After this he went to Harvard, but was dissatisfied and from
there became the director of the Yerkes Laboratory of Primate Biology in Orange Park,
Florida. Lashley was long viewed as an objective scientist, but more recently Nadine
Weidmann has sought to expose him as a racist and a genetic determinist. Donald
Dewsbury and others have disputed the claim that he was a genetic determinist, citing
research in which Lashley found evidence of both genetic and environmental influences on
organisms. Dewsbury does admit, however, that Lashley was quite racist. He cites a line
from a letter that Lashley wrote to a German colleague which reads: “Too bad that the
beautiful tropical countries are all populated by negros. Heil Hitler and Apartheit!”.[9] This
line alone leaves little room for debate on the matter, and Dewsbury cites others as well.
Despite his racism, Lashley did important work in neuropsychology and pushed his students
to reach even greater heights. His works and theories are summarized in his book Brain
Mechanisms and Intelligence.[10] Lashley's theory of the engram was the driving force
behind much of his research. An engram was believed to be the part of the brain where a
specific memory was stored. He continued to use the training/ablation method that Franz
had taught him: he would train a rat to learn a maze and then make systematic lesions,
removing sections of cortical tissue, to see whether the rat forgot what it had learned.
Through this research he found that forgetting depended on the amount of tissue removed,
not on where it was removed from. He called this mass action, and he believed it was a
general rule governing how brain tissue would respond, independent of the type of learning.
We now know that mass action held for these rats because learning to run a maze is a form
of complex learning that requires multiple cortical areas: cutting into individual parts alone
will not erase the memory, but removing large sections takes out multiple cortical areas at
once, and so the rats can forget. Lashley also found that a portion of a functional area could
carry out the role of the entire area even when the rest of the area had been removed. He
called this phenomenon equipotentiality. We now know that he was seeing evidence of
plasticity in the brain: the brain has the spectacular ability for certain areas to take over the
functions of other areas if those areas fail or are removed.
From then to now
Armed with the new understanding that the brain has independent structures
responsible for both voluntary and involuntary functions, the next steps made were in
developing this new discipline called neuropsychology. The bridging of the two
disciplines meant studying and applying research to the functions and dysfunctions of
the brain and how it affects the body as well as personality. This led to defining mental
disorders and cognitive impairments that were characterized by different models of
treatment. Over the years, different treatment plans and tests have been developed
with the intention to help those with dysfunctions of the mind cope in daily living.
Neuropsychology is a constantly evolving field that relies heavily on research and on the
neuropsychologist's ability to approach problems from multiple directions and to think
experimentally. It is essential for neuropsychologists to understand intricate behaviors such
as emotion in the context of brain physiology, and to be able to assess which treatment
would best suit an individual. Often, different brain abnormalities overlap in their presenting
symptoms, which makes it ambiguous what the underlying issue is, so a neuropsychologist
must work diligently to ensure accuracy and competency. The discipline is extremely
difficult, but it is also very rewarding. Although only a few contributors were mentioned in
this condensed history of neuropsychology, they are some of the best-known pioneers in
the development of the discipline. Each expanded upon the ideas of their forefathers, and
the field of neuropsychology has benefited greatly from the inquisitive minds that dared to
think there might be more to the mysterious organ called the brain than previously
imagined.
Approaches
Experimental neuropsychology is an approach which uses methods from experimental
psychology to uncover the relationship between the nervous system and cognitive
function. The majority of work involves studying healthy humans in a laboratory setting,
although a minority of researchers may conduct animal experiments. Human work in
this area often takes advantage of specific features of our nervous system (for example
that visual information presented to a specific visual field is preferentially processed by
the cortical hemisphere on the opposite side) to make links between neuroanatomy
and psychological function.
Clinical neuropsychology is the application of neuropsychological knowledge to the
assessment (see neuropsychological test and neuropsychological assessment),
management, and rehabilitation of people who have suffered illness or injury
(particularly to the brain) which has caused neurocognitive problems. In particular, clinical
neuropsychologists bring a psychological viewpoint to treatment, seeking to understand how
such illness and injury may affect and be affected by psychological factors. They can also
offer an opinion as to whether a person's difficulties are due to brain pathology, to an
emotional or other (potentially) reversible cause, or to both. For
example, a test might show that both patients X and Y are unable to name items that
they have been previously exposed to within the past 20 minutes (indicating possible
dementia). If patient Y can name some of them with further prompting (e.g. given a
categorical clue such as being told that the item they could not name is a fruit), this
allows a more specific diagnosis than simply dementia (Y appears to have the vascular
type which is due to brain pathology but is usually at least somewhat reversible).
Clinical neuropsychologists often work in hospital settings in an interdisciplinary
medical team; others work in private practice and may provide expert input into
medico-legal proceedings.
Cognitive neuropsychology is a relatively new development and has emerged as a
distillation of the complementary approaches of both experimental and clinical
neuropsychology. It seeks to understand the mind and brain by studying people who
have suffered brain injury or neurological illness. One model of neuropsychological
functioning is known as functional localization. This is based on the principle that if a
specific cognitive problem can be found after an injury to a specific area of the brain, it
is possible that this part of the brain is in some way involved. However, there may be
reason to believe that the link between mental functions and neural regions is not so
simple. An alternative model of the link between mind and brain, such as parallel
processing, may have more explanatory power for the workings and dysfunction of the
human brain. Yet another approach investigates how the pattern of errors produced by
brain-damaged individuals can constrain our understanding of mental representations
and processes without reference to the underlying neural structure. A more recent but
related approach is cognitive neuropsychiatry which seeks to understand the normal
function of mind and brain by studying psychiatric or mental illness.
Connectionism is the use of artificial neural networks to model specific cognitive
processes using what are considered to be simplified but plausible models of how
neurons operate. Once trained to perform a specific cognitive task, these networks are
often damaged or 'lesioned' to simulate brain injury or impairment, so that the network's
degraded performance can be compared with the effects of brain injury in humans.
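As a loose illustration of this idea (a toy sketch, not any particular published connectionist simulation), the code below trains a small feedforward network on an arbitrary toy task with NumPy and then "lesions" growing fractions of its hidden units to watch performance degrade; the task, network size, and lesioning scheme are all choices made for the example.

    # A minimal sketch of the connectionist "lesioning" idea: train a tiny
    # network on a toy task, then zero out increasing fractions of its hidden
    # units and watch performance degrade. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy task: XOR, standing in for "a specific cognitive task".
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    n_hidden = 16
    W1 = rng.normal(0, 1.0, (2, n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 1.0, (n_hidden, 1))
    b2 = np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(X, mask=None):
        h = sigmoid(X @ W1 + b1)
        if mask is not None:          # "lesion": silence selected hidden units
            h = h * mask
        return h, sigmoid(h @ W2 + b2)

    # Plain gradient descent on squared error.
    for _ in range(5000):
        h, out = forward(X)
        err = out - y
        dW2 = h.T @ (err * out * (1 - out))
        db2 = np.sum(err * out * (1 - out), axis=0)
        dh = (err * out * (1 - out)) @ W2.T
        dW1 = X.T @ (dh * h * (1 - h))
        db1 = np.sum(dh * h * (1 - h), axis=0)
        for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
            p -= 0.5 * g

    # Lesion increasing fractions of hidden units and report accuracy.
    for frac in (0.0, 0.25, 0.5, 0.75):
        mask = np.ones(n_hidden)
        lesioned = rng.choice(n_hidden, int(frac * n_hidden), replace=False)
        mask[lesioned] = 0.0
        _, out = forward(X, mask)
        acc = np.mean((out > 0.5) == y)
        print(f"lesioned {frac:.0%} of hidden units -> accuracy {acc:.2f}")

In real connectionist work the "patient" data come from humans with brain injury and the network is a cognitively motivated model, but the comparison logic is the same: impair the trained system and compare its error pattern with the behavioral data.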
Functional neuroimaging uses specific neuroimaging technologies to take readings
from the brain, usually when a person is doing a particular task, in an attempt to
understand how the activation of particular brain areas is related to the task. In
particular, the growth of methodologies to employ cognitive testing within established
functional magnetic resonance imaging (fMRI) techniques to study brain-behavior
relations is having a notable influence on neuropsychological research.
In practice these approaches are not mutually exclusive and most neuropsychologists
select the best approach or approaches for the task to be completed.
Methods and tools
-The use of standardized neuropsychological tests. These tasks have been
designed so that performance on the task can be linked to specific
neurocognitive processes. The tests are typically standardized, meaning that
they have been administered to a specific group (or groups) of individuals
before being used in individual clinical cases. The data resulting from
standardization are known as normative data. After these data have been
collected and analyzed, they are used as the comparative standard against
which individual performances can be compared (a brief sketch of such a
comparison appears after this list). Examples of neuropsychological tests
include the Wechsler Memory Scale (WMS), the Wechsler Adult Intelligence
Scale (WAIS), and the Wechsler Intelligence Scale for Children (WISC). Other
tests include the Halstead-Reitan Neuropsychological Battery, the Boston
Naming Test, the Wisconsin Card Sorting Test, the Benton Visual Retention
Test, and the Controlled Oral Word Association. (The Woodcock Johnson and
the Nelson-Denny are not neuropsychological tests per se; they are
psycho-educational batteries used to measure an individual's intra-disciplinary
strengths and weaknesses in specific academic areas such as writing, reading
and arithmetic.)
-The use of brain scans to investigate the structure or function of the brain is
common, either simply as a way of better assessing brain injury with
high-resolution pictures, or by examining the relative activations of different
brain areas. Such technologies include fMRI (functional magnetic resonance
imaging) and positron emission tomography (PET), which yield data related to
functioning, as well as MRI (magnetic resonance imaging) and computed axial
tomography (CAT or CT), which yield structural data.
-The use of electrophysiological measures designed to measure the activation
of the brain by measuring the electrical or magnetic field produced by the
nervous system. This may include electroencephalography (EEG) or magnetoencephalography (MEG).
-The use of specially designed experimental tasks, often computer-controlled
and typically measuring reaction time and accuracy on particular tasks thought
to be related to a specific neurocognitive process (AACN and NAN Joint
Position Paper). Examples include the Cambridge Neuropsychological Test
Automated Battery (CANTAB) and CNS Vital Signs (CNSVS).
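To make the normative-data comparison mentioned in the first bullet concrete, the short sketch below converts a raw test score into a z-score and percentile against a hypothetical normative sample; the function name and the numbers are invented for illustration and do not come from any actual test manual.

    # Illustrative only: comparing one raw score against hypothetical normative
    # data, the way standardized neuropsychological tests are interpreted.
    from statistics import NormalDist

    def score_vs_norms(raw_score: float, norm_mean: float, norm_sd: float) -> dict:
        """Return z-score and percentile of raw_score relative to a normative group."""
        z = (raw_score - norm_mean) / norm_sd
        percentile = NormalDist().cdf(z) * 100  # assumes roughly normal norms
        return {"z": round(z, 2), "percentile": round(percentile, 1)}

    # Hypothetical norms (mean 100, SD 15, as many index scores use) and a
    # hypothetical patient raw score of 78.
    print(score_vs_norms(78, norm_mean=100, norm_sd=15))
    # prints approximately {'z': -1.47, 'percentile': 7.1}

In practice the clinician would also correct for age, education, and other demographic variables supplied with the published norms before interpreting the score.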
Perception
Perception (from the Latin perceptio, percipio) is the organization, identification, and
interpretation of sensory information in order to represent and understand the
environment.[1] All perception involves signals in the nervous system, which in turn
result from physical or chemical stimulation of the sense organs.[2] For example, vision
involves light striking the retina of the eye, smell is mediated by odor molecules, and
hearing involves pressure waves. Perception is not the passive receipt of these signals,
but is shaped by learning, memory, expectation, and attention.[3][4] Perception
involves these "top-down" effects as well as the "bottom-up" processing of sensory
input.[4] The "bottom-up" processing transforms low-level information to
higher-level information (e.g., extracts shapes for object recognition). The "top-down"
processing refers to a person's concept and expectations (knowledge), and selective
mechanisms (attention) that influence perception. Perception depends on complex
functions of the nervous system, but subjectively seems mostly effortless because this
processing happens outside conscious awareness.[2]
Since the rise of experimental psychology in the 19th Century, psychology's
understanding of perception has progressed by combining a variety of techniques.[3]
Psychophysics quantitatively describes the relationships between the physical qualities
of the sensory input and perception.[5] Sensory neuroscience studies the brain
mechanisms underlying perception. Perceptual systems can also be studied
computationally, in terms of the information they process. Perceptual issues in
philosophy include the extent to which sensory qualities such as sound, smell or color
exist in objective reality rather than in the mind of the perceiver.[3]
Although the senses were traditionally viewed as passive receptors, the study of
illusions and ambiguous images has demonstrated that the brain's perceptual systems
actively and pre-consciously attempt to make sense of their input.[3] There is still active
debate about the extent to which perception is an active process of hypothesis testing,
analogous to science, or whether realistic sensory information is rich enough to make
this process unnecessary.[3]
The perceptual systems of the brain enable individuals to see the world around them
as stable, even though the sensory information is typically incomplete and rapidly
varying. Human and animal brains are structured in a modular way, with different areas
processing different kinds of sensory information. Some of these modules take the form
of sensory maps, mapping some aspect of the world across part of the brain's surface.
These different modules are interconnected and influence each other. For instance, the
taste of a food is strongly influenced by its odor.[6]
Contents
1 Process and terminology
2 Perception and reality
3 Features
3.1 Constancy
3.2 Grouping
3.3 Contrast effects
4 Effect of experience
5 Effect of motivation and expectation
6 Theories
6.1 Perception as direct perception
6.2 Perception-in-action
6.3 Evolutionary psychology and perception
6.4 Theories of visual perception
7 Physiology
8 Types
8.1 Of sound
8.1.1 Of speech
8.2 Touch
8.3 Taste
8.4 Other senses
8.5 Of the social world
Process and terminology
The process of perception begins with an object in the real world, termed the distal
stimulus or distal object.[2] By means of light, sound or another physical process, the
object stimulates the body's sensory organs. These sensory organs transform the input
energy into neural activity—a process called transduction.[2][7] This raw pattern of
neural activity is called the proximal stimulus.[2] These neural signals are transmitted to
the brain and processed.[2] The resulting mental re-creation of the distal stimulus is the
percept. Perception is sometimes described as the process of constructing mental
representations of distal stimuli using the information available in proximal stimuli.
An example would be a person looking at a shoe. The shoe itself is the distal stimulus.
When light from the shoe enters a person's eye and stimulates their retina, that
stimulation is the proximal stimulus.[8] The image of the shoe reconstructed by the
brain of the person is the percept. Another example would be a telephone ringing. The
ringing of the telephone is the distal stimulus. The sound stimulating a person's
auditory receptors is the proximal stimulus, and the brain's interpretation of this as the
ringing of a telephone is the percept. The different kinds of sensation such as warmth,
sound, and taste are called "sensory modalities".[7][9]
Psychologist Jerome Bruner has developed a model of perception. According to him
people go through the following process to form opinions:[10]
1-When we encounter an unfamiliar target we are open to different informational
cues and want to learn more about the target.
2-In the second step we try to collect more information about the target.
Gradually, we encounter some familiar cues which help us categorize the
target.
3-At this stage, we become less open to new cues and more selective. We search
for cues that confirm our categorization of the target, and we actively ignore
and even distort cues that violate our initial perceptions. Our perception
becomes more selective and we finally paint a consistent picture of the target.
According to Alan Saks and Gary Johns, there are three components to perception.[10]
1-The Perceiver, the person who becomes aware of something and comes
to a final understanding. Three factors can influence his or her
perceptions: experience, motivational state, and emotional state. In
different motivational or emotional states, the perceiver will react to or perceive
something in different ways. Also, in different situations he or she might employ
a "perceptual defence" and tend to "see what they want to see".
2-The Target. This is the person who is being perceived or judged. "Ambiguity
or lack of information about a target leads to a greater need for interpretation
and addition."
3-The Situation also greatly influences perceptions because different situations
may call for additional information about the target.
Stimuli are not necessarily translated into a percept and rarely does a single stimulus
translate into a percept. An ambiguous stimulus may be translated into multiple
percepts, experienced randomly, one at a time, in what is called "multistable
perception". And the same stimuli, or absence of them, may result in different percepts
depending on the subject’s culture and previous experiences. Ambiguous figures
demonstrate that a single stimulus can result in more than one percept; for example the
Rubin vase which can be interpreted either as a vase or as two faces. The percept can
bind sensations from multiple senses into a whole. A picture of a talking person on a
television screen, for example, is bound to the sound of speech from speakers to form
a percept of a talking person. "Percept" is also a term used by Leibniz,[11] Bergson,
Deleuze and Guattari[12] to define perception independent from perceivers.
Perception and reality
In the case of visual perception, some people can actually see the percept shift in their
mind's eye.[13] Others, who are not picture thinkers, may not necessarily perceive the
'shape-shifting' as their world changes. The 'esemplastic' nature has been shown by
experiment: an ambiguous image has multiple interpretations on the perceptual level.
This confusing ambiguity of perception is exploited in human technologies such as
camouflage, and also in biological mimicry, for example by European Peacock
butterflies, whose wings bear eye markings that birds respond to as though they were
the eyes of a dangerous predator.
There is also evidence that the brain in some ways operates on a slight "delay", to
allow nerve impulses from distant parts of the body to be integrated into simultaneous
signals.[14]
Perception is one of the oldest fields in psychology. The oldest quantitative laws in
psychology are Weber's law, which states that the smallest noticeable difference in
stimulus intensity is proportional to the intensity of the reference, and Fechner's law,
which quantifies the relationship between the intensity of the physical stimulus and its
perceptual counterpart (for example, testing how much darker a computer screen can
get before the viewer actually notices). The study of perception gave rise to the Gestalt
school of psychology, with its emphasis on a holistic approach.
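Stated in their usual textbook form (the symbols below are chosen for this sketch, not taken from the article):

    \Delta I / I = k_W                 (Weber's law)
    S = k_F \, \ln(I / I_0)            (Fechner's law)

where I is the stimulus intensity, \Delta I the just-noticeable difference, I_0 the threshold intensity at which sensation begins, S the perceived magnitude of the sensation, and k_W and k_F are constants that depend on the sensory modality.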
Features
Constancy
Perceptual constancy is the ability of perceptual systems to recognise the same object
from widely varying sensory inputs.[4][15] For example, individual people can be
recognised from views, such as frontal and profile, which form very different shapes on
the retina. A coin looked at face-on makes a circular image on the retina, but when held
at an angle it makes an elliptical image.[16] In normal perception these are recognised as
a single three-dimensional object. Without this correction process, an animal
approaching from the distance would appear to gain in size.[17][18] One kind of
perceptual constancy is color constancy: for example, a white piece of paper can be
recognised as such under different colors and intensities of light.[18] Another example
is roughness constancy: when a hand is drawn quickly across a surface, the touch
nerves are stimulated more intensely. The brain compensates for this, so the speed of
contact does not affect the perceived roughness.[18] Other constancies include
melody, odor, brightness and words.[19] These constancies are not always total, but
the variation in the percept is much less than the variation in the physical stimulus.[18]
The perceptual systems of the brain achieve perceptual constancy in a variety of ways,
each specialized for the kind of information being processed.[20]
Grouping
The principles of grouping (or Gestalt laws of grouping) are a set of principles in
psychology, first proposed by Gestalt psychologists to explain how humans naturally
perceive objects as organized patterns and objects. Gestalt psychologists argued that
these principles exist because the mind has an innate disposition to perceive patterns in
the stimulus based on certain rules. These principles are organized into six categories.
The principle of proximity states that, all else being equal,
perception tends to group stimuli that are close together as part of the same object,
and stimuli that are far apart as two separate objects. The principle of similarity states
that, all else being equal, perception lends itself to seeing stimuli that physically
resemble each other as part of the same object, and stimuli that are different as part of
a different object. This allows people to distinguish between adjacent and
overlapping objects based on their visual texture and resemblance. The principle of
closure refers to the mind’s tendency to see complete figures or forms even if a picture
is incomplete, partially hidden by other objects, or if part of the information needed to
make a complete picture in our minds is missing. For example, if part of a shape’s
border is missing people still tend to see the shape as completely enclosed by the
border and ignore the gaps. The principle of good continuation makes sense of stimuli
that overlap: when there is an intersection between two or more objects, people tend to
perceive each as a single uninterrupted object. The principle of common fate groups
stimuli together on the basis of their movement. When visual elements are seen
moving in the same direction at the same rate, perception associates the movement as
part of the same stimulus. This allows people to make out moving objects even when
other details, such as color or outline, are obscured. The principle of good form refers
to the tendency to group together forms of similar shape, pattern, color,
etc.[21][22][23][24] Later research has identified additional grouping principles.[25]
Contrast effects
A common finding across many different kinds of perception is that the perceived
qualities of an object can be affected by the qualities of context. If one object is
extreme on some dimension, then neighboring objects are perceived as further away
from that extreme. "Simultaneous contrast effect" is the term used when stimuli are
presented at the same time, whereas "successive contrast" applies when stimuli are
presented one after another.[26]
The contrast effect was noted by the 17th Century philosopher John Locke, who
observed that lukewarm water can feel hot or cold, depending on whether the hand
touching it was previously in hot or cold water.[27] In the early 20th Century, Wilhelm
Wundt identified contrast as a fundamental principle of perception, and since then the
effect has been confirmed in many different areas.[27] These effects shape not only
visual qualities like color and brightness, but other kinds of perception, including how
heavy an object feels.[28] One experiment found that thinking of the name "Hitler" led
to subjects rating a person as more hostile.[29] Whether a piece of music is perceived
as good or bad can depend on whether the music heard before it was unpleasant or
pleasant.[30] For the effect to work, the objects being compared need to be similar to
each other: a television reporter can seem smaller when interviewing a tall basketball
player, but not when standing next to a tall building.[28] In the brain, contrast exerts
effects not only on neuronal firing rates but also on neuronal synchrony.[31]
Effect of experience
With experience, organisms can learn to make finer perceptual distinctions, and learn
new kinds of categorization. Wine-tasting, the reading of X-ray images and music
appreciation are applications of this process in the human sphere. Research has
focused on the relation of this to other kinds of learning, and whether it takes place in
peripheral sensory systems or in the brain's processing of sense information.[citation
needed]
Effect of motivation and expectation
A perceptual set, also called perceptual expectancy or just set is a predisposition to
perceive things in a certain way.[32] It is an example of how perception can be shaped
by "top-down" processes such as drives and expectations.[33] Perceptual sets occur in
all the different senses.[17] They can be long term, such as a special sensitivity to
hearing one's own name in a crowded room, or short term, as in the ease with which
hungry people notice the smell of food.[34] A simple demonstration of the effect
involved very brief presentations of non-words such as "sael". Subjects who were told
to expect words about animals read it as "seal", but others who were expecting boat-related words read it as "sail".[34]
Sets can be created by motivation and so can result in people interpreting ambiguous
figures so that they see what they want to see.[33] For instance, how someone
perceives what unfolds during a sports game can be biased if they strongly support one
of the teams.[35] In one experiment, students were allocated to pleasant or unpleasant
tasks by a computer. They were told that either a number or a letter would flash on the
screen to say whether they were going to taste an orange juice drink or an unpleasant-tasting health drink. In fact, an ambiguous figure was flashed on screen, which could
either be read as the letter B or the number 13. When the letters were associated with
the pleasant task, subjects were more likely to perceive a letter B, and when letters
were associated with the unpleasant task they tended to perceive a number 13.[32]
Perceptual set has been demonstrated in many social contexts. People who are primed
to think of someone as "warm" are more likely to perceive a variety of positive
characteristics in them, than if the word "warm" is replaced by "cold". When someone
has a reputation for being funny, an audience is more likely to find them amusing.[34]
Individuals' perceptual sets reflect their own personality traits. For example, people with
an aggressive personality are quicker to correctly identify aggressive words or
situations.[34]
One classic psychological experiment showed slower reaction times and less accurate
answers when a deck of playing cards reversed the color of the suit symbol for some
cards (e.g. red spades and black hearts).[36]
Philosopher Andy Clark explains that perception, although it occurs quickly, is not
simply a bottom-up process (where minute details are put together to form larger
wholes). Instead, our brains use what he calls predictive coding. It starts with very
broad constraints and expectations for the state of the world, and as expectations are
met, it makes more detailed predictions (errors lead to new predictions, or learning
processes). Clark says this research has various implications; not only can there be no
completely "unbiased, unfiltered" perception, but this means that there is a great deal
of feedback between perception and expectation (perceptual experiences often shape
our beliefs, but those perceptions were based on existing beliefs).[37]
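A very rough sketch of this prediction-error idea is given below; it is a toy illustration, not Clark's own model or any specific predictive-coding architecture, and the signal and learning rate are made up. A running prediction is repeatedly corrected by a fraction of the prediction error, so expectations and input continually feed back on each other.

    # Toy predictive-coding-style loop: the "brain" keeps a prediction of a
    # sensory value and updates it by a fraction of the prediction error.
    # Purely illustrative; constants and signal are invented.
    import math

    def run_predictive_loop(signal, learning_rate=0.3):
        prediction = 0.0
        for t, observed in enumerate(signal):
            error = observed - prediction          # bottom-up prediction error
            prediction += learning_rate * error    # top-down expectation update
            print(f"t={t:2d} observed={observed:6.2f} "
                  f"predicted={prediction:6.2f} error={error:6.2f}")

    # A slowly drifting "sensory" signal that the expectations gradually track.
    signal = [10 + 2 * math.sin(t / 3) for t in range(12)]
    run_predictive_loop(signal)

The prediction errors shrink as the expectation converges on the input, which is the sense in which, on this view, perception is never "unbiased": what is perceived is always input read against a prior prediction.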
Theories
Perception as direct perception
Cognitive theories of perception assume there is a poverty of stimulus. This (with
reference to perception) is the claim that sensations are, by themselves, unable to
provide a unique description of the world."[38] Sensations require 'enriching', which is
the role of the mental model. A different type of theory is the perceptual ecology
approach of James J. Gibson. Gibson rejected the assumption of a poverty of stimulus
by rejecting the notion that perception is based upon sensations – instead, he
investigated what information is actually presented to the perceptual systems. His
theory "assumes the existence of stable, unbounded, and permanent stimulusinformation in the ambient optic array. And it supposes that the visual system can
explore and detect this information. The theory is information-based, not sensationbased."[39] He and the psychologists who work within this paradigm detailed how the
world could be specified to a mobile, exploring organism via the lawful projection of
information about the world into energy arrays.[40] Specification is a 1:1 mapping of
some aspect of the world into a perceptual array; given such a mapping, no enrichment
is required and perception is direct perception.[41]
Perception-in-action
An ecological understanding of perception derived from Gibson's early work is that of
"perception-in-action", the notion that perception is a requisite property of animate
action; that without perception action would be unguided, and without action perception
would serve no purpose. Animate actions require both perception and motion, and
perception and movement can be described as "two sides of the same coin, the coin is
action". Gibson works from the assumption that singular entities, which he calls
"invariants", already exist in the real world and that all that the perception process does
is to home in upon them. A view known as constructivism (held by such philosophers
as Ernst von Glasersfeld) regards the continual adjustment of perception and action to
the external input as precisely what constitutes the "entity", which is therefore far from
being invariant.[42]
Glasersfeld considers an "invariant" as a target to be homed in upon, and a pragmatic
necessity to allow an initial measure of understanding to be established prior to the
updating that a statement aims to achieve. The invariant does not and need not
represent an actuality, and Glasersfeld describes it as extremely unlikely that what is
desired or feared by an organism will never suffer change as time goes on. This social
constructionist theory thus allows for a needful evolutionary adjustment.[43]
A mathematical theory of perception-in-action has been devised and investigated in
many forms of controlled movement, and has been described in many different species
of organism using the General Tau Theory. According to this theory, tau information, or
time-to-goal information, is the fundamental 'percept' in perception.
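The core quantity of the theory, tau, is usually written as the ratio of a gap to its rate of closure (the notation here is illustrative):

    \tau(x) = x / \dot{x}

where x is the size of the gap being closed (for example, the distance to a surface) and \dot{x} is its current rate of change; the magnitude of \tau approximates the time remaining until the gap closes if the current closure rate is maintained.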
Evolutionary psychology and perception
Many philosophers, such as Jerry Fodor, write that the purpose of perception is
knowledge, but evolutionary psychologists hold that its primary purpose is to guide
action.[44] For example, they say, depth perception seems to have evolved not to help
us know the distances to other objects but rather to help us move around in space.[44]
Evolutionary psychologists say that animals from fiddler crabs to humans use eyesight
for collision avoidance, suggesting that vision is basically for directing action, not
providing knowledge.[44]
Building and maintaining sense organs is metabolically expensive, so these organs
evolve only when they improve an organism's fitness.[44] More than half the brain is
devoted to processing sensory information, and the brain itself consumes roughly one-fourth of one's metabolic resources, so the senses must provide exceptional benefits to
fitness.[44] Perception accurately mirrors the world; animals get useful, accurate
information through their senses.[44]
Scientists who study perception and sensation have long understood the human
senses as adaptations.[44] Depth perception consists of processing over half a dozen
visual cues, each of which is based on a regularity of the physical world.[44] Vision
evolved to respond to the narrow range of electromagnetic energy that is plentiful and
that does not pass through objects.[44] Sound waves provide useful information about
the sources of and distances to objects, with larger animals making and hearing lower-frequency sounds and smaller animals making and hearing higher-frequency
sounds.[44] Taste and smell respond to chemicals in the environment that were
significant for fitness in the EEA.[44] The sense of touch is actually many senses,
including pressure, heat, cold, tickle, and pain.[44] Pain, while unpleasant, is
adaptive.[44] An important adaptation for senses is range shifting, by which the
organism becomes temporarily more or less sensitive to sensation.[44] For example,
one's eyes automatically adjust to dim or bright ambient light.[44] Sensory abilities of
different organisms often coevolve, as is the case with the hearing of echolocating bats
and that of the moths that have evolved to respond to the sounds that the bats
make.[44]
Evolutionary psychologists claim that perception demonstrates the principle of
modularity, with specialized mechanisms handling particular perception tasks.[44] For
example, people with damage to a particular part of the brain suffer from the specific
defect of not being able to recognize faces (prosopagnosia).[44] Evolutionary psychology
suggests that this indicates a so-called face-reading module.[44]
Theories of visual perception
-Empirical theories of perception
-Anne Treisman's feature integration theory
-Interactive activation and competition
-Irving Biederman's recognition by components theory
Physiology
A sensory system is a part of the nervous system responsible for processing sensory
information. A sensory system consists of sensory receptors, neural pathways, and
parts of the brain involved in sensory perception. Commonly recognized sensory
systems are those for vision, hearing, somatic sensation (touch), taste and olfaction
(smell). It has been suggested that the immune system is an overlooked sensory
modality.[45] In short, senses are transducers from the physical world to the realm of
the mind.
The receptive field is the specific part of the world to which a receptor organ and
receptor cells respond. For instance, the part of the world an eye can see is its
receptive field; the light that each rod or cone can see is its receptive field.[46]
Receptive fields have been identified for the visual system, auditory system and
somatosensory system, so far.
Types
Of sound
Hearing (or audition) is the ability to perceive sound by detecting vibrations.
Frequencies capable of being heard by humans are called audio or sonic. The range is
typically considered to be between 20 Hz and 20,000 Hz.[47] Frequencies higher than
audio are referred to as ultrasonic, while frequencies below audio are referred to as
infrasonic.
The auditory system includes the outer
ears which collect and filter sound
waves, the middle ear for transforming
the sound pressure (impedance
matching), and the inner ear which
produces neural signals in response to
the sound. By the ascending auditory
pathway these are led to the primary auditory cortex within the temporal lobe of the
human brain, which is where the auditory information arrives in the cerebral cortex and
is further processed there.
Sound does not usually come from a single source: in real situations, sounds from
multiple sources and directions are superimposed as they arrive at the ears. Hearing
involves the computationally complex task of separating out the sources of interest,
often estimating their distance and direction as well as identifying them.[16]
Of speech
Speech perception is the process by which the sounds of language are heard,
interpreted and understood. Research in speech perception seeks to understand how
human listeners recognize speech sounds and use this information to understand
spoken language. The sound of a word can vary widely according to words around it
and the tempo of the speech, as well as the physical characteristics, accent and mood
of the speaker. Listeners manage to perceive words across this wide range of different
conditions. Another variation is that reverberation can make a large difference in sound
between a word spoken from the far side of a room and the same word spoken up
close. Experiments have shown that people automatically compensate for this effect
when hearing speech.[16][48]
The process of perceiving speech
begins at the level of the sound within
the auditory signal and the process of
audition. After processing the initial
auditory signal, speech sounds are
further processed to extract acoustic
cues and phonetic information. This
speech information can then be used
for higher-level language processes,
such as word recognition. Speech
perception is not necessarily unidirectional. That is, higher-level
language processes connected with
morphology, syntax, or semantics may interact with basic speech perception processes
to aid in recognition of speech sounds.[citation needed] It may be the case that it is not
necessary and maybe even not possible for a listener to recognize phonemes before
recognizing higher units, like words for example. In one experiment, Richard M. Warren
replaced one phoneme of a word with a cough-like sound. His subjects restored the
missing speech sound perceptually without any difficulty and what is more, they were
not able to identify accurately which phoneme had been disturbed.[49]
Touch
Haptic perception is the process of recognizing objects through touch. It involves a
combination of somatosensory perception of patterns on the skin surface (e.g., edges,
curvature, and texture) and proprioception of hand position and conformation. People
can rapidly and accurately identify three-dimensional objects by touch.[50] This
involves exploratory procedures, such as moving the fingers over the outer surface of
the object or holding the entire object in the hand.[51] Haptic perception relies on the
forces experienced during touch.[52]
Gibson defined the haptic system as "The sensibility of the individual to the world
adjacent to his body by use of his body".[53] Gibson and others emphasized the close
link between haptic perception and body movement: haptic perception is active
exploration. The concept of haptic perception is related to the concept of extended
physiological proprioception according to which, when using a tool such as a stick,
perceptual experience is transparently transferred to the end of the tool.
Taste
Taste (or, the more formal term, gustation) is the ability to perceive the flavor of
substances including, but not limited to, food. Humans receive tastes through sensory
organs called taste buds, or gustatory calyculi, concentrated on the upper surface of
the tongue.[54] The human tongue has 100 to 150 taste receptor cells on each of its
roughly ten thousand taste buds.[55] There are five primary tastes: sweetness,
bitterness, sourness, saltiness, and umami. Other tastes can be mimicked by
combining these basic tastes.[55][56] The recognition and awareness of umami is a
relatively recent development in Western cuisine.[57] The basic tastes contribute only
partially to the sensation and flavor of food in the mouth — other factors include smell,
detected by the olfactory epithelium of the nose;[6] texture, detected through a variety
of mechanoreceptors, muscle nerves, etc.;[56][58] and temperature, detected by
thermoreceptors.[56] All basic tastes are classified as either appetitive or aversive,
depending upon whether the things they sense are harmful or beneficial.[59]
Other senses
Other senses enable perception of body balance, acceleration, gravity, position of body
parts, temperature, pain, time, and perception of internal senses such as suffocation,
gag reflex, intestinal distension, fullness of rectum and urinary bladder, and sensations
felt in the throat and lungs.
Of the social world
Social perception is the part of perception that allows people to understand the
individuals and groups of their social world, and thus an element of social cognition.[60]
Arousal
Arousal is a physiological and psychological state of being awake or reactive to stimuli.
It involves the activation of the reticular activating system in the brain stem, the
autonomic nervous system and the endocrine system, leading to increased heart rate
and blood pressure and a condition of sensory alertness, mobility and readiness to
respond.
There are many different neural systems involved in what is collectively known as the
arousal system. Four major systems originating in the brainstem, with connections
extending throughout the cortex, are based on the brain's neurotransmitters,
acetylcholine, norepinephrine, dopamine, and serotonin. When these systems are in
action, the receiving neural areas become sensitive and responsive to incoming
signals.
Contents
1 Importance
2 Personality
2.1 Introversion and extraversion
2.2 Emotional stability vs. introversion-extraversion
2.3 The four personality types
3 Emotion
3.1 Cannon-Bard theory
3.2 James-Lange theory
3.3 Schachter-Singer two-factor theory
4 Memory
5 Preference
6 Associated problems
7 Abnormally increased behavioral arousal
Importance
Arousal is important in regulating consciousness, attention, and information processing.
It is crucial for motivating certain behaviours, such as mobility, the pursuit of nutrition,
the fight-or-flight response and sexual activity (see Masters and Johnson's human
sexual response cycle, where it is known as the arousal phase). It is also very
important in emotion, and has been included as a part of many influential theories such
as the James-Lange theory of emotion. According to Hans Eysenck, differences in
baseline arousal level lead people to be either extroverts or introverts. Later research
suggests that extroverts and introverts most likely differ in arousability: their baseline
arousal level is the same, but their response to stimulation is different.[2]
The Yerkes–Dodson law states that there is a relationship between arousal and task
performance, essentially arguing that there is an optimal level of arousal for
performance, and too little or too much arousal can adversely affect task performance.
One interpretation of the Yerkes–Dodson law is the Easterbrook cue-utilisation
hypothesis. Easterbrook states that an increase of emotion leads to a decrease in
number of cues that can be utilised.[3]
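The inverted-U shape implied by the Yerkes–Dodson law can be pictured with a toy model; the Gaussian form and the numbers below are purely illustrative and are not the law's actual, empirically determined curve.

    # Toy inverted-U: performance peaks at a moderate arousal level and falls
    # off on either side. The shape and parameters are illustrative only.
    import math

    def performance(arousal, optimum=0.5, width=0.2):
        """Hypothetical performance (0..1) as a function of arousal (0..1)."""
        return math.exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

    for a in [0.1, 0.3, 0.5, 0.7, 0.9]:
        print(f"arousal={a:.1f} -> performance={performance(a):.2f}")

In the empirical literature the optimum also shifts with task difficulty (simpler tasks tolerate higher arousal), which a model like this would capture by moving the optimum parameter.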
In positive psychology, arousal is described as a response to a difficult challenge for
which the subject has moderate skills.[1]
Personality
Introversion and extraversion
Hans Eysenck's theory of arousal describes the different natural frequency or arousal
states of the brains of people who are introverted versus people who are extroverted.
The theory states that the brains of extroverts are naturally less stimulated, so these
types have a predisposition to seek out situations and partake in behaviors that will
stimulate arousal.[4] Introverts are thus naturally over-stimulated, so they avoid intense
arousal, whereas extroverts are naturally under-stimulated and so actively engage in
arousing situations. Campbell and Hawley (1982) studied the differences in introverts'
versus extroverts' responses to particular work environments in the library.[4]
The study found that introverts were more likely to choose quiet areas with minimal to
no noise or people. Extroverts were more likely to choose areas with much activity with
more noise and people.[4] Daoussiss and McKelvie's (1986) research showed that
introverts performed worse on memory tasks when they were in the presence of music
compared to silence. Extroverts were less affected by the presence of music.[4]
Similarly, Belojevic, Slepcevic and Jokovljevic (2001) found that introverts had more
concentration problems and fatigue in their mental processing when work was coupled
with external noise or distracting factors.[4] The level of arousal in the surrounding
environment greatly affected the individuals' ability to perform tasks and behaviors, with
the introverts more affected than the extroverts because of their naturally high and low
baseline levels of stimulation, respectively.
Emotional stability vs. introversion-extraversion
Neuroticism or emotional instability and extraversion are two factors of the Big Five
Personality Index. These two dimensions of personality describe how a person deals
with anxiety-provoking or emotional stimuli as well as how a person behaves and
responds to relevant and irrelevant external stimuli in their environment. Neurotics
experience tense arousal which is characterized by tension and nervousness.
Extraverts experience high energetic arousal which is characterized by vigor and
energy.[5] Gray (1981) claimed that extraverts have a higher sensitivity to reward
signals than to punishment in comparison to introverts. Reward signals aim to raise the
energy levels.[5] Therefore extraverts typically have a higher energetic arousal
because of their greater response to rewards.
The four personality types
Hippocrates theorized that there are four personality types: choleric, melancholic,
sanguine, and phlegmatic.
Put in terms of the five-factor model of personality, choleric people are high in neuroticism
and high in extraversion. The choleric react immediately, and the arousal is strong,
lasting, and can easily create new excitement about similar situations, ideas, or
impressions.[6] Melancholic people are high in neuroticism and low in extraversion (or
more introverted). The melancholic are slow to react and it takes time for an impression
to be made upon them if any is made at all. However, when aroused by something,
melancholics have a deeper and longer lasting reaction, especially when exposed to
similar experiences.[6] Sanguine people are low in neuroticism (or more emotionally
stable) and high in extraversion. The sanguine are quickly aroused and excited, like the
cholerics, but unlike the cholerics, their arousal is shallow and superficial and leaves
them as quickly as it developed.[6] Phlegmatic people are low in neuroticism
and low in extraversion. The phlegmatic are slower to react and the arousal is
fleeting.[6]
The contrasts between the different temperaments come from individual variations in a
person's brain stem, limbic system, and thalamocortical arousal system. These
differences are observed in electroencephalogram (EEG) recordings, which monitor brain
activity.[7] Limbic system activation is typically linked to neuroticism, with high
activation indicating high neuroticism.[8] Cortical arousal is associated with introversion-extraversion differences, with high arousal associated with introversion.[8] Both the
limbic system and the thalamocortical arousal system are influenced by the brain stem
activation.[8] Robinson's study (1982) concluded that melancholic types had the
greatest natural frequencies, or a "predominance of excitation," meaning that
melancholics (who are characterized by introversion) have a higher internal level of
arousal.[7] Sanguine people (or those with high extraversion and low neuroticism) had
the lowest overall levels of internal arousal, or a "predominance of inhibition.[7]"
Melancholics also had the highest overall thalamocortical excitation, whereas cholerics
(those with high extraversion and high neuroticism) had the lowest intrinsic
thalamocortical excitation.[7] The differences in the internal system levels is the
evidence that Eysenck used to explain the differences between the introverted and the
extroverted. Pavlov, the founder of classical conditioning, also partook in temperament
studies with animals. Pavlov's findings with animals are consistent with Eysenck's
conclusions. In his studies, melancholics produced an inhibitory response to all
external stimuli, which is consistent with the idea that melancholics shut out outside
arousal because they are already deeply internally aroused.[7] Pavlov found that cholerics responded to stimuli
with aggression and excitement whereas melancholics became depressed and
unresponsive.[7] The high neuroticism shared by melancholics and cholerics manifested
itself differently in each type because of the different levels of internal arousal the two
types had.
Emotion
Cannon-Bard theory
The Cannon-Bard Theory is a theory of undifferentiated arousal, where the physical
and emotional states occur at the same time in response to an event. This theory
states that an emotionally provoking event results in both the physiological arousal and
the emotion occurring concurrently.[9] For example, a dear family member dies. A
potential physiological response would be tears falling down your face and your throat
feeling dry. You are "sad." The Cannon-Bard theory states that the tears and the
sadness both happen at the same time. The process goes: event (family member dies)
--> physiological arousal (tears) AND emotion (sadness) simultaneously.[9]
James-Lange theory
The James-Lange Theory describes how emotion is caused by the bodily changes
which come from the perception of the emotionally arousing experience or
environment.[10] This theory states that events cause the autonomic nervous system
to induce physiological arousal, characterized by muscular tension, heart rate
increases, perspiration, dryness of mouth, tears, etc.[11] According to James and
Lange, the emotion comes as a result of the physiological arousal.[12] The bodily
feeling as a reaction to the situation IS the emotion.[10] For example, someone just
deeply insulted you and your family. Your fists ball up, you begin to perspire, and you
are tense all around. You feel that your fists are balled and that you are tense. You
then realize that you are angry. The process here is: event (insult) --> physiological
arousal (balled fists, sweat, tension) --> interpretation (I have balled fists, and tension) --> emotion (anger: I am angry).[12] This type of theory emphasizes the physiological
arousal as the key, in that the cognitive processes alone would not be sufficient
evidence of an emotion.
Schachter-Singer two-factor theory
The Schachter-Singer Two-Factor Theory or the cognitive labeling theory takes into
account both the physiological arousal and the cognitive processes that respond to an
emotion provoking situation. Schachter and Singer's theory states that an emotional
state is the product of the physiological arousal and the cognition appropriate to that state
of arousal; that is, cognition determines how the physical response is labeled,
either as "anger," "joy," "fear," etc.[10] Emotion is a product of the interaction between
the state of arousal as well as how one's thought processes appraise the current
situation.[13] The physiological arousal, however, does not label the emotion, but the
cognitive label does. For example, let's say you are being pursued by a serial killer.
You will be sweating and your heart will be racing, which is your physiological state.
Your cognitive label will come from assessing your quickly beating heart and sweat as
"fear." Then you will feel the emotion of "fear," but only after it has been established
through cognition. The process is: the event (serial killer chasing you) --> physiological
arousal (sweat, heart racing) --> cognitive label (reasoning; this is fear) --> emotion
(fear).[12]
Memory
Arousal is involved in the detection, retention, and retrieval of information in the
memory process. Emotionally arousing information can lead to better memory
encoding, therefore influencing better retention and retrieval of information. Arousal is
related to selective attention during the encoding process: people are more likely to
encode arousing information than neutral information.[14] The
selectivity of encoding arousing stimuli produces better long-term memory results than
the encoding of neutral stimuli.[15] In other words, the retention and accumulation of
information is strengthened when exposed to arousing events or information. Arousing
information is also retrieved or remembered more vividly and accurately.[16]
Although arousal improves memory under most circumstances, there are some
considerations. Arousal at learning is associated more with long-term recall and
retrieval of information than short-term recall of information. For example, one study
found that people could remember arousing words better after one week of learning
them than merely two minutes after learning them.[17] Another study found that arousal
affects the memory of people in different ways. Hans Eysenck found an association
between memory and the arousal of introverts versus extroverts. Higher levels of
arousal increased the amount of words retrieved by extroverts and decreased the
amount of words retrieved by introverts.[17]
Preference
A person’s level of arousal when introduced to stimuli can be indicative of his or her
preferences. One study found that familiar stimuli are often preferred to unfamiliar
stimuli. The findings suggested that the exposure to unfamiliar stimuli was correlated to
avoidance behaviors. The unfamiliar stimuli may lead to increased arousal and
increased avoidance behaviors.[18]
On the contrary, increased arousal can increase approach behaviors as well. People
are said to make decisions based on their emotional states. They choose specific
options that lead to more favorable emotional states.[19] When a person is aroused, he
or she may find a wider range of events appealing[20] and view decisions as more
salient, specifically influencing approach-avoidance conflict.[19] The state of arousal
might lead a person to view a decision more positively than he or she would have in a
less aroused state.
The reversal theory accounts for the preference of either high or low arousal in different
situations. Both forms of arousal can be pleasant or unpleasant, depending on a
person’s moods and goals at a specific time.[21] Wundt’s hedonic curve and Berlyne’s
hedonic curve differ slightly from this theory. Both theorists explain a person’s arousal
potential in terms of his or her hedonic tone. These individual differences in arousal
demonstrate Eysenck’s theory that extroverts prefer increased stimulation and arousal,
whereas introverts prefer lower stimulation and arousal.[22]
Associated problems
Arousal is associated with both anxiety and depression.
Depression can influence a person’s level of arousal by interfering with the right
hemisphere’s functioning. Arousal in women has been shown to be slowed in the left
visual field due to depression, indicating the influence of the right hemisphere.[23]
Arousal and anxiety have a different relationship than arousal and depression. People
who suffer from anxiety disorders tend to have abnormal and amplified perceptions of
arousal. The distorted perceptions of arousal then create fear and distorted perceptions
of the self. For example, a person may believe that he or she will get sick from being so
nervous about taking an exam. The fear of the arousal of nervousness and how people
will perceive this arousal will then contribute to levels of anxiety.[24]
Abnormally increased behavioral arousal
This is a state caused by withdrawal from alcohol or barbiturates, acute encephalitis,
head trauma resulting in coma, partial seizures in epilepsy, metabolic disorders of
electrolyte imbalance, intracranial space-occupying lesions, Alzheimer's disease,
rabies, hemispheric lesions in stroke and multiple sclerosis.[25]
Anatomically this is a disorder of the limbic system, hypothalamus, temporal lobes,
amygdala and frontal lobes.[25] It is not to be confused with mania.
Executive Functions
Executive functions (also known as cognitive control and the supervisory attentional
system) is an umbrella term for the management (regulation, control) of cognitive
processes,[1] including working memory, reasoning, task flexibility, and problem
solving,[2] as well as planning and execution.[3] The executive system is a theorized
cognitive system in psychology that controls and manages other cognitive processes,
such as executive functions. The prefrontal areas of the frontal lobe are necessary but
not sufficient for carrying out these functions.[4]
Contents
1 Neuroanatomy
2 Hypothesized role
3 Historical perspective
4 Development
4.1 Early childhood
4.2 Preadolescence
4.3 Adolescence
4.4 Adulthood
5 Models
5.1 Top-down inhibitory control
5.2 Working memory model
5.3 Supervisory attentional system (SAS)
5.4 Self-regulatory model
5.5 Problem-solving model
5.6 Lezak’s conceptual model
5.7 Miller & Cohen's (2001) model
5.8 Miyake and Friedman’s model of executive functions
5.9 Banich’s (2009) "Cascade of control" model
6 Assessment
7 Experimental evidence
7.1 Context-sensitivity of PFC neurons
7.2 Attentional biasing in sensory regions
7.3 Connectivity between the PFC and sensory regions
7.4 Bilingualism and executive functions
8 Future directions
Neuroanatomy
Historically, the executive functions have been seen as regulated by the prefrontal
regions of the frontal lobes, but whether this is really the case remains a matter of
ongoing debate.[4] Although articles on prefrontal lobe lesions commonly refer to
disturbances of executive functions and vice versa, a review found evidence for the
sensitivity but not for the specificity of executive function measures to frontal lobe
functioning. This means that both frontal and non-frontal brain regions are necessary
for intact executive functions: the frontal lobes probably participate in essentially all of
the executive functions, but they are not the only brain structures involved.[4]
Neuroimaging and lesion studies have identified the functions which are most often
associated with the particular regions of the prefrontal cortex.[4]
-The dorsolateral prefrontal cortex (DLPFC) is involved with "on-line" processing
of information such as integrating different dimensions of cognition and
behaviour.[5] As such, this area has been found to be associated with verbal
and design fluency, ability to maintain and shift set, planning, response
inhibition, working memory, organisational skills, reasoning, problem solving
and abstract thinking.[4][6]
-The anterior cingulate cortex (ACC) is involved in emotional drives, experience
and integration.[5] Associated cognitive functions include inhibition of
inappropriate responses, decision making and motivated behaviours. Lesions in
this area can lead to low drive states such as apathy, abulia or akinetic mutism
and may also result in low drive states for such basic needs as food or drink and
possibly decreased interest in social or vocational activities and sex.[5][7]
-The orbitofrontal cortex (OFC) plays a key role in impulse control, maintenance
of set, monitoring ongoing behaviour and socially appropriate behaviours.[5]
The orbitofrontal cortex also has roles in representing the value of rewards
based on sensory stimuli and evaluating subjective emotional experiences.[8]
Lesions can cause disinhibition, impulsivity, aggressive outbursts, sexual
promiscuity and antisocial behaviour.[4]
Furthermore, in their review, Alvarez and Emory state that: "The frontal lobes have
multiple connections to cortical, subcortical and brain stem sites. The basis of "higher-level" cognitive functions such as inhibition, flexibility of thinking, problem solving,
planning, impulse control, concept formation, abstract thinking, and creativity often
arise from much simpler, "lower-level" forms of cognition and behavior. Thus, the
concept of executive function must be broad enough to include anatomical structures
that represent a diverse and diffuse portion of the central nervous system."[4]
Hypothesized role
The executive system is thought to be heavily involved in handling novel situations
outside the domain of some of our 'automatic' psychological processes that could be
explained by the reproduction of learned schemas or set behaviors. Psychologists Don
Norman and Tim Shallice have outlined five types of situations in which routine
activation of behavior would not be sufficient for optimal performance:[9]
-Those that involve planning or decision making
-Those that involve error correction or troubleshooting
-Situations where responses are not well-rehearsed or contain novel sequences
of actions
-Dangerous or technically difficult situations
-Situations that require the overcoming of a strong habitual response or
resisting temptation.
A prepotent response is a response for which immediate reinforcement (positive or
negative) is available or has been previously associated with that response.[10] The
executive functions are often invoked when it is necessary to override these prepotent
responses that might otherwise be automatically elicited by stimuli in the external
environment. For example, on being presented with a potentially rewarding stimulus,
such as a tasty piece of chocolate cake, a person might have the automatic response
to take a bite. However, where such behavior conflicts with internal plans (such as
having decided not to eat chocolate cake while on a diet), the executive functions might
be engaged to inhibit that response.
Although suppression of these prepotent responses is ordinarily considered adaptive,
problems for the development of the individual and the culture arise when feelings of
right and wrong are overridden by cultural expectations or when creative impulses are
overridden by executive inhibitions.[11]
Historical perspective
Although research into the executive functions and their neural basis has increased
markedly over recent years, the theoretical framework in which it is situated is not new.
In the 1950s, the British psychologist Donald Broadbent drew a distinction between
"automatic" and "controlled" processes (a distinction characterized more fully by
Shiffrin and Schneider in 1977),[12] and introduced the notion of selective attention, to
which executive functions are closely allied. In 1975, the US psychologist Michael
Posner used the term "cognitive control" in his book chapter entitled "Attention and
cognitive control".[13]
The work of influential researchers such as Michael Posner, Joaquin Fuster, Tim
Shallice, and their colleagues in the 1980s (and later Trevor Robbins, Bob Knight, Don
Stuss, and others) laid much of the groundwork for recent research into executive
functions. For example, Posner proposed that there is a separate "executive" branch of
the attentional system, which is responsible for focusing attention on selected aspects
of the environment.[14] The British neuropsychologist Tim Shallice similarly suggested
that attention is regulated by a "supervisory system", which can override automatic
responses in favour of scheduling behaviour on the basis of plans or intentions.[15]
Throughout this period, a consensus emerged that this control system is housed in the
most anterior portion of the brain, the prefrontal cortex (PFC).
Psychologist Alan Baddeley had proposed a similar system as part of his model of
working memory[16] and argued that there must be a component (which he named the
"central executive") that allows information to be manipulated in short-term memory (for
example, when doing mental arithmetic).
Development
When studying executive functions, a developmental framework is helpful because
these abilities mature at different rates over time. Some abilities peak in late childhood
or adolescence while others progress into early adulthood. The brain continues to
mature and develop connections well into adulthood. A person's executive function
abilities are shaped by both physical changes in the brain and by life experiences, in
the classroom and in the world at large. Furthermore, executive functioning
development corresponds to the neurophysiological developments of the growing
brain; as the processing capacity of the frontal lobes and other interconnected regions
increases, the core executive functions emerge.[17][18] As these functions are
established, they continue to mature, sometimes in spurts, while other, more complex
functions also develop, underscoring the different directions along which each
component might develop.[17][18]
Early childhood
Inhibitory control and working memory act as basic executive functions that make it
possible for more complex executive functions like problem-solving to develop.[19]
Inhibitory control and working memory are among the earliest executive functions to
appear, with initial signs observed in infants 7 to 12 months old.[17][18] Then, in the
preschool years, children display a spurt in performance on tasks of inhibition and
working memory, usually between the ages of 3 and 5 years.[17][20] Also during this
time, cognitive flexibility, goal-directed behavior, and planning begin to develop.[17]
Nevertheless, preschool children do not have fully mature executive functions and
continue to make errors related to these emerging abilities - often not due to the
absence of the abilities, but rather because they lack the awareness to know when and
how to use particular strategies in particular contexts.[21]
Preadolescence
Preadolescent children continue to exhibit certain growth spurts in executive functions,
suggesting that this development does not necessarily occur in a linear manner, along
with the preliminary maturing of particular functions as well.[17][18] During
preadolescence, children display major increases in verbal working memory;[22] goal-directed behavior (with a potential spurt around 12 years of age);[23] response
inhibition and selective attention;[24] and strategic planning and organizational
skills.[18][25][26] Additionally, between the ages of 8 and 10, cognitive flexibility in
particular begins to match adult levels.[25][26] However, similar to patterns in childhood
development, executive functioning in preadolescents is limited because they do not
reliably apply these executive functions across multiple contexts as a result of ongoing
development of inhibitory control.[17]
Adolescence
Many executive functions may begin in childhood and preadolescence, such as
inhibitory control. Yet it is during adolescence that the different brain systems
become better integrated. At this time, youth implement executive functions, such as
inhibitory control, more efficiently and effectively and continue to improve throughout this
period.[27][28] Just as inhibitory control emerges in childhood and improves over time,
planning and goal-directed behavior also demonstrate an extended time course with
ongoing growth over adolescence.[20][23] Likewise, functions such as attentional
control, with a potential spurt at age 15,[23] along with working memory,[27] continue
developing at this stage.
Adulthood
The major change that occurs in the brain in adulthood is the constant myelination of
neurons in the prefrontal cortex.[17] At age 20-29, executive functioning skills are at
their peak, which allows people of this age to participate in some of the most
challenging mental tasks. These skills begin to decline in later adulthood. Working
memory and spatial span are areas where decline is most readily noted. Cognitive
flexibility, however, has a late onset of impairment and does not usually start declining
until around age 70 in normally functioning adults.[17] Impaired executive functioning
has been found to be the best predictor of functional decline in the elderly.
Models
Top-down inhibitory control
Aside from facilitatory or amplificatory mechanisms of control, many authors have
argued for inhibitory mechanisms in the domain of response control,[29] memory,[30]
selective attention,[31] theory of mind,[32][33] emotion regulation,[34] as well as social
emotions such as empathy.[35] A recent review on this topic argues that active
inhibition is a valid concept in some domains of psychology/cognitive control.[36]
Working memory model
One influential model is Baddeley’s multicomponent model of working memory, which
is composed of a central executive system that regulates three other subsystems: the
phonological loop, which maintains verbal information; the visuospatial sketchpad,
which maintains visual and spatial information; and the more recently developed
episodic buffer that integrates short-term and long-term memory, holding and
manipulating a limited amount of information from multiple domains in temporal and
spatially sequenced episodes.[37][38]
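As a rough illustration of this component structure (a toy sketch only; the class and method names below are invented here and are not part of Baddeley's model or any library), a central executive can be thought of as routing verbal items to a phonological loop and visuospatial items to a sketchpad, with an episodic buffer binding the two into a small number of integrated episodes:

# Toy illustration of a multicomponent working memory architecture.
# All names are hypothetical; the capacity of 4 is an arbitrary placeholder.
from collections import deque

class MulticomponentWM:
    def __init__(self, capacity=4):
        self.phonological_loop = deque(maxlen=capacity)       # verbal information
        self.visuospatial_sketchpad = deque(maxlen=capacity)  # visual/spatial information
        self.episodic_buffer = deque(maxlen=capacity)         # integrated episodes

    def encode(self, item, modality):
        # The central executive routes an item to the appropriate subsystem.
        if modality == "verbal":
            self.phonological_loop.append(item)
        elif modality == "visuospatial":
            self.visuospatial_sketchpad.append(item)
        else:
            raise ValueError("unknown modality")

    def bind(self):
        # The episodic buffer integrates the current contents of both subsystems.
        episode = (tuple(self.phonological_loop), tuple(self.visuospatial_sketchpad))
        self.episodic_buffer.append(episode)
        return episode

wm = MulticomponentWM()
wm.encode("seven", "verbal")
wm.encode("red square, upper left", "visuospatial")
print(wm.bind())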
Supervisory attentional system (SAS)
Another conceptual model is the supervisory attentional system (SAS).[39][40] In this
model, contention scheduling is the process where an individual’s well-established
schemas automatically respond to routine situations while executive functions are used
when faced with novel situations. In these new situations, attentional control will be a
crucial element to help generate new schema, implement these schema, and then
assess their accuracy.
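A minimal sketch of the contention-scheduling idea, assuming a simple dictionary of well-established schemas (the situations, schemas, and function names below are illustrative inventions, not part of the SAS literature): routine situations trigger a stored schema directly, while unrecognized, novel situations are handed to a supervisory routine that constructs, applies, and stores a new schema.

# Toy sketch of contention scheduling vs. supervisory attentional control.
# Schemas and situations are placeholders for illustration only.
schemas = {
    "red traffic light": "stop the car",
    "phone rings": "answer the phone",
}

def supervisory_attentional_system(situation):
    # Novel situation: attentional control builds a new schema and checks it.
    new_schema = f"improvise a response to '{situation}'"
    schemas[situation] = new_schema          # stored for future routine use
    return new_schema

def respond(situation):
    if situation in schemas:                 # contention scheduling: routine case
        return schemas[situation]
    return supervisory_attentional_system(situation)   # novel case

print(respond("red traffic light"))   # handled by a well-established schema
print(respond("road is flooded"))     # novel: supervisory system engaged
print(respond("road is flooded"))     # now routine on the second encounter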
Problem-solving model
Yet another model of executive functions is a problem-solving framework in which
executive function is considered a macroconstruct composed of subfunctions working
in different phases to (a) represent a problem, (b) plan for a solution by selecting and
ordering strategies, (c) maintain the strategies in short-term memory in order to perform
them by certain rules, and then (d) evaluate the results with error detection and error
correction.[42]
Lezak’s conceptual model
One of the most widespread conceptual models on executive functions is Lezak’s
model.[43][44] This framework proposes four broad domains of volition, planning,
purposive action, and effective performance as working together to accomplish global
executive functioning needs. While this model may broadly appeal to clinicians and
researchers as a way to identify and assess certain executive functioning components, it
lacks a distinct theoretical basis and has seen relatively few attempts at validation.[45]
Self-regulatory model
Primarily derived from work examining behavioral inhibition, Barkley’s self-regulatory
model views executive functions as composed of four main abilities.[41] One element is
working memory that allows individuals to resist interfering information. A second
component is the management of emotional responses in order to achieve goal-directed behaviors. Thirdly, internalization of self-directed speech is used to control and
sustain rule-governed behavior and to generate plans for problem-solving. Lastly,
information is analyzed and synthesized into new behavioral responses to meet one’s
goals. Changing one’s behavioral response to meet a new goal or modify an objective
is a higher-level skill that requires a fusion of executive functions including self-regulation, and accessing prior knowledge and experiences.
Miller & Cohen's (2001) model
In 2001, Earl Miller and Jonathan Cohen published their article 'An integrative theory of
prefrontal cortex function' in which they argue that cognitive control is the primary
function of the prefrontal cortex (PFC), and that control is implemented by increasing
the gain of sensory or motor neurons that are engaged by task- or goal-relevant
elements of the external environment.[46] In a key paragraph, they argue:
We assume that the PFC serves a specific function in cognitive control: the
active maintenance of patterns of activity that represent goals and the means to
achieve them. They provide bias signals throughout much of the rest of the
brain, affecting not only visual processes but also other sensory modalities, as
well as systems responsible for response execution, memory retrieval,
emotional evaluation, etc. The aggregate effect of these bias signals is to guide
the flow of neural activity along pathways that establish the proper mappings
between inputs, internal states, and outputs needed to perform a given task.
Miller and Cohen draw explicitly upon an earlier theory of visual attention that
conceptualises perception of visual scenes in terms of competition among multiple
representations - such as colors, individuals, or objects.[47] Selective visual attention
acts to 'bias' this competition in favour of certain selected features or representations.
For example, imagine that you are waiting at a busy train station for a friend who is
wearing a red coat. You are able to selectively narrow the focus of your attention to
search for red objects, in the hope of identifying your friend. Desimone and Duncan
argue that the brain achieves this by selectively increasing the gain of neurons
responsive to the color red, such that output from these neurons is more likely to reach
a downstream processing stage, and, as a consequence, to guide behaviour.
According to Miller and Cohen, this selective attention mechanism is in fact just a
special case of cognitive control - one in which the biasing occurs in the sensory
domain. According to Miller and Cohen's model, the PFC can exert control over input
(sensory) or output (response) neurons, as well as over assemblies involved in
memory, or emotion. Cognitive control is mediated by reciprocal PFC connectivity with
the sensory and motor cortices, and with the limbic system. Within their approach, thus,
the term 'cognitive control' is applied to any situation where a biasing signal is used to
promote task-appropriate responding, and control thus becomes a crucial component
of a wide range of psychological constructs such as selective attention, error
monitoring, decision-making, memory inhibition, and response inhibition.
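The biasing idea can be illustrated with a minimal numerical sketch (this is not Miller and Cohen's simulation; the unit names, input values, gains, and the softmax competition rule are assumptions chosen for illustration): a top-down signal multiplies the gain of task-relevant sensory units so that their activity is more likely to win the competition and drive the response.

# Minimal sketch of top-down gain biasing of a sensory competition.
# Unit names, inputs, and gain values are illustrative assumptions.
import math

def compete(activations):
    # Softmax competition: returns each unit's share of the response.
    exps = {k: math.exp(v) for k, v in activations.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

# Bottom-up input from a visual scene containing both features.
sensory_input = {"red_units": 1.0, "green_units": 1.2}

# Without a PFC bias, the slightly stronger green input wins the competition.
print(compete(sensory_input))

# PFC maintains the goal "find the red coat" and raises the gain of red units.
pfc_gain = {"red_units": 2.0, "green_units": 1.0}
biased = {k: v * pfc_gain[k] for k, v in sensory_input.items()}

# With the bias signal, red-responsive units now dominate and guide behaviour.
print(compete(biased))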
Miyake and Friedman’s model of executive functions
Miyake and Friedman’s theory of executive functions proposes that there are three
aspects of executive functions: updating, inhibition, and shifting.[48] A cornerstone of
this theoretical framework is the understanding that individual differences in executive
functions reflect both unity (i.e., common EF skills) and diversity of each component
(e.g., shifting-specific). In other words, aspects of updating, inhibition, and shifting are
related, yet each remains a distinct entity. First, updating is defined as the continuous
monitoring and quick addition or deletion of contents within one’s working memory.
Second, inhibition is one’s capacity to supersede responses that are prepotent in a
given situation. Third, shifting is one’s cognitive flexibility to switch between different
tasks or mental states.
Miyake and Friedman also propose that the current body of research on executive
functions supports four general conclusions about these skills. The first conclusion is
the unity and diversity aspects of executive functions.[49][50] Second, recent studies
suggest that much of one’s EF skills are inherited genetically, as demonstrated in twin
studies.[51] Third, clean measures of executive functions can differentiate between
normal and clinical or regulatory behaviors, such as ADHD.[52][53][54] Last,
longitudinal studies demonstrate that EF skills are relatively stable throughout
development.[55][56]
Banich’s (2009) "Cascade of control" model
This model integrates theories from other models, and involves a sequential cascade of
brain regions involved in maintaining attentional sets in order to arrive at a goal. In
sequence, the model assumes the involvement of the posterior dorsolateral prefrontal
cortex (DLPFC), the mid-DLPFC, and the posterior and anterior dorsal ACC.[57]
The cognitive task used in the article is selecting a response in the Stroop task, among
conflicting color and word responses, specifically a stimulus where the word "green" is
printed in red ink. The posterior DLPFC creates an appropriate attentional set, or rules
for the brain to accomplish the current goal. For the Stroop task, this involves activating
the areas of the brain involved in color perception, and not those involved in word
comprehension. It counteracts biases and irrelevant information, like the fact that the
semantic perception of the word is more salient to most people than the color in which
it is printed.
Next, the mid-DLPFC selects the representation that will fulfill the goal. The task-relevant information must be separated from other sources of information in the task. In
the example, this means focusing on the ink color and not the word.
The posterior dorsal anterior cingulate cortex (ACC) is next in the cascade, and it is
responsible for response selection. This is where the decision is made whether you will
say green (the written word and the incorrect answer) or red (the font color and correct
answer).
Following the response, the anterior dorsal ACC is involved in response evaluation,
deciding whether you were correct or incorrect. Activity in this region increases when
the probability of an error is higher.
The activity of any of the areas involved in this model depends on the efficiency of the
areas that came before it. If the DLPFC imposes a lot of control on the response, the
ACC will require less activity.[57]
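The sequence described above can be restated as a simple staged pipeline. The sketch below is only a schematic paraphrase of the cascade for the Stroop example; the stage functions and their return values are invented for illustration and carry none of the model's quantitative content.

# Schematic sketch of the cascade of control for a Stroop trial
# (the word "green" printed in red ink). Stage outputs are illustrative only.
def posterior_dlpfc(goal):
    # Sets the attentional set: which kinds of information are task-relevant.
    return {"attend": "ink colour", "ignore": "word meaning"} if goal == "name ink colour" else {}

def mid_dlpfc(stimulus, attentional_set):
    # Selects the task-relevant representation from the stimulus.
    return stimulus[attentional_set["attend"]]

def posterior_dorsal_acc(selected_representation):
    # Response selection: maps the selected representation onto a response.
    return f"say '{selected_representation}'"

def anterior_dorsal_acc(response, correct_response):
    # Response evaluation: was the response correct?
    return response == correct_response

stimulus = {"word meaning": "green", "ink colour": "red"}
att_set = posterior_dlpfc("name ink colour")
selected = mid_dlpfc(stimulus, att_set)
response = posterior_dorsal_acc(selected)
print(response, anterior_dorsal_acc(response, "say 'red'"))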
Assessment
Assessment of executive functions involves gathering data from several sources and
synthesizing the information to look for trends and patterns across time and setting.
Apart from formal tests, other measures can be used, such as standardized checklists,
observations, interviews, and work samples. From these, conclusions may be drawn on
the use of executive functions.[58]
There are several different kinds of tests (e.g., performance based, self-report) that
measure executive functions across development. These assessments can serve a
diagnostic purpose for a number of clinical populations.
Experimental evidence
The executive system has been traditionally quite hard to define, mainly due to what
psychologist Paul W. Burgess calls a lack of "process-behaviour correspondence".[59]
That is, there is no single behavior that can in itself be tied to executive function, or
indeed executive dysfunction. For example, it is quite obvious what reading-impaired
patients cannot do, but it is not so obvious what exactly executive-impaired patients
might be incapable of.
This is largely due to the nature of the executive system itself. It is mainly concerned
with the dynamic, "online" co-ordination of cognitive resources, and, hence, its effect
can be observed only by measuring other cognitive processes. In similar manner, it
does not always fully engage outside of real-world situations. As neurologist Antonio
Damasio has reported, a patient with severe day-to-day executive problems may still
pass paper-and-pencil or lab-based tests of executive function.[60]
Theories of the executive system were largely driven by observations of patients
having suffered frontal lobe damage. They exhibited disorganized actions and
strategies for everyday tasks (a group of behaviors now known as dysexecutive
syndrome) although they seemed to perform normally when clinical or lab-based tests
were used to assess more fundamental cognitive functions such as memory, learning,
language, and reasoning. It was hypothesized that, to explain this unusual behaviour,
there must be an overarching system that co-ordinates other cognitive resources.[61]
Much of the experimental evidence for the neural structures involved in executive
functions comes from laboratory tasks such as the Stroop task or the Wisconsin Card
Sorting Task (WCST). In the Stroop task, for example, human subjects are asked to
name the color that color words are printed in when the ink color and word meaning
often conflict (for example, the word "RED" in green ink). Executive functions are
needed to perform this task, as the relatively overlearned and automatic behaviour
(word reading) has to be inhibited in favour of a less practiced task - naming the ink
color. Recent functional neuroimaging studies suggest that two parts of the PFC,
the anterior cingulate cortex (ACC) and the dorsolateral prefrontal cortex (DLPFC),
are particularly important for performing this task.
Context-sensitivity of PFC neurons
Other evidence for the involvement of the PFC in executive functions comes from
single-cell electrophysiology studies in non-human primates, such as the macaque
monkey, which have shown that (in contrast to cells in the posterior brain) many PFC
neurons are sensitive to a conjunction of a stimulus and a context. For example, PFC
cells might respond to a green cue in a condition where that cue signals that a
fast leftward movement of the eyes and head should be made, but not to a green
cue in another experimental context. This is important, because the optimal
deployment of executive functions is invariably context-dependent. To quote an
example offered by Miller and Cohen, a US resident might have an overlearned
response to look left when crossing the road. However, when the "context" indicates
that he or she is in the UK, this response would have to be suppressed in favour of a
different stimulus-response pairing (look right when crossing the road). This
behavioural repertoire clearly requires a neural system that is able to integrate the
stimulus (the road) with a context (US, UK) to cue a behaviour (look left, look right).
Current evidence suggests that neurons in the PFC appear to represent precisely this
sort of information.[citation needed] Other evidence from single-cell electrophysiology
in monkeys implicates ventrolateral PFC (inferior prefrontal convexity) in the control of
motor responses. For example, cells that increase their firing rate to NoGo signals[62]
as well as a signal that says "don't look there!"[63] have been identified.
Attentional biasing in sensory regions
Electrophysiology and functional neuroimaging studies involving human subjects have
been used to describe the neural mechanisms underlying attentional biasing. Most
studies have looked for activation at the 'sites' of biasing, such as in the visual or
auditory cortices. Early studies employed event-related potentials to reveal that
electrical brain responses recorded over left and right visual cortex are enhanced when
the subject is instructed to attend to the appropriate (contralateral) side of space.[64]
The advent of bloodflow-based neuroimaging techniques such as functional magnetic
resonance imaging (fMRI) and positron emission tomography (PET) has more recently
permitted the demonstration that neural activity in a number of sensory regions,
including color-, motion-, and face-responsive regions of visual cortex, is enhanced
when subjects are directed to attend to that dimension of a stimulus, suggestive of gain
control in sensory neocortex. For example, in a typical study, Liu and coworkers[65]
presented subjects with arrays of dots moving to the left or right, presented in either red
or green. Preceding each stimulus, an instruction cue indicated whether subjects
should respond on the basis of the colour or the direction of the dots. Even though
colour and motion were present in all stimulus arrays, fMRI activity in colour-sensitive
regions (V4) was enhanced when subjects were instructed to attend to the colour, and
activity in motion-sensitive regions was increased when subjects were cued to attend to
the direction of motion. Several studies have also reported evidence for the biasing
signal prior to stimulus onset, with the observation that regions of the frontal cortex
tend to become active before the onset of an expected stimulus.[66]
Connectivity between the PFC and sensory regions
Despite the growing currency of the 'biasing' model of executive functions, direct
evidence for functional connectivity between the PFC and sensory regions when
executive functions are used is to date rather sparse.[67] Indeed, the only direct
evidence comes from studies in which a portion of frontal cortex is damaged, and a
corresponding effect is observed far from the lesion site, in the responses of sensory
neurons.[68][69] However, few studies have explored whether this effect is specific to
situations where executive functions are required. Other methods for measuring
connectivity between distant brain regions, such as correlation in the fMRI response,
have yielded indirect evidence that the frontal cortex and sensory regions communicate
during a variety of processes thought to engage executive functions, such as working
memory,[70] but more research is required to establish how information flows between
the PFC and the rest of the brain when executive functions are used. As an early step
in this direction, an fMRI study on the flow of information processing during visuospatial
reasoning has provided evidence for causal associations (inferred from the temporal
order of activity) between sensory-related activity in occipital and parietal cortices and
activity in posterior and anterior PFC.[71] Such approaches can further elucidate the
distribution of processing between executive functions in PFC and the rest of the brain.
Bilingualism and executive functions
A growing body of research demonstrates that bilinguals show advantages in executive
functions, specifically inhibitory control and task switching.[72][73] A possible
explanation for this is that speaking two languages requires controlling one's attention
and choosing the correct language to speak. Across development, bilingual infants,[74]
children,[73] and elderly[75] show a bilingual advantage when it comes to executive
functioning. Interestingly, bimodal bilinguals, or people who speak one language and
also know sign language, do not demonstrate this bilingual advantage in executive
functioning tasks.[76] This may be because one is not required to actively inhibit one
language in order to speak the other. Bilingual individuals also seem to have an
advantage in an area known as conflict processing, which occurs when there are
multiple representations of one particular response (for example, a word in one
language and its translation in the individual’s other language).[77] Specifically, the
lateral prefrontal cortex has been shown to be involved with conflict processing.
Future directions
Other important evidence for executive function processes in the prefrontal cortex
has been described. One widely cited review article[78] emphasizes the role of the
medial part of the PFC in situations where executive functions are likely to be engaged
– for example, where it is important to detect errors, identify situations where stimulus
conflict may arise, make decisions under uncertainty, or when a reduced probability of
obtaining favourable performance outcomes is detected. This review, like many
others,[79] highlights interactions between medial and lateral PFC, whereby posterior
medial frontal cortex signals the need for increased executive functions and sends this
signal on to areas in dorsolateral prefrontal cortex that actually implement control. Yet
there has been no compelling evidence at all that this view is correct, and, indeed, one
article showed that patients with lateral PFC damage had reduced ERNs (a putative
sign of dorsomedial monitoring/error-feedback)[80] - suggesting, if anything, that the
direction of flow of control could be the reverse. Another prominent
theory[81] emphasises interactions along the anterior-to-posterior axis of the frontal
cortex, arguing that a 'cascade' of interactions between anterior PFC, dorsolateral PFC,
and premotor cortex guides behaviour in accordance with past context, present
context, and current sensorimotor associations, respectively.
Advances in neuroimaging techniques have allowed studies of genetic links to
executive functions, with the goal of using the imaging techniques as potential
endophenotypes for discovering the genetic causes of executive function.[82]
Attention
Attention is the cognitive process of
selectively concentrating on one aspect of
the environment while ignoring other things.
Attention has also been referred to as the
allocation of processing resources.[1]
Attention is one of the most intensely
studied topics within psychology and
cognitive neuroscience. Attention remains a
major area of investigation within education,
psychology and neuroscience. Areas of
active investigation involve determining the
source of the signals that generate
attention, the effects of these signals on the
tuning properties of sensory neurons, and
the relationship between attention and other
cognitive processes like working memory
and vigilance. A relatively new body of
research is investigating the phenomenon of
traumatic brain injuries and their effects on
attention.
Attention also has variations amongst cultures.[2]
The relationships between attention and
consciousness are complex enough that
they have warranted perennial philosophical exploration. Such exploration is both
ancient and continually relevant, as it can have effects in fields ranging from mental
health to artificial intelligence research and development.
Contents
1 History of the study of attention
1.1 Philosophical period
1.2 1860-1909
1.3 1910-1949
1.4 1950-1974
1.5 1975-present
2 Selective attention
2.1 Visual attention
2.2 Auditory Attention
3 Multitasking and divided attention
4 Bottom-up versus top-down
5 Overt and covert orienting
6 Exogenous and endogenous orienting
7 Influence of processing load
8 Clinical model of attention
9 Neural correlates of attention
10 Cultural variation
11 Attention modelling
12 Hemispatial neglect
History of the study of attention
Philosophical period
Prior to the founding of psychology as a scientific discipline, attention was studied in
the field of philosophy. As a result, many of the discoveries in the field of attention
were made by philosophers. Psychologist John Watson cites Juan Luis Vives as the
Father of Modern Psychology due to his book De Anima et Vita in which Vives was the
first to recognize the importance of empirical investigation.[3] In his work on memory,
Vives found that the more closely one attends to stimuli, the better they will be retained.
Psychologist Daniel E. Berlyne credits the first extended treatment of attention to
philosopher Nicolas Malebranche in his work "The Search After Truth". "Malebranche
held that we have access to ideas, or mental representations of the external world, but
not direct access to the world itself." [3] Thus in order to keep these ideas organized,
attention is necessary. Otherwise we will confuse these ideas. Malebranche writes in
"The Search After Truth", "because it often happens that the understanding has only
confused and imperfect perceptions of things, it is truly a cause of our errors.... It is
therefore necessary to look for means to keep our perceptions from being confused
and imperfect. And, because, as everyone knows, there is nothing that makes them
clearer and more distinct than attentiveness, we must try to find the means to become
more attentive than we are".[4] According to Malebranche, attention is crucial to
understanding and keeping thoughts organized.
Philosopher Gottfried Wilhelm Leibniz introduced the concept of apperception to this
philosophical approach to attention. Apperception refers to "the process by which new
experience is assimilated to and transformed by the residuum of past experience of an
individual to form a new whole." [5] Apperception is required for a perceived event to
become a conscious event. Leibniz emphasized a reflexive involuntary view of
attention known as exogenous orienting. However, there is also endogenous orienting,
which is voluntary and directed attention. Philosopher Johann Friedrich Herbart agreed
with Leibniz's view of apperception; however, he expanded on it by saying that new
experiences had to be tied to ones already existing in the mind. Herbart was also the
first person to stress the importance of applying mathematical modeling to the study of
psychology.[3]
At the beginning of the 19th century it was thought that people were not able
to attend to more than one stimulus at a time. However, research contributions by
Sir William Hamilton, 9th Baronet changed this view. Hamilton proposed a view of
attention that likened its capacity to holding marbles: you can hold only a certain
number of marbles at a time before they start to spill over. His view holds that we can
attend to more than one stimulus at once. William Stanley Jevons later expanded this
view and stated that we can attend to up to four items at a time.[citation needed]
During this philosophical period, various thinkers made significant contributions to
the field, beginning research on the extent of attention and how attention is
directed.
1860-1909
This period of attention research took the focus from conceptual findings to
experimental testing. It also involved psychophysical methods that allowed
measurement of the relation between physical stimulus properties and the
psychological perceptions of them. This period covers the development of attentional
research from the founding of psychology to 1909.
Wilhelm Wundt introduced the study of attention to the field of psychology. Wundt
measured mental processing speed by likening it to differences in stargazing
measurements. Astronomers of that time would measure the time it took for stars to
travel across the sky, and when different astronomers recorded these times there were
personal differences in their calculations. These differing readings resulted in different
reports from each astronomer, and a personal equation was developed to correct for
them. Wundt
applied this to mental processing speed. Wundt realized that the time it takes to see
the stimulus of the star and write down the time was being called an "observation error"
but actually was the time it takes to switch voluntarily one's attention from one stimulus
to another. Wundt called his school of psychology voluntarism. It was his belief that
psychological processes can only be understood in terms of goals and consequences.
Franciscus Donders used mental chronometry to study attention and it was considered
a major field of intellectual inquiry by authors such as Sigmund Freud. Donders and his
students conducted the first detailed investigations of the speed of mental processes.
Donders measured the time required to identify a stimulus and to select a motor
response. This was the time difference between stimulus discrimination and response
initiation. Donders also formalized the subtractive method which states that the time for
a particular process can be estimated by adding that process to a task and taking the
difference in reaction time between the two tasks. He also differentiated between three
types of reactions: simple reaction, choice reaction, and go/no-go reaction.
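The subtractive method amounts to simple arithmetic on reaction times. The sketch below, which uses made-up reaction times purely for illustration, estimates the duration of stimulus discrimination and of response selection by comparing Donders' three task types.

# Donders' subtractive method with illustrative (made-up) reaction times in ms.
simple_rt = 220    # simple reaction: detect a stimulus, single known response
go_nogo_rt = 280   # go/no-go reaction: discriminate the stimulus, one possible response
choice_rt = 350    # choice reaction: discriminate the stimulus and select a response

# Adding a processing stage to a task and subtracting reaction times
# estimates the duration of that added stage.
discrimination_time = go_nogo_rt - simple_rt   # stimulus discrimination
selection_time = choice_rt - go_nogo_rt        # response selection

print(f"stimulus discrimination ~ {discrimination_time} ms")
print(f"response selection ~ {selection_time} ms")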
Hermann von Helmholtz also contributed to the field of attention relating to the extent of
attention. Von Helmholtz stated that it is possible to focus on one stimulus and still
perceive or ignore others. An example of this is being able to focus on the letter u in the
word house and still perceive the letters h, o, s, and e.
One major debate in this period was whether it was possible to attend to two things at
once (split attention). Walter Benjamin described this experience as "reception in a
state of distraction." This disagreement could only be resolved through
experimentation.
In 1890, William James, in his textbook Principles of Psychology, remarked:
“ Everyone knows what attention is. It is the taking possession by the mind, in
clear and vivid form, of one out of what seem several simultaneously possible
objects or trains of thought. Focalization, concentration, of consciousness are of
its essence. It implies withdrawal from some things in order to deal effectively
with others, and is a condition which has a real opposite in the confused, dazed,
scatterbrained state which in French is called distraction, and Zerstreutheit in
German.[6] ”
James differentiated between sensorial attention and intellectual attention. Sensorial
attention is when attention is directed to objects of sense, stimuli that are physically
present. Intellectual attention is attention directed to ideal or represented objects;
stimuli that are not physically present. James also distinguished between immediate or
derived attention: attention to the present versus to something not physically present.
According to James, attention has five major effects: it works to make us
perceive, conceive, distinguish, and remember, and it shortens reaction time.
1910-1949
During the period from 1910-1949, research in attention waned and interest in
behaviorism flourished. It is often stated that there was no research during this period.
Ulric Neisser stated that in this period, "There was no research on attention". This is
simply not true. In 1927 Jersild published very important work on "Mental Set and
Shift". He stated, "The fact of mental set is primary in all conscious activity. The same
stimulus may evoke any one of a large number of responses depending upon the
contextual setting in which it is placed".[7] This research found that the time to
complete a list was longer for mixed lists than for pure lists. For example, a mixed list
containing names of animals, books, makes and models of cars, and types of fruit takes
longer to process than a pure list of animal names alone. This is task switching.
In 1931, Telford discovered the psychological refractory period. The stimulation of
neurons is followed by a refractory phase during which neurons are less sensitive to
stimulation. In 1935 John Ridley Stroop developed the Stroop Task which elicited the
Stroop Effect. Stroop's task showed that irrelevant stimulus information can have a
major impact on performance. In this task, subjects looked at a list of color words, each
typed in an ink color different from the word itself. For example, the word Blue might be
typed in orange ink, Pink in black, and so on.
Example: Blue Purple Red Green Purple Green
Subjects were then instructed to say the name of the ink color and ignore the text. It
took 110 seconds to complete a list of this type compared to 63 seconds to name the
colors when presented in the form of solid squares.[3] The naming time nearly doubled
in the presence of conflicting color words, an effect known as the Stroop Effect.
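Using the times reported above, the size of the interference effect is just the difference between the two naming conditions; a brief worked calculation:

# Stroop interference computed from the list-completion times reported in the text.
incongruent_list_time = 110   # seconds to name ink colours of conflicting colour words
control_list_time = 63        # seconds to name the colours of solid squares

interference = incongruent_list_time - control_list_time
slowdown = incongruent_list_time / control_list_time

print(f"interference: {interference} s")        # 47 s
print(f"relative slowdown: {slowdown:.2f}x")    # about 1.75x, i.e. nearly double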
1950-1974
In the 1950s, research psychologists renewed their interest in attention when the
dominant epistemology shifted from positivism (i.e., behaviorism) to realism during
what has come to be known as the "cognitive revolution".[8] The cognitive revolution
admitted unobservable cognitive processes like attention as legitimate objects of
scientific study.
Modern research on attention began with the analysis of the "cocktail party problem" by
Colin Cherry in 1953. At a cocktail party how do people select the conversation that
they are listening to and ignore the rest? This problem is at times called "focused
attention", as opposed to "divided attention". Cherry performed a number of
experiments which became known as dichotic listening and were extended by Donald
Broadbent and others.[9] In a typical experiment, subjects would use a set of
headphones to listen to two streams of words in different ears and selectively attend to
one stream. After the task, the experimenter would question the subjects about the
content of the unattended stream.
Broadbent's Filter Model of Attention states that information is held in a pre-attentive
temporary store, and only sensory events that have some physical feature in common
are selected to pass into the limited capacity processing system. This implies that the
meaning of unattended messages is not identified. Also, a significant amount of time is
required to shift the filter from one channel to another. Experiments by Gray and
Wedderburn and later Anne Treisman pointed out various problems in Broadbent's
early model and eventually led to the Deutsch-Norman model in 1968. In this model, no
signal is filtered out, but all are processed to the point of activating their stored
representations in memory. The point at which attention becomes "selective" is when
one of the memory representations is selected for further processing. At any time, only
one can be selected, resulting in the attentional bottleneck.[10]
This debate became known as the early-selection vs late-selection models. In the early
selection models (first proposed by Donald Broadbent), attention shuts down (in
Broadbent's model) or attenuates (in Treisman's refinement) processing in the
unattended ear before the mind can analyze its semantic content. In the late selection
models (first proposed by J. Anthony Deutsch and Diana Deutsch), the content in both
ears is analyzed semantically, but the words in the unattended ear cannot access
consciousness.[11] This debate has still not been resolved.
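The difference between the two families of models can be sketched as two variants of the same pipeline (an illustration only, not an implementation of any specific published model; channel contents and the "analyze" step are placeholders): in early selection the unattended channel is filtered out before semantic analysis, whereas in late selection both channels are analyzed semantically and selection happens only at the point of conscious access.

# Sketch contrasting early- and late-selection accounts of dichotic listening.
def analyze_semantics(word):
    return f"meaning({word})"

def early_selection(channels, attended):
    # The filter acts on physical features (here, the ear) before semantic analysis.
    selected = channels[attended]
    return [analyze_semantics(w) for w in selected]        # only the attended channel is analyzed

def late_selection(channels, attended):
    # Both channels are analyzed; selection happens at access to awareness.
    analyzed = {ear: [analyze_semantics(w) for w in words] for ear, words in channels.items()}
    return analyzed[attended]                              # only attended results reach awareness

channels = {"left": ["dog", "ran"], "right": ["tax", "form"]}
print(early_selection(channels, "left"))
print(late_selection(channels, "left"))
# Both variants report the same attended content, which is part of why the
# debate is hard to settle behaviourally; they differ in where analysis stops.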
In the 1960s, Robert Wurtz at the National Institutes of Health began recording
electrical signals from the brains of macaques who were trained to perform attentional
tasks. These experiments showed for the first time that there was a direct neural
correlate of a mental process (namely, enhanced firing in the superior
colliculus).[12][not specific enough to verify]
1975-present
In the mid-1970s, multiple resource models were put forth. These studies showed that
it is easier to perform two tasks together when the tasks use different stimulus or
response modalities than when they use the same modalities. Michael Posner did
research on space-based versus object-based approaches to attention in the 1980s.
For space-based attention, attention is likened to a spotlight. Attention is
directed to everything in the spotlight's field.
Anne Treisman developed the highly influential feature integration theory.[13]
According to this model, attention binds different features of an object (e.g., color and
shape) into consciously experienced wholes. Although this model has received much
criticism, it is still widely cited and spawned similar theories with modification, such as
Jeremy Wolfe's Guided Search Theory.[14]
In the 1990s, psychologists began using PET and later fMRI to image the brain in
attentive tasks. Because the highly expensive equipment was generally only
available in hospitals, psychologists sought cooperation with neurologists. Pioneers
of brain imaging studies of selective attention are psychologist Michael I. Posner (then
already renowned for his seminal work on visual selective attention) and neurologist
Marcus Raichle.[citation needed] Their results soon sparked interest from the entire
neuroscience community in these psychological studies, which had until then focused
on monkey brains. With the development of these technological innovations
neuroscientists became interested in this type of research that combines sophisticated
experimental paradigms from cognitive psychology with these new brain imaging
techniques. Although the older technique of EEG had long been used to study the brain
activity underlying selective attention by cognitive psychophysiologists, the ability of the
newer techniques to actually measure precisely localized activity inside the brain
generated renewed interest by a wider community of researchers. The results of these
experiments have shown broad agreement with the psychological and
psychophysiological findings and with the experiments performed on monkeys.
Selective attention
Visual attention
In cognitive psychology there are at least
two models which describe how visual
attention operates. These models may be
considered loosely as metaphors which are
used to describe internal processes and to
generate hypotheses that are falsifiable.
Generally speaking, visual attention is
thought to operate as a two-stage
process.[15] In the first stage, attention is
distributed uniformly over the external visual
scene and processing of information is
performed in parallel. In the second stage,
attention is concentrated to a specific area
of the visual scene (i.e. it is focused), and
processing is performed in a serial fashion.
The first of these models to appear in the
literature is the spotlight model. The term "spotlight" was inspired by the work of
William James who described attention as having a focus, a margin, and a fringe.[16]
The focus is an area that extracts information from the visual scene with high resolution, the geometric center of which is where visual attention is directed.
Surrounding the focus is the fringe of attention which extracts information in a much
more crude fashion (i.e. low-resolution). This fringe extends out to a specified area and
this cut-off is called the margin.
The second model, called the zoom-lens model, was first introduced in
1983.[17] This model inherits all properties of the spotlight model (i.e. the focus, the
fringe, and the margin) but has the added property of changing in size. This size-change mechanism was inspired by the zoom lens one might find on a camera, and
any change in size can be described by a trade-off in the efficiency of processing.[18]
The zoom-lens of attention can be described in terms of an inverse trade-off between
the size of focus and the efficiency of processing: because attentional resources are
assumed to be fixed, then it follows that the larger the focus is, the slower processing
will be of that region of the visual scene since this fixed resource will be distributed over
a larger area. It is thought that the focus of attention can subtend a minimum of 1° of
visual angle,[16][19] however the maximum size has not yet been determined.
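The inverse trade-off described for the zoom-lens model can be written as a one-line relation: with a fixed pool of attentional resources, processing efficiency per unit area falls as the attended area grows. The constant and the radii below are arbitrary illustrative values, not measurements.

# Zoom-lens trade-off: fixed resources spread over a variable attended area.
import math

FIXED_RESOURCES = 100.0   # arbitrary units; assumed constant

def processing_efficiency(focus_radius_deg):
    # Efficiency per unit area falls as the attended region grows.
    area = math.pi * focus_radius_deg ** 2
    return FIXED_RESOURCES / area

for radius in (0.5, 1.0, 2.0, 4.0):   # degrees of visual angle
    print(f"radius {radius:>3} deg: efficiency {processing_efficiency(radius):6.2f} per unit area")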
Auditory Attention
Main article: Selective auditory attention
Selective auditory attention or selective hearing is a type of selective attention and
involves the auditory system of the nervous system. Selective hearing is not about the
sounds that go unheard; rather, it is characterized as the action in which people focus
their attention on a specific source of sound or spoken words. The sounds and noise in
the surrounding environment are heard by the auditory system, but only certain parts of
the auditory information are processed in the brain. Most often,
auditory attention is directed at things people would like to hear. The increased
instances of selective hearing can be seen in family homes. A common example would
be a mother asking her child to do something before he or she can enjoy a reward.
Mother may say: “James, you can have an ice-cream after you clear your room.” And
James replies: “Thanks mom! I needed that ice-cream.” Selective hearing is not a
physiological disorder but rather it is the capability of humans to block out sounds and
noise. It is the notion of ignoring certain things in the surrounding environment. Over
the years, there has been increased research in the selectivity of auditory attention,
namely selective hearing.
In such observations, it is the reward that is heard almost every time; the mind does not
process the auditory information about the chore, essentially filtering for positive,
pleasant information. If the child were not physically impaired in hearing, he would have
heard the whole sentence being said. The way a child's mind can selectively hear the
things it wants to hear and leave out unpleasant tasks has puzzled parents as well as
psychologists.
Contents
1 History
2 Recent Research
3 Prevalence
4 Disorder status
History
Researchers have been studying the reasons for, and mechanisms underlying, this
selectivity of auditory information in the brain. In 1953, a cognitive
scientist from England, Colin Cherry, was the first person to discover a phenomenon
called the cocktail party problem. He suggested that the auditory system can filter
sounds being heard. Cherry also mentioned that the physical characteristics of an
auditory message were perceived but the message was not semantically processed.
Another psychologist, Albert Bregman, came up with the auditory scene analysis
model. The model has three main characteristics: segmentation, integration, and
segregation. Segmentation involves the division of auditory messages into segments of
importance. The process of combining parts of an auditory message to form a whole is
associated with integration. Segregation is the separation of important auditory
messages and the unwanted information in the brain. It is important to note that
Bregman also makes a link back to the idea of perception. He states that it is essential
for one to make a useful representation of the world from sensory inputs around us.
Without perception, an individual will not recognize or have the knowledge of what is
going on around them.
Recent Research
Recently, researchers have attempted to explain mechanisms implicated in selective
auditory attention. In 2012, an assistant professor in residence of Neurological
Surgery and Physiology at the University of California, San Francisco examined the
selective cortical representation of the attended speaker in multiple-talker speech
perception. Edward Chang and his colleague Nima Mesgarani undertook a study that
recruited three patients with severe epilepsy who were undergoing surgery as
treatment. All patients were recorded to have normal hearing. The procedure of this study
required the surgeons to place a thin sheet of electrodes under the skull on the outer
surface of the cortex. The activity of electrodes was recorded in the auditory cortex.
The patients were given two speech samples to listen to and they were told to
distinguish the words spoken by the speakers. The speech samples were
simultaneously played and different speech phrases were spoken by different
speakers. Chang and Mesgarani found an increase in neural responses in the auditory
cortex when the patients heard words from the target speaker. Chang went on to
explain that the experiment was well designed because it made it possible to observe
the neural patterns that tell when a patient's auditory attention shifted to the other
speaker. This demonstrates the selectivity of auditory attention in humans.
Prevalence
The prevalence of selective hearing has not yet been clearly researched. However,
some have argued that selective hearing is more prevalent in males than in
females. Ida Zündorf, Hans-Otto Karnath and Jörg Lewald carried
out a study in 2010 which investigated the advantages and abilities males have in the
localization of auditory information. A sound localization task centered on the cocktail
party effect was utilized in their study. The male and female participants had to try to
pick out sounds from a specific source, on top of other competing sounds from other
sources. The results showed that the males had a better performance overall. Female
participants found it more difficult to locate target sounds in a multiple-source
environment. Zündorf et al. suggested that there may be sex differences in the
attention processes that helped locate the target sound from a multiple-source auditory
field.
Disorder status
Selective hearing is not known to be a disorder of the physiological or psychological
aspect. According to the World Health Organization (WHO), a hearing disorder occurs
when there is a complete loss of hearing in the ears, that is, the loss of the ability to
hear. Technically speaking, selective hearing is not “deafness” to a certain sound
message. Rather, it is the selectivity of an individual to attend audibly to a sound
message. The whole sound message is physically heard by the ear but the idea is the
capacity of the mind to systematically filter out unwanted information. Therefore,
selective hearing should not be confused with a physiological hearing disorder.
Multitasking and divided attention
Multitasking can be defined as the attempt to perform two or more tasks
simultaneously; however, research shows that when multitasking, people make more
mistakes or perform their tasks more slowly.[20] Attention must be divided among all of
the component tasks to perform them.
Older research involved looking at the limits of people performing simultaneous tasks
like reading stories, while listening and writing something else,[21] or listening to two
separate messages through different ears (i.e., dichotic listening). Generally, classical
research into attention investigated the ability of people to learn new information when
there were multiple tasks to be performed, or to probe the limits of our perception (c.f.
Donald Broadbent). There is also older literature on people's performance on multiple
tasks performed simultaneously, such as driving a car while tuning a radio[22] or
driving while telephoning.[23]
The vast majority of current research on human multitasking is based on performance
of doing two tasks simultaneously,[20] usually involving driving while performing
another task, such as texting, eating, or even speaking to passengers in the vehicle, or
with a friend over a cellphone. This research reveals that the human attentional system
has limits for what it can process: driving performance is worse while engaged in other
tasks; drivers make more mistakes, brake harder and later, get into more accidents,
veer into other lanes, and/or are less aware of their surroundings when engaged in the
previously discussed tasks.[24][25][26]
There has been little difference found between speaking on a hands-free cell phone or
a hand-held cell phone,[27][28] which suggests that it is the strain on the attentional
system that causes problems, rather than what the driver is doing with his or her hands. While
speaking with a passenger is as cognitively demanding as speaking with a friend over
the phone,[29] passengers are able to change the conversation based upon the needs
of the driver. For example, if traffic intensifies, a passenger may stop talking to allow
the driver to navigate the increasingly difficult roadway; a conversation partner over a
phone would not be aware of the change in environment.
There have been multiple theories regarding divided attention. One, conceived by
Kahneman,[30] explains that there is a single pool of attentional resources that can be
freely divided among multiple tasks. This model seems oversimplified, however, given
the different modalities (e.g., visual, auditory, verbal) that must be
perceived.[31] When the two simultaneous tasks use the same modality, such as
listening to a radio station and writing a paper, it is much more difficult to concentrate
on both because the tasks are likely to interfere with each other. The specific modality
model was theorized by Navon and Gopher in 1979. Although this model is more
adequate at explaining divided attention among simple tasks, resource theory is
another, more accurate metaphor for explaining divided attention on complex tasks.
Resource theory states that as each complex task is automatized, performing that task
requires less of the individual's limited-capacity attentional resources.[31]
Other variables play a part in our ability to pay attention to and concentrate on many
tasks at once. These include, but are not limited to, anxiety, arousal, task difficulty, and
skills.[31]
Bottom-up versus top-down
Researchers have described two different aspects of how the mind comes to attend to
items present in the environment.
The first aspect is called bottom-up processing, also known as stimulus-driven attention
or exogenous attention. This describes attentional processing that is driven by the
properties of the stimuli themselves. Some stimuli, such as sudden movement or a
loud noise, can attract our attention in a pre-conscious, or non-volitional way. We
attend to them whether we want to or not.[32] These aspects of attention are thought to
involve parietal and temporal cortices, as well as the brainstem.[33]
The second aspect is called top-down processing, also known as goal-driven,
endogenous attention, attentional control or executive attention. This aspect of our
attentional orienting is under the control of the person who is attending. It is mediated
primarily by the frontal cortex and basal ganglia[33][34] as one of the executive
functions.[33][35] Research has shown that it is related to other aspects of the
executive functions, such as working memory[36] and conflict resolution and
inhibition.[37]
Overt and covert orienting
Attention may be differentiated into "overt" versus "covert" orienting.[38]
Overt orienting is the act of selectively attending to an item or location over others by
moving the eyes to point in that direction.[39] Overt orienting can be directly observed
in the form of eye movements. Although overt eye movements are quite common, a
distinction can be made between two types of eye movement: reflexive and
controlled. Reflexive movements are commanded by the superior colliculus of the
midbrain. These movements are fast and are activated by the sudden appearance of
stimuli. In contrast, controlled eye movements are commanded by areas in the frontal
lobe. These movements are slow and voluntary.
Covert orienting is the act of mentally shifting one's focus without moving one's
eyes.[39][40][41] Simply put, it is a change in attention that is not attributable to overt
eye movements. Covert orienting has the potential to affect the output of perceptual
processes by directing attention to particular items or locations, but does not influence
the information that is processed by the senses. Researchers often use "filtering" tasks
to study the role of covert attention in selecting information. These tasks often require
participants to observe a number of stimuli, but attend to only one.
The current view is that visual covert attention is a mechanism for quickly scanning the
field of view for interesting locations. This shift in covert attention is linked to eye
movement circuitry that sets up a slower saccade to that location.[citation needed]
There are studies that suggest the mechanisms of overt and covert orienting may not
be as separate as previously believed. This is because central mechanisms that may
control covert orienting, such as the parietal lobe, also receive input from subcortical
centres involved in overt orienting.[39] General theories of attention actively assume
that bottom-up (covert) processes and top-down (overt) processes converge on a
common neural architecture.[42] For example, if individuals attend to the right-hand
corner of the field of view, movement of the eyes in that direction may have to be actively
suppressed.
Exogenous and endogenous orienting
Orienting attention is vital and can be controlled through external (exogenous) or
internal (endogenous) processes. However, comparing these two processes is
challenging because external signals do not operate completely exogenously, but will
only summon attention and eye movements if they are important to the subject.[39]
Exogenous (from Greek exo, meaning "outside", and genein, meaning "to produce")
orienting is frequently described as being under control of a stimulus.[43] Exogenous
orienting is considered to be reflexive and automatic and is caused by a sudden
change in the periphery. This often results in a reflexive saccade. Since exogenous
cues are typically presented in the periphery, they are referred to as peripheral cues.
Exogenous orienting can even be observed when individuals are aware that the cue
will not relay reliable, accurate information about where a target is going to occur. This
means that the mere presence of an exogenous cue will affect the response to other
stimuli that are subsequently presented in the cue's previous location.[citation needed]
Several studies have investigated the influence of valid and invalid
cues.[44][45][46][47] They concluded that valid peripheral cues benefit performance,
for instance when the peripheral cues are brief flashes at the relevant location before
the onset of a visual stimulus. Posner and Cohen (1984) noted that a reversal of this benefit
takes place when the interval between the onset of the cue and the onset of the target
is longer than about 300 ms.[48] The phenomenon of valid cues producing longer
reaction times than invalid cues is called inhibition of return.
Endogenous (from Greek endo, meaning "within" or "internally") orienting is the
intentional allocation of attentional resources to a predetermined location or space.[43]
Simply stated, endogenous orienting occurs when attention is oriented according to an
observer's goals or desires, allowing the focus of attention to be manipulated by the
demands of a task. In order to have an effect, endogenous cues must be processed by
the observer and acted upon purposefully. These cues are frequently referred to as
central cues. This is because they are typically presented at the center of a display,
where an observer's eyes are likely to be fixated. Central cues, such as an arrow or
digit presented at fixation, tell observers to attend to a specific location.[49]
When examining differences between exogenous and endogenous orienting, some
researchers suggest that there are four differences between the two kinds of cues:
-exogenous orienting is less affected by cognitive load than endogenous
orienting;
-observers are able to ignore endogenous cues but not exogenous cues;
-exogenous cues have bigger effects than endogenous cues; and
-expectancies about cue validity and predictive value affect endogenous
orienting more than exogenous orienting.[50]
There exist both overlaps and differences in the areas of the brain that are responsible
for endogenous and exogenous orienting.[51]
Influence of processing load
One theory regarding selective attention is the cognitive load theory, which states that
there are two mechanisms that affect attention: a perceptual one and a cognitive one.
The perceptual mechanism concerns the subject's ability to perceive or ignore stimuli,
both task-related and non-task-related. Studies show that if there are many stimuli
present (especially if they are task-related), it is much easier to ignore the non-task-related
stimuli, but if there are few stimuli the mind will perceive the irrelevant stimuli as well as
the relevant ones. The cognitive mechanism refers to the actual processing of the
stimuli. Studies regarding this have shown that the ability to process stimuli decreases
with age: younger people were able to perceive more stimuli and fully process them,
but were likely to process both relevant and irrelevant information, while older people
could process fewer stimuli, but usually processed only relevant information.[52]
Some people can process multiple stimuli; for example, trained Morse code operators
have been able to copy 100% of a message while carrying on a meaningful
conversation. This relies on reflexive responses that result from "overlearning" the skill
of Morse code reception/detection/transcription, so that it becomes an autonomous
function requiring no specific attention to perform.
Clinical model of attention
Attention is best described as the sustained focus of cognitive resources on information
while filtering or ignoring extraneous information. Attention is a very basic function that
often is a precursor to all other neurological/cognitive functions. As is frequently the
case, clinical models of attention differ from investigation models. One of the most used
models for the evaluation of attention in patients with very different neurologic
pathologies is the model of Sohlberg and Mateer.[53] This hierarchical model is based
on the recovery of attentional processes in brain-damaged patients after coma. The
model describes five kinds of activities of increasing difficulty, corresponding to the
activities those patients could perform as their recovery progressed.
-Focused attention: The ability to respond discretely to specific visual, auditory
or tactile stimuli.
-Sustained attention (vigilance): The ability to maintain a consistent behavioral
response during continuous and repetitive activity.
-Selective attention: The ability to maintain a behavioral or cognitive set in the
face of distracting or competing stimuli. Therefore it incorporates the notion of
"freedom from distractibility."
-Alternating attention: The ability of mental flexibility that allows individuals to
shift their focus of attention and move between tasks having different cognitive
requirements.
-Divided attention: This is the highest level of attention and it refers to the
ability to respond simultaneously to multiple tasks or multiple task demands.
This model has been shown to be very useful in evaluating attention across very
different pathologies, correlates strongly with everyday difficulties, and is especially
helpful in designing stimulation programs such as attention process training, a
rehabilitation program for neurologic patients developed by the same authors.
Neural correlates of attention
Most experiments show that one neural correlate of attention is enhanced firing. If a
neuron has a certain response to a stimulus when the animal is not attending to the
stimulus, then when the animal does attend to the stimulus, the neuron's response will
be enhanced even if the physical characteristics of the stimulus remain the same.
In a 2007 review, Knudsen[54] describes a more general model which identifies four
core processes of attention, with working memory at the center:
-Working memory temporarily stores information for detailed analysis.
-Competitive selection is the process that determines which information gains
access to working memory.
-Through top-down sensitivity control, higher cognitive processes can regulate
signal intensity in information channels that compete for access to working
memory, and thus give them an advantage in the process of competitive
selection. Through top-down sensitivity control, the momentary content of
working memory can influence the selection of new information, and thus
mediate voluntary control of attention in a recurrent loop (endogenous
attention).[55]
-Bottom-up saliency filters automatically enhance the response to infrequent
stimuli, or stimuli of instinctive or learned biological relevance (exogenous
attention).[55]
Neurally, spatial maps at different hierarchical levels can enhance or inhibit activity in
sensory areas, and induce orienting behaviors like eye movement.
-At the top of the hierarchy, the frontal eye fields (FEF) on the dorsolateral
frontal cortex contain a retinocentric spatial map. Microstimulation in the FEF
induces monkeys to make a saccade to the relevant location. Stimulation at
levels too low to induce a saccade will nonetheless enhance cortical responses
to stimuli located in the relevant area.
-At the next lower level, a variety of spatial maps are found in the parietal
cortex. In particular, the lateral intraparietal area (LIP) contains a saliency map
and is interconnected both with the FEF and with sensory areas.
-Certain automatic responses that influence attention, like orienting to a highly
salient stimulus, are mediated subcortically by the superior colliculi.
-At the neural network level, it is thought that processes like lateral inhibition
mediate the process of competitive selection.
In many cases attention produces changes in the EEG. Many animals, including
humans, produce gamma waves (40–60 Hz) when focusing attention on a particular
object or activity.[56]
Another commonly used model of the attention system, put forth by researchers such
as Michael Posner, divides attention into three functional components: alerting,
orienting, and executive attention.[57][58]
-Alerting is the process involved in becoming and staying attentive toward the
surroundings. It appears to exist in the frontal and parietal lobes of the right
hemisphere, and is modulated by norepinephrine.[59][60]
-Orienting is the directing of attention to a specific stimulus.
-Executive attention is used when there is a conflict between multiple attention
cues. It is essentially the same as the central executive in Baddeley's model of
working memory. The Eriksen flanker task has shown that the executive control
of attention may take place in the anterior cingulate cortex.[61]
Cultural variation
Children appear to develop patterns of attention related to the cultural practices of their
families, communities, and the institutions in which they participate.[62]
In 1955 Henry suggested that there are societal differences in sensitivity to signals
from many ongoing sources that call for the awareness of several levels of attention
simultaneously. He tied his speculation to ethnographic observations of communities in
which children are involved in a complex social community with multiple
relationships.[63]
Attention can be focused in skilled ways on more than one activity at a time, which can
be seen in different communities and cultures such as the Mayans of San Pedro.[63]
One example is simultaneous attention which involves uninterrupted attention to
several activities occurring at the same time. Another cultural practice that may relate
to simultaneous attention strategies is coordination within a group. San Pedro toddlers
and caregivers frequently coordinated their activities with other members of a group in
multiway engagements rather than in a dyadic fashion.[2][64]
Attention modelling
In the domain of computer vision, efforts have been made in modelling the mechanism
of human attention, especially the bottom-up attentional mechanism.[65]
Generally speaking, there are two kinds of models that mimic the bottom-up saliency
mechanism. One is based on spatial contrast analysis: for example, a center-surround
mechanism has been used to define saliency across scales, inspired by the putative
neural mechanism.[66] It has also been hypothesized that some visual inputs are
intrinsically salient in certain background contexts and that these are actually
task-independent. This model has established itself as the exemplar for saliency
detection and is consistently used for comparison in the literature.[65] The other kind is
based on frequency-domain analysis. The first such method, proposed by Hou et al.,
was called SR (spectral residual);[67] the PQFT method was introduced later. Both SR
and PQFT use only the phase information.[65] In 2012, the HFT method was
introduced, which makes use of both the amplitude and the phase information.[68]
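As a rough illustration of the frequency-domain idea (not the published SR, PQFT, or HFT implementations), the sketch below builds a toy saliency map by keeping only the Fourier phase of a grayscale image, reconstructing it, and smoothing the result; the function name, smoothing width, and normalization are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def phase_saliency(image, sigma=3.0):
    """Toy frequency-domain saliency: keep only the Fourier phase,
    reconstruct, square, and smooth. Illustrative sketch only."""
    f = np.fft.fft2(image.astype(float))
    phase_only = np.exp(1j * np.angle(f))           # discard amplitude, keep phase
    recon = np.abs(np.fft.ifft2(phase_only)) ** 2   # reconstruction emphasizes "irregular" regions
    saliency = gaussian_filter(recon, sigma=sigma)  # smooth into a saliency map
    return saliency / saliency.max()

# Usage on a synthetic image: a bright patch on a weakly textured background
img = 0.1 * np.random.rand(128, 128)
img[60:68, 60:68] = 1.0
saliency_map = phase_saliency(img)   # peaks near the bright patch
```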
Hemispatial neglect
Hemispatial neglect, also called unilateral neglect, often occurs when people have
damage to their right hemisphere.[69] This damage often leads to a tendency to ignore
the left side of one's body or even the left side of an object that can be seen. Damage
to the left side of the brain (the left hemisphere) rarely yields significant neglect of the
right side of the body or object in the person's local environments.[70]
The effects of spatial neglect, however, may vary and differ depending on what area of
the brain was damaged. Damage to different neural substrates can result in different
types of neglect. Attention disorders (lateralized and nonlateralized) may also contribute
to the symptoms and effects.[70] Much research has asserted that damage to gray
matter within the brain results in spatial neglect.[71]
New technology has yielded more information, showing that a large, distributed
network of frontal, parietal, temporal, and subcortical brain areas is tied to
neglect.[72] This network can be related to other research as well; the dorsal attention
network is tied to spatial orienting.[73] Damage to this network may result
in patients neglecting their left side when distracted by their right side or by an object
on their right side.[69]
Memory
In psychology, memory is the process in which information is encoded, stored, and
retrieved. Encoding allows information from the outside world, which reaches our
senses in the form of chemical and physical stimuli, to be converted into a form that
can be stored; in this first stage the information must be changed so that it can enter
memory. Storage is the second stage or process, and entails maintaining information
over periods of time. The third process is the retrieval of information that has been
stored: it must be located and returned to consciousness. Some retrieval attempts may
be effortless, depending on the type of information.
From an information processing perspective there are three main stages in the
formation and retrieval of memory:
-Encoding or registration: receiving, processing and combining of received
information
-Storage: creation of a permanent record of the encoded information
-Retrieval, recall or recollection: calling back the stored information in response
to some cue for use in a process or activity
The loss of memory is described as forgetfulness, or as a medical disorder, amnesia.
Contents
1 Sensory memory
2 Short-term memory
3 Long-term memory
4 Models
4.1 Atkinson-Shiffrin model
4.2 Working memory
5 Types of memory
5.1 Classification by information type
5.1.1 Declarative memory
5.1.2 Procedural memory
5.2 Classification by temporal direction
6 Techniques used to study memory
6.1 Techniques used to assess infants’ memory
6.2 Techniques used to assess older children and adults' memory
7 Memory failures
8 Physiology
9 Cognitive neuroscience of memory
10 Genetics
11 Memory in infancy
12 Memory and aging
13 Effects of physical exercise on memory
14 Disorders
15 Factors that influence memory
15.1 Influence of odors and emotions
15.2 Interference from previous knowledge
16 Memory and stress
17 Memory construction and manipulation
18 Improving memory
18.1 Levels of processing
18.2 Methods to optimize memorization
Sensory memory
Sensory memory holds sensory information for a few seconds or less after an item is
perceived. The ability to look at an item, and remember what it looked like with just a
second of observation, or memorisation, is an example of sensory memory. It is out of
cognitive control and is an automatic response. With very short presentations,
participants often report that they seem to "see" more than they can actually report.
The first experiments exploring this form of sensory memory were conducted by
George Sperling (1963)[1] using the "partial report paradigm". Subjects were presented
with a grid of 12 letters, arranged into three rows of four. After a brief presentation,
subjects were then played either a high, medium or low tone, cuing them which of the
rows to report. Based on these partial report experiments, Sperling was able to show
that the capacity of sensory memory was approximately 12 items, but that it degraded
very quickly (within a few hundred milliseconds). Because this form of memory
degrades so quickly, participants would see the display, but be unable to report all of
the items (12 in the "whole report" procedure) before they decayed. This type of
memory cannot be prolonged via rehearsal.
There are three types of sensory memories. Iconic memory is a fast decaying store of
visual information, a type of sensory memory that briefly stores an image which has
been perceived for a small duration. Echoic memory is a fast decaying store of auditory
information, another type of sensory memory that briefly stores sounds that have been
perceived for short durations.[2] Haptic memory is a type of sensory memory that
represents a database for touch stimuli.
Short-term memory
Short-term memory allows recall for a period of several seconds to a minute without
rehearsal. Its capacity is also very limited: George A. Miller (1956), when working at
Bell Laboratories, conducted experiments showing that the store of short-term memory
was 7±2 items (the title of his famous paper, "The magical number 7±2"). Modern
estimates of the capacity of short-term memory are lower, typically of the order of 4–5
items;[3] however, memory capacity can be increased through a process called
chunking.[4] For example, in recalling a ten-digit telephone number, a person could
chunk the digits into three groups: first, the area code (such as 123), then a three-digit
chunk (456) and lastly a four-digit chunk (7890). This method of remembering
telephone numbers is far more effective than attempting to remember a string of 10
digits; this is because we are able to chunk the information into meaningful groups of
numbers. This may be reflected in some countries in the tendency to display telephone
numbers as several chunks of two to four numbers.
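As a simple illustration of the regrouping described above, the hypothetical helper below splits a digit string into chunks; the 3-3-4 pattern is just the example grouping from the text, not a general rule.

```python
def chunk_digits(number: str, sizes=(3, 3, 4)):
    """Regroup a digit string into larger, more memorable chunks."""
    digits = [c for c in number if c.isdigit()]
    chunks, start = [], 0
    for size in sizes:
        chunks.append("".join(digits[start:start + size]))
        start += size
    return chunks

print(chunk_digits("1234567890"))  # ['123', '456', '7890'] -- 3 chunks instead of 10 digits
```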
Short-term memory is believed to rely mostly on an acoustic code for storing
information, and to a lesser extent a visual code. Conrad (1964)[5] found that test
subjects had more difficulty recalling collections of letters that were acoustically similar
(e.g. E, P, D). Confusion with recalling acoustically similar letters rather than visually
similar letters implies that the letters were encoded acoustically. Conrad's (1964) study,
however, deals with the encoding of written text; thus, while memory of written language
may rely on acoustic components, generalisations to all forms of memory cannot be
made.
Long-term memory
Sensory memory and short-term memory generally have a strictly limited
capacity and duration, which means that information is not retained indefinitely. By
contrast, long-term memory can store much larger quantities of information for
potentially unlimited duration (sometimes a whole life span). Its capacity is
immeasurably large. For example, given a random seven-digit number we may
remember it for only a few seconds before forgetting, suggesting it was stored in our
short-term memory. On the other hand, we can remember telephone numbers for many
years through repetition; this information is said to be stored in long-term memory.
While short-term memory encodes information acoustically, long-term memory
encodes it semantically: Baddeley (1966)[6] discovered that, after 20 minutes, test
subjects had the most difficulty recalling over the long term a collection of words that
had similar meanings (e.g. big, large, great, huge). Another part of long-term memory is episodic
memory, "which attempts to capture information such as 'what', 'when' and 'where'".[7]
With episodic memory individuals are able to recall specific events such as birthday
parties and weddings.
Short-term memory is supported by transient patterns of neuronal communication,
dependent on regions of the frontal lobe (especially dorsolateral prefrontal cortex) and
the parietal lobe. Long-term memory, on the other hand, is maintained by more stable
and permanent changes in neural connections widely spread throughout the brain. The
hippocampus is essential for learning new information and for the consolidation of
information from short-term to long-term memory, although it does not seem to store
information itself. Without the hippocampus, new memories cannot be stored into
long-term memory, as was learned from patient Henry Molaison after the removal of
both his hippocampi,[8] and attention span becomes very short. Furthermore, it may be
involved in changing neural connections for a period of three months or more after the
initial learning. One of the primary functions of sleep is thought to be the improvement
of the consolidation of information, as several studies have demonstrated that memory
depends on getting sufficient sleep between training and test.[9] Additionally, data
obtained from neuroimaging studies have shown activation patterns in the sleeping
brain that mirror those recorded during the learning of tasks from the previous day,[9]
suggesting that new memories may be solidified through such rehearsal.
Research has suggested that long-term memory storage in humans may be maintained
by DNA methylation,[10] or prions.[11]
Models
Models of memory provide abstract representations of how memory is believed to
work. Below are several models proposed over the years by various psychologists.
There is some controversy as to whether there are several memory structures.
Atkinson-Shiffrin model
The multi-store model (also known as
Atkinson-Shiffrin memory model) was
first described in 1968 by Atkinson
and Shiffrin.
The multi-store model has been
criticised for being too simplistic. For
instance, long-term memory is believed to be actually made up of multiple
subcomponents, such as episodic and procedural memory. It also proposes that
rehearsal is the only mechanism by which information eventually reaches long-term
storage, but evidence shows us capable of remembering things without rehearsal.
The model also treats each of the memory stores as a single unit, whereas research
shows otherwise. For example, short-term memory can be broken up into different
units, such as visual information and acoustic information. Patient KF illustrates this:
he had brain damage and problems with his short-term memory, particularly with
spoken numbers, letters and words and with meaningful sounds (such as doorbells and
cats meowing), while other parts of short-term memory, such as visual information
(pictures), were unaffected.[12]
Working memory
In 1974 Baddeley and Hitch proposed a
"working memory model" that replaced
the general concept of short term
memory with an active maintenance of
information in the short term storage. In
this model, working memory consists of
three basic stores: the central executive,
the phonological loop and the visuospatial sketchpad. In 2000 this model
was expanded with the multimodal
episodic buffer (Baddeley's model of
working memory).[13]
The central executive essentially acts as
an attention sensory store. It channels information to the three component processes:
the phonological loop, the visuo-spatial sketchpad, and the episodic buffer.
The phonological loop stores auditory information by silently rehearsing sounds or
words in a continuous loop: the articulatory process (for example the repetition of a
telephone number over and over again). A short list of data is easier to remember.
The visuospatial sketchpad stores visual and spatial information. It is engaged when
performing spatial tasks (such as judging distances) or visual ones (such as counting
the windows on a house or imagining images).
The episodic buffer is dedicated to linking information across domains to form
integrated units of visual, spatial, and verbal information and chronological ordering
(e.g., the memory of a story or a movie scene). The episodic buffer is also assumed to
have links to long-term memory and semantical meaning.
The working memory model explains many practical observations, such as why it is
easier to do two different tasks (one verbal and one visual) than two similar tasks (e.g.,
two visual), as well as the word-length effect. However, the concept of a central
executive as described here has been criticised as inadequate and vague.[citation
needed] Working memory also underlies our ability to carry out everyday activities
involving thought. It is the part of memory where we carry out thought processes and
use them to learn and reason about topics.[13]
Types of memory
Researchers distinguish between recognition and recall memory. Recognition memory
tasks require individuals to indicate whether they have encountered a stimulus (such as
a picture or a word) before. Recall memory tasks require participants to retrieve
previously learned information. For example, individuals might be asked to produce a
series of actions they have seen before or to say a list of words they have heard
before.
Classification by information type
Topographic memory involves the ability to orient oneself in space, to recognize and
follow an itinerary, or to recognize familiar places.[14] Getting lost when traveling alone
is an example of the failure of topographic memory. This is often reported among
elderly patients who are evaluated for dementia. The disorder could be caused by
multiple impairments, including difficulties with perception, orientation, and memory.[15]
Flashbulb memories are clear episodic memories of unique and highly emotional
events.[16] People remembering where they were or what they were doing when they
first heard the news of President Kennedy’s assassination[17] or of 9/11 are examples
of flashbulb memories.
Anderson (1976)[18] divides long-term memory into declarative (explicit) and
procedural (implicit) memories.
Declarative memory
Declarative memory requires conscious recall, in that some conscious process must
call back the information. It is sometimes called explicit memory, since it consists of
information that is explicitly stored and retrieved.
Declarative memory can be further sub-divided into semantic memory, which concerns
facts taken independent of context; and episodic memory, which concerns information
specific to a particular context, such as a time and place. Semantic memory allows the
encoding of abstract knowledge about the world, such as "Paris is the capital of
France". Episodic memory, on the other hand, is used for more personal memories,
such as the sensations, emotions, and personal associations of a particular place or
time. Autobiographical memory - memory for particular events within one's own life - is
generally viewed as either equivalent to, or a subset of, episodic memory. Visual
memory is the part of memory that preserves some characteristics of our senses
pertaining to visual experience. One is able to place in memory information that
resembles objects, places, animals or people in the form of a mental image. Visual
memory can result in priming, and it is assumed that some kind of perceptual
representational system underlies this phenomenon.
Procedural memory
In contrast, procedural memory (or implicit memory) is not based on the conscious
recall of information, but on implicit learning. Procedural memory is primarily employed
in learning motor skills and should be considered a subset of implicit memory. It is
revealed when one does better in a given task due only to repetition - no new explicit
memories have been formed, but one is unconsciously accessing aspects of those
previous experiences. Procedural memory involved in motor learning depends on the
cerebellum and basal ganglia.
A characteristic of procedural memory is that the things that are remembered are
automatically translated into actions, and thus sometimes difficult to describe. Some
examples of procedural memory are the ability to ride a bike or tie shoelaces.[19]
Classification by temporal direction
A further major way to distinguish different memory functions is whether the content to
be remembered is in the past, retrospective memory, or whether the content is to be
remembered in the future, prospective memory. Thus, retrospective memory as a
category includes semantic, episodic and autobiographical memory. In contrast,
prospective memory is memory for future intentions, or remembering to remember
(Winograd, 1988). Prospective memory can be further broken down into event- and
time-based prospective remembering. Time-based prospective memories are triggered
by a time-cue, such as going to the doctor (action) at 4pm (cue). Event-based
prospective memories are intentions triggered by cues, such as remembering to post a
letter (action) after seeing a mailbox (cue). Cues do not need to be related to the action
(as in the mailbox/letter example), and lists, sticky notes, knotted handkerchiefs, or string
around the finger all exemplify cues that people use as strategies to enhance
prospective memory.
Techniques used to study memory
Techniques used to assess infants’ memory
Infants do not have the language ability to report on their memories, and so, verbal
reports cannot be used to assess very young children’s memory. Throughout the years,
however, researchers have adapted and developed a number of measures for
assessing both infants’ recognition memory and their recall memory. Habituation and
operant conditioning techniques have been used to assess infants’ recognition memory
and the deferred and elicited imitation techniques have been used to assess infants’
recall memory.
Techniques used to assess infants’ recognition memory include the following:
-Visual paired comparison procedure (relies on habituation): infants are first
presented with pairs of visual stimuli, such as two black-and-white photos of
human faces, for a fixed amount of time; then, after being familiarized with the
two photos, they are presented with the "familiar" photo and a new photo. The
time spent looking at each photo is recorded. Looking longer at the new photo
indicates that they remember the "familiar" one. Studies using this procedure
have found that 5- to 6-month-olds can retain information for as long as fourteen
days.[20]
-Operant conditioning technique: infants are placed in a crib and a ribbon that is
connected to a mobile overhead is tied to one of their feet. Infants notice that
when they kick their foot the mobile moves – the rate of kicking increases
dramatically within minutes. Studies using this technique have revealed that
infants’ memory substantially improves over the first 18 months. Whereas 2- to
3-month-olds can retain an operant response (such as activating the mobile by
kicking their foot) for a week, 6-month-olds can retain it for two weeks, and
18-month-olds can retain a similar operant response for as long as 13
weeks.[21][22][23]
Techniques used to assess infants’ recall memory include the following:
-Deferred imitation technique: an experimenter shows infants a unique
sequence of actions (such as using a stick to push a button on a box) and then,
after a delay, asks the infants to imitate the actions. Studies using deferred
imitation have shown that 14-month-olds’ memories for the sequence of actions
can last for as long as four months.[24]
-Elicited imitation technique: is very similar to the deferred imitation technique;
the difference is that infants are allowed to imitate the actions before the delay.
Studies using the elicited imitation technique have shown that 20-month-olds
can recall the action sequences twelve months later.[25][26]
Techniques used to assess older children and adults' memory
Researchers use a variety of tasks to assess older children and adults' memory. Some
examples are:
-Paired associate learning - when one learns to associate one specific word
with another. For example, when given a word such as "safe", one must learn to
say another specific word, such as "green". This is stimulus and
response.[27][28]
-Free recall - during this task a subject would be asked to study a list of words
and then later they will be asked to recall or write down as many words that they
can remember.[29] Earlier items are affected by retroactive interference (RI),
which means that the longer the list, the greater the interference and the lower the
likelihood that they are recalled. On the other hand, items that were presented last
suffer little RI, but suffer a great deal from proactive interference (PI), which means
that the longer the delay in recall, the more likely it is that the items will be lost.[30]
-Recognition - subjects are asked to remember a list of words or pictures, after
which point they are asked to identify the previously presented words or
pictures from among a list of alternatives that were not presented in the original
list.[31]
-Detection paradigm - Individuals are shown a number of objects and color
samples during a certain period of time. They are then tested on their ability to
remember what they saw by viewing test displays and indicating whether each
one is the same as the sample or whether a change is present.
Memory failures
-Transience - memories degrade with the passing of time. This occurs in the
storage stage of memory, after the information has been stored and before it is
retrieved. This can happen in sensory, short-term, and long-term storage. It
follows a general pattern where the information is rapidly forgotten during the
first couple of days or years, followed by small losses in later days or years.
-Absentmindedness - Memory failure due to the lack of attention. Attention
plays a key role in storing information into long-term memory; without proper
attention, the information might not be stored, making it impossible to be
retrieved later.
Physiology
Brain areas involved in the neuroanatomy of memory such as the hippocampus, the
amygdala, the striatum, or the mammillary bodies are thought to be involved in specific
types of memory. For example, the hippocampus is believed to be involved in spatial
learning and declarative learning, while the amygdala is thought to be involved in
emotional memory.[32] Damage to certain areas in patients and animal models and
subsequent memory deficits is a primary source of information. However, rather than
implicating a specific area, it could be that damage to adjacent areas, or to a pathway
traveling through the area is actually responsible for the observed deficit. Further, it is
not sufficient to describe memory, and its counterpart, learning, as solely dependent on
specific brain regions. Learning and memory are attributed to changes in neuronal
synapses, thought to be mediated by long-term potentiation and long-term depression.
In general, the more emotionally charged an event or experience is, the better it is
remembered; this phenomenon is known as the memory enhancement effect. Patients
with amygdala damage, however, do not show a memory enhancement effect.[33][34]
Hebb distinguished between short-term and long-term memory. He postulated that any
memory that stayed in short-term storage for a long enough time would be
consolidated into a long-term memory. Later research showed this to be false.
Research has shown that direct injections of cortisol or epinephrine help the storage of
recent experiences. This is also true for stimulation of the amygdala. This suggests that
excitement enhances memory through the stimulation of hormones that affect the amygdala.
Excessive or prolonged stress (with prolonged cortisol) may hurt memory storage.
Patients with amygdalar damage are no more likely to remember emotionally charged
words than nonemotionally charged ones. The hippocampus is important for explicit
memory. The hippocampus is also important for memory consolidation. The
hippocampus receives input from different parts of the cortex and sends its output out
to different parts of the brain also. The input comes from secondary and tertiary
sensory areas that have processed the information a lot already. Hippocampal damage
may also cause memory loss and problems with memory storage.[35]
Cognitive neuroscience of memory
Cognitive neuroscientists consider memory as the retention, reactivation, and
reconstruction of the experience-dependent internal representation. The term
internal representation implies that such a definition of memory contains two
components: the expression of memory at the behavioral or conscious level, and the
underpinning physical neural changes (Dudai 2007). The latter component is also
called engram or memory traces (Semon 1904). Some neuroscientists and
psychologists mistakenly equate the concepts of engram and memory, broadly
conceiving all persisting after-effects of experiences as memory; others argue against
this notion, holding that memory does not exist until it is revealed in behavior or thought
(Moscovitch 2007).
One question that is crucial in cognitive neuroscience is how information and mental
experiences are coded and represented in the brain. Scientists have gained much
knowledge about the neuronal codes from the studies of plasticity, but most of such
research has been focused on simple learning in simple neuronal circuits; considerably
less is known about the neuronal changes involved in more complex
examples of memory, particularly declarative memory, which requires the storage of facts
and events (Byrne 2007).
-Encoding. Encoding of working memory involves the spiking of individual
neurons induced by sensory input, which persists even after the sensory input
disappears (Jensen and Lisman 2005; Fransen et al. 2002). Encoding of
episodic memory involves persistent changes in molecular structures that alter
synaptic transmission between neurons. Examples of such structural changes
include long-term potentiation (LTP) or spike-timing-dependent plasticity
(STDP). The persistent spiking in working memory can enhance the synaptic
and cellular changes in the encoding of episodic memory (Jensen and Lisman
2005).
-Working memory. Recent functional imaging studies detected working memory
signals in both medial temporal lobe (MTL), a brain area strongly associated
with long-term memory, and prefrontal cortex (Ranganath et al. 2005),
suggesting a strong relationship between working memory and long-term
memory. However, the substantially greater working memory signals seen in the
prefrontal lobe suggest that this area plays a more important role in working
memory than the MTL (Suzuki 2007).
-Consolidation and reconsolidation. Short-term memory (STM) is temporary and
subject to disruption, while long-term memory (LTM), once consolidated, is
persistent and stable. Consolidation of STM into LTM at the molecular level
presumably involves two processes: synaptic consolidation and system
consolidation. The former involves a protein synthesis process in the medial
temporal lobe (MTL), whereas the latter transforms the MTL-dependent memory
into an MTL-independent memory over months to years (Ledoux 2007). In
recent years, such traditional consolidation dogma has been re-evaluated as a
result of studies on reconsolidation. These studies showed that intervention
after retrieval affects subsequent retrieval of the memory (Sara 2000). New
studies have shown that post-retrieval treatment with protein synthesis inhibitors
and many other compounds can lead to an amnestic state (Nadel et al. 2000b;
Alberini 2005; Dudai 2006). These findings on reconsolidation fit with the
behavioral evidence that retrieved memory is not a carbon copy of the initial
experiences, and memories are updated during retrieval.
Genetics
Study of the genetics of human memory is in its infancy. A notable initial success was
the association of APOE with memory dysfunction in Alzheimer's Disease. The search
for genes associated with normally varying memory continues. One of the first
candidates for normal variation in memory is the gene KIBRA,[36] which appears to be
associated with the rate at which material is forgotten over a delay period.
Memory in infancy
Up until the middle of the 1980s it was assumed that infants could not encode, retain,
and retrieve information.[37] A growing body of research now indicates that infants as
young as 6 months can recall information after a 24-hour delay.[38] Furthermore,
research has revealed that as infants grow older they can store information for longer
periods of time; 6-month-olds can recall information after a 24-hour period, 9-month-olds after up to five weeks, and 20-month-olds after as long as twelve months.[39] In
addition, studies have shown that with age, infants can store information faster.
Whereas 14-month-olds can recall a three-step sequence after being exposed to it
once, 6-month-olds need approximately six exposures in order to be able to remember
it.[24][38]
Although 6-month-olds can recall information over the short term, they have difficulty
recalling the temporal order of information. It is only by 9
months of age that infants can recall the actions of a two-step sequence in the correct
temporal order - that is, recalling step 1 and then step 2.[40][41] In other words, when
asked to imitate a two-step action sequence (such as putting a toy car in the base and
pushing in the plunger to make the toy roll to the other end), 9-month-olds tend to
imitate the actions of the sequence in the correct order (step 1 and then step 2).
Younger infants (6-month-olds) can only recall one step of a two-step sequence.[38]
Researchers have suggested that these age differences are probably due to the fact
that the dentate gyrus of the hippocampus and the frontal components of the neural
network are not fully developed at the age of 6-months.[42][25][43]
Memory and aging
One of the key concerns of older adults is the experience of memory loss, especially as
it is one of the hallmark symptoms of Alzheimer's disease. However, memory loss is
qualitatively different in normal aging from the kind of memory loss associated with a
diagnosis of Alzheimer's (Budson & Price, 2005). Research has revealed that
individuals’ performance on memory tasks that rely on frontal regions declines with
age. Older adults tend to exhibit deficits on tasks that involve knowing the temporal
order in which they learned information;[44] source memory tasks that require them to
remember the specific circumstances or context in which they learned information;[45]
and prospective memory tasks that involve remembering to perform an act at a future
time. Older adults can manage their problems with prospective memory by using
appointment books, for example.
Effects of physical exercise on memory
Physical exercise, particularly continuous aerobic exercises such as running, cycling
and swimming, has many cognitive benefits and effects on the brain. Influences on the
brain include increases in neurotransmitter levels, improved oxygen and nutrient
delivery, and increased neurogenesis in the hippocampus. The effects of exercise on
memory have important implications for improving children's academic performance,
maintaining mental abilities in old age, and the prevention and potential cure of
neurological diseases.
Disorders
Much of the current knowledge of memory has come from studying memory disorders,
particularly amnesia. Loss of memory is known as amnesia. Amnesia can result from
extensive damage to: (a) the regions of the medial temporal lobe, such as the
hippocampus, dentate gyrus, subiculum, amygdala, the parahippocampal, entorhinal,
and perirhinal cortices[46] or the (b) midline diencephalic region, specifically the
dorsomedial nucleus of the thalamus and the mammillary bodies of the
hypothalamus.[47] There are many sorts of amnesia, and by studying their different
forms, it has become possible to observe apparent defects in individual sub-systems of
the brain's memory systems, and thus hypothesize their function in the normally
working brain. Other neurological disorders such as Alzheimer's disease and
Parkinson's disease [48] can also affect memory and cognition. Hyperthymesia, or
hyperthymesic syndrome, is a disorder which affects an individual's autobiographical
memory, essentially meaning that they cannot forget small details that otherwise would
not be stored.[49] Korsakoff's syndrome, also known as Korsakoff's psychosis or
amnesic-confabulatory syndrome, is an organic brain disease that adversely affects
memory.
While not a disorder, a common temporary failure of word retrieval from memory is the
tip-of-the-tongue phenomenon. Sufferers of anomic aphasia (also called nominal
aphasia or anomia), however, do experience the tip-of-the-tongue phenomenon on an
ongoing basis, due to damage to the frontal and parietal lobes of the brain.
Factors that influence memory
Influence of odors and emotions
In March 2007, German researchers found they could use odors to re-activate new
memories in the brains of people while they slept, and that the volunteers remembered
the material better later.[50] Emotion can have a powerful impact on memory. Numerous studies
have shown that the most vivid autobiographical memories tend to be of emotional
events, which are likely to be recalled more often and with more clarity and detail than
neutral events.[51]
The part of the brain that is critical in creating the feeling of emotion is the amygdala,
which allows for stress hormones to strengthen neuron communication.[52] The
chemicals cortisol and adrenaline are released in the brain when the amygdala is
activated by positive or negative excitement. The most effective way to activate the
amygdala is fear, because fear is an instinctive, protective mechanism that comes on
strongly, making events memorable. Sometimes the feeling can be overwhelming; this
is when a memory can be hazy yet vivid, or haunting in its clarity. This discovery led to
the development of a drug to help treat posttraumatic stress disorder (PTSD).[53]
When someone is in a heightened emotional state, the events causing it become
strongly ingrained in memory, sometimes disrupting daily life for years.[54]
An experiment with rats helped identify a drug for treating this issue. Dr. Kerry
Ressler at Emory University used tones and shocks to test an existing drug,
D-cycloserine, commonly used to treat tuberculosis. Rats would hear a tone and receive a
mild shock, training them to fear the tone. The drug was then given to one set of rats,
and the tests were repeated. When the tone was heard, the rats that did not receive the
drug froze in fear, while the rats given the drug ignored the tone and continued
on.[55] The drug appears to allow new receptor connections to form between neurons
and the fear response of the amygdala to relax, giving patients a chance
of recovery from PTSD.
Dr. Barbara Rothbaum at Emory University conducts experimental treatments for
PTSD using the knowledge that exactly the same neurons are active when
remembering an event as when it was created. Her administration of the drug
D-cycloserine is intended to help patients foster new connections between neurons,
providing a window in which to weaken former traumatic connections. Rothbaum uses
the drug in therapy sessions that employ virtual reality to give PTSD sufferers a second
chance. Once the events that caused the PTSD are identified, the process can
begin. The surroundings of the events are recreated in a virtual reality headset (for
instance, a combat vehicle in the desert).[56] This helps the patient recall the target
memories in a safe environment and activate the relevant neurons without triggering
the fear response from the amygdala. When the D-cycloserine is in the patient's system
and the same neurons that were active during the event are active again, the patient
has a chance to re-form neural connections with fewer stress chemicals released by
the amygdala. This does not erase the memory, but rather lessens its strength, giving
some relief so that people suffering from PTSD can try to move on and live their lives.
Recall is linked with emotion. If pain, joy, excitement, or any other strong emotion is
present during an event, the neurons active during this event produce strong
connections with each other. When this event is remembered or recalled in the future,
the neurons will more easily and quickly make the same connections. The strength
and longevity of memories are directly related to the amount of emotion felt during the
event of their creation.[57]
Interference from previous knowledge
At the Center for Cognitive Science at Ohio State University, researchers have found
that the memory accuracy of adults is hurt by the fact that they know more and have
more experience than children, and that they tend to apply all this knowledge when
learning new information. The findings appeared in the August 2004 edition of the
journal Psychological Science.
Interference can hamper memorization and retrieval. There is retroactive interference,
when learning new information makes it harder to recall old information[58] and
proactive interference, where prior learning disrupts recall of new information. Although
interference can lead to forgetting, it is important to keep in mind that there are
situations when old information can facilitate learning of new information. Knowing
Latin, for instance, can help an individual learn a related language such as French –
this phenomenon is known as positive transfer.[59]
Memory and stress
Stress has a significant effect on memory formation and learning. In response to
stressful situations, the brain releases hormones and neurotransmitters (ex.
glucocorticoids and catecholamines) which affect memory encoding processes in the
hippocampus. Behavioural research on animals shows that chronic stress produces
adrenal hormones which impact the hippocampal structure in the brains of rats.[60] An
experimental study by German cognitive psychologists L. Schwabe and O. Wolf
demonstrates how learning under stress also decreases memory recall in humans.[61]
In this study, 48 healthy female and male university students participated in either a
stress test or a control group. Those randomly assigned to the stress test group had a
hand immersed in ice-cold water (the well-established SECPT or ‘Socially Evaluated Cold
Pressor Test’) for up to three minutes, while being monitored and videotaped. Both the
stress and control groups were then presented with 32 words to memorize. Twenty-four
hours later, both groups were tested to see how many words they could remember
(free recall) as well as how many they could recognize from a larger list of words
(recognition performance). The results showed a clear impairment of memory
performance in the stress test group, who recalled 30% fewer words than the control
group. The researchers suggest that stress experienced during learning distracts
people by diverting their attention during the memory encoding process.
However, memory performance can be enhanced when material is linked to the
learning context, even when learning occurs under stress. A separate study by
cognitive psychologists Schwabe and Wolf shows that when retention testing is done in
a context similar to or congruent with the original learning task (i.e., in the same room),
memory impairment and the detrimental effects of stress on learning can be
attenuated.[62] Seventy-two healthy female and male university students, randomly
assigned to the SECPT stress test or to a control group, were asked to remember the
locations of 15 pairs of picture cards – a computerized version of the card game
"Concentration" or "Memory". The room in which the experiment took place was
infused with the scent of vanilla, as odour is a strong cue for memory. Retention testing
took place the following day, either in the same room with the vanilla scent again
present, or in a different room without the fragrance. The memory performance of
subjects who experienced stress during the object-location task decreased significantly
when they were tested in an unfamiliar room without the vanilla scent (an incongruent
context); however, the memory performance of stressed subjects showed no
impairment when they were tested in the original room with the vanilla scent (a
congruent context). All participants in the experiment, both stressed and unstressed,
performed faster when the learning and retrieval contexts were similar.[63]
This research on the effects of stress on memory may have practical implications for
education, for eyewitness testimony and for psychotherapy: students may perform
better when tested in their regular classroom rather than an exam room, eyewitnesses
may recall details better at the scene of an event than in a courtroom, and persons
suffering from post-traumatic stress may improve when helped to situate their
memories of a traumatic event in an appropriate context.
Memory construction and manipulation
Although people often think that memory operates like recording equipment, it is not
the case. The molecular mechanisms underlying the induction and maintenance of
memory are very dynamic and comprise distinct phases covering a time window from
seconds to even a lifetime.[64] In fact, research has revealed that our memories are
constructed. People can construct their memories when they encode them and/or when
they recall them. To illustrate, consider a classic study conducted by Elizabeth Loftus
and John Palmer (1974) [65] in which people were instructed to watch a film of a traffic
accident and then asked about what they saw. The researchers found that the people
who were asked, "How fast were the cars going when they smashed into each other?"
gave higher estimates than those who were asked, "How fast were the cars going
when they hit each other?" Furthermore, when asked a week later whether they had
seen broken glass in the film, those who had been asked the question with "smashed"
were twice as likely to report that they had seen broken glass as those who had been
asked the question with "hit". There was no broken glass depicted in the film. Thus,
the wording of the questions distorted viewers’ memories of the event. Importantly, the
wording of the question led people to construct different memories of the event – those
who were asked the question with smashed recalled a more serious car accident than
they had actually seen. The findings of this experiment were replicated around the
world, and researchers consistently demonstrated that when people were provided with
misleading information they tended to misremember, a phenomenon known as the
misinformation effect.[66]
Interestingly, research has revealed that asking individuals to repeatedly imagine
actions that they have never performed or events that they have never experienced
could result in false memories. For instance, Goff and Roediger [67] (1998) asked
participants to imagine that they performed an act (e.g., break a toothpick) and then
later asked them whether they had done such a thing. Findings revealed that those
participants who repeatedly imagined performing such an act were more likely to think
that they had actually performed that act during the first session of the experiment.
Similarly, Garry and her colleagues (1996) [68] asked college students to report how
certain they were that they experienced a number of events as children (e.g., broke a
window with their hand) and then two weeks later asked them to imagine four of those
events. The researchers found that one-fourth of the students asked to imagine the
four events reported that they had actually experienced such events as children. That
is, when asked to imagine the events they were more confident that they experienced
the events.
Research reported in 2013 revealed that it is possible to artificially stimulate prior
memories and artificially implant false memories in mice. Using optogenetics, a team of
RIKEN-MIT scientists caused the mice to incorrectly associate a benign environment
with a prior unpleasant experience from different surroundings. Some scientists believe
that the study may have implications in studying false memory formation in humans,
and in treating PTSD and schizophrenia.[69]
Improving memory
A UCLA research study published in the June 2006 issue of the American Journal of
Geriatric Psychiatry found that people can improve cognitive function and brain
efficiency through simple lifestyle changes such as incorporating memory exercises,
healthy eating, physical fitness and stress reduction into their daily lives. This study
examined 17 subjects (average age 53) with normal memory performance. Eight
subjects were asked to follow a "brain healthy" diet, relaxation, physical, and mental
exercise (brain teasers and verbal memory training techniques). After 14 days, they
showed greater word fluency (not memory) compared to their baseline performance.
No long-term follow-up was conducted, so it is unclear whether this intervention has
lasting effects on memory.[70]
There is a loosely associated group of mnemonic principles and techniques, known as the Art of memory, that can be used to greatly improve memory.
In 2001 the International Longevity Center released a report[71] which includes, on pages 14–16, recommendations for keeping the mind functioning well into advanced age. Some of the recommendations are to stay intellectually active through learning, training or reading, to keep physically active so as to promote blood circulation to
the brain, to socialize, to reduce stress, to keep sleep time regular, to avoid depression
or emotional instability and to observe good nutrition.
Levels of processing
Craik and Lockhart (1972) proposed that it is the method and depth of processing that
affects how an experience is stored in memory, rather than rehearsal.
-Organization - Mandler (1967) gave participants a pack of word cards and
asked them to sort them into any number of piles using any system of
categorisation they liked. When they were later asked to recall as many of the
words as they could, those who used more categories remembered more
words. This study suggested that the organization of memory is one of its
central aspects (Mandler, 2011).
-Distinctiveness - Eysenck and Eysenck (1980) asked participants to say
words in a distinctive way, e.g. spell the words out loud. Such participants
recalled the words better than those who simply read them off a list.
-Effort - Tyler et al. (1979) had participants solve a series of anagrams, some
easy (FAHTER) and some difficult (HREFAT). The participants recalled the
difficult anagrams better, presumably because they put more effort into them.
-Elaboration - Palmere et al. (1983) gave participants descriptive paragraphs of
a fictitious African nation. There were some short paragraphs and some with
extra sentences elaborating the main idea. Recall was higher for the ideas in
the elaborated paragraphs.
Methods to optimize memorization
Memorization is a method of learning that allows an individual to recall information
verbatim. Rote learning is the method most often used. Methods of memorizing things have been the subject of much discussion over the years, with some writers, such as Cosmos Rossellius, using visual alphabets. The spacing effect shows that an individual is more likely to remember a list of items when rehearsal is spaced over an extended period of time. In contrast to this is cramming: intensive memorization in a short period of time. Also relevant is the Zeigarnik effect, which states that people remember uncompleted or interrupted tasks better than completed ones. The so-called method of loci uses spatial memory to memorize non-spatial information.[72]
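The spacing effect lends itself to a simple scheduling rule. The sketch below generates an expanding review schedule in which each rehearsal is spaced further from the previous one than the last gap was; the function name, the doubling factor and the number of reviews are illustrative assumptions, not something prescribed by the sources cited here.

```python
from datetime import date, timedelta

def expanding_schedule(start: date, first_gap_days: int = 1,
                       factor: float = 2.0, reviews: int = 5) -> list[date]:
    """Return review dates whose gaps grow geometrically.

    One simple way to exploit the spacing effect: each rehearsal is
    scheduled further from the last than the previous gap was
    (1, 2, 4, 8, ... days by default).
    """
    dates = []
    gap = first_gap_days
    current = start
    for _ in range(reviews):
        current = current + timedelta(days=gap)
        dates.append(current)
        gap = int(gap * factor)
    return dates

# Items studied on 1 January would be revisited on these days.
print(expanding_schedule(date(2024, 1, 1)))
```

Cramming corresponds to collapsing all of these review dates into a single short window, which the spacing-effect literature suggests yields poorer long-term retention.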
Learning
Contents
1 Types of learning
1.1 Non-associative learning
1.1.1 Habituation
1.1.2 Sensitisation
1.2 Associative learning
1.2.1 Classical conditioning
1.3 Imprinting
1.4 Observational learning
1.5 Play
1.6 Enculturation
1.7 Episodic learning
1.8 Multimedia learning
1.9 E-learning and augmented learning
1.10 Rote learning
1.11 Meaningful learning
1.12 Informal learning
1.13 Formal learning
1.14 Nonformal learning
1.15 Nonformal learning and combined approaches
1.16 Tangential learning
1.17 Dialogic learning
2 Domains of learning
3 Transfer of learning
4 Active learning
5 Evolution of Learning
5.1 Costs and Benefits of Learned and Innate Knowledge
6 Machine learning
Learning is acquiring new, or modifying and reinforcing, existing knowledge, behaviors,
skills, values, or preferences and may involve synthesizing different types of
information. The ability to learn is possessed by humans, animals and some machines.
Progress over time tends to follow learning curves. Learning is not compulsory; it is
contextual. It does not happen all at once, but builds upon and is shaped by what we
already know. To that end, learning may be viewed as a process, rather than a
collection of factual and procedural knowledge. Learning produces changes in the
organism and the changes produced are relatively permanent.[1]
Human learning may occur as part of education, personal development, schooling, or
training. It may be goal-oriented and may be aided by motivation. The study of how
learning occurs is part of neuropsychology, educational psychology, learning theory,
and pedagogy. Learning may occur as a result of habituation or classical conditioning,
seen in many animal species, or as a result of more complex activities such as play,
seen only in relatively intelligent animals.[2][3] Learning may occur consciously or
without conscious awareness. Learning that an aversive event can't be avoided or
escaped is called learned helplessness.[4] There is evidence for human behavioral
learning prenatally, in which habituation has been observed as early as 32 weeks into
gestation, indicating that the central nervous system is sufficiently developed and
primed for learning and memory to occur very early on in development.[5]
Play has been approached by several theorists as the first form of learning. Children
experiment with the world, learn the rules, and learn to interact through play. Lev
Vygotsky agreed that play is pivotal for children's development, since children make
meaning of their environment through play. 85 percent of brain development occurs
during the first five years of a child's life.[6] The context of conversation based on moral
reasoning offers some proper observations on the responsibilities of parents.[7]
Types of learning
Non-associative learning
Non-associative learning refers to "a relatively permanent change in the strength of
response to a single stimulus due to repeated exposure to that stimulus. Changes due
to such factors as sensory adaptation, fatigue, or injury do not qualify as non-associative learning."[8]
Non-associative learning can be divided into habituation and sensitization.
Habituation
In psychology, habituation is an example of non-associative learning in which there is a
progressive diminution of behavioral response probability with repetition of a stimulus. An
animal first responds to a stimulus, but if it is neither rewarding nor harmful the animal
reduces subsequent responses. One example of this can be seen in small song birds—
if a stuffed owl (or similar predator) is put into the cage, the birds initially react to it as
though it were a real predator. Soon the birds react less, showing habituation. If
another stuffed owl is introduced (or the same one removed and re-introduced), the
birds react to it again as though it were a predator, demonstrating that it is only a very
specific stimulus that is habituated to (namely, one particular unmoving owl in one
place). Habituation has been shown in essentially every species of animal, as well as
the large protozoan Stentor coeruleus.[9]
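The stimulus-specific decline described above can be illustrated with a toy simulation. The sketch below assumes an arbitrary per-repetition decay factor and captures only the specificity of habituation (a novel stimulus evokes a full response); it does not model dishabituation or spontaneous recovery.

```python
def habituation_trace(stimuli, decay=0.7, start=1.0):
    """Toy model: the response to a repeated stimulus shrinks by a fixed
    factor on each repetition, while a novel stimulus still evokes the
    full response (illustrative parameters only)."""
    strength = {}          # per-stimulus response strength
    responses = []
    for s in stimuli:
        level = strength.get(s, start)
        responses.append((s, round(level, 3)))
        strength[s] = level * decay   # habituate to this particular stimulus
    return responses

# Repeated presentations of the same owl habituate; a different owl
# evokes a full response again.
trial = ["owl_A"] * 4 + ["owl_B"]
print(habituation_trace(trial))
```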
Sensitisation
Sensitisation is an example of non-associative learning in which the progressive
amplification of a response follows repeated administrations of a stimulus (Bell et al.,
1995)[citation needed]. An everyday example of this mechanism is the repeated tonic
stimulation of peripheral nerves that will occur if a person rubs his arm continuously.
After a while, this stimulation will create a warm sensation that will eventually turn
painful. The pain is the result of the progressively amplified synaptic response of the
peripheral nerves warning the person that the stimulation is harmful.[clarification
needed] Sensitisation is thought to underlie both adaptive and maladaptive
learning processes in the organism.
Associative learning
Associative learning is the process by which an association between two stimuli or a
behavior and a stimulus is learned. The two forms of associative learning are classical
and operant conditioning. In the former, a previously neutral stimulus is repeatedly
presented together with a reflex-eliciting stimulus until eventually the neutral stimulus will
elicit a response on its own. In operant conditioning a certain behavior is either
reinforced or punished which results in an altered probability that the behavior will
happen again. Honeybees display associative learning through the proboscis extension
reflex paradigm.[10]
Operant conditioning is the use of consequences to modify the occurrence and form of
behavior. Operant conditioning is distinguished from Pavlovian conditioning in that
operant conditioning uses reinforcement/punishment to alter an action-outcome
association. In contrast, Pavlovian conditioning involves strengthening of the stimulus-outcome association.
Elemental theories of associative learning argue that concurrent stimuli tend to be
perceived as separate units rather than 'holistically' (i.e. as a single unit).[11]
Behaviorism is a psychological movement that seeks to alter behavior by arranging the
environment to elicit successful changes and to arrange consequences to maintain or
diminish a behavior. Behaviorists study behaviors that can be measured and changed
by the environment. However, they do not deny that there are thought processes that
interact with those behaviors (see Relational Frame Theory for more information).
Delayed discounting is the process of devaluing rewards based on the delay before they are presented. This process is thought to be tied to impulsivity. Impulsivity is a
core process for many behaviors (e.g., substance abuse, problematic gambling, OCD).
Making decisions is an important part of everyday functioning. How we make those
decisions is based on what we perceive to be the most valuable or worthwhile actions.
This is determined by what we find to be the most reinforcing stimuli. So when teaching
an individual a response, you need to find the most potent reinforcer for that person.
This may be a larger reinforcer at a later time or a smaller immediate reinforcer.
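Delayed discounting is often modelled with a hyperbolic discount function, V = A / (1 + kD), where A is the reward amount, D the delay and k an impulsivity parameter; the formula and the parameter values below are standard modelling assumptions rather than anything stated in this text.

```python
def discounted_value(amount: float, delay: float, k: float = 0.1) -> float:
    """Hyperbolic discounting: V = A / (1 + k * D).
    Higher k (more impulsive) devalues delayed rewards faster."""
    return amount / (1.0 + k * delay)

# A smaller immediate reinforcer vs. a larger delayed one.
immediate = discounted_value(10, delay=0)                  # 10.0
delayed_patient = discounted_value(20, delay=5, k=0.1)     # ~13.3 -> the delayed reward wins
delayed_impulsive = discounted_value(20, delay=5, k=0.5)   # ~5.7  -> the immediate reward wins
print(immediate, delayed_patient, delayed_impulsive)
```

On this toy model, which reinforcer is "most potent" for a given person depends on their discounting parameter, matching the point above that the best reinforcer may be either a larger later reward or a smaller immediate one.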
Classical conditioning
The typical paradigm for classical conditioning involves repeatedly pairing an
unconditioned stimulus (which unfailingly evokes a reflexive response) with another
previously neutral stimulus (which does not normally evoke the response). Following
conditioning, the response occurs both to the unconditioned stimulus and to the other,
unrelated stimulus (now referred to as the "conditioned stimulus"). The response to the
conditioned stimulus is termed a conditioned response. The classic example is Pavlov
and his dogs. Meat powder naturally will make a dog salivate when it is put into a dog's
mouth; salivating is a reflexive response to the meat powder. Meat powder is the
unconditioned stimulus (US) and the salivation is the unconditioned response (UR).
Then Pavlov rang a bell before presenting the meat powder. The first time Pavlov rang
the bell, the neutral stimulus, the dogs did not salivate, but once he put the meat
powder in their mouths they began to salivate. After numerous pairings of the bell and
the food the dogs learned that the bell was a signal that the food was about to come
and began to salivate when the bell was rung. Once this occurred, the bell became the
conditioned stimulus (CS) and the salivation to the bell became the conditioned
response (CR).
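The gradual strengthening of the bell-food association can be illustrated with the Rescorla–Wagner learning rule, a standard model of classical conditioning that is not discussed in the text itself: associative strength V is nudged toward an asymptote λ on every CS-US pairing, ΔV = αβ(λ − V). The learning-rate value below is arbitrary.

```python
def rescorla_wagner(trials: int, alpha_beta: float = 0.3, lam: float = 1.0):
    """Associative strength of the CS (bell) over repeated CS-US pairings.
    Update rule: V <- V + alpha_beta * (lam - V)."""
    v = 0.0
    history = []
    for _ in range(trials):
        v += alpha_beta * (lam - v)   # surprise-driven update
        history.append(round(v, 3))
    return history

# Strength rises quickly at first, then levels off as the food becomes predicted.
print(rescorla_wagner(8))   # [0.3, 0.51, 0.657, ...]
```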
Another influential person in the world of Classical Conditioning is John B. Watson.
Watson's work was very influential and paved the way for B.F. Skinner's radical
behaviorism. Watson's behaviorism (and philosophy of science) stood in direct contrast
to Freud. Watson's view was that Freud's introspective method was too subjective, and
that we should limit the study of human development to directly observable behaviors.
In 1913, Watson published the article "Psychology as the Behaviorist Views It," in which he argued that laboratory studies would serve psychology best as a science. Watson's most famous, and controversial, experiment was "Little Albert", in which he demonstrated how psychologists can account for the learning of emotion through classical conditioning principles.
Imprinting
Imprinting is the term used in psychology and ethology to describe any kind of phase-sensitive learning (learning occurring at a particular age or a particular life stage) that is
rapid and apparently independent of the consequences of behavior. It was first used to
describe situations in which an animal or person learns the characteristics of some
stimulus, which is therefore said to be "imprinted" onto the subject.
Observational learning
The learning process most characteristic of humans is imitation; one's personal repetition of an observed behavior, such as a dance. Recent research[citation needed] with children has shown that observational learning is well suited to seeding behaviors that can spread widely across a culture through a process called a diffusion chain, where individuals initially learn a behavior by observing another individual perform that behavior, and then serve as a model from which other individuals learn the behavior. Humans can copy three types of information simultaneously: the demonstrator's goals, actions, and environmental outcomes (results; see Emulation (observational learning)). Through copying these types of information, (most) infants will tune into their surrounding culture. Humans aren't the only creatures capable of learning through observing. A wide variety of species learn by observing. In one study, for example, pigeons watched other pigeons get reinforced for either pecking at the feeder or stepping on a bar. When placed in the box later, the pigeons tended to use whatever technique they had observed other pigeons using earlier (Zentall, Sutton & Sherburne, 1996)[full citation needed].
Observational learning involves a neural component as well. Mirror neurons may play a critical role in the imitation of behavior as well as the prediction of future behavior (Rizzolatti, 2004)[full citation needed]. Mirror neurons are thought to be represented in specific subregions of the frontal and parietal lobes, and there is evidence that individual subregions respond most strongly to observing certain kinds of actions.
Play
Play generally describes behavior which has no particular end in itself, but improves
performance in similar situations in the future. This is seen in a wide variety of
vertebrates besides humans, but is mostly limited to mammals and birds. Cats are
known to play with a ball of string when young, which gives them experience with
catching prey. Besides inanimate objects, animals may play with other members of
their own species or other animals, such as orcas playing with seals they have caught.
Play involves a significant cost to animals, such as increased vulnerability to predators
and the risk of injury and possibly infection. It also consumes energy, so there must be
significant benefits associated with play for it to have evolved. Play is generally seen in
younger animals, suggesting a link with learning. However, it may also have other
benefits not associated directly with learning, for example improving physical fitness.
Play, as it pertains to humans as a form of learning, is central to a child’s learning and
development. Through play, children learn social skills such as sharing and
collaboration. Children develop emotional skills such as learning to deal with the
emotion of anger, through play activities. As a form of learning, play also facilitates the
development of thinking and language skills in children.[12]
There are five types of play: 1) sensorimotor play aka functional play, characterized by
repetition of activity. 2) role play occurs from 3 to 15 years of age. 3) rule-based play
where authoritative prescribed codes of conduct are primary. 4) construction play
involves experimentation and building. 5) movement play aka physical play.[12]
These five types of play often intersect. All types of play generate thinking and
problem-solving skills in children. Children learn to think creatively when they learn
through play.[13] Specific activities involved in each type of play change over time as
humans progress through the lifespan. Play, as a form of learning, can occur solitarily or involve interacting with others.
Enculturation
Enculturation is the process by which a person learns the requirements of the culture that surrounds them, and acquires values and behaviors that are appropriate or necessary in that culture.[14] The influences which, as part of this process, limit, direct or shape the individual, whether deliberately or not, include parents, other adults, and peers.[14] If successful, enculturation results in competence in the language, values and rituals of the culture.[14] (Compare acculturation, where a person is within a culture different from their own and learns the requirements of that different culture.)
Episodic learning
Episodic learning is a change in behavior that occurs as a result of an event.[15] For
example, a fear of dogs that follows being bitten by a dog is episodic learning. Episodic
learning is so named because events are recorded into episodic memory, which is one
of the three forms of explicit learning and retrieval, along with perceptual memory and
semantic memory.[16]
Multimedia learning
Multimedia learning is where a person uses both auditory and visual stimuli to learn
information (Mayer 2001). This type of learning relies on dual-coding theory (Paivio
1971).
E-learning and augmented learning
Electronic learning or e-learning is a general term used to refer to computer-enhanced learning. A specific and increasingly widespread form of e-learning is mobile learning (m-learning), which uses mobile telecommunication equipment such as cellular phones.
When a learner interacts with the e-learning environment, it's called augmented
learning. By adapting to the needs of individuals, the context-driven instruction can be
dynamically tailored to the learner's natural environment. Augmented digital content
may include text, images, video, audio (music and voice). By personalizing instruction,
augmented learning has been shown to improve learning performance for a
lifetime.[17] See also Minimally Invasive Education.
Moore (1989)[18] proposed that three core types of interaction are necessary for quality, effective online learning:
-learner-learner (i.e. communication between and among peers with or without
the teacher present),
-learner-instructor (i.e. student teacher communication), and
-learner-content (i.e. intellectually interacting with content that results in
changes in learners’ understanding, perceptions, and cognitive structures).
In his theory of transactional distance, Moore (1993)[19] contended that structure and
interaction or dialogue bridge the gap in understanding and communication that is
created by geographical distances (known as transactional distance).
Rote learning
Rote learning is memorizing information so that it can be recalled by the learner exactly
the way it was read or heard. The major technique used for rote learning is learning by
repetition, based on the idea that a learner can recall the material exactly (but not its
meaning) if the information is repeatedly processed. Rote learning is used in diverse
areas, from mathematics to music to religion. Although it has been criticized by some
educators, rote learning is a necessary precursor to meaningful learning.
Meaningful learning
Meaningful learning is the concept that learned knowledge (e.g., a fact) is fully
understood to the extent that it relates to other knowledge. To this end, meaningful learning contrasts with rote learning, in which information is acquired without regard to
understanding. Meaningful learning, on the other hand, implies there is a
comprehensive knowledge of the context of the facts learned.[20]
Informal learning
Informal learning occurs through the experience of day-to-day situations (for example,
one would learn to look ahead while walking because of the danger inherent in not
paying attention to where one is going). It is learning from life: during a meal at the table with parents, through play, while exploring, and so on.
Formal learning
Formal learning is learning that takes place within a teacher-student relationship, such as in a school system. The term formal learning has nothing to do with the formality of the learning, but rather the way it is directed and organized. In formal learning, the learning or training departments set out the goals and objectives of the learning.[21]
Nonformal learning
Nonformal learning is organized learning outside the formal learning system. For
example: learning by coming together with people with similar interests and exchanging
viewpoints, in clubs or in (international) youth organizations, workshops.
Nonformal learning and combined approaches
The educational system may use a combination of formal, informal, and nonformal
learning methods. The UN and EU recognize these different forms of learning (cf. links
below). In some schools students can get points that count in the formal-learning
systems if they get work done in informal-learning circuits. They may be given time to
assist at international youth workshops and training courses, on the condition that they prepare, contribute, share and can show that this offered valuable new insight, helped them acquire new skills, or gave them experience in organizing, teaching, and so on.
In order to learn a skill, such as solving a Rubik's Cube quickly, several factors come
into play at once:
-Directions help one learn the patterns of solving a Rubik's Cube.
-Practicing the moves repeatedly and for extended time helps with "muscle
memory" and therefore speed.
-Thinking critically about moves helps find shortcuts, which in turn helps to
speed up future attempts.
-The Rubik's Cube's six colors help anchor the solution in one's head.
Occasionally revisiting the cube helps prevent negative learning or loss of
skill.
Tangential learning
Tangential learning is the process by which people will self-educate if a topic is
exposed to them in a context that they already enjoy. For example, after playing a
music-based video game, some people may be motivated to learn how to play a real
instrument, or after watching a TV show that references Faust and Lovecraft, some
people may be inspired to read the original work.[22] Self-education can be improved
with systematization. According to experts in natural learning, self-oriented learning
training has proven to be an effective tool for assisting independent learners with the
natural phases of learning.[23]
Dialogic learning
Dialogic learning is a type of learning based on dialogue.
Domains of learning
Benjamin Bloom has suggested three domains of learning:
-Cognitive – To recall, calculate, discuss, analyze, problem solve, etc.
-Psychomotor – To dance, swim, ski, dive, drive a car, ride a bike, etc.
-Affective – To like something or someone, love, appreciate, fear, hate, worship,
etc.
These domains are not mutually exclusive. For example, in learning to play chess, the
person will have to learn the rules of the game (cognitive domain); but he also has to
learn how to set up the chess pieces on the chessboard and also how to properly hold
and move a chess piece (psychomotor). Furthermore, later in the game the person
may even learn to love the game itself, value its applications in life, and appreciate its
history (affective domain).[24]
Transfer of learning
Transfer of learning is the application of skill, knowledge or understanding to resolve a novel problem or situation, and it happens when certain conditions are fulfilled. Research indicates that transfer of learning is infrequent, and most common when "... cued, primed, and guided..."[25]; research has also sought to clarify what transfer is and how it might be promoted through instruction.
Over the history of its discourse, various hypotheses and definitions have been
advanced. First, it is speculated that different types of transfer exist, including near
transfer, or the application of skill to solve a novel problem in a similar context, and far
transfer, or the application of skill to solve a novel problem presented in a different
context.[26] Furthermore, Perkins and Salomon (1992) suggest that positive transfer occurs in
cases when learning supports novel problem solving, and negative transfer occurs
when prior learning inhibits performance on highly correlated tasks, such as second or
third-language learning.[27] Concepts of positive and negative transfer have a long
history; researchers in the early 20th century described the possibility that "...habits or
mental acts developed by a particular kind of training may inhibit rather than facilitate
other mental activities".[28] Finally, Schwarz, Bransford and Sears (2005) have
proposed that transferring knowledge into a situation may differ from transferring
knowledge out to a situation as a means to reconcile findings that transfer may both be
frequent and challenging to promote.[29]
A significant and long research history has also attempted to explicate the conditions
under which transfer of learning might occur. Early research by Ruger, for example,
found that the "level of attention", "attitudes", "method of attack" (or method for tackling
a problem), a "search for new points of view", "a careful testing of hypothesis" and
"generalization" were all valuable approaches for promoting transfer.[30] To encourage
transfer through teaching, Perkins and Salomon recommend aligning ("hugging")
instruction with practice and assessment, and "bridging", or encouraging learners to
reflect on past experiences or make connections between prior knowledge and current
content.[27]
Active learning
Active learning occurs when a person takes control of their learning experience. Since
understanding information is the key aspect of learning, it is important for learners to
recognize what they understand and what they do not. By doing so, they can monitor
their own mastery of subjects. Active learning encourages learners to have an internal
dialogue in which they are verbalizing their understandings. This and other metacognitive strategies can be taught to a child over time. Studies within metacognition
have shown the value of active learning, suggesting that the learning is usually at a
stronger level as a result.[31] In addition, learners have more incentive to learn when
they have control over not only how they learn but also what they learn.[32]
Evolution of Learning
There are two ways in which animals can gain knowledge. The first of these two ways
is learning. This is when an animal gathers information about its surrounding
environment and then proceeds to use this information. For example, if an animal eats
something that hurts its stomach, it may learn not to eat this again. The second way
that an animal can acquire knowledge is through innate knowledge. This knowledge is
genetically inherited. The animal automatically knows it without any prior experience.
An example of this is when a horse is born and can immediately walk. The horse has
not learned this behavior; it simply knows how to do it.[33] In some scenarios, innate
knowledge is more beneficial than learned knowledge. However, in other scenarios the
opposite is true - animals must learn certain behaviors when it is disadvantageous to
have a specific innate behavior. In these situations, learning evolves in the species.
Costs and Benefits of Learned and Innate Knowledge
In a changing environment, an animal must constantly be gaining new information in
order to survive. However, in a stable environment this same individual needs only to gather the information it needs once and rely on it for the duration of its life. Therefore,
there are different scenarios in which learning or innate knowledge is better suited.
Essentially, the cost of obtaining certain knowledge versus the benefit of already
having it determined whether an animal evolved to learn in a given situation or whether
it innately knew the information. If the cost of gaining the knowledge outweighed the
benefit of having it, then the individual would not have evolved to learn in this scenario;
instead, non-learning would evolve. However, if the benefit of having certain
information outweighed the cost of obtaining it, then the animal would be far more likely
to evolve to have to learn this information.[33]
Non-learning is more likely to evolve in two scenarios. If an environment is static and change does not occur, or occurs only rarely, then learning would simply be unnecessary. Because
there is no need for learning in this scenario – and because learning could prove to be
disadvantageous due to the time it took to learn the information – non-learning evolves.
However, if an environment were in a constant state of change then learning would
also prove to be disadvantageous. Anything learned would immediately become
irrelevant because of the changing environment.[33] The learned information would no
longer apply. Essentially, the animal would be just as successful if it took a guess as if
it learned. In this situation, non-learning would evolve. In fact, it was shown in a study
of Drosophila melanogaster that learning can actually lead to a decrease in
productivity, possibly because egg-laying behaviors and decisions were impaired by
interference from the memories gained from the new learned materials or because of
the cost of energy in learning.[34]
However, in environments where change occurs within an animal's lifetime but is not
constant, learning is more likely to evolve. Learning is beneficial in these scenarios
because an animal can adapt to the new situation, but can still apply the knowledge
that it learns for a somewhat extended period of time. Therefore, learning increases the
chances of success as opposed to guessing.[33] An example of this is seen in aquatic
environments with landscapes subject to change. In these environments learning is
favored because the fish are predisposed to learn the specific spatial cues where they
live.[35]
Machine learning
Machine learning, a branch of artificial intelligence, concerns the construction and
study of systems that can learn from data. For example, a machine learning system
could be trained on email messages to learn to distinguish between spam and non-spam messages.
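As a rough illustration of the spam example, the sketch below trains a tiny Naive Bayes-style word-count classifier; the toy messages, the add-one smoothing and the function names are assumptions made for illustration, not a description of any particular system.

```python
import math
from collections import Counter

def train(messages):
    """messages: list of (text, label) pairs. Returns per-class word counts and class counts."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in messages:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Naive Bayes with add-one smoothing over the combined vocabulary."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    scores = {}
    for label in ("spam", "ham"):
        n_words = sum(counts[label].values())
        score = math.log(totals[label] / sum(totals.values()))   # class prior
        for w in text.lower().split():
            score += math.log((counts[label][w] + 1) / (n_words + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

train_set = [("win money now", "spam"), ("cheap money offer", "spam"),
             ("meeting agenda attached", "ham"), ("lunch tomorrow", "ham")]
counts, totals = train(train_set)
print(classify("cheap money now", counts, totals))   # -> 'spam'
```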
Decision-making
Decision-making can be regarded as the cognitive process resulting in the selection of
a belief and/or a course of action among several alternative possibilities. Every
decision-making process produces a final choice[1] that may or may not prompt action.
Contents
1 Overview
2 Rational and irrational decision-making
3 Information overload
4 Problem analysis vs. decision-making
4.1 Decision planning
4.2 Analysis paralysis
5 Everyday techniques
5.1 Group decision-making techniques
5.2 Individual decision-making techniques
6 Stages of group decision-making
7 Decision-making steps
8 Cognitive and personal biases
9 Post-decision analysis
10 Cognitive styles
10.1 Influence of Myers-Briggs type
10.2 Optimizing vs. satisficing
10.3 Combinatorial vs. positional
11 Neuroscience
12 Decision-making in adolescents vs. adults
Overview
Human performance as regards decisions has been the subject of active research from
several perspectives:
-Psychological: examining individual decisions in the context of a set of needs,
preferences and values the individual has or seeks.
-Cognitive: the decision-making process regarded as a continuous process
integrated in the interaction with the environment.
-Normative: the analysis of individual decisions is concerned with the logic of
decision-making and rationality and the invariant choice it leads to.[2]
Decision-making can also be regarded as a problem-solving activity terminated by a
solution deemed to be satisfactory. It is, therefore, a reasoning or emotional process
which can be rational or irrational and can be based on explicit assumptions or tacit
assumptions. Most decisions are followed by some form of cost-benefit analysis.[3]
Rational choice theory encompasses the notion that people try to maximize benefits
while minimizing costs.[4]
Some have argued that most decisions are made unconsciously, if not involuntarily. Jim Nightingale, author of Think Smart – Act Smart, states that "we simply decide without thinking much about the decision process." In a controlled environment, such as a classroom, instructors might try to encourage students to weigh pros and cons before making a decision. This strategy is known as Franklin's rule. However, because such a rule requires time, cognitive resources and full access to relevant information about the decision, this rule may not best describe how people make decisions.[citation needed]
Logical decision-making is an important part of all science-based professions, where specialists apply their knowledge in a given area to make informed decisions. For example, medical decision-making often involves a diagnosis and the selection of appropriate treatment. Some[which?] research using naturalistic methods shows, however, that in situations with higher time pressure, higher stakes, or increased ambiguities, experts use intuitive decision-making rather than structured approaches – following a recognition-primed decision that fits their experience – and arrive at a course of action without weighing alternatives. Recent robust decision research has formally integrated uncertainty into its decision-making model.[citation needed] Decision analysis has recognized and included uncertainties in its theorizing since its conception in 1964.[citation needed]
A major part of decision-making involves the analysis of a finite set of alternatives
described in terms of evaluative criteria. Information overload occurs when there is a
substantial gap between the capacity of information and the ways in which people may
or can adapt. The overload of information can be related to problem≠ processing and
tasking, which effects decision-making.[5] These criteria may be benefit or cost in
nature. Then the problem might be to rank these alternatives in terms of how attractive
they are to the decision-maker(s) when all the criteria are considered simultaneously.
Another goal might be to just find the best alternative or to determine the relative total
priority of each alternative (for instance, if alternatives represent projects competing for
funds) when all the criteria are considered simultaneously. Solving such problems is
the focus of multi-criteria decision analysis (MCDA), also known as multi-criteria
decision-making (MCDM). This area of decision-making, although very old, has
attracted the interest of many researchers and practitioners and is still highly debated
as there are many MCDA/MCDM methods which may yield very different results when
they are applied on exactly the same data.[6] This leads to the formulation of a
decision-making paradox.
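One of the simplest MCDA/MCDM procedures is a weighted-sum ranking, sketched below; the projects, criteria, weights and the convention that higher scores are better are illustrative assumptions, and, as noted above, other MCDA methods can rank the same data differently.

```python
def weighted_sum_ranking(alternatives, weights):
    """alternatives: {name: {criterion: score}}, weights: {criterion: weight}.
    Scores are assumed to be oriented so that higher is better on every criterion."""
    totals = {
        name: sum(weights[c] * score for c, score in scores.items())
        for name, scores in alternatives.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

projects = {                      # hypothetical projects competing for funds
    "Project A": {"benefit": 8, "cost": 3, "risk": 6},
    "Project B": {"benefit": 6, "cost": 7, "risk": 8},
}
weights = {"benefit": 0.5, "cost": 0.3, "risk": 0.2}   # cost and risk rescored so higher = better
print(weighted_sum_ranking(projects, weights))          # ranks B above A for these numbers
```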
Rational and irrational decision-making
In economics, it is thought that if humans are rational and free to make their own
decisions, then they would behave according to rational choice theory.[7] This theory
states that people make decisions by determining the likelihood of a potential outcome,
the value of the outcome and then multiplying the two. For example, with a 50%
chance of winning $20 or a 100% chance of winning $10, people are more likely to choose the first option.[7]
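The probability-times-value calculation can be written out directly; note that for the two options in this example the expected values happen to be equal, so the bare calculation alone does not favour either one.

```python
def expected_value(probability: float, value: float) -> float:
    """Rational-choice estimate: likelihood of the outcome times its value."""
    return probability * value

option_1 = expected_value(0.50, 20)   # 50% chance of $20 -> 10.0
option_2 = expected_value(1.00, 10)   # a sure $10        -> 10.0
print(option_1, option_2)
# The two expected values are equal, so any systematic preference between the
# options reflects factors beyond the bare probability-times-value calculation.
```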
In reality, however, there are some factors that affect decision-making abilities and
cause people to make irrational decisions, one of them being availability bias.
Availability bias is the tendency for some items that are more readily available in
memory to be judged as more frequently occurring.[7] For example, someone who
watches a lot of movies about terrorist attacks may think the frequency of terrorism to
be higher than it actually is.
Information overload
Information overload is "a gap between the volume of information and the tools we
need to assimilate it."[8] Some studies[which?] indicate that the greater the information overload, the worse the quality of the decisions made. There are five factors:
-Personal Information Factors: personal qualifications, experiences, attitudes
etc.
-Information Characteristics: information quality, quantity and frequency
-Tasks and Process: standardized procedures or methods
-Organizational Design: organizations' cooperation, processing capacity and
organization relationship
-Information Technology: IT management, and general technology
Hall, Ariss & Todorov (2007) described an illusion of knowledge, meaning that as individuals encounter too much knowledge it actually interferes with their ability to make rational decisions.[9]
Problem analysis vs. decision-making
It is important to differentiate between problem analysis and decision-making. The
concepts are completely separate from one another. Traditionally, it is argued that
problem analysis must be done first, so that the information gathered in that process
may be used towards decision-making.[10]
Problem analysis
-Analyze performance: what the results should be compared with what they actually are
-Problems are merely deviations from performance standards
-Problem must be precisely identified and described
-Problems are caused by a change from a distinctive feature
-Something can always be used to distinguish between what has and hasn't
been affected by a cause
-Causes of problems can be deduced from relevant changes found in analyzing
the problem
-The most likely cause of a problem is the one that exactly explains all the facts
Decision-making
-Objectives must first be established
-Objectives must be classified and placed in order of importance
-Alternative actions must be developed
-The alternative must be evaluated against all the objectives
-The alternative that is able to achieve all the objectives is the tentative decision
-The tentative decision is evaluated for more possible consequences
-The decisive actions are taken, and additional actions are taken to prevent any
adverse consequences from becoming problems and starting both systems
(problem analysis and decision-making) all over again
-There are steps that are generally followed that result in a decision model that
can be used to determine an optimal production plan.[11]
-In a situation featuring conflict, role-playing may be helpful for predicting
decisions to be made by involved parties.[12]
Decision planning
Making a decision without planning is fairly common, but does not often end well. Planning allows decisions to be made comfortably and intelligently, and makes decision-making much simpler.
Decision-makers gain four benefits from planning: 1. Planning gives the chance to establish independent goals; it is a conscious and directed series of choices. 2. Planning provides a standard of measurement: a measure of whether you are moving towards or further away from your goal. 3. Planning converts values to action: you think twice about the plan and decide what will best help it advance. 4. Planning allows limited resources to be committed in an orderly way; always govern the use of what is limited to you (e.g. money, time).[13]
Analysis paralysis
Analysis paralysis is the state of over-analyzing (or over-thinking) a situation, or citing
sources, so that a decision or action is never taken, in effect paralyzing the outcome.
Everyday techniques
Decision-making techniques can be separated into two broad categories: Group
decision-making and individual decision-making techniques.
Group decision-making techniques
-Consensus decision-making tries to avoid "winners" and "losers". Consensus
requires that a majority approve a given course of action, but that the minority
agree to go along with the course of action. In other words, if the minority
opposes the course of action, consensus requires that the course of action be
modified to remove objectionable features.
-Voting-based methods.
-Range voting lets each member score one or more of the available
options; the option with the highest average score is chosen (a small
computational sketch follows this list). This method has experimentally
been shown to produce the lowest Bayesian regret among common voting
methods, even when voters are strategic.[citation needed]
-Majority requires support from more than 50% of the members of the
group. Thus, the bar for action is lower than with unanimity and a group
of "losers" is implicit to this rule.
-Plurality, where the largest block in a group decides, even if it falls short
of a majority.
-Delphi method is structured communication technique for groups, originally
developed for collaborative forecasting but has also been used for policy
making.
-Dotmocracy is a facilitation method that relies on the use of special forms
called Dotmocracy Sheets to allow large groups to collectively brainstorm and
recognize agreement on an unlimited number of ideas they have authored.
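A minimal sketch of the range-voting rule referenced in the list above: each ballot scores the options it covers, and the option with the highest mean score wins. How abstentions and ties are handled here is an assumption made for illustration.

```python
def range_voting(ballots):
    """ballots: list of {option: score} dicts, one per voter.
    The option with the highest mean score (over the voters who scored it) wins."""
    totals, counts = {}, {}
    for ballot in ballots:
        for option, score in ballot.items():
            totals[option] = totals.get(option, 0) + score
            counts[option] = counts.get(option, 0) + 1
    averages = {option: totals[option] / counts[option] for option in totals}
    return max(averages, key=averages.get), averages

ballots = [
    {"Plan X": 7, "Plan Y": 4, "Plan Z": 2},
    {"Plan X": 5, "Plan Y": 9},                 # a voter may skip an option
    {"Plan X": 6, "Plan Y": 6, "Plan Z": 8},
]
print(range_voting(ballots))   # -> ('Plan Y', {...})
```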
Individual decision-making techniques
-Pros and cons: listing the advantages and disadvantages of each option,
popularized by Plato and Benjamin Franklin.[14][15] Contrast the costs and
benefits of all alternatives. Also called "rational decision-making".
-Simple prioritization: choosing the alternative with the highest probability-weighted utility (see Decision analysis).
-Satisficing: examining alternatives only until an acceptable one is found.
-Elimination by aspects: choosing between alternatives using mathematical
psychology.[16] The technique was introduced by Amos Tversky in 1972. It is a
covert elimination process that involves comparing all available alternatives by
aspects. The decision-maker chooses an aspect; any alternatives without that
aspect are then eliminated. The decision-maker repeats this process with as
many aspects as needed until only one alternative remains[17] (a procedural
sketch follows this list).
-Preference trees: In 1979, Tversky and Shmuel Sattach updated the
elimination by aspects technique by presenting a more ordered and structured
way of comparing the available alternatives. This technique compared the
alternatives by presenting the aspects in a decided and sequential order. It
became a more hierarchical system in which the aspects are ordered from
general to specific [18]
-Acquiesce to a person in authority or an "expert"; "just following orders".
-Flipism: flipping a coin, cutting a deck of playing cards, and other random or
coincidence methods[19]
-Prayer, tarot cards, astrology, augurs, revelation, or other forms of divination.
-Taking the most opposite action compared to the advice of mistrusted
authorities (parents, police officers, partners...)
-Opportunity cost: calculating the opportunity cost of each option and deciding
accordingly.
-Bureaucratic: set up criteria for automated decisions.
-Political: negotiate choices among interest groups.
-Participative decision-making (PDM): a methodology in which a single
decision-maker, in order to take advantage of additional input, opens up the
decision-making process to a group for a collaborative effort.
-Use of a structured decision-making method.[20]
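The elimination-by-aspects procedure flagged in the list above can be sketched as follows; the car options, the aspect order, and the guard against eliminating every remaining alternative are illustrative choices rather than part of Tversky's formal model.

```python
def elimination_by_aspects(alternatives, aspect_order):
    """alternatives: {name: set of aspects the option possesses}.
    Aspects are applied in the given order; alternatives missing the current
    aspect are dropped. Stops as soon as one alternative remains."""
    remaining = dict(alternatives)
    for aspect in aspect_order:
        survivors = {name: aspects for name, aspects in remaining.items()
                     if aspect in aspects}
        if survivors:                 # keep at least one alternative alive
            remaining = survivors
        if len(remaining) == 1:
            break
    return list(remaining)

cars = {
    "Car A": {"under_20k", "automatic", "hybrid"},
    "Car B": {"under_20k", "manual"},
    "Car C": {"automatic", "hybrid"},
}
print(elimination_by_aspects(cars, ["under_20k", "automatic", "hybrid"]))  # ['Car A']
```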
Individual decision-making techniques can often be applied by a group as part of a
group decision-making technique.
A need to use software for a decision-making process is emerging for individuals and
businesses. This is due to increasing decision complexity and an increase in the need
to consider additional stakeholders, categories, elements or other factors that affect
decisions.
Stages of group decision-making
According to B. Aubrey Fisher,[citation needed] there are four stages or phases that
should be involved in all group decision-making:
-Orientation. Members meet for the first time and start to get to know each
other.
-Conflict. Once group members become familiar with each other, disputes, little
fights and arguments occur. Group members eventually work it out.
-Emergence. The group begins to clear up vague opinions by talking about
them.
-Reinforcement. Members finally make a decision and provide justification for it.
It is said that critical norms in a group improve the quality of decisions, while majority opinions (called consensus norms) do not. This is attributed to collaboration: as group members get used to, and familiar with, each other, they will tend to argue and debate more in order to agree upon one decision. This does not mean that all group members fully agree; they may not want to argue further just to be liked by other group members or to "fit in".[21]
Decision-making steps
Each step in the decision-making process may include social, cognitive and cultural
obstacles to successfully negotiating dilemmas. It has been suggested that becoming
more aware of these obstacles allows one to better anticipate and overcome them.[22]
The Arkansas program presents eight stages of moral decision-making based on the
work of James Rest:
1-Establishing community: creating and nurturing the relationships, norms, and
procedures that will influence how problems are understood and communicated.
This stage takes place prior to and during a moral dilemma.
2-Perception: recognizing that a problem exists.
3-Interpretation: identifying competing explanations for the problem, and
evaluating the drivers behind those interpretations.
4-Judgment: sifting through various possible actions or responses and
determining which is more justifiable.
5-Motivation: examining the competing commitments which may distract from a
more moral course of action and then prioritizing and committing to moral
values over other personal, institutional or social values.
6-Action: following through with action that supports the more justified decision.
Integrity is supported by the ability to overcome distractions and obstacles,
developing implementing skills, and ego strength.
7-Reflection in action.
8-Reflection on action.
Other decision-making processes have also been proposed. One such process,
proposed by Pam Brown of Singleton Hospital in Swansea, Wales, breaks decision-making down into seven steps:[23]
1-Outline your goal and outcome.
2-Gather data.
3-Develop alternatives (i.e., brainstorming)
4-List pros and cons of each alternative.
5-Make the decision.
6-Immediately take action to implement it.
7-Learn from and reflect on the decision.
Cognitive and personal biases
Biases usually creep into decision-making processes. Researchers study how many different people make decisions about the same question (e.g. "Should I have a doctor look at this troubling breast cancer symptom I've discovered?" "Why did I ignore the evidence that the project was going over budget?") and then craft potential cognitive interventions aimed at improving the outcome of decision-making.
Here is a list of commonly-debated biases in judgment and decision-making.
-Selective search for evidence (aka confirmation bias; Scott Plous, 1993).
People tend to be willing to gather facts that support certain conclusions but
disregard other facts that support different conclusions. Individuals who are
highly defensive in this manner show significantly greater left prefrontal cortex
activity as measured by EEG than do less defensive individuals.[24]
-Premature termination of search for evidence. People tend to accept the first
alternative that looks like it might work.
-Cognitive inertia. Unwillingness to change existing thought patterns in the face
of new circumstances.
-Selective perception. We actively screen-out information that we do not think is
important (see also prejudice). In one demonstration of this effect, discounting
of arguments with which one disagrees (by judging them as untrue or irrelevant)
was decreased by selective activation of right prefrontal cortex.[25]
-Wishful thinking. A tendency to want to see things in a certain – usually positive
– light, which can distort perception and thinking.[26]
-Choice-supportive bias occurs when people distort their memories of chosen
and rejected options to make the chosen options seem more attractive.
-Recency. People tend to place more attention on more recent information and
either ignore or forget more distant information (see semantic priming). The
opposite effect, giving extra weight to the first information encountered, is
termed the primacy effect.[27]
-Repetition bias. A willingness to believe what one has been told most often and
by the greatest number of different sources.
-Anchoring and adjustment. Decisions are unduly influenced by initial
information that shapes our view of subsequent information.
-Group think. Peer pressure to conform to the opinions held by the group.
-Source credibility bias. A tendency to reject a person's statement on the basis
of a bias against the person, organization, or group to which the person
belongs. People preferentially accept statements by others whom they like (see
prejudice).
-Incremental decision-making and escalating commitment. We look at a
decision as a small step in a process and this tends to perpetuate a series of
similar decisions. This can be contrasted with "zero-based decision-making"
(see slippery slope).
-Attribution asymmetry. People tend to attribute their own success to internal
factors, including abilities and talents, but explain their failures in terms of
external factors such as bad luck. The reverse bias is shown when people
explain others' success or failure.
-Role fulfillment. A tendency to conform to others' decision-making
expectations.
-Underestimating uncertainty and the illusion of control. People tend to
underestimate future uncertainty because of a tendency to believe they have
more control over events than they really do.
-Framing bias. This is best avoided by using numeracy with absolute measures
of efficacy.[28]
-Sunk-cost fallacy. A specific type of framing effect that affects decision-making.
It involves an individual making a decision about a current situation based on
what they have previously invested in the situation.[29] A possible example
would be an individual refraining from dropping a class that they are likely to
fail because they feel they have already done so much work in the course.
-Prospect theory. Involves the idea that when faced with a decision-making
event, an individual is more likely to take on risk when evaluating potential
losses, and more likely to avoid risk when evaluating potential gains. This can
influence one's decision-making depending on whether the situation entails a
threat or an opportunity[30] (a sketch of the associated value function follows
this list).
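The asymmetry described under prospect theory is usually formalized with a value function that is concave for gains, convex for losses, and steeper for losses. The sketch below uses the commonly cited parameter estimates α = β = 0.88 and λ = 2.25 as illustrative defaults; the text above does not itself give a formula.

```python
def prospect_value(x: float, alpha: float = 0.88, beta: float = 0.88,
                   lam: float = 2.25) -> float:
    """Prospect-theory value function: concave for gains, convex and
    steeper (loss aversion, lam > 1) for losses."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

# A $100 loss 'hurts' more than a $100 gain 'pleases'.
print(round(prospect_value(100), 1))    # ~57.5
print(round(prospect_value(-100), 1))   # ~-129.5
```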
Reference class forecasting was developed to eliminate or reduce cognitive biases in
decision-making.
Post-decision analysis
Evaluation and analysis of past decisions is complementary to decision-making; see
also mental accounting and postmortem documentation.
Cognitive styles
Influence of Myers-Briggs type
According to behavioralist Isabel Briggs Myers, a person's decision-making process
depends to a significant degree on their cognitive style.[31] Myers developed a set of
four bi-polar dimensions, called the Myers-Briggs Type Indicator (MBTI). The terminal
points on these dimensions are: thinking and feeling; extroversion and introversion;
judgment and perception; and sensing and intuition. She claimed that a person's
decision-making style correlates well with how they score on these four dimensions.
For example, someone who scored near the thinking, extroversion, sensing, and
judgment ends of the dimensions would tend to have a logical, analytical, objective,
critical, and empirical decision-making style. However, some[who?] psychologists say
that the MBTI lacks reliability and validity and is poorly constructed.
Other studies suggest that national or cross-cultural differences in decision-making exist
across entire societies. For example, Maris Martinsons has found that American, Japanese
and Chinese business leaders each exhibit a distinctive national style of decision-making.[32]
Optimizing vs. satisficing
Herbert A. Simon coined the phrase "bounded rationality" to express the idea that
human decision-making is limited by available information, available time and the
mind's information-processing ability. Simon also defined two cognitive styles:
maximizers try to make an optimal decision, whereas satisficers simply try to find a
solution that is "good enough". Maximizers tend to take longer making decisions due to
the need to maximize performance across all variables and make tradeoffs carefully;
they also tend to more often regret their decisions (perhaps because they are more
able than satisficers to recognize that a decision turned out to be sub-optimal).[33]
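The contrast between the two styles can be made concrete with a small sketch. The scoring
function, the list of options, and the "good enough" threshold below are hypothetical,
chosen only to illustrate the difference, not drawn from Simon's work.

```python
# Illustrative sketch of the two styles described above. The options, scoring
# function, and "good enough" threshold are hypothetical.

def maximize(options, score):
    """Maximizer: evaluate every option and return the best one."""
    return max(options, key=score)

def satisfice(options, score, good_enough):
    """Satisficer: return the first option whose score clears the threshold."""
    for option in options:
        if score(option) >= good_enough:
            return option
    return None  # no acceptable option found

if __name__ == "__main__":
    apartments = [("A", 6.5), ("B", 7.2), ("C", 9.1), ("D", 8.8)]
    score = lambda apt: apt[1]
    print("maximizer picks: ", maximize(apartments, score))        # ("C", 9.1)
    print("satisficer picks:", satisfice(apartments, score, 7.0))  # ("B", 7.2)
```

The maximizer inspects every option before committing, while the satisficer stops at the
first acceptable one, which is one way to read the time and regret differences described
above.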
Combinatorial vs. positional
Styles and methods of decision-making were elaborated by Aron Katsenelinboigen, the
founder of predispositioning theory. In his analysis on styles and methods,
Katsenelinboigen referred to the game of chess, saying that “chess does disclose
various methods of operation, notably the creation of predisposition – methods which
may be applicable to other, more complex systems.”[34]
In his book, Katsenelinboigen states that apart from the methods (reactive and
selective) and sub-methods (randomization, predispositioning, programming), there are
two major styles: positional and combinational. Both styles are utilized in the game of
chess. According to Katsenelinboigen, the two styles reflect two basic approaches to
the uncertainty: deterministic (combinational style) and indeterministic (positional style).
Katsenelinboigen’s definitions of the two styles are as follows.
The combinational style is characterized by:
-a very narrow, clearly defined, primarily material goal; and
-a program that links the initial position with the final outcome.
In defining the combinational style in chess, Katsenelinboigen writes:
The combinational style features a clearly formulated limited objective, namely the
capture of material (the main constituent element of a chess position). The objective is
implemented via a well-defined, and in some cases, unique sequence of moves aimed
at reaching the set goal. As a rule, this sequence leaves no options for the opponent.
Finding a combinational objective allows the player to focus all his energies on efficient
execution, that is, the player’s analysis may be limited to the pieces directly partaking in
the combination. This approach is the crux of the combination and the combinational
style of play.[34]
The positional style is distinguished by:
-a positional goal; and
-a formation of semi-complete linkages between the initial step and final
outcome.
“Unlike the combinational player, the positional player is occupied, first and foremost,
with the elaboration of the position that will allow him to develop in the unknown future.
In playing the positional style, the player must evaluate relational and material
parameters as independent variables. ... The positional style gives the player the
opportunity to develop a position until it becomes pregnant with a combination.
However, the combination is not the final goal of the positional player—it helps him to
achieve the desirable, keeping in mind a predisposition for the future development. The
pyrrhic victory is the best example of one’s inability to think positionally."[35]
The positional style serves to:
-create a predisposition to the future development of the position;
-induce the environment in a certain way;
-absorb an unexpected outcome in one’s favor;
-avoid the negative aspects of unexpected outcomes.
Katsenelinboigen writes:
"As the game progressed and defense became more sophisticated the
combinational style of play declined. ... The positional style of chess does not
eliminate the combinational one with its attempt to see the entire program of
action in advance. The positional style merely prepares the transformation to a
combination when the latter becomes feasible.”[36]
Neuroscience
The anterior cingulate cortex (ACC), orbitofrontal cortex and the overlapping
ventromedial prefrontal cortex are brain regions involved in decision-making processes.
A recent neuroimaging study[37] found distinctive patterns of neural activation in these
regions depending on whether decisions were made on the basis of perceived personal
volition or following directions from someone else. Patients with damage to the
ventromedial prefrontal cortex have difficulty making advantageous decisions.[38]
A recent study[39] of a two-alternative forced choice task involving rhesus monkeys
found that neurons in the parietal cortex not only represent the formation of a decision
but also signal the degree of certainty (or "confidence") associated with the decision.
Another recent study[40] found that lesions to the ACC in the macaque resulted in
impaired decision-making over the long run of reinforcement-guided tasks, suggesting
that the ACC may be involved in evaluating past reinforcement information and guiding
future action.
Emotion appears able to aid the decision-making process. Decision-making often
occurs in the face of uncertainty about whether one's choices will lead to benefit or
harm (see also risk). The somatic-marker hypothesis is a neurobiological theory of how
decisions are made in the face of uncertain outcome. This theory holds that such
decisions are aided by emotions, in the form of bodily states, that are elicited during the
deliberation of future consequences and that mark different options for behavior as
being advantageous or disadvantageous. This process involves an interplay between
neural systems that elicit emotional/bodily states and neural systems that map these
emotional/bodily states.[41]
Although it is unclear whether the studies generalize to all processing, subconscious
processes have been implicated in the initiation of conscious volitional movements.
See the Neuroscience of free will.
Decision-making in adolescents vs. adults
During their adolescent years, teens are known for their high-risk behaviors and rash
decisions. There has not, however, been that much research in this area. Recent
research[citation needed] has shown, though, that there are some differences in
cognitive processes between adolescents and adults during decision-making.
Researchers have concluded that differences in decision-making are not due to a lack
of logic or reasoning, but more due to the immaturity of psychosocial capacities,
capacities that influence decision-making. Examples would be impulse control, emotion
regulation, delayed gratification and resistance to peer pressure. In the past,
researchers have thought that adolescent behavior was simply due to incompetency
regarding decision-making. Currently, researchers have concluded that adolescents,
like adults, are competent decision-makers. However, adolescents' competent
decision-making skills decline when psychosocial influences, such as emotional arousal
or peer pressure, come into play.
Recent research[citation needed] has shown that risk-taking behaviors in adolescents
may be the product of interactions between the socioemotional brain network and its
cognitive-control network. The socioemotional part of the brain processes social and
emotional stimuli and has been shown to be important in reward processing. The
cognitive-control network assists in planning and self-regulation. Both of these sections
of the brain change over the course of puberty. However, the socioemotional network
changes quickly and abruptly, while the cognitive-control network changes more
gradually. Because of this difference in developmental timing, the cognitive-control
network, which usually regulates the socioemotional network, struggles to keep the
socioemotional network in check when social and emotional influences are strong.
When adolescents are exposed to social and emotional stimuli, their socioemotional
network is activated as well as areas of the brain involved in reward processing.
Because teens often gain a sense of reward from risk-taking behaviors, their repetition
becomes ever more probable due to the reward experienced. In this, the process
mirrors addiction. Teens can become addicted to risky behavior because they are in a
high state of arousal and are rewarded for it not only by their own internal functions but
also by their peers around them.
Adults are generally better able to control their risk-taking because their
cognitive-control system has matured to the point where it can regulate the
socioemotional network, even in contexts of high arousal or strong social and
emotional pressure. Adults are also less likely to find themselves in situations that
push them to do risky things. For example, teens are more likely to be around peers
who peer pressure them into doing things, while adults are not as exposed to this sort
of social setting.[42][43]
Problem Solving
Problem-solving consists of using generic or ad hoc methods, in an orderly manner, for
finding solutions to problems. Some of the problem-solving techniques developed and
used in artificial intelligence, computer science, engineering, mathematics, medicine,
etc. are related to mental problem-solving techniques studied in psychology.
Contents
1 Definition
1.1 Psychology
1.2 Clinical Psychology
1.3 Cognitive Sciences
1.4 Computer Science and Algorithmics
1.5 Engineering
2 Cognitive Sciences: Two Schools
2.1 Europe
2.2 North America
3 Characteristics of Difficult Problems
4 Problem-Solving Strategies
5 Problem-Solving Methodologies
6 Common barriers to problem solving
6.1 Confirmation Bias
6.2 Mental Set
6.2.1 Functional Fixedness
6.3 Unnecessary Constraints
6.4 Irrelevant Information
Definition
The term problem-solving is used in many disciplines, sometimes with different
perspectives, and often with different terminologies. For instance, it is a mental process
in psychology and a computerized process in computer science. Problems can also be
classified into two different types (ill-defined and well-defined) from which appropriate
solutions are to be made. Ill-defined problems are those that do not have clear goals,
solution paths, or expected solutions. Well-defined problems have specific goals, clearly
defined solution paths, and clear expected solutions; they also allow for more initial
planning than ill-defined problems.[1] Being able to solve problems sometimes involves
dealing with pragmatics (logic) and semantics (interpretation of the problem). The ability
to understand the goal of the problem and the rules that could be applied represents the
key to solving the problem. Sometimes a problem requires abstract thinking or a creative
solution.
Psychology
In psychology, problem solving refers to a state of desire for reaching a definite 'goal'
from a present condition that either is not directly moving toward the goal, is far from it,
or needs more complex logic for finding a missing description of conditions or steps
toward the goal.[2] In psychology, problem solving is the concluding part of a larger
process that also includes problem finding and problem shaping.
Considered the most complex of all intellectual functions, problem solving has been
defined as a higher-order cognitive process that requires the modulation and control of
more routine or fundamental skills.[3] Problem solving has two major domains:
mathematical problem solving and personal problem solving where, in the second,
some difficulty or barrier is encountered.[4] Further problem solving occurs when
moving from a given state to a desired goal state is needed for either living organisms
or an artificial intelligence system.
While problem solving has accompanied humans since the very beginning of human
evolution, and especially throughout the history of mathematics,[4] the nature of human
problem-solving processes and methods has been studied by psychologists over the past
hundred years. Methods of studying problem solving include introspection, behaviorism,
simulation, computer modeling, and experiment. Social psychologists have recently
distinguished between independent and interdependent problem solving.[5]
Clinical Psychology
Simple laboratory-based tasks can be useful in explicating the steps of logic and
reasoning that underlie problem solving; however, they usually omit the complexity and
emotional valence of "real-world" problems. In clinical psychology, researchers have
focused on the role of emotions in problem solving (D'Zurilla & Goldfried, 1971;
D'Zurilla & Nezu, 1982), demonstrating that poor emotional control can disrupt focus on
the target task and impede problem resolution (Rath, Langenbahn, Simon, Sherr, &
Diller, 2004). In this conceptualization, human problem solving consists of two related
processes: problem orientation (the motivational/attitudinal/affective approach to
problematic situations) and problem-solving skills. Working with individuals with frontal
lobe injuries, neuropsychologists have discovered that deficits in emotional control and
reasoning can be remedied, improving the capacity of injured persons to resolve
everyday problems successfully (Rath, Simon, Langenbahn, Sherr, & Diller, 2003).
Cognitive Sciences
The early experimental work of the Gestaltists in Germany marked the beginning of the
study of problem solving (e.g., Karl Duncker in 1935 with his book The psychology of
productive thinking [6]). This experimental work continued through the 1960s and
early 1970s with research conducted on relatively simple (but novel for participants)
laboratory tasks of problem solving.[7][8] Simple novel tasks were chosen because they
had clearly defined optimal solutions and could be solved quickly, which made it possible
for researchers to trace participants' steps in the problem-solving process. The
researchers' underlying assumption was that simple tasks such as the Tower of Hanoi
correspond to the main properties of "real world" problems, and thus that the
characteristic cognitive processes involved in solving simple problems are the same for
"real world" problems too; simple problems were used for reasons of convenience, with
the expectation that generalizations to more complex problems would become possible.
Perhaps the best-known and most impressive
example of this line of research is the work by Allen Newell and Herbert A. Simon.[9]
Other experts have shown that the principle of decomposition improves the ability of
the problem solver to make good judgment.[10]
Computer Science and Algorithmics
In computer science and in the part of artificial intelligence that deals with algorithms
("algorithmics"), problem solving encompasses a number of techniques known as
algorithms, heuristics, root cause analysis, etc. In these disciplines, problem solving is
part of a larger process that encompasses problem determination, de-duplication,
analysis, diagnosis, repair, etc.
Engineering
Problem solving is used in engineering when products or processes fail, so corrective
action can be taken to prevent further failures. It can also be applied to a product or
process prior to an actual failure event, i.e., when a potential problem can be predicted
and analyzed, and mitigation applied so the problem never actually occurs. Techniques
such as Failure Mode Effects Analysis can be used to proactively reduce the likelihood
of problems occurring.
Forensic engineering is an important technique of failure analysis that involves tracing
product defects and flaws. Corrective action can then be taken to prevent further
failures.
Reverse engineering attempts to discover the original problem-solving logic used in
developing a product by taking it apart.
Cognitive Sciences: Two Schools
In cognitive sciences, researchers' realization that problem-solving processes differ
across knowledge domains and across levels of expertise (e.g. Sternberg, 1995) and
that, consequently, findings obtained in the laboratory cannot necessarily generalize to
problem-solving situations outside the laboratory, has led to an emphasis on real-world
problem solving since the 1990s. This emphasis has been expressed quite differently
in North America and Europe, however. Whereas North American research has
typically concentrated on studying problem solving in separate, natural knowledge
domains, much of the European research has focused on novel, complex problems,
and has been performed with computerized scenarios (see Funke, 1991, for an
overview).
Europe
In Europe, two main approaches have surfaced, one initiated by Donald Broadbent
(1977; see Berry & Broadbent, 1995) in the United Kingdom and the other one by
Dietrich Dörner (1975, 1985; see Dörner & Wearing, 1995) in Germany. The two
approaches share an emphasis on relatively complex, semantically rich, computerized
laboratory tasks, constructed to resemble real-life problems. The approaches differ
somewhat in their theoretical goals and methodology, however. The tradition initiated
by Broadbent emphasizes the distinction between cognitive problem-solving processes
that operate under awareness versus outside of awareness, and typically employs
mathematically well-defined computerized systems. The tradition initiated by Dörner,
on the other hand, has an interest in the interplay of the cognitive, motivational, and
social components of problem solving, and utilizes very complex computerized
scenarios that contain up to 2,000 highly interconnected variables (e.g., Dörner,
Kreuzig, Reither & Stäudel's 1983 LOHHAUSEN project; Ringelband, Misiak & Kluwe,
1990). Buchner (1995) describes the two traditions in detail.
North America
In North America, initiated by the work of Herbert A. Simon on "learning by doing" in
semantically rich domains (e.g. Anzai & Simon, 1979; Bhaskar & Simon, 1977),
researchers began to investigate problem solving separately in different natural
knowledge domains – such as physics, writing, or chess playing – thus relinquishing
their attempts to extract a global theory of problem solving (e.g. Sternberg & Frensch,
1991). Instead, these researchers have frequently focused on the development of
problem solving within a certain domain, that is on the development of expertise (e.g.
Anderson, Boyle & Reiser, 1985; Chase & Simon, 1973; Chi, Feltovich & Glaser,
1981).
Areas that have attracted rather intensive attention in North America include:
Reading (Stanovich & Cunningham, 1991)
Writing (Bryson, Bereiter, Scardamalia & Joram, 1991)
Calculation (Sokol & McCloskey, 1991)
Political decision making (Voss, Wolfe, Lawrence & Engle, 1991)
Problem Solving for Business (Cornell, 2010)
Managerial problem solving (Wagner, 1991)
Lawyers' reasoning (Amsel, Langer & Loutzenhiser, 1991)
Mechanical problem solving (Hegarty, 1991)
Problem solving in electronics (Lesgold & Lajoie, 1991)
Computer skills (Kay, 1991)
Game playing (Frensch & Sternberg, 1991)
Personal problem solving (Heppner & Krauskopf, 1987)
Mathematical problem solving (Pólya, 1945; Schoenfeld, 1985)
Social problem solving (D'Zurilla & Goldfried, 1971; D'Zurilla & Nezu, 1982)
Problem solving for innovations and inventions: TRIZ (Altshuller, 1973, 1990,
1995)
Characteristics of Difficult Problems
As elucidated by Dietrich Dörner and later expanded upon by Joachim Funke, difficult
problems have some typical characteristics that can be summarized as follows:
-Intransparency (lack of clarity of the situation)
commencement opacity
continuation opacity
-Polytely (multiple goals)
inexpressiveness
opposition
transience
-Complexity (large numbers of items, interrelations and decisions)
enumerability
connectivity (hierarchy relation, communication relation, allocation
relation)
heterogeneity
-Dynamics (time considerations)
temporal constraints
temporal sensitivity
phase effects
dynamic unpredictability
The resolution of difficult problems requires a direct attack on each of these
characteristics that are encountered.[11]
Problem-Solving Strategies
Problem-solving strategies are the steps one uses to identify and overcome the obstacles
standing between oneself and a goal. Some refer to this as the "problem-solving cycle"
(Bransford & Stein, 1993). In this cycle one recognizes the problem, defines the problem,
develops a strategy to fix the problem, organizes one's knowledge of the problem, figures
out the resources at one's disposal, monitors progress, and evaluates the solution for
accuracy. Although called a cycle, one does not have to complete each step in order to
solve the problem; in fact, those who skip steps are often better at problem
solving.[citation needed] The reason it is called a cycle is that once one problem is
completed another usually appears. Blanchard-Fields (2007) looks at problem solving
from one of two facets: the first concerns problems that have only one solution (such as
mathematical problems or fact-based questions), which are grounded in psychometric
intelligence; the other concerns problems that are socioemotional in nature and
unpredictable, with answers that constantly change (such as one's favorite color or what
to get someone for Christmas).
The following techniques are usually called problem-solving strategies:
-Abstraction: solving the problem in a model of the system before applying it to
the real system
-Analogy: using a solution that solves an analogous problem
-Brainstorming: (especially among groups of people) suggesting a large number
of solutions or ideas and combining and developing them until an optimum
solution is found
-Divide and conquer: breaking down a large, complex problem into smaller,
solvable problems
-Hypothesis testing: assuming a possible explanation to the problem and trying
to prove (or, in some contexts, disprove) the assumption
-Lateral thinking: approaching solutions indirectly and creatively
-Means-ends analysis: choosing an action at each step to move closer to the
goal
-Method of focal objects: synthesizing seemingly non-matching characteristics
of different objects into something new
-Morphological analysis: assessing the output and interactions of an entire
system
-Proof: try to prove that the problem cannot be solved. The point where the
proof fails will be the starting point for solving it
-Reduction: transforming the problem into another problem for which solutions
exist
-Research: employing existing ideas or adapting existing solutions to similar
problems
-Root cause analysis: identifying the cause of a problem
-Trial-and-error: testing possible solutions until the right one is found
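Two of the strategies listed above, trial-and-error and divide and conquer, can be
contrasted with a small sketch on the same hypothetical task, locating an unknown value
in a sorted range. The task and numbers are illustrative only.

```python
# Hypothetical illustration of two strategies from the list above:
# trial-and-error (test candidates one by one) versus divide and conquer
# (repeatedly split the search space in half).

def trial_and_error(secret, low, high):
    """Test every candidate until one works; returns (answer, guesses used)."""
    guesses = 0
    for candidate in range(low, high + 1):
        guesses += 1
        if candidate == secret:
            return candidate, guesses
    return None, guesses

def divide_and_conquer(secret, low, high):
    """Binary search: each guess halves the remaining range."""
    guesses = 0
    while low <= high:
        guesses += 1
        mid = (low + high) // 2
        if mid == secret:
            return mid, guesses
        if mid < secret:
            low = mid + 1
        else:
            high = mid - 1
    return None, guesses

if __name__ == "__main__":
    print(trial_and_error(871, 1, 1000))     # finds 871 after 871 guesses
    print(divide_and_conquer(871, 1, 1000))  # finds 871 in about 10 guesses
```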
Problem-Solving Methodologies
-Eight Disciplines Problem Solving
-GROW model
-How to Solve It
-Kepner-Tregoe Problem Solving and Decision Making
-OODA loop (observe, orient, decide, and act)
-PDCA (plan–do–check–act)
-RPR Problem Diagnosis (rapid problem resolution)
-TRIZ (in Russian: Teoriya Resheniya Izobretatelskikh Zadatch, "theory of
solving inventor's problems")
Common barriers to problem solving
Common barriers to problem solving are mental constructs that impede our ability to
correctly solve problems. These barriers prevent people from solving problems in the
most efficient manner possible. Five of the most common processes and factors that
researchers have identified as barriers to problem solving are confirmation bias, mental
set, functional fixedness, unnecessary constraints, and irrelevant information.
Confirmation Bias
Within the field of science there exists a fundamental standard, the scientific method,
which outlines the process of discovering facts or truths about the world through
unbiased consideration of all pertinent information, and impartial observation of and/or
experimentation with that information. According to this standard, one is able to most
accurately find a solution to a perceived problem by performing the aforementioned
steps. The scientific method is not a process limited to scientists; rather, it is one that
all people can practice in their respective fields of work as well as in their personal
lives. Confirmation bias can be described as one's unconscious or unintentional
corruption of the scientific method: when one demonstrates confirmation bias, one
formally or informally collects data and then observes and experiments with that data
in a way that favors a preconceived notion, which may or may not be motivated.[12] Research
has found that professionals within scientific fields of study also experience
confirmation bias. In Andreas Hergovich, Reinhard Schott, and Christoph Burger's
experiment conducted online, for instance, it was discovered that professionals within
the field of psychological research are likely to view scientific studies that are congruent
with their preconceived understandings more favorably than studies that are
incongruent with their established beliefs.[13]
Motivation refers to one’s desire to defend or find substantiation for beliefs (e.g.,
religious beliefs) that are important to him or her.[14] According to Raymond Nickerson,
one can see the consequences of confirmation bias in real life situations, which range
in severity from inefficient government policies to genocide. With respect to the latter
and most severe ramification of this cognitive barrier, Nickerson argued that those
involved in committing genocide of persons accused of witchcraft, an atrocity that
occurred from the 1400s to 1600s AD, demonstrated confirmation bias with motivation.
Researcher Michael Allen found evidence for confirmation bias with motivation in
school children who worked to manipulate their science experiments in such a way that
would produce their hoped for results.[15] However, confirmation bias does not
necessarily require motivation. In 1960, Peter Cathcart Wason conducted an
experiment in which participants first viewed three numbers and then created a
hypothesis that proposed a rule that could have been used to create that triplet of
numbers. When testing their hypotheses, participants tended to only create additional
triplets of numbers that would confirm their hypotheses, and tended not to create
triplets that would negate or disprove their hypotheses. Thus research also shows that
people can and do work to confirm theories or ideas that do not support or engage
personally significant beliefs.[16]
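The logic of Wason's task can be sketched in code. In the commonly reported version the
seed triplet is 2-4-6 and the experimenter's hidden rule is simply that the numbers ascend;
the candidate hypothesis below ("numbers increasing by two") is the kind participants
typically form. These details come from standard accounts of the experiment rather than
from the text above.

```python
# Sketch of the logic behind Wason's rule-discovery task. The seed triplet
# (2, 4, 6) and the hidden rule ("any ascending sequence") follow commonly
# reported accounts of the experiment; the hypothesis below is a typical guess.

def hidden_rule(triplet):
    """Experimenter's actual rule: the numbers simply increase."""
    a, b, c = triplet
    return a < b < c

def hypothesis(triplet):
    """A typical participant hypothesis: numbers increasing in steps of two."""
    a, b, c = triplet
    return b - a == 2 and c - b == 2

confirming_tests = [(8, 10, 12), (20, 22, 24), (100, 102, 104)]
disconfirming_tests = [(1, 2, 3), (5, 10, 20), (3, 2, 1)]

# Confirmatory tests pass both the hypothesis and the hidden rule, so they can
# never reveal that the hypothesis is too narrow.
for t in confirming_tests:
    print(t, "hypothesis:", hypothesis(t), "actual rule:", hidden_rule(t))

# Tests that violate the hypothesis are the ones that expose the mismatch:
# (1, 2, 3) and (5, 10, 20) break the hypothesis yet still fit the rule.
for t in disconfirming_tests:
    print(t, "hypothesis:", hypothesis(t), "actual rule:", hidden_rule(t))
```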
Mental Set
Mental set was first articulated by Abraham Luchins in the 1940s and demonstrated in
his well-known water jug experiments.[17] In these experiments, participants were
asked to fill one jug with a specific amount of water using only other jugs (typically
three) with different maximum capacities as tools. After Luchins gave his participants a
set of water jug problems that could all be solved by employing a single technique, he
would then give them a problem that could either be solved using that same technique
or a novel and simpler method. Luchins discovered that his participants tended to use
the same technique that they had become accustomed to despite the possibility of
using a simpler alternative.[18] Thus mental set describes one's inclination to attempt
to solve problems in such a way that has proved successful in previous experiences.
However, as Luchins' work revealed, such methods for finding a solution that have
worked in the past may not be adequate or optimal for certain new but similar
problems. Therefore, it is often necessary for people to move beyond their mental sets
in order to find solutions. This was again demonstrated in Norman Maier's 1931
experiment, which challenged participants to solve a problem by using a household
object (pliers) in an unconventional manner. Maier observed that participants were
often unable to view the object in a way that strayed from its typical use, a
phenomenon regarded as a particular form of mental set (more specifically known as
functional fixedness, which is the topic of the following section). When people cling
rigidly to their mental sets, they are said to be experiencing fixation, a seeming
obsession or preoccupation with attempted strategies that are repeatedly
unsuccessful.[19] In the late 1990s, researcher Jennifer Wiley worked to reveal that
expertise can work to create a mental set in persons considered to be experts in
certain fields, and she furthermore gained evidence that the mental set created by
expertise could lead to the development of fixation.[20]
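The structure of Luchins' demonstration can be sketched numerically. The jug capacities
and target amounts below are commonly cited examples from accounts of his experiments
and should be read as illustrative: the training problems are solvable by the practiced
formula B - A - 2C, while the critical problem can also be solved by the far simpler A - C.

```python
# Sketch of the logic behind Luchins' water jug demonstration. The capacities
# and targets are commonly cited examples, used here only as illustration.

def practiced_method(a, b, c):
    """The technique the training problems establish: fill B, pour off A once
    and C twice, leaving B - A - 2*C."""
    return b - a - 2 * c

def simple_method(a, c):
    """The simpler route available on the critical problem: A - C."""
    return a - c

training_problem = {"a": 21, "b": 127, "c": 3, "goal": 100}
critical_problem = {"a": 23, "b": 49, "c": 3, "goal": 20}

for name, p in [("training", training_problem), ("critical", critical_problem)]:
    by_practice = practiced_method(p["a"], p["b"], p["c"])
    by_simple = simple_method(p["a"], p["c"])
    print(name,
          "| practiced B-A-2C ->", by_practice, by_practice == p["goal"],
          "| simple A-C ->", by_simple, by_simple == p["goal"])
```

Participants who had practiced B - A - 2C kept applying it to the critical problem even
though the shorter A - C route also reaches the goal, which is the mental set effect
described above.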
Functional Fixedness
Functional fixedness is a specific form of mental set and fixation, which was alluded to
earlier in the Maier experiment, and furthermore it is another way in which cognitive
bias can be seen throughout daily life. Tim German and Clark Barrett describe this
barrier as the fixed design of an object hindering the individual's ability to see it serving
other functions. In more technical terms, these researchers explained that “[s]ubjects
become “fixed” on the design function of the objects, and problem solving suffers
relative to control conditions in which the object’s function is not demonstrated.”[21]
Functional fixedness means that knowledge of an object's primary function hinders
one's ability to see the object serving any purpose other than its original one. In
research that highlighted the primary reasons that young children are immune to
functional fixedness, it was stated that “functional fixedness...[is when]subjects are
hindered in reaching the solution to a problem by their knowledge of an object’s
conventional function.”[22] Furthermore, it is important to note that functional fixedness
arises easily in commonplace situations. For instance, imagine the following: a man
sees a bug on the floor that he wants to kill, but the only thing in his hand is a can of
air freshener. If the man starts looking around the house for something to kill the bug
with instead of realizing that the can of air freshener could itself be used for that
purpose, he is experiencing functional fixedness. His knowledge of the can as purely an
air freshener hinders his ability to realize that it could also serve another purpose, in
this instance as an instrument to kill the bug. Functional fixedness can arise in many
situations and can produce cognitive bias: if we see an object as serving only one
primary function, we fail to realize that it can be used in ways other than its intended
purpose, which in turn causes problems for problem solving.
Common sense might seem a plausible answer to functional fixedness, since it appears
rather simple to consider alternative uses for an object. In the example above, it seems
obvious to use the can of air freshener to kill the bug rather than to search for
something else to serve that function; but, as research shows, this is often not what
happens.
Functional fixedness limits the ability for people to solve problems accurately by
causing one to have a very narrow way of thinking. Functional fixedness can be seen in
other types of learning behaviors as well. For instance, research has discovered the
presence of functional fixedness in many educational instances. Researchers Furio,
Calatayud, Baracenas, and Padilla stated that “... functional fixedness may be found in
learning concepts as well as in solving chemistry problems.”[23] Functional fixedness
has thus been emphasized in these subject areas as well as in others.
There are several hypotheses regarding how functional fixedness relates to problem
solving.[24] There are also many ways in which thinking of an object only in terms of
its usual function can cause difficulty. If a person usually thinks of something in one
way rather than in multiple ways, this constrains how the person thinks about that
object. This can be seen as narrow-minded thinking, an inability to see or accept certain
ideas in a particular context. Functional fixedness, as previously mentioned, is very
closely related to this. It can occur intentionally or unintentionally, but for the most part
this way of approaching problems seems to be unintentional.
Functional fixedness can affect problem solvers in at least two particular ways. The first
is with regards to time, as functional fixedness causes people to use more time than
necessary to solve any given problem. Secondly, functional fixedness often causes
solvers to make more attempts to solve a problem than they would have made if they
were not experiencing this cognitive barrier. In the worst case, functional fixedness can
completely prevent a person from realizing a solution to a problem. Functional
fixedness is a commonplace occurrence, which affects the lives of many people.
Unnecessary Constraints
Unnecessary constraints are another very common barrier that people face while
attempting to solve problems. Like the other barriers discussed, they arise frequently in
everyday tasks. This particular phenomenon occurs when the subject, trying to solve the
problem, subconsciously places boundaries on the task at hand, which in turn forces
him or her to strain to be more innovative in their thinking. The solver hits a barrier
when they become fixated on only one way to solve their problem, and it becomes
increasingly difficult to see anything but the method they have chosen. Typically, the
solver experiences this when attempting to use a method with which they have already
had success, and they cannot help but try to make it work in the present circumstances
as well, even if they see that it is counterproductive.[25]
Groupthink, or taking on the mindset of the rest of the group members, can also act as
an unnecessary constraint while trying to solve problems.[26] With everybody thinking
the same thing and stopping at the same conclusions, group members inhibit themselves
from thinking beyond them. This is very common, but the best-known example of this
barrier is the famous nine-dot problem. In this problem, nine dots are arranged in a
square: three rows of three dots. The solver is asked to draw no more than four lines,
without lifting pen or pencil from the paper, that connect all of the dots. What typically
happens is that the subject assumes the lines must stay within the square formed by the
dots. Standardized procedures like this can often bring about such mentally invented
constraints,[27] and researchers have found a 0% correct solution rate in the time
allotted for the task.[28] The imposed constraint inhibits the solver from thinking beyond
the bounds of the dots. It is from this phenomenon that the expression "think outside the
box" is derived.[29]
This problem can be quickly solved with a dawning of realization, or insight. A few
minutes of struggling over a problem can bring these sudden insights, where the solver
quickly sees the solution clearly. Problems such as this are most typically solved via
insight and can be very difficult for the subject, depending on how they have structured
the problem in their minds, how they draw on their past experiences, and how much
they juggle this information in their working memories.[29] In the case of the nine-dot
example, the problem has already been structured incorrectly in the solver's mind
because of the constraint they have placed upon the solution. In addition, people
struggle when they try to compare the problem to their prior knowledge: they think they
must keep their lines within the dots and not go beyond, because trying to envision the
dots connected outside of the basic square puts a strain on their working memory.[29]
Luckily, the solution to the problem becomes obvious as insight occurs following
incremental movements made toward the solution. These tiny movements happen
without the solver knowing. Then when the insight is realized fully, the “aha” moment
happens for the subject.[30] These moments of insight can take a long while to
manifest, or may come quickly, but the way the solution is arrived at after toiling over
these barriers stays the same.
Irrelevant Information
Irrelevant information is information presented within a problem that is unrelated or
unimportant to the specific problem.[25] Within the specific context of the problem,
irrelevant information would serve no purpose in helping solve that particular problem.
Often irrelevant information is detrimental to the problem solving process. It is a
common barrier that many people have trouble getting through, especially if they are
not aware of it. Irrelevant information makes solving otherwise relatively simple
problems much harder.[31]
For example:
"Fifteen percent of the people in Topeka have unlisted telephone numbers. You select
200 names at random from the Topeka phone book. How many of these people have
unlisted phone numbers?"[32]
People with unlisted numbers do not appear in the phone book, so none of the 200
names selected from it can have unlisted numbers; the answer is zero. Individuals
looking at this task naturally want to use the 15% given in the problem: they see that
information is present and immediately assume it must be used, which is not the case.
These kinds of questions are often used on aptitude tests or cognitive evaluations.[33]
They are not meant to be difficult, but they do require thinking that is not necessarily
common. Irrelevant information is commonly found in math problems, word problems
in particular, where numerical information is included to challenge the individual.
One reason irrelevant information is so effective at keeping a person off topic and
away from the relevant information is how it is represented.[33] The way information is
represented can make a vast difference in how difficult the problem is to overcome.
Whether a problem is represented visually, verbally, spatially, or mathematically,
irrelevant information can have a profound effect on how long a problem takes to
solve, or on whether it can be solved at all. The Buddhist monk problem is a classic
example of irrelevant information and how it can be represented in different ways:
A Buddhist monk begins at dawn one day walking up a mountain, reaches the
top at sunset, meditates at the top for several days until one dawn when he
begins to walk back to the foot of the mountain, which he reaches at sunset.
Making no assumptions about his starting or stopping or about his pace during
the trips, prove that there is a place on the path which he occupies at the same
hour of the day on the two separate journeys.
Stated this way, the problem is nearly impossible for many people to solve because of
how the information is represented. Because it is written out verbally, it causes us to try
to create a mental image of the paragraph, which is difficult to do, especially with all the
irrelevant information involved in the question. The example becomes much easier to
understand when the problem is represented visually. If the same problem were
accompanied by a corresponding graph, it would be far easier to answer; the irrelevant
information no longer serves as a roadblock. By representing the problem visually, there
are no difficult words to understand or scenarios to imagine. The visual representation
of the problem removes much of the difficulty of solving it.
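One way to see why the visual representation works is to superimpose the two journeys
as if they took place on the same day: one traveller starts at the bottom of the path and
the other at the top, so their position-versus-time curves must cross, and the crossing
point is the required place. The sketch below makes that argument numerically, with
made-up pace functions used purely for illustration.

```python
# Numerical sketch of the superposition argument for the monk problem. The
# pace functions are hypothetical; any trips covering the whole path between
# dawn and sunset, one upward and one downward, would do.

def ascent(t):
    """Position going up at time t in [0, 1] (made-up pace: slow start)."""
    return t ** 2

def descent(t):
    """Position coming down at time t in [0, 1] (made-up pace: fast start)."""
    return (1 - t) ** 0.5

def crossing(f, g, lo=0.0, hi=1.0, tol=1e-9):
    """Bisection on f(t) - g(t); a sign change is guaranteed because the
    ascending traveller starts below the descending one and ends above."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if (f(lo) - g(lo)) * (f(mid) - g(mid)) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

if __name__ == "__main__":
    t = crossing(ascent, descent)
    print(f"same spot at t = {t:.4f} of the day, position = {ascent(t):.4f}")
```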
These types of representations are often used to make difficult problems easier.[34]
They can be used on tests as a strategy to remove irrelevant information, which is one
of the most common barriers to problem solving.[25] Identifying the crucial information
presented in a problem and being able to judge its usefulness is essential. Being aware
of irrelevant information is the first step in overcoming this common barrier.
Religion
Religion is an organized collection of beliefs, cultural systems, and world views that
relate humanity to an order of existence.[note 1] Many religions have narratives,
symbols, and sacred histories that are intended to explain the meaning of life and/or to
explain the origin of life or the Universe. From their beliefs about the cosmos and
human nature, people derive morality, ethics, religious laws or a preferred lifestyle.
According to some estimates, there are roughly 4,200 religions in the world.[1]
Many religions may have organized behaviors, clergy, a definition of what constitutes
adherence or membership, holy places, and scriptures. The practice of a religion may
also include rituals, sermons, commemoration or veneration of a deity, gods or
goddesses, sacrifices, festivals, feasts, trance, initiations, funerary services,
matrimonial services, meditation, prayer, music, art, dance, public service or other
aspects of human culture. Religions may also contain mythology.[2]
The word religion is sometimes used interchangeably with faith, belief system or
sometimes set of duties;[3] however, in the words of Émile Durkheim, religion differs
from private belief in that it is "something eminently social".[4] A global 2012 poll
reports that 59% of the world's population is religious, and 36% are not religious,
including 13% who are atheists, with a 9 percent decrease in religious belief from
2005.[5] On average, women are more religious than men.[6] Some people follow
multiple religions or multiple religious principles at the same time, regardless of
whether or not the religious principles they follow traditionally allow for
syncretism.[7][8][9]
Contents
1 Etymology
2 Definitions
3 Theories of religion
3.1 Origins and development
3.2 Social constructionism
3.3 Comparative religion
4 Types of religion
4.1 Categories
4.2 Interfaith cooperation
5 Religious groups
5.1 Abrahamic
5.2 Iranian
5.3 Indian
5.4 African traditional
5.5 Folk
5.6 New
6 Issues in religion
6.1 Economics
6.2 Health
6.3 Violence
6.4 Law
6.5 Science
6.6 Animal sacrifice
7 Related forms of thought
7.1 Superstition
7.2 Myth
8 Secularism and irreligion
8.1 Criticism of religion
Etymology
Religion (from O.Fr. religion "religious community," from L. religionem (nom. religio)
"respect for what is sacred, reverence for the gods,"[10] "obligation, the bond between
man and the gods"[11]) is derived from the Latin religiō, the ultimate origins of which
are obscure. One possibility is an interpretation traced to Cicero, connecting lego
"read", i.e. re (again) + lego in the sense of "choose", "go over again" or "consider
carefully". Modern scholars such as Tom Harpur and Joseph Campbell favor the
derivation from ligare "bind, connect", probably from a prefixed re-ligare, i.e. re (again)
+ ligare or "to reconnect," which was made prominent by St. Augustine, following the
interpretation of Lactantius.[12][13] The medieval usage alternates with order in
designating bonded communities like those of monastic orders: "we hear of the
'religion' of the Golden Fleece, of a knight 'of the religion of Avys'".[14]
According to the philologist Max Müller, the root of the English word "religion", the Latin
religio, was originally used to mean only "reverence for God or the gods, careful
pondering of divine things, piety" (which Cicero further derived to mean
"diligence").[15][16] Max Müller characterized many other cultures around the world,
including Egypt, Persia, and India, as having a similar power structure at this point in
history. What is called ancient religion today, they would have only called "law".[17]
Many languages have words that can be translated as "religion", but they may use
them in a very different way, and some have no word for religion at all. For example,
the Sanskrit word dharma, sometimes translated as "religion", also means law.
Throughout classical South Asia, the study of law consisted of concepts such as
penance through piety and ceremonial as well as practical traditions. Medieval Japan
at first had a similar union between "imperial law" and universal or "Buddha law", but
these later became independent sources of power.[18][19]
There is no precise equivalent of "religion" in Hebrew, and Judaism does not
distinguish clearly between religious, national, racial, or ethnic identities.[20] One of its
central concepts is "halakha", sometimes translated as "law", which guides religious
practice and belief and many aspects of daily life.
The use of other terms, such as obedience to God or Islam, is likewise grounded in
particular histories and vocabularies.[21]
Definitions
There are numerous definitions of religion
and only a few are stated here. The typical
dictionary definition of religion refers to a
"belief in, or the worship of, a god or
gods"[22] or the "service and worship of
God or the supernatural".[23] However,
writers and scholars have expanded upon
the "belief in god" definitions as insufficient
to capture the diversity of religious thought
and experience.
Edward Burnett Tylor defined religion as
"the belief in spiritual beings".[24] He
argued, back in 1871, that narrowing the
definition to mean the belief in a supreme
deity or judgment after death or idolatry and
so on, would exclude many peoples from
the category of religious, and thus "has the
fault of identifying religion rather with
particular developments than with the
deeper motive which underlies them". He
also argued that the belief in spiritual beings
exists in all known societies.
The anthropologist Clifford Geertz defined
religion as a "system of symbols which acts
to establish powerful, pervasive, and long-lasting moods and motivations in men by
formulating conceptions of a general order of existence and clothing these conceptions
with such an aura of factuality that the moods and motivations seem uniquely
realistic."[25] Alluding perhaps to Tylor's "deeper motive", Geertz remarked that "we
have very little idea of how, in empirical terms, this particular miracle is accomplished.
We just know that it is done, annually, weekly, daily, for some people almost hourly;
and we have an enormous ethnographic literature to demonstrate it".[26] The
theologian Antoine Vergote also emphasized the "cultural reality" of religion, which he
defined as "the entirety of the linguistic expressions, emotions and actions and signs
that refer to a supernatural being or supernatural beings"; he took the term
"supernatural" simply to mean whatever transcends the powers of nature or human
agency.[27]
The sociologist Durkheim, in his seminal book The Elementary Forms of the Religious
Life, defined religion as a "unified system of beliefs and practices relative to sacred
things".[28] By sacred things he meant things "set apart and forbidden—beliefs and
practices which unite into one single moral community called a Church, all those who
adhere to them". Sacred things are not, however, limited to gods or spirits.[note 2] On
the contrary, a sacred thing can be "a rock, a tree, a spring, a pebble, a piece of wood,
a house, in a word, anything can be sacred".[29] Religious beliefs, myths, dogmas and
legends are the representations that express the nature of these sacred things, and the
virtues and powers which are attributed to them.[30]
In his book The Varieties of Religious Experience, the psychologist William James
defined religion as "the feelings, acts, and experiences of individual men in their
solitude, so far as they apprehend themselves to stand in relation to whatever they may
consider the divine".[31] By the term "divine" James meant "any object that is godlike,
whether it be a concrete deity or not"[32] to which the individual feels impelled to
respond with solemnity and gravity.[33]
Echoes of James' and Durkheim's definitions are to be found in the writings of, for
example, Frederick Ferré who defined religion as "one's way of valuing most
comprehensively and intensively".[34] Similarly, for the theologian Paul Tillich, faith is
"the state of being ultimately concerned",[35] which "is itself religion. Religion is the
substance, the ground, and the depth of man's spiritual life."[36] Friedrich
Schleiermacher in the late 18th century defined religion as das schlechthinnige
Abhängigkeitsgefühl, commonly translated as "a feeling of absolute dependence".[37]
His contemporary Hegel disagreed thoroughly, defining religion as "the Divine Spirit
becoming conscious of Himself through the finite spirit."[38]
When religion is seen in terms of "sacred", "divine", intensive "valuing", or "ultimate
concern", then it is possible to understand why scientific findings and philosophical
criticisms (e.g. Richard Dawkins) do not necessarily disturb its adherents.[39]
Theories of religion
Origins and development
The origin of religion is uncertain. There are a number of theories regarding the
subsequent origins of organized religious practices. According to anthropologists John
Monaghan and Peter Just, "Many of the great world religions appear to have begun as
revitalization movements of some sort, as the vision of a charismatic prophet fires the
imaginations of people seeking a more comprehensive answer to their problems than
they feel is provided by everyday beliefs. Charismatic individuals have emerged at many
times and places in the world. It seems that the key to long-term success – and many
movements come and go with little long-term effect – has relatively little to do with the
prophets, who appear with surprising regularity, but more to do with the development of
a group of supporters who are able to institutionalize the movement."[40]
The development of religion has taken different forms in different cultures. Some
religions place an emphasis on belief, while others emphasize practice. Some religions
focus on the subjective experience of the religious individual, while others consider the
activities of the religious community to be most important. Some religions claim to be
universal, believing their laws and cosmology to be binding for everyone, while others
are intended to be practiced only by a closely defined or localized group. In many
places religion has been associated with public institutions such as education,
hospitals, the family, government, and political hierarchies.[41]
Anthropologists John Monaghan and Peter Just state that "it seems apparent that one
thing religion or belief helps us do is deal with problems of human life that are
significant, persistent, and intolerable. One important way in which religious beliefs
accomplish this is by providing a set of ideas about how and why the world is put
together that allows people to accommodate anxieties and deal with misfortune."[41]
Social constructionism
One modern academic theory of religion, social constructionism, says that religion is a
modern concept that suggests all spiritual practice and worship follows a model similar
to the Abrahamic religions as an orientation system that helps to interpret reality and
define human beings.[42] Among the main proponents of this theory of religion are
Daniel Dubuisson, Timothy Fitzgerald, Talal Asad, and Jason Ānanda Josephson. The
social constructionists argue that religion is a modern concept that developed from
Christianity and was then applied inappropriately to non-Western cultures.
Daniel Dubuisson, a French anthropologist, says that the idea of religion has changed
a lot over time and that one cannot fully understand its development by relying on
consistent use of the term, which "tends to minimize or cancel out the role of
history".[43] "What the West and the history of religions in its wake have objectified
under the name 'religion'", he says, " is ... something quite unique, which could be
appropriate only to itself and its own history."[43] He notes that St. Augustine's
definition of religio differed from the way we used the modern word "religion".[43]
Dubuisson prefers the term "cosmographic formation" to religion. Dubuisson says that,
with the emergence of religion as a category separate from culture and society, there
arose religious studies. The initial purpose of religious studies was to demonstrate the
superiority of the "living" or "universal" European world view to the "dead" or "ethnic"
religions scattered throughout the rest of the world, expanding the teleological project
of Schleiermacher and Tiele to a worldwide ideal religiousness.[44] Due to shifting
theological currents, this was eventually supplanted by a liberal-ecumenical interest in
searching for Western-style universal truths in every cultural tradition.[45]
According to Fitzgerald, religion is not a universal feature of all cultures, but rather a
particular idea that first developed in Europe under the influence of Christianity.[46]
Fitzgerald argues that from about the 4th century CE Western Europe and the rest of
the world diverged. As Christianity became commonplace, the charismatic authority
identified by Augustine, a quality we might today call "religiousness", exerted a
commanding influence at the local level. As the Church lost its dominance during the
Protestant Reformation and Christianity became closely tied to political structures,
religion was recast as the basis of national sovereignty, and religious identity gradually
became a less universal sense of spirituality and more divisive, locally defined, and tied
to nationality.[47] It was at this point that "religion" was dissociated from universal
beliefs and moved closer to dogma in both meaning and practice. However, there was
not yet the idea of dogma as a personal choice, only of established churches. With the
Enlightenment religion lost its attachment to nationality, says Fitzgerald, but rather than
becoming a universal social attitude, it now became a personal feeling or emotion.[48]
Asad argues that before the word "religion" came into common usage, Christianity was
a disciplina, a "rule" just like that of the Roman Empire. This idea can be found in the
writings of St. Augustine (354–430). Christianity was then a power structure opposing
and superseding human institutions, a literal Kingdom of Heaven. It was the discipline
taught by one's family, school, church, and city authorities, rather than something
calling one to self-discipline through symbols.[49]
These ideas are developed further by S. N. Balagangadhara. Balagangadhara says that in
the Age of Enlightenment the idea of Christianity as the purest expression of spirituality
was supplanted by the concept of "religion" as a worldwide practice.[50] This gave rise to
ideas such as religious freedom, a reexamination of classical philosophy as an alternative
to Christian thought, and, more radically, Deism among intellectuals such as
Voltaire. Much like Christianity, the idea of "religious freedom" was exported around the
world as a civilizing technique, even to regions such as India that had never treated
spirituality as a matter of political identity.[51]
More recently, in The Invention of Religion in Japan, Josephson has argued that while
the concept of “religion” was Christian in its early formulation, non-Europeans (such as
the Japanese) did not just acquiesce and passively accept the term's meaning. Instead
they worked to interpret "religion" (and its boundaries) strategically to meet their own
agendas and staged these new meanings for a global audience.[52] In nineteenth
century Japan, Buddhism was radically transformed from a pre-modern philosophy of
natural law into a "religion," as Japanese leaders worked to address domestic and
international political concerns. In summary, Josephson argues that the European
encounter with other cultures has led to a partial de-Christianization of the category
religion. Hence "religion" has come to refer to a confused collection of traditions with no
possible coherent definition.[53]
George Lindbeck, a Lutheran and a postliberal theologian (but not a social
constructionist), says that religion does not refer to belief in "God" or a transcendent
Absolute, but rather to "a kind of cultural and/or linguistic framework or medium that
shapes the entirety of life and thought ... it is similar to an idiom that makes possible
the description of realities, the formulation of beliefs, and the experiencing of inner
attitudes, feelings, and sentiments.”[54]
Comparative religion
Nicholas de Lange, Professor of Hebrew and Jewish Studies at Cambridge University,
says that "The comparative study of religions is an academic discipline which has been
developed within Christian theology faculties, and it has a tendency to force widely
differing phenomena into a kind of strait-jacket cut to a Christian pattern. The problem
is not only that other 'religions' may have little or nothing to say about questions which
are of burning importance for Christianity, but that they may not even see themselves
as religions in precisely the same way in which Christianity sees itself as a religion."[55]
Types of religion
Categories
Some scholars classify religions as either universal religions that seek worldwide
acceptance and actively look for new converts, or ethnic religions that are identified
with a particular ethnic group and do not seek converts.[56] Others reject the
distinction, pointing out that all religious practices, whatever their philosophical origin,
are ethnic because they come from a particular culture.[57][58][59]
In the 19th and 20th centuries, the academic practice of comparative religion divided
religious belief into philosophically defined categories called "world religions." However,
some recent scholarship has argued that not all types of religion are necessarily
separated by mutually exclusive philosophies, and furthermore that the utility of
ascribing a practice to a certain philosophy, or even calling a given practice religious,
rather than cultural, political, or social in nature, is limited.[51][60][61] The current state
of psychological study about the nature of religiousness suggests that it is better to
refer to religion as a largely invariant phenomenon that should be distinguished from
cultural norms (i.e. "religions").[62]
Some academics studying the subject have divided religions into three broad
categories:
1-world religions, a term which refers to transcultural, international faiths;
2-indigenous religions, which refers to smaller, culture-specific or nation-specific
religious groups; and
3-new religious movements, which refers to recently developed faiths.[63]
Interfaith cooperation
Because religion continues to be recognized in Western thought as a universal
impulse, many religious practitioners have aimed to band together in interfaith
dialogue, cooperation, and religious peacebuilding. The first major dialogue was the
Parliament of the World's Religions at the 1893 Chicago World's Fair, which remains
notable even today both for affirming "universal values" and for recognizing the diversity
of practices among different cultures. The 20th century was especially fruitful in the use
of interfaith dialogue as a means of resolving ethnic, political, or even religious
conflict, with Christian–Jewish reconciliation representing a complete reversal in the
attitudes of many Christian communities towards Jews.
Recent interfaith initiatives include "A Common Word", launched in 2007 and focused
on bringing Muslim and Christian leaders together,[64] the "C1 World Dialogue",[65]
the "Common Ground" initiative between Islam and Buddhism,[66] and a United
Nations sponsored "World Interfaith Harmony Week".[67][68]
Religious groups
The list of still-active religious movements given here is an attempt to summarize the
most important regional and philosophical influences on local communities, but it is by
no means a complete description of every religious community, nor does it explain the
most important elements of individual religiousness.
The five largest religious groups by world population, estimated to account for 5 billion
people, are Christianity, Islam, Buddhism, Hinduism (with the relative numbers for
Buddhism and Hinduism dependent on the extent of syncretism) and Chinese folk
religion.
Abrahamic
Abrahamic religions are monotheistic
religions whose adherents believe they
descend from Abraham.
-Judaism is the oldest Abrahamic
religion, originating in the people of
ancient Israel and Judea. Judaism is
based primarily on the Torah, a text
which some Jews believe was handed
down to the people of Israel through the
prophet Moses. This along with the rest
of the Hebrew Bible and the Talmud are
the central texts of Judaism. The Jewish
people were scattered after the
destruction of the Temple in Jerusalem in
70 CE. Today there are about 13 million
Jews, about 40 per cent living in Israel
and 40 per cent in the United States.[71]
-Christianity is based on the life and teachings of Jesus of Nazareth (1st century) as
presented in the New Testament. The Christian faith is essentially faith in Jesus as the
Christ, the Son of God, and as Savior and Lord. Almost all Christians believe in the
Trinity, which teaches the unity of Father, Son (Jesus Christ), and Holy Spirit as three
persons in one Godhead. Most Christians can describe their faith with the Nicene
Creed. As the religion of the Byzantine Empire in the first millennium and of Western
Europe during the time of colonization, Christianity has been propagated throughout
the world. The main divisions of Christianity are, according to the number of adherents:
-The Catholic Church, headed by the Pope in Rome, a communion of the Western
church and 22 Eastern Catholic churches;
-Protestantism, separated from the Catholic Church in the 16th-century
Reformation and split into many denominations; and
-Eastern Christianity, which includes Eastern Orthodoxy, Oriental Orthodoxy, and
the Church of the East.
There are other smaller groups, such as Jehovah's Witnesses and the Latter
Day Saint movement, whose inclusion in Christianity is sometimes disputed.
-Islam is based on the Quran, one of
the holy books considered by Muslims
to be revealed by God, and on the
teachings of the Islamic prophet
Muhammad, a major political and
religious figure of the 7th century CE.
Islam is the most widely practiced
religion of Southeast Asia, North
Africa, Western Asia, and Central
Asia, while Muslim-majority countries
also exist in parts of South Asia, Sub-Saharan Africa, and Southeast
Europe. There are also several Islamic
republics, including Iran, Pakistan,
Mauritania, and Afghanistan.
-Sunni Islam is the largest denomination within Islam and follows the Quran and
the hadiths, which record the sunnah, while placing emphasis on the sahabah.
-Shia Islam is the second-largest denomination of Islam; its adherents
believe that Ali succeeded the prophet Muhammad, and it further places
emphasis on Muhammad's family.
-Other denominations of Islam include Ahmadiyya, Nation of Islam, Ibadi,
Sufism, Quranism, Mahdavia, and non-denominational Muslims. Wahhabism is
the dominant Muslim school of thought in the Kingdom of Saudi Arabia.
-The Bahá'í Faith is an Abrahamic religion founded in 19th century Iran and since then
has spread worldwide. It teaches unity of all religious philosophies and accepts all of
the prophets of Judaism, Christianity, and Islam as well as additional prophets
including its founder Bahá'u'lláh.
-Smaller regional Abrahamic groups, including Samaritanism (primarily in Israel and the
West Bank), the Rastafari movement (primarily in Jamaica), and Druze (primarily in
Syria and Lebanon).
Iranian
Iranian religions are ancient religions
whose roots predate the Islamization of
Greater Iran. Nowadays these religions
are practiced only by minorities.
-Zoroastrianism is a religion and
philosophy based on the teachings
of prophet Zoroaster in the 6th
century BC. The Zoroastrians
worship the Creator Ahura Mazda.
In Zoroastrianism good and evil
have distinct sources, with evil
trying to destroy the creation of
Mazda, and good trying to sustain
it.
-Mandaeism is a monotheistic religion with a strongly dualistic worldview.
Mandaeans are sometimes labeled as the "Last Gnostics".
-Kurdish religions include the traditional beliefs of the Yazidi, Alevi, and Ahl-e
Haqq. Sometimes these are labeled Yazdânism.
Indian
Indian religions are practiced or were
founded in the Indian subcontinent. They
are sometimes classified as the dharmic
religions, as they all feature dharma, the
specific law of reality and duties expected
according to the religion.[72]
-Hinduism is a synecdoche describing the
similar philosophies of Vaishnavism,
Shaivism, and related groups practiced or
founded in the Indian subcontinent.
Concepts most of them share in common
include karma, caste, reincarnation,
mantras, yantras, and darśana.[note 3]
Hinduism is the most ancient of still-active
religions,[73][74] with origins perhaps as far
back as prehistoric times.[75] Hinduism is
not a monolithic religion but a religious
category containing dozens of separate
philosophies amalgamated as Sanātana
Dharma, which is the name by which
Hinduism has been known throughout history by its followers.
-Jainism, taught primarily by Parsva (9th century BCE) and Mahavira (6th century
BCE), is an ancient Indian religion that prescribes a path of non-violence for all forms of
living beings in this world. Jains are found mostly in India.
-Buddhism was founded by Siddhattha Gotama in the 6th century BCE. Buddhists
generally agree that Gotama aimed to help sentient beings end their suffering (dukkha)
by understanding the true nature of phenomena, thereby escaping the cycle of
suffering and rebirth (samsāra), that is, achieving Nirvana.
-Theravada Buddhism, which is practiced mainly in Sri Lanka and Southeast
Asia alongside folk religion, shares some characteristics of Indian religions. It is
based in a large collection of texts called the Pali Canon.
-Mahayana Buddhism (or the "Great Vehicle") under which are a multitude of
doctrines that began their development in China and are still relevant in
Vietnam, Korea, Japan and to a lesser extent in Europe and the United States.
Mahayana Buddhism includes such disparate teachings as Zen, Pure Land, and
Soka Gakkai.
-Vajrayana Buddhism first appeared in India in the 3rd century CE.[76] It is
currently most prominent in the Himalaya regions[77] and extends across all of
Asia[78] (cf. Mikkyō).
-Two notable new Buddhist sects are Hòa Hảo and the Dalit Buddhist
movement, which were developed separately in the 20th century.
-Sikhism is a monotheistic religion founded on
the teachings of Guru Nanak and ten
successive Sikh gurus in 15th century Punjab.
It is the fifth-largest organized religion in the
world, with approximately 30 million
Sikhs.[79][80] Sikhs are expected to embody
the qualities of a Sant-Sipāhī (a saint-soldier):
to have control over their internal vices and
to be constantly immersed in the virtues
clarified in the Guru Granth Sahib. The
principal beliefs of Sikhi are faith in
Waheguru, represented by the phrase ik
ōaṅkār, meaning one God, who prevails in
everything, along with a praxis in which the
Sikh is enjoined to engage in social reform
through the pursuit of justice for all human
beings.
African traditional
African traditional religion encompasses the
traditional religious beliefs of people in Africa.
There are also notable African diasporic
religions practiced in the Americas.
North Africa: Traditional Berber religion (Mauritania, Morocco, Algeria, Tunisia,
Libya) - Ancient Egyptian religion (Egypt, Sudan)
Northeast Africa: Waaq (Horn of Africa)
West Africa: Akan religion (Ghana) - Dahomey (Fon) mythology (Benin) - Efik
mythology (Nigeria, Cameroon) - Odinani of the Igbo people (Nigeria, Cameroon) -
Serer religion (Senegal, Gambia) - Yoruba religion (Nigeria, Benin)
Central Africa: Bantu mythology (Central, Southeast, and Southern Africa) -
Bushongo mythology (Congo) - Mbuti (Pygmy) mythology (Congo) - Lugbara
mythology (Congo) - Dinka religion (South Sudan) - Lotuko mythology (South Sudan)
Southeast Africa: Bantu mythology (Central, Southeast, and Southern Africa) -
Akamba mythology (Kenya) - Masai mythology (Kenya, Tanzania) - Malagasy
mythology (Madagascar)
Southern Africa: Bantu mythology (Central, Southeast, and Southern Africa) - Saan
religion (South Africa) - Lozi mythology (Zambia) - Tumbuka mythology (Malawi) -
Zulu mythology (South Africa)
Diaspora: Santeria (Cuba) - Candomble (Brazil) - Vodun (Haiti, United States) -
Lucumi (Caribbean) - Umbanda (Brazil) - Macumba (Brazil)
Folk
The term folk refers to a broad category of traditional religions that includes shamanism
and elements of animism and ancestor worship, where traditional means "indigenous,
that which is aboriginal or foundational, handed down from generation to
generation…".[81] These are religions that are closely associated with a particular
group of people, ethnicity or tribe; they often have no formal creeds or sacred texts.[82]
Some faiths are syncretic, fusing diverse religious beliefs and practices.[83]
-Chinese folk religion, e.g.: those aspects of Confucianism and Taoism which
are seen as religious by outsiders, as well as some Mahayana Buddhism. New
religious movements include Falun Gong and I-Kuan Tao.
-Other folk religions in Asia-Pacific region, e.g.: Cheondoism, Korean
shamanism, Shinbutsu-shūgō and Modekngei.
-Australian Aboriginal mythology.
-Folk religions of the Americas, e.g.: Native American religion
Folk religions are often omitted as a category in surveys even in countries where they
are widely practiced, e.g. in China.[82]
New
New religious movements include:
-Shinshūkyō is a general category for a wide variety of religious movements
founded in Japan since the 19th century. These movements share almost
nothing in common except the place of their founding. The largest religious
movements centered in Japan include Soka Gakkai, Tenrikyo, and Seicho-No-Ie, among hundreds of smaller groups.
-Cao Đài is a syncretistic, monotheistic religion, established in Vietnam in 1926.
-Raëlism is a new religious movement founded in 1974 teaching that humans
were created by aliens. It is numerically the world's largest UFO religion.
-Hindu reform movements, such as Ayyavazhi, Swaminarayan Faith and
Ananda Marga, are examples of new religious movements within Indian
religions.
-Unitarian Universalism is a religion characterized by support for a "free and
responsible search for truth and meaning", and has no accepted creed or
theology.
-Noahidism is a Biblical-Talmudic and monotheistic ideology for non-Jews
based on the Seven Laws of Noah, and on their traditional interpretations within
Judaism.
-Scientology teaches that people are immortal beings who have forgotten their
true nature. Its method of spiritual rehabilitation is a type of counseling known
as auditing, in which practitioners aim to consciously re-experience and
understand painful or traumatic events and decisions in their past in order to
free themselves of their limiting effects.
-Eckankar is a pantheistic religion
with the purpose of making God an
everyday reality in one's life.
-Wicca is a neo-pagan religion first
popularised in 1954 by British civil
servant Gerald Gardner, involving
the worship of a God and Goddess.
-Druidry is a religion promoting
harmony with nature, and drawing on
the practices of the druids.
-Satanism is a broad category of
religions that, for example, worship
Satan as a deity (Theistic Satanism)
or use "Satan" as a symbol of carnality and earthly values (LaVeyan Satanism).
Sociological classifications of religious movements suggest that within any given
religious group, a community can resemble various types of structures, including
"churches", "denominations", "sects", "cults", and "institutions".
Issues in religion
Economics
While there has been much debate about how religion affects the economy of
countries, in general there is a negative correlation between religiosity and the wealth
of nations. In other words, the richer a nation is, the less religious it tends to be.[84]
However, sociologist and political economist Max Weber argued that Protestant
countries are wealthier because of their Protestant work ethic.[85]
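The direction of such a cross-national correlation can be made concrete with a small, self-contained sketch. The figures below are hypothetical and serve only to show what a negative correlation between national wealth and religiosity looks like numerically (a Pearson coefficient below zero); they are not survey data.

# Hypothetical illustration of a negative cross-national correlation between
# per-capita income and self-reported religiosity. The numbers are invented
# for demonstration only; they are not drawn from any survey.
from statistics import mean, stdev

gdp_per_capita = [2_000, 5_000, 12_000, 25_000, 40_000, 60_000]   # USD, hypothetical
pct_religious = [95, 88, 74, 55, 40, 28]                          # % religious, hypothetical

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

print(f"Pearson r = {pearson(gdp_per_capita, pct_religious):.2f}")  # negative, close to -1

A coefficient near zero would indicate no linear relationship; the strongly negative value here simply mirrors, in toy form, the pattern described above.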
Health
Mayo Clinic researchers examined the association between religious involvement and
spirituality, and physical health, mental health, health-related quality of life, and other
health outcomes. The authors reported that: "Most studies have shown that religious
involvement and spirituality are associated with better health outcomes, including
greater longevity, coping skills, and health-related quality of life (even during terminal
illness) and less anxiety, depression, and suicide."[86]
The authors of a subsequent study concluded that the influence of religion on health is
"largely beneficial", based on a review of related literature.[87] According to academic
James W. Jones, several studies have discovered "positive correlations between
religious belief and practice and mental and physical health and longevity." [88]
An analysis of data from the 1998 US General Social Survey, whilst broadly confirming
that religious activity was associated with better health and well-being, also suggested
that the role of different dimensions of spirituality/religiosity in health is rather more
complicated. The results suggested "that it may not be appropriate to generalize
findings about the relationship between spirituality/religiosity and health from one form
of spirituality/religiosity to another, across denominations, or to assume effects are
uniform for men and women."[89]
Violence
Charles Selengut characterizes the phrase
"religion and violence" as "jarring", asserting
that "religion is thought to be opposed to
violence and a force for peace and
reconciliation." He acknowledges, however,
that "the history and scriptures of the world's
religions tell stories of violence and war as
they speak of peace and love."[90]
Hector Avalos argues that, because religions
claim divine favor for themselves, over and
against other groups, this sense of
righteousness leads to violence because
conflicting claims to superiority, based on
unverifiable appeals to God, cannot be
adjudicated objectively.[91]
Critics of religion Christopher Hitchens and
Richard Dawkins go further and argue that
religions do tremendous harm to society by
using violence to promote their goals, in ways
that are endorsed and exploited by their
leaders.[92][page needed][93][page needed]
Regina Schwartz argues that all monotheistic
religions are inherently violent because of an
exclusivism that inevitably fosters violence
against those that are considered outsiders.[94] Lawrence Wechsler asserts that
Schwartz isn't just arguing that Abrahamic religions have a violent legacy, but that the
legacy is actually genocidal in nature.[95]
Byron Bland asserts that one of the most prominent reasons for the "rise of the secular
in Western thought" was the reaction against the religious violence of the 16th and 17th
centuries. He asserts that "(t)he secular was a way of living with the religious
differences that had produced so much horror. Under secularity, political entities have a
warrant to make decisions independent from the need to enforce particular versions of
religious orthodoxy. Indeed, they may run counter to certain strongly held beliefs if
made in the interest of common welfare. Thus, one of the important goals of the
secular is to limit violence."[96]
Nonetheless, believers have used similar arguments when responding to atheists in
these discussions, pointing to the widespread imprisonment and mass murder of
individuals under atheist states in the twentieth century:[97][98][99]
And who can deny that Stalin and Mao, not to mention Pol Pot and a host of
others, all committed atrocities in the name of a Communist ideology that was
explicitly atheistic? Who can dispute that they did their bloody deeds by
claiming to be establishing a 'new man' and a religion-free utopia? These were
mass murders performed with atheism as a central part of their ideological
inspiration, they were not mass murders done by people who simply happened
to be atheist.
—Dinesh D'Souza[99]
In response to such a line of argument, however, author Sam Harris writes:
The problem with fascism and communism, however, is not that they are too
critical of religion; the problem is that they are too much like religions. Such
regimes are dogmatic to the core and generally give rise to personality cults that
are indistinguishable from cults of religious hero worship. Auschwitz, the gulag
and the killing fields were not examples of what happens when human beings
reject religious dogma; they are examples of political, racial and nationalistic
dogma run amok. There is no society in human history that ever suffered
because its people became too reasonable.
—Sam Harris[100]
Richard Dawkins has stated that Stalin's atrocities were influenced not by atheism but
by dogmatic Marxism,[101] and concludes that while Stalin and Mao happened to be
atheists, they did not do their deeds in the name of atheism.[102] On other occasions,
Dawkins has replied to the argument that Adolf Hitler and Josef Stalin were
antireligious with the response that Hitler and Stalin also grew moustaches, in an effort
to show the argument as fallacious.[103] Instead, Dawkins argues in The God Delusion
that "What matters is not whether Hitler and Stalin were atheists, but whether atheism
systematically influences people to do bad things. There is not the smallest evidence
that it does." Dawkins adds that Hitler in fact, repeatedly affirmed a strong belief in
Christianity,[104] but that his atrocities were no more attributable to his theism than
Stalin's or Mao's were to their atheism. In all three cases, he argues, the perpetrators'
level of religiosity was incidental.[105] D'Souza responds that an individual need not
explicitly invoke atheism in committing atrocities if it is already implied in his worldview,
as is the case in Marxism.[106]
Law
Law and religion has been a distinct field of study since about 1980, with several
thousand scholars involved in law schools and in academic departments of political
science, religion, history and others.[107] Scholars in the field focus not only on strictly
legal issues concerning religious freedom or non-establishment but also on the study of
religions as they are interpreted through judicial discourses or legal understandings of
religious phenomena. Exponents look at canon law, natural law, and state law, often in
comparative perspective.[108][109] Specialists have explored themes in Western history
regarding Christianity and justice and mercy, rule and equity, and discipline and
love.[110] Common topics of interest include marriage and the family[111] and human
rights.[112] Moving beyond Christianity, scholars have looked at links between law and
religion in the Muslim Middle East[113] and pagan Rome.[114]
Important studies have appeared regarding secularization.[115][116] In particular, the
issue of wearing religious symbols in public, such as headscarves, which are banned in
French schools, has received scholarly attention in the context of human rights and
feminism.[117]
Science
Religious knowledge, according to religious practitioners, may be gained from religious
leaders, sacred texts, scriptures, or personal revelation. Some religions view such
knowledge as unlimited in scope and suitable to answer any question; others see
religious knowledge as playing a more restricted role, often as a complement to
knowledge gained through physical observation. Adherents to various religious faiths
often maintain that religious knowledge obtained via sacred texts or revelation is
absolute and infallible and thereby creates an accompanying religious cosmology,
although the proof for such is often tautological and generally limited to the religious
texts and revelations that form the foundation of their belief.
In contrast, the scientific method gains knowledge by testing hypotheses to develop
theories through elucidation of facts or evaluation by experiments and thus only
answers cosmological questions about the universe that can be observed and
measured. It develops theories of the world which best fit physically observed
evidence. All scientific knowledge is subject to later refinement, or even outright
rejection, in the face of additional evidence. Scientific theories that have an
overwhelming preponderance of favorable evidence are often treated as de facto
verities in general parlance, such as the theories of general relativity and natural
selection to explain respectively the mechanisms of gravity and evolution.
Regarding religion and science, Albert Einstein states (1940): "For science can only
ascertain what is, but not what should be, and outside of its domain value judgments of
all kinds remain necessary. Religion, on the other hand, deals only with evaluations of
human thought and action; it cannot justifiably speak of facts and relationships between
facts…Now, even though the realms of religion and science in themselves are clearly
marked off from each other, nevertheless there exist between the two strong reciprocal
relationships and dependencies. Though religion may be that which determines the
goals, it has, nevertheless, learned from science, in the broadest sense, what means
will contribute to the attainment of the goals it has set up." [118]
Animal sacrifice
Animal sacrifice is the ritual killing and offering of an animal to appease or maintain
favour with a deity. Such forms of sacrifice are practised within many religions around
the world and have appeared historically in almost all cultures.
Related forms of thought
Superstition
Superstition has been described as "the incorrect establishment of cause and effect" or
a false conception of causation.[119] Religion is more complex and includes social
institutions and morality. But religions may include superstitions or make use of magical
thinking. Adherents of one religion sometimes think of other religions as
superstition.[120][121] Some atheists, deists, and skeptics regard religious belief as
superstition.
Greek and Roman pagans, who saw their relations with the gods in political and social
terms, scorned the man who constantly trembled with fear at the thought of the gods
(deisidaimonia), as a slave might fear a cruel and capricious master. The Romans
called such fear of the gods superstitio.[122] Early Christianity was outlawed as a
superstitio Iudaica, a "Jewish superstition", by Domitian in the 80s AD. In AD 425,
when Rome had become Christian, Theodosius II outlawed pagan traditions as
superstitious.
The Roman Catholic Church considers superstition to be sinful in the sense that it
denotes a lack of trust in the divine providence of God and, as such, is a violation of the
first of the Ten Commandments. The Catechism of the Catholic Church states that
superstition "in some sense represents a perverse excess of religion" (para. #2110).
"Superstition," it says, "is a deviation of religious feeling and of the practices this feeling
imposes. It can even affect the worship we offer the true God, e.g., when one attributes
an importance in some way magical to certain practices otherwise lawful or necessary.
To attribute the efficacy of prayers or of sacramental signs to their mere external
performance, apart from the interior dispositions that they demand, is to fall into
superstition. Cf. Matthew 23:16-22" (para. #2111).
Myth
The word myth has several meanings.
-A traditional story of ostensibly historical events that serves to unfold part of the
world view of a people or explain a practice, belief, or natural phenomenon;
-A person or thing having only an imaginary or unverifiable existence; or
-A metaphor for the spiritual potentiality in the human being.[123]
Ancient polytheistic religions, such as those of Greece, Rome, and Scandinavia, are
usually categorized under the heading of mythology. Religions of pre-industrial
peoples, or cultures in development, are similarly called "myths" in the anthropology of
religion. The term "myth" can be used pejoratively by both religious and non-religious
people. By defining another person's religious stories and beliefs as mythology, one
implies that they are less real or true than one's own religious stories and beliefs.
Joseph Campbell remarked, "Mythology is often thought of as other people's religions,
and religion can be defined as mis-interpreted mythology."[124]
In sociology, however, the term myth has a non-pejorative meaning. There, myth is
defined as a story that is important for the group whether or not it is objectively or
provably true. Examples include the death and resurrection of Jesus, which, to
Christians, explains the means by which they are freed from sin and is also ostensibly
a historical event. But from a mythological outlook, whether or not the event actually
occurred is unimportant. Instead, the symbolism of the death of an old "life" and the
start of a new "life" is what is most significant. Religious believers may or may not
accept such symbolic interpretations.
Secularism and irreligion
The terms "atheist" (lack of belief in any gods)
and "agnostic" (belief in the unknowability of the
existence of gods), though specifically contrary
to theistic (e.g. Christian, Jewish, and Muslim)
religious teachings, do not by definition mean
the opposite of "religious". There are religions
(including Buddhism and Taoism), in fact, that
classify some of their followers as agnostic,
atheistic, or nontheistic. The true opposite of
"religious" is the word "irreligious". Irreligion
describes an absence of any religion;
antireligion describes an active opposition or
aversion toward religions in general.
As religion became a more personal matter in
Western culture, discussions of society became
more focused on political and scientific
meaning, and religious attitudes (dominantly
Christian) were increasingly seen as irrelevant
for the needs of the European world. On the
political side, Ludwig Feuerbach recast Christian beliefs in light of humanism, paving
the way for Karl Marx's famous characterization of religion as "the opium of the
people". Meanwhile, in the scientific community, T.H. Huxley in 1869 coined the term
"agnostic," a term—subsequently adopted by such figures as Robert Ingersoll—that,
while directly conflicting with and novel to Christian tradition, is accepted and even
embraced in some other religions. Later, Bertrand Russell told the world Why I Am Not
a Christian, which influenced several later authors to discuss their breakaway from their
own religious upbringings, from Islam to Hinduism.
Some atheists also construct parody religions, for example, the Church of the
SubGenius or the Flying Spaghetti Monster, which parodies the equal time argument
employed by intelligent design Creationism.[125] Parody religions may also be
considered a post-modern approach to religion. For instance, in Discordianism, it may
be hard to tell if even these "serious" followers are not just taking part in an even bigger
joke. This joke, in turn, may be part of a greater path to enlightenment, and so on ad
infinitum.
Criticism of religion
Religious criticism has a long history, going back at least as far as the 5th century BCE.
During classical times, there were religious critics in ancient Greece, such as Diagoras
"the atheist" of Melos, and in the 1st century BCE in Rome, with Titus Lucretius Carus's
De Rerum Natura.
During the Middle Ages and continuing into the Renaissance, potential critics of religion
were persecuted and largely forced to remain silent. There were notable critics like
Giordano Bruno, who was burned at the stake for disagreeing with religious
authority.[126]
In the 17th and 18th centuries, with the Enlightenment, thinkers like David Hume and
Voltaire criticized religion.
In the 19th century, Charles Darwin and the theory of evolution led to increased
skepticism about religion. Thomas Huxley, Jeremy Bentham, Karl Marx, Charles
Bradlaugh, Robert Ingersoll, and Mark Twain were noted 19th-century and early-20th-
century critics. In the 20th century, Bertrand Russell, Sigmund Freud, and others
continued religious criticism.
Sam Harris, Daniel Dennett, Richard Dawkins, Victor J. Stenger, and the late
Christopher Hitchens were active critics during the late 20th century and early 21st
century.
Critics consider religion to be outdated, harmful to the individual (e.g. brainwashing of
children, faith healing, female genital mutilation, circumcision), harmful to society (e.g.
holy wars, terrorism, wasteful distribution of resources), to impede the progress of
science, to exert social control, and to encourage immoral acts (e.g. blood sacrifice,
discrimination against homosexuals and women, and certain forms of sexual violence
such as marital rape).[127][128][129] A major criticism of many religions is that they
require beliefs that are irrational, unscientific, or unreasonable, because religious
beliefs and traditions lack scientific or rational foundations.
Some modern-day critics, such as Bryan Caplan, hold that religion lacks utility in
human society; they may regard religion as irrational.[130] Nobel Peace Laureate
Shirin Ebadi has spoken out against undemocratic Islamic countries justifying
"oppressive acts" in the name of Islam.[131]
Evolutionary origin of religions
The evolutionary origin of religions theorizes about the emergence of religious behavior
during the course of human evolution.
Contents
1 Nonhuman religious behaviour
2 Setting the stage for human religion
2.1 Increased brain size
2.2 Tool use
2.3 Development of language
2.4 Morality and group living
3 Evolutionary psychology of religion
4 Prehistoric evidence of religion
4.1 Paleolithic burials
4.2 The use of symbolism
4.3 Origins of organized religion
4.4 Invention of writing
Nonhuman religious behaviour
Humanity’s closest living relatives are
common chimpanzees and bonobos. These
primates share a common ancestor with
humans that lived between four and six
million years ago. It is for this reason that
chimpanzees and bonobos are viewed as
the best available surrogate for this common
ancestor. Barbara King argues that while
non-human primates are not religious, they
do exhibit some traits that would have been
necessary for the evolution of religion.
These traits include high intelligence, a
capacity for symbolic communication, a
sense of social norms, realization of "self"
and a concept of continuity.[1][2][3] There is
inconclusive evidence that Homo
neanderthalensis may have buried their
dead, which would be evidence of the use of ritual.
religious activity, but there is no other
evidence that religion existed in human
culture before humans reached behavioral
modernity.[4]
Marc Bekoff, Professor Emeritus of Ecology
and Evolutionary Biology at the University of
Colorado, Boulder, argues that many species grieve death and loss.[5]
Setting the stage for human religion
Increased brain size
In this set of theories, the religious mind is one consequence of a brain that is large
enough to formulate religious and philosophical ideas.[6] During human evolution, the
hominid brain tripled in size, peaking 500,000 years ago. Much of the brain's expansion
took place in the neocortex. This part of the brain is involved in processing higher order
cognitive functions that are connected with human religiosity. The neocortex is
associated with self-consciousness, language and emotion[citation needed]. According
to Dunbar's theory, the relative neocortex size of any species correlates with the level
of social complexity of the particular species. The neocortex size correlates with a
number of social variables that include social group size and complexity of mating
behaviors. In chimpanzees the neocortex occupies 50% of the brain, whereas in
modern humans it occupies 80% of the brain.
Robin Dunbar argues that the critical event in the evolution of the neocortex took place
at the speciation of archaic Homo sapiens about 500,000 years ago. His study indicates
that only after the speciation event is the neocortex large enough to process complex
social phenomena such as language and religion. The study is based on a regression
analysis of neocortex size plotted against a number of social behaviors of living and
extinct hominids.[7]
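A minimal sketch of the kind of analysis described, under stated assumptions: studies in this tradition regress the (log) social group size of primate species on their neocortex ratio and then extrapolate the fitted line to a human-sized neocortex. The species values, the neocortex ratio of 4.1, and the resulting prediction below are hypothetical placeholders for illustration, not Dunbar's published data.

# Sketch of a Dunbar-style regression: log(group size) against neocortex ratio.
# All species values here are hypothetical placeholders, not published measurements.
import math

# (neocortex ratio, mean social group size) for some primate species -- hypothetical
data = [(2.0, 8), (2.3, 14), (2.6, 22), (3.0, 40), (3.2, 55)]

xs = [x for x, _ in data]
ys = [math.log(y) for _, y in data]          # regress on the log of group size

n = len(data)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# Extrapolate to an assumed human-like neocortex ratio to get a predicted group size.
human_ratio = 4.1
predicted_group = math.exp(intercept + slope * human_ratio)
print(f"Predicted group size at neocortex ratio {human_ratio}: {predicted_group:.0f}")

The point of the sketch is only to make the method concrete: a regression of this form yields a predicted group size for humans far larger than those of other primates, which is the pattern Dunbar's argument relies on.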
Stephen Jay Gould suggests that religion may have grown out of evolutionary changes
which favored larger brains as a means of cementing group coherence among
savannah hunters, once that larger brain enabled reflection on the inevitability of
personal mortality.[8]
Tool use
Lewis Wolpert argues that causal beliefs that emerged from tool use played a major
role in the evolution of belief. The manufacture of complex tools requires creating a
mental image of an object which does not exist naturally before actually making the
artifact. Furthermore, one must understand how the tool would be used, which requires
an understanding of causality.[9] Accordingly, the level of sophistication of stone tools
is a useful indicator of causal beliefs.[10] Wolpert contends that the use of tools composed
of more than one component, such as hand axes, represents an ability to understand
cause and effect. However, recent studies of other primates indicate that causal reasoning
may not be a uniquely human trait. For example, chimpanzees have been known to escape
from pens closed with multiple latches, a feat previously thought to be possible only for
humans who understood causality. Chimpanzees are also known
to mourn the dead, and notice things that have only aesthetic value, like sunsets, both
of which may be considered to be components of religion or spirituality.[11] The
difference between the comprehension of causality by humans and chimpanzees is
one of degree. The degree of comprehension in an animal depends upon the size of
the prefrontal cortex: the greater the size of the prefrontal cortex the deeper the
comprehension.[12]
Development of language
Religion requires a system of symbolic communication, such as language, to be
transmitted from one individual to another. Philip Lieberman states "human religious
thought and moral sense clearly rest on a cognitive-linguistic base".[13] From this
premise science writer Nicholas Wade states:
"Like most behaviors that are found in societies throughout the world, religion
must have been present in the ancestral human population before the dispersal
from Africa 50,000 years ago. Although religious rituals usually involve dance
and music, they are also very verbal, since the sacred truths have to be stated.
If so, religion, at least in its modern form, cannot pre-date the emergence of
language. It has been argued earlier that language attained its modern state
shortly before the exodus from Africa. If religion had to await the evolution of
modern, articulate language, then it too would have emerged shortly before
50,000 years ago."[14]
Another view distinguishes individual religious belief from collective religious belief.
While the former does not require prior development of language, the latter does. The
individual human brain has to explain a phenomenon in order to comprehend and
relate to it. This activity predates by far the emergence of language and may have
caused it. The theory is, belief in the supernatural emerges from hypotheses arbitrarily
assumed by individuals to explain natural phenomena that cannot be explained
otherwise. The resulting need to share individual hypotheses with others leads
eventually to collective religious belief. A socially accepted hypothesis becomes
dogma, backed by social sanction.
Morality and group living
Frans de Waal and Barbara King both view human morality as having grown out of
primate sociality. Though moral awareness may be a unique human trait, many
social animals, such as primates, dolphins and whales, have been known to exhibit
pre-moral sentiments. According to Michael Shermer, the following characteristics are
shared by humans and other social animals, particularly the great apes:
"attachment and bonding, cooperation and mutual aid, sympathy and empathy,
direct and indirect reciprocity, altruism and reciprocal altruism, conflict resolution
and peacemaking, deception and deception detection, community concern and
caring about what others think about you, and awareness of and response to
the social rules of the group".[15]
De Waal contends that all social animals have had to restrain or alter their behavior for
group living to be worthwhile. Pre-moral sentiments evolved in primate societies as a
method of restraining individual selfishness and building more cooperative groups. For
any social species, the benefits of being part of an altruistic group should outweigh the
benefits of individualism. For example, lack of group cohesion could make individuals
more vulnerable to attack from outsiders. Being part of a group may also improve the
chances of finding food. This is evident among animals that hunt in packs to take down
large or dangerous prey.
All social animals have hierarchical societies in which each member knows its own
place. Social order is maintained by certain rules of expected behavior and dominant
group members enforce order through punishment. However, higher order primates
also have a sense of reciprocity and fairness. Chimpanzees remember who did them
favors and who did them wrong. For example, chimpanzees are more likely to share
food with individuals who have previously groomed them.[16]
Chimpanzees live in fission-fusion groups that average 50 individuals. It is likely that
early ancestors of humans lived in groups of similar size. Based on the size of extant
hunter-gatherer societies, recent Paleolithic hominids lived in bands of a few hundred
individuals. As community size increased over the course of human evolution, greater
enforcement to achieve group cohesion would have been required. Morality may have
evolved in these bands of 100 to 200 people as a means of social control, conflict
resolution and group solidarity. According to Dr. de Waal, human morality has two extra
levels of sophistication that are not found in primate societies. Humans enforce their
society’s moral codes much more rigorously with rewards, punishments and reputation
building. Humans also apply a degree of judgment and reason not otherwise seen in
the animal kingdom.
Psychologist Matt J. Rossano argues that religion emerged after morality and built
upon morality by expanding the social scrutiny of individual behavior to include
supernatural agents. By including ever-watchful ancestors, spirits and gods in the
social realm, humans discovered an effective strategy for restraining selfishness and
building more cooperative groups.[17] The adaptive value of religion would have
enhanced group survival.[18] [19] Rossano is referring here to collective religious belief
and the social sanction that institutionalized morality. According to Rossano,
individual religious belief is thus initially epistemological, not ethical, in nature.
Evolutionary psychology of religion
There is general agreement among cognitive scientists that religion is an outgrowth of
brain architecture that evolved early in human history. However, there is disagreement
on the exact mechanisms that drove the evolution of the religious mind. The two main
schools of thought hold that either religion evolved due to natural selection and has
selective advantage, or that religion is an evolutionary byproduct of other mental
adaptations.[20] Stephen Jay Gould, for example, believed that religion was an
exaptation or a spandrel, in other words that religion evolved as a byproduct of
psychological mechanisms that evolved for other reasons.[21][22][23]
Such mechanisms may include the ability to infer the presence of organisms that might
do harm (agent detection), the ability to come up with causal narratives for natural
events (etiology), and the ability to recognize that other people have minds of their own
with their own beliefs, desires and intentions (theory of mind). These three adaptations
(among others) allow human beings to imagine purposeful agents behind many
observations that could not readily be explained otherwise, e.g. thunder, lightning,
movement of planets, complexity of life, etc.[24] The emergence of collective religious
belief identified the agents as deities that standardized the explanation.
Some scholars have suggested that religion is genetically "hardwired" into the human
condition. One controversial hypothesis, the God gene hypothesis, states that some
variants of a specific gene, the VMAT2 gene, predispose to spirituality.[25]
Another view is based on the concept of the triune brain: the reptilian brain, the limbic
system, and the neocortex, proposed by Paul D. MacLean. Collective religious belief
draws upon the emotions of love, fear, and gregariousness and is deeply embedded in
the limbic system through sociobiological conditioning and social sanction. Individual
religious belief utilizes reason based in the neocortex and often varies from collective
religion. The limbic system is much older in evolutionary terms than the neocortex and
is, therefore, stronger than it, much in the same way that the reptilian brain is stronger
than both the limbic system and the neocortex. Reason is pre-empted by emotional drives.
The religious feeling in a congregation is emotionally different from individual spirituality
even though the congregation is composed of individuals. Belonging to a collective
religion is culturally more important than individual spirituality though the two often go
hand in hand. This is one of the reasons why religious debates are likely to be
inconclusive.[citation needed]
Yet another view is that the behaviour of people who participate in a religion makes
them feel better and this improves their fitness, so that there is a genetic selection in
favor of people who are willing to believe in religion. Specifically, rituals, beliefs, and
the social contact typical of religious groups may serve to calm the mind (for example
by reducing ambiguity and the uncertainty due to complexity) and allow it to function
better when under stress.[26] This would allow religion to be used as a powerful
survival mechanism, particularly in facilitating the evolution of hierarchies of warriors,
which, if true, may be why many modern religions tend to promote fertility and kinship.
Still another view is that human religion was a product of an increase in dopaminergic
functions in the human brain and a general intellectual expansion beginning around 80
kya.[27][28] Dopamine promotes an emphasis on distant space and time, which is
critical for the establishment of religious experience.[29] While the earliest shamanic
cave paintings date back around 40 kya, the use of ochre for rock art predates this and
there is clear evidence for abstract thinking along the coast of South Africa by 80 kya.
Prehistoric evidence of religion
When humans first became religious remains unknown, but there is credible evidence
of religious behavior from the Middle Paleolithic era (300–500 thousand years
ago)[citation needed] and possibly earlier.
Paleolithic burials
The earliest evidence of religious thought is based on the ritual treatment of the dead.
Most animals display only a casual interest in the dead of their own species.[30] Ritual
burial thus represents a significant change in human behavior. Ritual burials represent
an awareness of life and death and a possible belief in the afterlife. Philip Lieberman
states "burials with grave goods clearly signify religious practices and concern for the
dead that transcends daily life."[13]
The earliest evidence for treatment of the dead comes from Atapuerca in Spain. At this
location the bones of 30 individuals believed to be Homo heidelbergensis have been
found in a pit.[31] Neanderthals are also contenders for the first hominids to
intentionally bury the dead. They may have placed corpses into shallow graves along
with stone tools and animal bones. The presence of these grave goods may indicate an
emotional connection with the deceased and possibly a belief in the afterlife.
Neanderthal burial sites include Shanidar in Iraq, Krapina in Croatia, and Kebara
Cave in Israel.[32][33][34]
The earliest known burial of modern humans is from a cave in Israel located at Qafzeh.
Human remains have been dated to 100,000 years ago. Human skeletons were found
stained with red ochre. A variety of grave goods were found at the burial site. The
mandible of a wild boar was found placed in the arms of one of the skeletons.[35] Philip
Lieberman states:
"Burial rituals incorporating grave goods may have been invented by the
anatomically modern hominids who emigrated from Africa to the Middle East
roughly 100,000 years ago".[35]
Matt Rossano suggests that the period between 80,000 and 60,000 years ago, after humans
retreated from the Levant to Africa, was a crucial period in the evolution of religion.[36]
The use of symbolism
The use of symbolism in religion is a universal established phenomenon. Archeologist
Steven Mithen contends that it is common for religious practices to involve the creation
of images and symbols to represent supernatural beings and ideas. Because
supernatural beings violate the principles of the natural world, there will always be
difficulty in communicating and sharing supernatural concepts with others. This
problem can be overcome by anchoring these supernatural beings in material form
through representational art. When translated into material form, supernatural concepts
become easier to communicate and understand.[37] Due to the association of art and
religion, evidence of symbolism in the fossil record is indicative of a mind capable of
religious thoughts. Art and symbolism demonstrate a capacity for abstract thought and
imagination necessary to construct religious ideas. Wentzel van Huyssteen states that
the translation of the non-visible through symbolism enabled early human ancestors to
hold beliefs in abstract terms.[38]
Some of the earliest evidence of symbolic behavior is associated with Middle Stone
Age sites in Africa. From at least 100,000 years ago, there is evidence of the use of
pigments such as red ochre. Pigments are of little practical use to hunter-gatherers,
thus evidence of their use is interpreted as symbolic or for ritual purposes. Among
extant hunter-gatherer populations around the world, red ochre is still used extensively
for ritual purposes. It has been argued that it is universal among human cultures for the
color red to represent blood, sex, life and death.[39]
The use of red ochre as a proxy for symbolism is often criticized as being too indirect.
Some scientists, such as Richard Klein and Steven Mithen, only recognize
unambiguous forms of art as representative of abstract ideas. Upper Paleolithic cave
art provides some of the most unambiguous evidence of religious thought from the
Paleolithic. Cave paintings at Chauvet depict creatures that are half human and half
animal.
Origins of organized religion
Organized religion traces its roots to the neolithic revolution that began 11,000 years
ago in the Near East but may have occurred independently in several other locations
around the world. The invention of agriculture transformed many human societies from
a hunter-gatherer lifestyle to a sedentary lifestyle. The consequences of the neolithic
revolution included a population explosion and an acceleration in the pace of
technological development. The transition from foraging bands to states and empires
precipitated more specialized and developed forms of religion that reflected the new
social and political environment. While bands and small tribes possess supernatural
beliefs, these beliefs do not serve to justify a central authority, justify transfer of wealth
or maintain peace between unrelated individuals. Organized religion emerged as a
means of providing social and economic stability through the following ways:
-Justifying the central authority, which in turn possessed the right to collect
taxes in return for providing social and security services.
-Bands and tribes consist of small number of related individuals. However,
states and nations are composed of many thousands of unrelated individuals.
Jared Diamond argues that organized religion served to provide a bond
between unrelated individuals who would otherwise be more prone to enmity. In
his book Guns, Germs, and Steel he argues that the leading cause of death
among hunter-gatherer societies is murder.[40]
-Religions that revolved around moralizing gods may have facilitated the rise of
large, cooperative groups of unrelated individuals.[41]
The states born out of the Neolithic revolution, such as those of Ancient Egypt and
Mesopotamia, were theocracies with chiefs, kings and emperors playing dual roles of
political and spiritual leaders.[15] Anthropologists have found that virtually all state
societies and chiefdoms from around the world justify political
power through divine authority. This suggests that political authority co-opts collective
religious belief to bolster itself.
Invention of writing
Following the neolithic revolution, the pace of technological development (cultural
evolution) intensified due to the invention of writing 5000 years ago. Symbols that
became words later on made effective communication of ideas possible. Printing,
invented only over a thousand years ago, increased the speed of communication
exponentially and became the mainspring of cultural evolution. Writing is thought to
have been first invented in either Sumeria or Ancient Egypt and was initially used for
accounting. Soon after, writing was used to record myth. The first religious texts mark
the beginning of religious history. The Pyramid Texts from ancient Egypt are one of the
oldest known religious texts in the world, dating to between 2400–2300
BCE.[42][43][44] Writing played a major role in sustaining and spreading organized
religion. In pre-literate societies, religious ideas were based on an oral tradition, the
contents of which were articulated by shamans and remained limited to the collective
memories of the society's inhabitants. With the advent of writing, information that was
not easy to remember could easily be stored in sacred texts that were maintained by a
select group (clergy). Humans could store and process large amounts of information
with writing that otherwise would have been forgotten. Writing therefore enabled
religions to develop coherent and comprehensive doctrinal systems that remained
independent of time and place.[45] Writing also brought a measure of objectivity to
human knowledge. Formulation of thoughts in words and the requirement for validation
made mutual exchange of ideas and the sifting of generally acceptable from not
acceptable ideas possible. The generally acceptable ideas became objective
knowledge reflecting the continuously evolving framework of human awareness of
reality that Karl Popper calls 'verisimilitude' – a stage on the human journey to truth.[46]
Relationship between religion and science
The relationship between religion and science has been a subject of study since
Classical antiquity, addressed by philosophers, theologians, scientists, and others.
Perspectives from different geographical regions, cultures and historical epochs are
diverse, with some characterizing the relationship as one of conflict, others describing it
as one of harmony, and others proposing little interaction. The extent to which science
and religion may attempt to understand and describe similar phenomena is sometimes
referred to as a part of the demarcation problem.
Science and religion generally pursue knowledge of the universe using different
methodologies. Science acknowledges reason, empiricism, and evidence, while
religions include revelation, faith and sacredness. Despite these differences, most
scientific and technical innovations prior to the Scientific revolution were achieved by
societies organized by religious traditions. Much of the scientific method was pioneered
first by Islamic scholars, and later by Christians. Hinduism has historically embraced
reason and empiricism, holding that science brings legitimate, but incomplete
knowledge of the world. Confucian thought has held different views of science over
time. Most Buddhists today view science as complementary to their beliefs.
Events in Europe such as the Galileo affair, associated with the Scientific revolution
and the Age of Enlightenment, led scholars such as John William Draper to postulate a
conflict thesis, holding that religion and science conflict methodologically, factually and
politically. This thesis is advanced by contemporary scientists such as Richard
Dawkins, Steven Weinberg and Carl Sagan, and proposed by many creationists. While
the conflict thesis remains popular with the public, it has lost favor among most
contemporary historians of science.[1][2][3][4]
Many theologians, philosophers and scientists in history have found no conflict
between their faith and science. Biologist Stephen Jay Gould, other scientists, and
some contemporary theologians hold that religion and science are non-overlapping
magisteria, addressing fundamentally separate forms of knowledge and aspects of life.
Scientists Francisco Ayala, Kenneth R. Miller and Francis Collins see no necessary
conflict between religion and science. Some theologians or historians of science,
including John Lennox, Thomas Berry, Brian Swimme and Ken Wilber propose an
interconnection between them.
Public acceptance of scientific facts may be influenced by religion; many in the United
States reject the idea of evolution by natural selection, especially regarding human
beings. Nevertheless, the American National Academy of Sciences has written that
"the evidence for evolution can be fully compatible with religious faith," a view officially
endorsed by many religious denominations globally.[5]
Contents
1 Perspectives
1.1 Incompatibility
1.2 Conflict thesis
1.3 Independence
1.3.1 Parallels in method
1.4 Dialogue
1.5 Cooperative
2 Bahá'í
3 Buddhism
4 Christianity
4.1 Perspectives on evolution
4.2 Reconciliation in Britain in the early 20th century
4.3 Roman Catholicism
4.4 Influence of a biblical world view on early modern science
5 Confucianism and traditional Chinese religion
6 Hinduism
7 Islam
8 Jainism
9 Perspectives from the scientific community
9.1 History
9.2 Studies on scientists' beliefs
10 Public perceptions of science
Perspectives
The kinds of interactions that might arise between science and religion have been categorized by theologian, Anglican priest, and physicist John Polkinghorne as: 1) conflict between the disciplines, 2) independence of the disciplines, 3) dialogue between the disciplines where they overlap, and 4) integration of both into one field.[6]
This typology is similar to ones used by theologians Ian Barbour[7] and John
Haught.[8] More typologies that categorize this relationship can be found among the
works of other science and religion scholars such as theologian and biochemist Arthur
Peacocke.[9]
Incompatibility
According to Jerry Coyne, views on evolution and levels of religiosity in some
countries, along with the existence of books explaining reconciliation between evolution
and religion, indicate that people have trouble believing both at the same time, thus
implying incompatibility.[10] According to Lawrence Krauss, compatibility or
incompatibility is a theological concern, not a scientific concern.[10] In Lisa Randall's
view, questions of incompatibility or otherwise are not answerable since by accepting
revelations one is abandoning rules of logic which are needed to identify if there are
indeed contradictions between holding certain beliefs.[10] Daniel Dennett holds that incompatibility exists because religion is unproblematic only up to a certain point, after which it collapses into a number of excuses for keeping certain beliefs in light of evolutionary implications.[10]
To Neil deGrasse Tyson, the central difference between the nature of science and religion is that the claims of science rely on experimental verification, while the claims of religions rely on faith, and these are irreconcilable approaches to knowing. Because of this, the two are incompatible as currently practiced, and the debate over compatibility or incompatibility will be eternal.[11][12] Philosopher and physicist Victor J. Stenger's view is that science and religion are incompatible due to conflicts between approaches of knowing and the availability of alternative plausible natural explanations for phenomena that are usually explained in religious contexts.[13] Neuroscientist and author Sam Harris views science and religion as being in competition, with religion now "losing the argument with modernity".[14] However, Harris disagrees with Jerry Coyne and Daniel Dennett's narrow view of the debate and argues that it is very easy for people to reconcile science and religion, because some things are above strict reason and because scientific expertise or domains do not necessarily spill over into religious expertise or domains; he remarks, "There simply IS no conflict between religion and science."[10]
Richard Dawkins says he is hostile to fundamentalist religion because it actively debauches the scientific enterprise. According to him, religion "subverts science and saps the intellect".[15] He believes that when science teachers attempt to expound on evolution, they face hostility from parents who are skeptical because they believe it conflicts with their religious beliefs, and that some textbooks have even had the word 'evolution' systematically removed.[16]
Others such as Francis Collins, Kenneth R. Miller, and George Coyne argue for
compatibility since they do not agree that science is incompatible with religion and vice
versa. They argue that science provides many opportunities to look for and find God in
nature and to reflect on their beliefs.[17] Kenneth Miller disagrees with Jerry Coyne's assessment and argues that since significant portions of scientists are religious and the proportion of Americans believing in evolution is much higher, both are evidently compatible.[10] Karl Giberson argues that when discussing compatibility, some scientific intellectuals often ignore the viewpoints of intellectual leaders in theology and instead argue against less informed masses, thereby defining religion by non-intellectuals and slanting the debate unjustly. He argues that leaders in science sometimes trump older scientific baggage and that leaders in theology do the same, so once theological intellectuals are taken into account, people who represent extreme positions like Ken Ham and Eugenie Scott will become irrelevant.[10]
Conflict thesis
The conflict thesis, which holds that religion and science have been in conflict
continuously throughout history, was popularized in the 19th century by John William
Draper's and Andrew Dickson White's accounts. It was in the 19th century that the relationship between science and religion became an actual formal topic of discourse; before this, no one had pitted science against religion or vice versa, although occasional complex interactions had occurred before the 19th century.[18] Most
contemporary historians of science now reject the conflict thesis in its original form and
no longer support it.[1][2][3][19] Instead, it has been superseded by subsequent
historical research which has resulted in a more nuanced understanding:[20][21]
Historian of science, Gary Ferngren, has stated "Although popular images of
controversy continue to exemplify the supposed hostility of Christianity to new scientific
theories, studies have shown that Christianity has often nurtured and encouraged
scientific endeavour, while at other times the two have co-existed without either tension
or attempts at harmonization. If Galileo and the Scopes trial come to mind as examples
of conflict, they were the exceptions rather than the rule." [22]
Most historians today have moved away from a conflict model, which is based mainly on two historical episodes (Galileo and Darwin), in favor of a "complexity" model, because religious figures were on both sides of each dispute and no party involved had the overall aim of discrediting religion.[23]
An often cited example of conflict is the Galileo affair, in which interpretations of the Bible were used to attack the ideas of Copernicus on heliocentrism. In 1616 Galileo went to Rome to try to persuade Catholic Church authorities not to ban Copernicus' ideas. In
the end, a decree of the Congregation of the Index was issued, declaring that the ideas
that the Sun stood still and that the Earth moved were "false" and "altogether contrary
to Holy Scripture", and suspending Copernicus's De Revolutionibus until it could be
corrected. Galileo was found "vehemently suspect of heresy", namely of having held the opinions that the Sun lies motionless at the centre of the universe and that the Earth is not at its centre and moves. He was required to "abjure, curse and detest" those
opinions.[24] However, before all this, Pope Urban VIII had personally asked Galileo to
give arguments for and against heliocentrism in a book, and to be careful not to
advocate heliocentrism as physically proven yet. Pope Urban VIII asked that his own
views on the matter be included in Galileo's book. Only the latter was fulfilled by
Galileo. Whether unknowingly or deliberately, Simplicio, the defender of the
Aristotelian/Ptolemaic geocentric view in Dialogue Concerning the Two Chief World
Systems, was often portrayed as an unlearned fool who lacked mathematical training.
Although the preface of his book claims that the character is named after a famous
Aristotelian philosopher (Simplicius in Latin, Simplicio in Italian), the name "Simplicio"
in Italian also has the connotation of "simpleton".[25] Unfortunately for his relationship
with the Pope, Galileo put the words of Urban VIII into the mouth of Simplicio. Most
historians agree Galileo did not act out of malice and felt blindsided by the reaction to
his book.[26] However, the Pope did not take the suspected public ridicule lightly, nor the advocacy of Copernicanism as physical fact. Galileo had alienated one of his biggest and most
powerful supporters, the Pope, and was called to Rome to defend his writings.[27]
Independence
A modern view, described by Stephen Jay Gould as "non-overlapping magisteria"
(NOMA), is that science and religion deal with fundamentally separate aspects of
human experience and so, when each stays within its own domain, they co-exist
peacefully.[28] While Gould spoke of independence from the perspective of science,
W. T. Stace viewed independence from the perspective of the philosophy of religion.
Stace felt that science and religion, when each is viewed in its own domain, are both
consistent and complete.[29]
The National Academy of Sciences supports the view that science and religion are
independent.[30]
Science and religion are based on different aspects of human experience. In
science, explanations must be based on evidence drawn from examining the
natural world. Scientifically based observations or experiments that conflict with
an explanation eventually must lead to modification or even abandonment of
that explanation. Religious faith, in contrast, does not depend only on empirical
evidence, is not necessarily modified in the face of conflicting evidence, and
typically involves supernatural forces or entities. Because they are not a part of
nature, supernatural entities cannot be investigated by science. In this sense,
science and religion are separate and address aspects of human understanding
in different ways. Attempts to pit science and religion against each other create
controversy where none needs to exist.[30]
According to Archbishop John Habgood, both science and religion represent distinct ways of approaching experience, and these differences are sources of debate. He views science as descriptive and religion as prescriptive. He stated that if science and mathematics concentrate on what the world ought to be, in the way that religion does, it may lead to improperly ascribing properties to the natural world, as happened among the followers of Pythagoras in the sixth century B.C.[31] In contrast,
proponents of a normative moral science take issue with the idea that science has no
way of guiding "oughts". Habgood also stated that he believed that the reverse
situation, where religion attempts to be descriptive, can also lead to inappropriately
assigning properties to the natural world. A notable example is the now defunct belief in the Ptolemaic planetary model, which held sway until changes in scientific and religious
thinking were brought about by Galileo and proponents of his views.[31]
Parallels in method
According to Ian Barbour, Thomas S. Kuhn asserted that science is made up of
paradigms that arise from cultural traditions, which is similar to the secular perspective
on religion.[32]
Michael Polanyi asserted that it is merely a commitment to universality that protects
against subjectivity and has nothing at all to do with personal detachment as found in
many conceptions of the scientific method. Polanyi further asserted that all knowledge
is personal and therefore the scientist must be performing a very personal if not
necessarily subjective role when doing science.[32] Polanyi added that the scientist
often merely follows intuitions of "intellectual beauty, symmetry, and 'empirical
agreement'".[32] Polanyi held that science requires moral commitments similar to those
found in religion.[32]
Two physicists, Charles A. Coulson and Harold K. Schilling, both claimed that "the
methods of science and religion have much in common."[32] Schilling asserted that
both fields—science and religion—have "a threefold structure—of experience,
theoretical interpretation, and practical application."[32] Coulson asserted that science,
like religion, "advances by creative imagination" and not by "mere collecting of facts,"
while stating that religion should and does "involve critical reflection on experience not
unlike that which goes on in science."[32] Religious language and scientific language
also show parallels (cf. Rhetoric of science).
Dialogue
The religion and science community consists of those scholars who involve themselves with what has been called the "religion-and-science dialogue" or the "religion-and-science field."[33][34] The community belongs to neither the scientific nor the religious community, but is said to be a third overlapping community of interested and involved scientists, priests, clergymen, theologians, and engaged non-professionals.[34][not in citation given] Institutions interested in the intersection between science and religion include the Center for Theology and the
Natural Sciences, the Institute on Religion in an Age of Science, the Ian Ramsey
Centre,[35] and the Faraday Institute. Journals addressing the relationship between
science and religion include Theology and Science and Zygon: Journal of Religion &
Science. Eugenie Scott has written that the "science and religion" movement is, overall,
composed mainly of theists who have a healthy respect for science and may be
beneficial to the public understanding of science. She contends that the "Christian
scholarship" movement is not a problem for science, but that the "Theistic science"
movement, which proposes abandoning methodological materialism, does cause
problems in understanding of the nature of science.[36]
The modern dialogue between religion and science is rooted in Ian Barbour's 1966
book Issues in Science and Religion.[37] Since that time it has grown into a serious
academic field, with academic chairs in the subject area, and two dedicated academic
journals, Zygon: Journal of Religion & Science and Theology and Science.[37] Articles
are also sometimes found in mainstream science journals such as American Journal of
Physics[38] and Science.[39][40]
Philosopher Alvin Plantinga has argued that there is superficial conflict but deep
concord between science and religion, and that there is deep conflict between science
and naturalism.[41] Plantinga, in his book Where the Conflict Really Lies: Science,
Religion, and Naturalism, heavily contests the linkage of naturalism with science, as
conceived by Richard Dawkins, Daniel Dennett and like-minded thinkers; while Daniel
Dennett thinks that Plantinga stretches science to an unacceptable extent.[42]
Philosopher Maarten Boudry, in reviewing the book, has commented that Plantinga resorts to creationism and fails to "stave off the conflict between theism and evolution."[43]
Cognitive scientist Justin L. Barrett, by contrast, reviews the same book and writes that
"those most needing to hear Plantinga’s message may fail to give it a fair hearing for
rhetorical rather than analytical reasons."[44]
Cooperative
As a general view, this holds that while the interactions among science, theology, politics, and social and economic concerns are complex, the productive engagements between science and religion throughout history should be duly stressed as the norm. Scientific and theological perspectives often coexist peacefully. Christians and some non-Christian religions have historically integrated well with scientific ideas, as in the
ancient Egyptian technological mastery applied to monotheistic ends, the flourishing of
logic and mathematics under Hinduism and Buddhism, and the scientific advances
made by Muslim scholars during the Ottoman empire. Even many 19th-century
Christian communities welcomed scientists who claimed that science was not at all
concerned with discovering the ultimate nature of reality.[31] According to Lawrence M.
Principe, the Johns Hopkins University Drew Professor of the Humanities, from a
historical perspective this points out that much of the current-day clashes occur
between limited extremists—both religious and scientistic fundamentalists—over a very
few topics, and that the movement of ideas back and forth between scientific and
theological thought has been more usual.[45] To Principe, this perspective would point to the fundamentally common respect for written learning in the religious traditions of rabbinical literature, Christian theology, and the Islamic Golden Age, including a transmission of the classics from Greek to Islamic to Christian traditions which helped spark the Renaissance. Religions have also played a key part in the development of modern universities and libraries; centers of learning and scholarship were coincident with religious institutions, whether pagan, Muslim, or Christian.[46]
Bahá'í
A fundamental principle of the Bahá'í Faith is the harmony of religion and science.
Bahá'í scripture asserts that true science and true religion can never be in conflict.
`Abdu'l-Bahá, the son of the founder of the religion, stated that religion without science
is superstition and that science without religion is materialism. He also admonished that
true religion must conform to the conclusions of science.[47][48][49]
Buddhism
Buddhism and science have been regarded as compatible by numerous sources.[50] Some philosophic and psychological teachings within Buddhism share commonalities with modern Western scientific and philosophic thought. For example, Buddhism encourages the impartial investigation of nature (an activity referred to as Dhamma-Vicaya in the Pali Canon), the principal object of study being oneself. A reliance on causality is among the philosophical principles shared between Buddhism and science; however, Buddhism does not focus on materialism.[51]
Tenzin Gyatso, the 14th Dalai Lama, spends a great deal of time with scientists. In his book The Universe in a Single Atom, he wrote, "My confidence in venturing into science lies
in my basic belief that as in science, so in Buddhism, understanding the nature of
reality is pursued by means of critical investigation." and "If scientific analysis were
conclusively to demonstrate certain claims in Buddhism to be false," he says, "then we
must accept the findings of science and abandon those claims."[52][53]
Christianity
Most sources of knowledge available to early Christians were connected to pagan
world-views. There were various opinions on how Christianity should regard pagan
learning, which included its ideas about nature. For instance, among early Christian
teachers, Tertullian (c. 160–220) held a generally negative opinion of Greek
philosophy, while Origen (c. 185–254) regarded it much more favorably and required
his students to read nearly every work available to them.[54]
Earlier attempts at reconciliation of Christianity with Newtonian mechanics appear quite
different from later attempts at reconciliation with the newer scientific ideas of evolution
or relativity.[31] Many early interpretations of evolution polarized themselves around a
struggle for existence. These ideas were significantly countered by later findings of
universal patterns of biological cooperation. According to John Habgood, all man really
knows here is that the universe seems to be a mix of good and evil, beauty and pain,
and that suffering may somehow be part of the process of creation. Habgood holds that
Christians should not be surprised that suffering may be used creatively by God, given
their faith in the symbol of the Cross.[31] Robert John Russell has examined
consonance and dissonance between modern physics, evolutionary biology, and
Christian theology.[55][56]
Christian philosophers Augustine of Hippo (354–430) and Thomas Aquinas[57] held that scriptures can have multiple interpretations in certain areas where the matters were far beyond their reach, and that one should therefore leave room for future findings to shed light on the meanings. The "Handmaiden" tradition, which saw secular studies of the universe as a very important and helpful part of arriving at a better understanding of scripture, was adopted throughout Christian history from early on.[58] The sense that God created the world as a self-operating system is also what motivated many Christians throughout the Middle Ages to investigate nature.[59]
A degree of concord between science
and religion can be seen in religious
belief and empirical science. The belief
that God created the world and
therefore humans, can lead to the view
that he arranged for humans to know
the world. This is underwritten by the
doctrine of imago dei. In the words of
Thomas Aquinas, "Since human beings
are said to be in the image of God in
virtue of their having a nature that
includes an intellect, such a nature is
most in the image of God in virtue of
being most able to imitate God".[60]
During the Enlightenment, a period
"characterized by dramatic revolutions
in science" and the rise of Protestant
challenges to the authority of the
Catholic Church via individual liberty,
the authority of Christian scriptures
became strongly challenged. As
science advanced, acceptance of a
literal version of the Bible became
"increasingly untenable" and some in
that period presented ways of
interpreting scripture according to its
spirit on its authority and truth.[61]
Many well-known historical figures who
influenced Western science considered
themselves Christian such as Copernicus,[62] Galileo,[63] Kepler,[64] Newton[65] and
Boyle,[66] although Newton would perhaps better fit the term "heretic".[67]
Perspectives on evolution
In recent history, the theory of evolution has been at the center of some controversy
between Christianity and science. Christians who accept a literal interpretation of the
biblical creation account find incompatibility between Darwinian evolution and their own
interpretation of the Christian faith.[68] Creation science or scientific creationism[69] is
a branch of creationism that attempts to provide scientific support for the Genesis
creation narrative in the Book of Genesis and disprove generally accepted scientific
facts, theories and scientific paradigms about the history of the Earth, cosmology and
biological evolution.[70][71] It began in the 1960s as a fundamentalist Christian effort in
the United States to prove Biblical inerrancy and nullify the scientific evidence for
evolution.[72] It has since developed a sizable religious following in the United States,
with creation science ministries branching worldwide.[73] In 1925, Tennessee passed a statute called the Butler Act, which prohibited the teaching of the theory of evolution in all schools in the state. Later that year, a similar law was passed in Mississippi, and likewise in Arkansas in 1927. In 1968, these "anti-monkey" laws were struck down by the Supreme Court of the United States as unconstitutional, "because they established a religious doctrine violating both the First and Fourteenth Amendments to the Constitution".[74]
Most scientists have rejected creation science for multiple reasons such as its claims
not referring to natural causes and not being testable. In 1987, the United States
Supreme Court ruled that creationism is religion, not science, and cannot be advocated
in public school classrooms.[75]
Another perspective on evolution is theistic evolution, which reconciles religious beliefs with scientific findings on the age of the Earth and the process of evolution. It includes a range of beliefs, including views described as evolutionary
creationism and some forms of old earth creationism, all of which embrace the findings
of modern science and uphold classical religious teachings about God and creation in
Christian context.[76]
Reconciliation in Britain in the early 20th century
In Reconciling Science and Religion: The Debate in Early-twentieth-century Britain,
historian of biology Peter J. Bowler argues that in contrast to the conflicts between
science and religion in the U.S. in the 1920s (most famously the Scopes Trial), during
this period Great Britain experienced a concerted effort at reconciliation, championed
by intellectually conservative scientists, supported by liberal theologians but opposed
by younger scientists and secularists and conservative Christians. These attempts at
reconciliation fell apart in the 1930s due to increased social tensions, moves towards
neo-orthodox theology and the acceptance of the modern evolutionary synthesis.[77]
In the 20th century, several ecumenical organizations promoting a harmony between
science and Christianity were founded, most notably the American Scientific Affiliation,
The Biologos Foundation, Christians in Science, The Society of Ordained Scientists,
and The Veritas Forum.[78]
Roman Catholicism
While refined and clarified over the centuries, the Roman Catholic position on the
relationship between science and religion is one of harmony, and has maintained the
teaching of natural law as set forth by Thomas Aquinas. For example, regarding
scientific study such as that of evolution, the church's unofficial position is an example
of theistic evolution, stating that faith and scientific findings regarding human evolution
are not in conflict, though humans are regarded as a special creation, and that the
existence of God is required to explain both monogenism and the spiritual component
of human origins. Catholic schools have included all manners of scientific study in their
curriculum for many centuries.[79]
Galileo once stated "The intention of the Holy Spirit is to teach us how to go to heaven,
not how the heavens go."[80] In 1981 John Paul II, then pope of the Roman Catholic
Church, spoke of the relationship this way: "The Bible itself speaks to us of the origin of
the universe and its make-up, not in order to provide us with a scientific treatise, but in
order to state the correct relationships of man with God and with the universe. Sacred
Scripture wishes simply to declare that the world was created by God, and in order to
teach this truth it expresses itself in the terms of the cosmology in use at the time of the
writer".[81]
Influence of a biblical world view on early modern science
According to Andrew Dickson White's 19th-century A History of the Warfare of Science with Theology in Christendom, a biblical world view negatively affected the progress of science through time. Few early Christians were willing to accept early scientific discoveries and new ideas by the Greeks that contradicted scripture, and many Christians through the ages rejected the sphericity of the earth due to their interpretations of scripture.[citation needed] White also argues that immediately following the Reformation matters were even worse. The interpretations of
Scripture by Luther and Calvin became as sacred to their followers as the Scripture
itself. For instance, when Georg Calixtus ventured, in interpreting the Psalms, to
question the accepted belief that "the waters above the heavens" were contained in a
vast receptacle upheld by a solid vault, he was bitterly denounced as heretical.[82]
Today, much of the scholarship on which the conflict thesis was originally based is considered to be inaccurate. For instance, the claim that early Christians rejected scientific findings by the Greco-Romans is false, since the "handmaiden" view of learning secular studies to shed light on theology was widely adopted throughout the early medieval period and beyond by theologians (such as Augustine), which ultimately helped preserve interest in knowledge about nature through time.[83] Also, the claim
that people of the Middle Ages widely believed that the Earth was flat was first
propagated in the same period that originated the conflict thesis[84] and is still very
common in popular culture. Modern scholars regard this claim as mistaken, as the
contemporary historians of science David C. Lindberg and Ronald L. Numbers write:
"there was scarcely a Christian scholar of the Middle Ages who did not acknowledge
[earth's] sphericity and even know its approximate circumference."[84][85] From the fall
of Rome to the time of Columbus, all major scholars and many vernacular writers
interested in the physical shape of the earth held a spherical view with the exception of
Lactantius and Cosmas.[86]
H. Floris Cohen argued for a biblical Protestant (though not excluding Catholic) influence on the early development of modern science.[87] He presented Dutch historian R. Hooykaas' argument that a biblical world-view holds all the necessary antidotes for the hubris of Greek rationalism: a respect for manual labour, leading to more experimentation and empiricism, and a supreme God that left nature open to emulation and manipulation.[87] This supports the idea that early modern science rose due to a combination of Greek and biblical thought.[88][89]
Oxford historian Peter Harrison is another who has argued that a biblical worldview
was significant for the development of modern science. Harrison contends that
Protestant approaches to the book of scripture had significant, if largely unintended,
consequences for the interpretation of the book of nature.[90][page needed] Harrison
has also suggested that literal readings of the Genesis narratives of the Creation and
Fall motivated and legitimated scientific activity in seventeenth-century England. For
many of its seventeenth-century practitioners, science was imagined to be a means of
restoring a human dominion over nature that had been lost as a consequence of the
Fall.[91][page needed]
Historian and professor of religion Eugene M. Klaaren holds that "a belief in divine
creation" was central to an emergence of science in seventeenth-century England. The
philosopher Michael Foster has published analytical philosophy connecting Christian
doctrines of creation with empiricism. Historian William B. Ashworth has argued against
the historical notion of distinctive mind-sets and the idea of Catholic and Protestant
sciences.[92] Historians James R. Jacob and Margaret C. Jacob have argued for a
linkage between seventeenth century Anglican intellectual transformations and
influential English scientists (e.g., Robert Boyle and Isaac Newton).[93] John
Dillenberger and Christopher B. Kaiser have written theological surveys, which also
cover additional interactions occurring in the 18th, 19th, and 20th centuries.[94][95]
Philosopher of religion Richard Jones has written a philosophical critique of the "dependency thesis", which assumes that modern science emerged from Christian sources and doctrines. Though he acknowledges that modern science emerged in a religious framework, that Christianity greatly elevated the importance of science by sanctioning and religiously legitimizing it in the medieval period, and that Christianity created a favorable social context for it to grow, he argues that direct Christian beliefs or doctrines were not the primary source of scientific pursuits by natural philosophers, nor was Christianity, in and of itself, exclusively or directly necessary for developing or practicing modern science.[23]
Oxford University historian and theologian John Hedley Brooke wrote that "when
natural philosophers referred to laws of nature, they were not glibly choosing that
metaphor. Laws were the result of legislation by an intelligent deity. Thus the
philosopher René Descartes (1596-1650) insisted that he was discovering the "laws
that God has put into nature." Later Newton would declare that the regulation of the
solar system presupposed the "counsel and dominion of an intelligent and powerful
Being."[96] Historian Ronald L. Numbers stated that this thesis "received a boost" from
mathematician and philosopher Alfred North Whitehead's Science and the Modern
World (1925). Numbers has also argued, "Despite the manifest shortcomings of the
claim that Christianity gave birth to science—most glaringly, it ignores or minimizes the
contributions of ancient Greeks and medieval Muslims—it too, refuses to succumb to
the death it deserves."[97] The sociologist Rodney Stark of Baylor University, a
Southern Baptist institution, argued in contrast that "Christian theology was essential
for the rise of science."[98]
Confucianism and traditional Chinese religion
The historical process of Confucianism has largely been antipathetic towards scientific discovery. However, the religio-philosophical system itself is more neutral on the subject than such an analysis might suggest. In his writings On Heaven, Xunzi espoused a proto-scientific world view.[99] During the Han Synthesis, however, the more anti-empirical Mencius was favored and combined with Daoist skepticism regarding the
nature of reality. Likewise, during the Medieval period, Zhu Xi argued against technical
investigation and specialization proposed by Chen Liang.[100] After contact with the
West, scholars such as Wang Fuzhi would rely on Buddhist/Daoist skepticism to
denounce all science as a subjective pursuit limited by humanity's fundamental
ignorance of the true nature of the world.[101] After the May Fourth Movement,
attempts to modernize Confucianism and reconcile it with scientific understanding were
attempted by many scholars including Feng Youlan and Xiong Shili. Given the close
relationship that Confucianism shares with Buddhism, many of the same arguments
used to reconcile Buddhism with science also readily translate to Confucianism.
However, modern scholars have also attempted to define the relationship between
science and Confucianism on Confucianism's own terms and the results have usually
led to the conclusion that Confucianism and science are fundamentally compatible.[102]
Hinduism
In Hinduism, the dividing line between objective sciences and spiritual knowledge (adhyatma vidya) is a linguistic paradox.[103] Hindu scholastic activities and ancient Indian scientific advancements were so interconnected that many Hindu scriptures are also ancient scientific manuals and vice versa. Hindu sages maintained that logical argument and rational proof using Nyaya is the way to obtain correct knowledge.[103] From a Hindu perspective, modern science is a legitimate but incomplete step towards knowing and understanding reality. Hinduism holds that science offers only a limited view of reality, but that all it offers is right and correct.[104] Hinduism offers methods to correct and transform itself in the course of time. For instance, Hindu views on the development of life include a range of viewpoints regarding evolution, creationism, and the origin of life within the traditions of Hinduism. It has been suggested that Wallace-Darwinian evolutionary thought was a part of Hindu thought centuries before modern times.[105]
Samkhya, the oldest school of Hindu philosophy, prescribes a particular method to analyze knowledge. According to Samkhya, all knowledge is possible through three means of valid knowledge[106][107]:
1 Pratyaksa or Dristam – direct sense perception,
2 Anumāna – logical inference, and
3 Śabda or Āptavacana – verbal testimony.
Nyaya, the Hindu school of logic, accepts all three of these means and in addition accepts one more: Upamāna (comparison).
The accounts of the emergence of life within the universe vary in description, but
classically the deity called Brahma, from a Trimurti of three deities also including
Vishnu and Shiva, is described as performing the act of 'creation', or more specifically
of 'propagating life within the universe' with the other two deities being responsible for
'preservation' and 'destruction' (of the universe) respectively.[108] In this respect some
Hindu schools do not treat the scriptural creation myth literally and often the creation
stories themselves do not go into specific detail, thus leaving open the possibility of
incorporating at least some theories in support of evolution. Some Hindus find support
for, or foreshadowing of evolutionary ideas in scriptures, namely the Vedas.[109]
The sequence of the incarnations of Vishnu (the Dashavatara) is almost identical to the scientific explanation of the sequence of biological evolution of man and animals.[110][111][112][113] The sequence of avatars proceeds from an aquatic organism (Matsya), to an amphibian (Kurma), to a land animal (Varaha), to a humanoid (Narasimha), to a dwarf human (Vamana), to five forms of well-developed human beings (Parashurama, Rama, Balarama/Buddha, Krishna, Kalki) who showcase an increasing form of complexity (axe-man, king, plougher/sage, wise statesman, mighty warrior).[110][113] In India, the home country of Hindus, educated Hindus widely accept the theory of biological evolution. In a survey of 909 people, 77% of respondents in India agreed with Charles Darwin's theory of evolution, and 85 per cent of God-believing people said they believe in evolution as well.[114][115] Although the International Society for Krishna Consciousness (ISKCON) does not reject Darwin's theory outright, it regards Hindu creationism as the ideal account.
According to the Vedas, another explanation for creation is based on the five elements: earth, water, fire, air, and aether.[116][117]
Islam
From an Islamic standpoint, science, the study of nature, is considered to be linked to
the concept of Tawhid (the Oneness of God), as are all other branches of
knowledge.[118] In Islam, nature is not seen as a separate entity, but rather as an
integral part of Islam's holistic outlook on God, humanity, and the world. The Islamic
view of science and nature is continuous with that of religion and God. This link implies
a sacred aspect to the pursuit of scientific knowledge by Muslims, as nature itself is
viewed in the Qur'an as a compilation of signs pointing to the Divine.[119] It was with
this understanding that science was studied and understood in Islamic civilizations,
specifically during the eighth to sixteenth centuries, prior to the colonization of the
Muslim world.[120]
According to most historians, the modern scientific method was first developed by
Islamic scientists, pioneered by Ibn Al-Haytham, known to the west as "Alhazen".[121]
Robert Briffault, in The Making of Humanity, asserts that the very existence of science,
as it is understood in the modern sense, is rooted in the scientific thought and
knowledge that emerged in Islamic civilizations during this time.[122]
With the decline of Islamic Civilizations in the late Middle Ages and the rise of Europe,
the Islamic scientific tradition shifted into a new period. Institutions that had existed for
centuries in the Muslim world looked to the new scientific institutions of European
powers.[citation needed] This changed the practice of science in the Muslim world, as
Islamic scientists had to confront the western approach to scientific learning, which was
based on a different philosophy of nature.[118] From the time of this initial upheaval of
the Islamic scientific tradition to the present day, Muslim scientists and scholars have
developed a spectrum of viewpoints on the place of scientific learning within the
context of Islam, none of which are universally accepted or practiced.[123] However,
most maintain the view that the acquisition of knowledge and scientific pursuit in
general is not in disaccord with Islamic thought and religious belief.[118][123]
Jainism
Jainism does not support belief in a creator deity. According to Jain doctrine, the universe and its constituents (soul, matter, space, time, and principles of motion) have always existed (a static universe similar to that of Epicureanism and the steady state cosmological model). All the constituents and actions are governed by universal natural laws. It is not possible to create matter out of nothing, and hence the sum total of matter in the universe remains the same (similar to the law of conservation of mass). Similarly, the soul of each living being is unique and uncreated and has existed since beginningless time.[a][124]
The Jain theory of causation holds that a cause and its effect are always identical in
nature and hence a conscious and immaterial entity like God cannot create a material
entity like the universe. Furthermore, according to the Jain concept of divinity, any soul
who destroys its karmas and desires, achieves liberation. A soul who destroys all its
passions and desires has no desire to interfere in the working of the universe. Moral
rewards and sufferings are not the work of a divine being, but a result of an innate
moral order in the cosmos; a self-regulating mechanism whereby the individual reaps
the fruits of his own actions through the workings of the karmas.
Through the ages, Jain philosophers have adamantly rejected and opposed the concept of a creator and omnipotent God, and this has resulted in Jainism being labeled as nastika darsana, or atheist philosophy, by rival religious philosophies. The theme of non-creationism and the absence of an omnipotent God and divine grace runs strongly through all the philosophical dimensions of Jainism, including its cosmology, karma, moksa and its moral code of conduct. Jainism asserts that a religious and virtuous life is possible without the idea of a creator god.[125]
Perspectives from the scientific community
History
Further information: List of Jewish scientists and philosophers, List of Christian thinkers
in science, List of Muslim scientists, and List of atheists (science and technology)
In the 17th century, founders of the Royal Society largely held conventional and
orthodox religious views, and a number of them were prominent Churchmen.[126]
While theological issues that had the potential to be divisive were typically excluded
from formal discussions of the early Society, many of its fellows nonetheless believed
that their scientific activities provided support for traditional religious belief.[127]
Clerical involvement in the Royal Society remained high until the mid-nineteenth
century, when science became more professionalised.[128]
Albert Einstein supported the compatibility of some interpretations of religion with
science. In "Science, Philosophy and Religion, A Symposium" published by the
Conference on Science, Philosophy and Religion in Their Relation to the Democratic
Way of Life, Inc., New York in 1941, Einstein stated:
Accordingly, a religious person is devout in the sense that he has no doubt of
the significance and loftiness of those superpersonal objects and goals which
neither require nor are capable of rational foundation. They exist with the same
necessity and matter-of-factness as he himself. In this sense religion is the age-old endeavor of mankind to become clearly and completely conscious of these
values and goals and constantly to strengthen and extend their effect. If one
conceives of religion and science according to these definitions then a conflict
between them appears impossible. For science can only ascertain what is, but
not what should be, and outside of its domain value judgments of all kinds
remain necessary. Religion, on the other hand, deals only with evaluations of
human thought and action: it cannot justifiably speak of facts and relationships
between facts. According to this interpretation the well-known conflicts between
religion and science in the past must all be ascribed to a misapprehension of
the situation which has been described.[129]
Einstein thus expresses views of ethical non-naturalism (contrasted to ethical
naturalism).
Prominent modern scientists who are atheists include evolutionary biologist Richard Dawkins and Nobel Prize-winning physicist Steven Weinberg. Prominent scientists advocating religious belief include Nobel Prize-winning physicist and United Church of Christ member Charles Townes; evangelical Christian and past head of the Human Genome Project Francis Collins; and climatologist John T. Houghton.[39]
Studies on scientists' beliefs
Many studies have been conducted in the United States and have generally found that scientists are less likely to believe in God than the rest of the population. Precise definitions and statistics vary, but generally about one-third of scientists are atheists, one-third agnostic, and one-third have some belief in God (although some might be deistic, for example).[39][130][131] This contrasts with the roughly three-quarters of the general population of the United States who believe in some God. Belief also varies slightly by field. Two surveys of physicists, geoscientists, biologists, mathematicians, and chemists have noted that, among those specializing in these fields, physicists had the lowest percentage of belief in God (29%) while chemists had the highest (41%).[130][132]
In 1916, 1,000 leading American scientists were randomly chosen from American Men
of Science and 41.8% believed God existed, 41.5% disbelieved, and 16.7% had
doubts/did not know; however when the study was replicated 80 years later using
American Men and Women of Science in 1996, results were very much the same with
39.3% believing God exists, 45.3% disbelieved, and 14.5% had doubts/did not
know.[39][130] In the same 1996 survey, among scientists in the fields of biology, mathematics, and physics/astronomy, belief in a god that is "in intellectual and affective communication with humankind" was most popular among mathematicians (about 45%) and least popular among physicists (about 22%). In total, in terms of belief
toward a personal god and personal immortality, about 60% of United States scientists
in these fields expressed either disbelief or agnosticism and about 40% expressed
belief.[130] This compared with 58% in 1914 and 67% in 1933.[citation needed]
Among members of the National Academy of Sciences, only 7.0% expressed personal
belief, while 72.2% expressed disbelief and another 20.8% were agnostic concerning
the existence of a personal god who answers prayer.[133]
A survey conducted between 2005 and 2007 by Elaine Howard Ecklund of University at
Buffalo, The State University of New York on 1,646 natural and social science
professors at 21 elite US research universities found that, in terms of belief in God or a
higher power, more than 60% expressed either disbelief or agnosticism and more than
30% expressed belief. More specifically, nearly 34% answered "I do not believe in
God" and about 30% answered "I do not know if there is a God and there is no way to
find out." [134] In the same study, 28% said they believed in God and 8% believed in a
higher power that was not God.[135] Ecklund stated that scientists were often able to
consider themselves spiritual without religion or belief in god.[136] Ecklund and
Scheitle concluded, from their study, that the individuals from non-religious
backgrounds disproportionately had self-selected into scientific professions and that
the assumption that becoming a scientist necessarily leads to loss of religion is
untenable since the study did not strongly support the idea that scientists had dropped
religious identities due to their scientific training.[137] Instead, factors such as
upbringing, age, and family size were significant influences on religious identification
since those who had religious upbringing were more likely to be religious and those
who had a non-religious upbringing were more likely to not be religious.[134][137] The
authors also found little difference in religiosity between social and natural
scientists.[138]
Farr Curlin, a University of Chicago Instructor in Medicine and a member of the
MacLean Center for Clinical Medical Ethics, noted in a study that doctors tend to be
science-minded religious people. He helped author a study that "found that 76 percent
of doctors believe in God and 59 percent believe in some sort of afterlife." and "90
percent of doctors in the United States attend religious services at least occasionally,
compared to 81 percent of all adults." He reasoned, "The responsibility to care for
those who are suffering and the rewards of helping those in need resonate throughout
most religious traditions."[139]
Another study conducted by the Pew Research Center found that members of the
American Association for the Advancement of Science (AAAS) were "much less
religious than the general public," with 51% believing in some form of deity or higher
power. Specifically, 33% of those polled believe in God, 18% believe in a universal
spirit or higher power, and 41% did not believe in either God or a higher power.[140]
48% say they have a religious affiliation, equal to the number who say they are not
affiliated with any religious tradition. 17% were atheists, 11% were agnostics, 20%
were nothing in particular, 8% were Jewish, 10% were Catholic, 16% were Protestant,
4% were Evangelical, 10% were other religion. The survey also found younger
scientists to be "substantially more likely than their older counterparts to say they
believe in God". Among the surveyed fields, chemists were the most likely to say they
believe in God.[132]
Physicians in the United States, by contrast, are much more religious than scientists,
with 76% stating a belief in God.[139]
The religious beliefs of US professors were recently examined using a nationally representative sample of more than 1,400 professors. The researchers found that in the social sciences: 23.4% did not believe in God, 16% did not know if God existed, 42.5%
believed God existed, and 16% believed in a higher power. Out of the natural sciences:
19.5% did not believe in God, 32.9% did not know if God existed, 43.9% believed God
existed, and 3.7% believed in a higher power.[141]
In terms of perceptions, most social and natural scientists from 21 American elite
universities did not perceive conflict between science and religion, while 36.6% did.
However, in the study, scientists who had experienced limited exposure to religion
tended to perceive conflict.[142] In the same study they found that nearly one in five
atheist scientists who are parents (17%) are part of religious congregations and have
attended a religious service more than once in the past year. Some of the reasons for
doing so are their scientific identity (wishing to expose their children to all sources of
knowledge so they can make up their own minds), spousal influence, and desire for
community.[143]
Public perceptions of science
According to a 2007 poll by the Pew Forum, "while large majorities of Americans
respect science and scientists, they are not always willing to accept scientific findings
that squarely contradict their religious beliefs." [144] The Pew Forum states that
specific factual disagreements are "not common today", though 40% to 50% of
Americans do not accept the evolution of humans and other living things, with the
"strongest opposition" coming from evangelical Christians at 65% saying life did not
evolve.[144] 51% of the population believes humans and other living things evolved:
26% through natural selection only, 21% somehow guided, 4% don't know.[144] In the
U.S., biological evolution is the only concrete example of conflict where a significant
portion of the American public denies scientific consensus for religious
reasons.[144][145] Among advanced industrialized nations, the United States is the most religious.[144]
Creationism is not an exclusively American phenomenon. A poll of adult Europeans revealed that only 40% believed in naturalistic evolution, 21% in theistic evolution, 20% in special creation, and 19% were undecided, with the highest concentrations of young earth creationists in Switzerland (21%), Austria (20.4%), and Germany (18.1%).[146] Other countries such as the Netherlands, Britain, and Australia have experienced growth in such views as well.[146]
Research on perceptions of science among the American public concludes that most religious groups see no general epistemological conflict with science and that they have no differences with nonreligious groups in their propensity to seek out scientific knowledge, although there may be subtle epistemic or moral conflicts when scientists make counterclaims to religious tenets.[147][148] The Pew Research Center reports similar findings and also notes that the majority of Americans (80-90%) show strong support for scientific research, agree that science makes society and individuals' lives better, and that 8 in 10 Americans would be happy if their children were to become
scientists.[149] Even strict creationists tend to have very favorable views on
science.[145] A study on a national sample of US college students examined whether
these students viewed the science / religion relationship as reflecting primarily conflict,
collaboration, or independence. The study concluded that the majority of
undergraduates in both the natural and social sciences do not see conflict between
science and religion. Another finding in the study was that it is more likely for students
to move away from a conflict perspective to an independence or collaboration
perspective than towards a conflict view.[150]
In the US, people who had no religious affiliation were no more likely than the religious
population to have New Age beliefs and practices.[151]
A study conducted on adolescents from Christian schools in Northern Ireland noted a
positive relationship between attitudes towards Christianity and science once attitudes
towards scientism and creationism were accounted for.[152]
Cross-national studies, which have pooled data on religion and science from 1981 to 2001, have noted that countries with high religiosity also have stronger faith in science,
while less religious countries have more skepticism of the impact of science and
technology.[153] The United States is noted there as distinctive because of greater
faith in both God and scientific progress. Other research cites the National Science
Foundation's finding that America has more favorable public attitudes towards science
than Europe, Russia, and Japan despite differences in levels of religiosity in these
cultures.[145]
Spirituality
The term spirituality lacks a definitive definition,[1][2] although social scientists have
defined spirituality as the search for "the sacred," where "the sacred" is broadly defined
as that which is set apart from the ordinary and worthy of veneration.[3]
The use of the term "spirituality" has changed throughout the ages.[4] In modern times,
spirituality is often separated from Abrahamic religions,[5] and connotes a blend of
humanistic psychology with mystical and esoteric traditions and eastern religions aimed
at personal well-being and personal development.[6] The notion of "spiritual
experience" plays an important role in modern spirituality, but has a relatively recent
origin.[7]
Contents
1 Definition
2 Etymology
3 Development of the meaning of spirituality
3.1 Classical, medieval and early modern periods
3.2 Modern spirituality
3.2.1 Transcendentalism and Unitarian Universalism
3.2.2 Neo-Vedanta
3.2.3 Theosophy, Anthroposophy, and the Perennial Philosophy
3.2.4 "Spiritual but not religious"
4 Traditional spirituality
4.1 Abrahamic faiths
4.1.1 Judaism
4.1.2 Christianity
4.1.3 Islam
4.1.3.1 Five pillars
4.1.3.2 Sufism
4.1.3.3 Jihad
4.2 Asian traditions
4.2.1 Buddhism
4.2.2 Hinduism
4.2.2.1 Four paths
4.2.2.2 Schools and spirituality
4.2.3 Sikhism
4.3 African spirituality
5 Contemporary spirituality
5.1 Characteristics
5.2 Spiritual experience
5.3 Spiritual practices
6 Science
6.1 Antagonism
6.2 Holism
6.3 Scientific research
Definition
There is no single, widely-agreed definition of spirituality.[1][2][note 1] Social scientists
have defined spirituality as the search for the sacred, for that which is set apart from
the ordinary and worthy of veneration, "a transcendent dimension within human
experience...discovered in moments in which the individual questions the meaning of
personal existence and attempts to place the self within a broader ontological
context."[8]
According to Waaijman, the traditional meaning of spirituality is a process of re-formation which "aims to recover the original shape of man, the image of God. To
accomplish this, the re-formation is oriented at a mold, which represents the original
shape: in Judaism the Torah, in Christianity Christ, in Buddhism Buddha, in Islam
Muhammad."[note 2] In modern times spirituality has come to mean the internal
experience of the individual. It still denotes a process of transformation, but in a context
separate from organized religious institutions: "spiritual but not religious."[5] Houtman
and Aupers suggest that modern spirituality is a blend of humanistic psychology,
mystical and esoteric traditions and eastern religions.[6]
Waaijman points out that "spirituality" is only one term of a range of words which
denote the praxis of spirituality.[10] Some other terms are "Hasidism, contemplation,
kabbala, asceticism, mysticism, perfection, devotion and piety".[10]
Spirituality can be sought not only through traditional organized religions, but also
through movements such as liberalism, feminist theology, and green politics.
Spirituality is also now associated with mental health, managing substance abuse,
marital functioning, parenting, and coping. It has been suggested that spirituality also
leads to finding purpose and meaning in life.[3]
Etymology
The term spirit means "animating or vital principle in man and animals".[web 1] It is
derived from the Old French espirit,[web 1] which comes from the Latin word spiritus
"soul, courage, vigor, breath",[web 1] and is related to spirare, "to breathe".[web 1] In
the Vulgate the Latin word spiritus is used to translate the Greek pneuma and Hebrew
ruah.[web 1]
The term spiritual, matters "concerning the spirit",[web 2] is derived from Old French
spirituel (12c.), which is derived from Latin spiritualis, which comes from "spiritus" or
"spirit".[web 2]
The term spirituality is derived from Middle French spiritualite,[web 3] from Late Latin
"spiritualitatem" (nominative spiritualitas),[web 3] which is also derived from Latin
"spiritualis".[web 3]
Development of the meaning of spirituality
Classical, medieval and early modern periods
Words translatable as 'spirituality' first began to arise in the 5th century and only
entered common use toward the end of the Middle Ages.[11] In a Biblical context the
term means being animated by God,[12] to be driven by the Holy Spirit, as opposed to
a life which rejects this influence.[13]
In the 11th century this meaning changed. Spirituality began to denote the mental
aspect of life, as opposed to the material and sensual aspects of life, "the ecclesiastical
sphere of light against the dark world of matter".[14][note 3] In the 13th century
"spirituality" acquired a social and psychological meaning. Socially it denoted the
territory of the clergy: "The ecclesiastical against the temporary possessions, the
ecclesiastical against the secular authority, the clerical class against the secular
class"[15][note 4] Psychologically, it denoted the realm of the inner life: "The purity of
motives, affections, intentions, inner dispositions, the psychology of the spiritual life, the
analysis of the feelings".[16][note 5]
In the 17th and 18th century a distinction was made between higher and lower forms of
spirituality: "A spiritual man is one who is Christian 'more abundantly and deeper than
others'."[16][note 6] The word was also associated with mysticism and quietism, and
acquired a negative meaning.[citation needed]
Modern spirituality
Transcendentalism and Unitarian Universalism
Ralph Waldo Emerson (1803–1882) was a pioneer of the idea of spirituality as a
distinct field.[17] He was one of the major figures in Transcendentalism, an early 19th-century liberal Protestant movement, which was rooted in English and German
Romanticism, the Biblical criticism of Herder and Schleiermacher, and the skepticism of
Hume.[web 4] The Transcendentalists emphasised an intuitive, experiential approach
of religion.[web 5] Following Schleiermacher,[18] an individual's intuition of truth was
taken as the criterion for truth.[web 5] In the late 18th and early 19th century, the first
translations of Hindu texts appeared, which were also read by the Transcendentalists,
and influenced their thinking.[web 5] They also endorsed universalist and Unitarianist
ideas, leading to Unitarian Universalism, the idea that there must be truth in other
religions as well, since a loving God would redeem all living beings, not just
Christians.[web 5][web 6]
Neo-Vedanta
An important influence on western spirituality was Neo-Vedanta, also called neo-Hinduism[19] and Hindu Universalism,[web 7] a modern interpretation of Hinduism
which developed in response to western colonialism and orientalism, and aims to
present Hinduism as a "homogenized ideal of Hinduism"[20] with Advaita Vedanta as
its central doctrine.[21] Due to the colonisation of Asia by the western world, since the
19th century an exchange of ideas has been taking place between the western world
and Asia, which also influenced western religiosity.[22] Unitarianism, and the idea of
Universalism, was brought to India by missionaries, and had a major influence on neo-Hinduism via Ram Mohan Roy's Brahmo Samaj and Brahmoism. Roy attempted to
modernise and reform Hinduism, taking over Christian social ideas and the idea of
Universalism.[23] This universalism was further popularised, and brought back to the
west as neo-Vedanta, by Swami Vivekananda.[23]
Theosophy, Anthroposophy, and the Perennial Philosophy
Another major influence on modern spirituality was the Theosophical Society, which
searched for 'secret teachings' in Asian religions.[22] It has been influential on
modernist streams in several Asian religions, notably Neo-Vedanta, the revival of
Theravada Buddhism, and Buddhist modernism, which have taken over modern
western notions of personal experience and universalism and integrated them in their
religious concepts.[22] A second, related influence was Anthroposophy, whose
founder, Rudolf Steiner, was particularly interested in developing a genuine Western
spirituality, and in the ways that such a spirituality could transform practical institutions
such as education, agriculture, and medicine.[24]
The influence of Asian traditions on western modern spirituality was also furthered by
the Perennial Philosophy, whose main proponent Aldous Huxley was deeply influenced
by Vivekananda's Neo-Vedanta and Universalism,[25] and by the spread of social welfare, education and mass travel after World War Two.
Important early 20th century western writers who studied the phenomenon of
spirituality, and their works, include William James, The Varieties of Religious
Experience (1902), and Rudolf Otto, especially The Idea of the Holy (1917). James'
notions of "spiritual experience" had a further influence on the modernist streams in
Asian traditions, making them even more recognisable to a western audience.[18]
"Spiritual but not religious"
After the Second World War spirituality and religion became disconnected.[16] A new
discourse developed, in which (humanistic) psychology, mystical and esoteric traditions
and eastern religions are being blended, to reach the true self by self-disclosure, free
expression and meditation.[6]
The distinction between the spiritual and the religious became more common in the
popular mind during the late 20th century with the rise of secularism and the advent of
the New Age movement. Authors such as Chris Griscom and Shirley MacLaine
explored it in numerous ways in their books. Paul Heelas noted the development within
New Age circles of what he called "seminar spirituality":[26] structured offerings
complementing consumer choice with spiritual options.
Among other factors, declining membership of organized religions and the growth of
secularism in the western world have given rise to this broader view of spirituality.[27]
The term "spiritual" is now frequently used in contexts in which the term "religious" was
formerly employed.[28] Both theists and atheists have criticized this
development.[29][30]
Traditional spirituality
Abrahamic faiths
Judaism
Rabbinic Judaism (or in some Christian traditions, Rabbinism; Hebrew: יהדות רבנית, "Yahadut Rabanit") has been the mainstream form of Judaism since the 6th century CE, after the codification of the Talmud. It is characterised by the belief that the Written
Torah ("Law" or "Instruction") cannot be correctly interpreted without reference to the
Oral Torah and by the voluminous literature specifying what behavior is sanctioned by
the law (called halakha, "the way").
Judaism knows a variety of religious observances: ethical rules, prayers, religious
clothing, holidays, shabbat, pilgrimages, Torah reading, dietary laws.
Kabbalah (literally "receiving"), is an esoteric method, discipline and school of thought
of Judaism. Its definition varies according to the tradition and aims of those following
it,[31] from its religious origin as an integral part of Judaism, to its later Christian, New
Age, or Occultist syncretic adaptations. Kabbalah is a set of esoteric teachings meant
to explain the relationship between an unchanging, eternal and mysterious Ein Sof (no
end) and the mortal and finite universe (his creation). While it is heavily used by some
denominations, it is not a religious denomination in itself. Inside Judaism, it forms the
foundations of mystical religious interpretation. Outside Judaism, its scriptures are read
outside the traditional canons of organised religion. Kabbalah seeks to define the
nature of the universe and the human being, the nature and purpose of existence, and
various other ontological questions. It also presents methods to aid understanding of
these concepts and to thereby attain spiritual realisation.
Hasidic Judaism, meaning "piety" (or "loving kindness"), is a branch of Orthodox
Judaism that promotes spirituality through the popularisation and internalisation of
Jewish mysticism as the fundamental aspect of the faith. It was founded in 18th-century
Eastern Europe by Rabbi Israel Baal Shem Tov as a reaction against overly legalistic
Judaism. His example began the characteristic veneration of leadership in Hasidism as
embodiments and intercessors of Divinity for the followers.[citation needed] Opposite to
this, Hasidic teachings cherished the sincerity and concealed holiness of the unlettered
common folk, and their equality with the scholarly elite. The emphasis on the Immanent
Divine presence in everything gave new value to prayer and deeds of kindness,
alongside Rabbinic supremacy of study, and replaced historical mystical (kabbalistic)
and ethical (musar) asceticism and admonishment with optimism,[citation needed]
encouragement, and daily fervour. This populist emotional revival accompanied the
elite ideal of nullification to paradoxical Divine Panentheism, through intellectual
articulation of inner dimensions of mystical thought.
Christianity
Catholic spirituality is the spiritual practice of living out a personal act of faith (fides qua creditur) following the acceptance of faith (fides quae creditur). Although all Catholics are expected to pray together at Mass, there are many different forms of spirituality and private prayer which have developed over the centuries. Each of the major religious orders of the Catholic Church and other lay groupings has its own unique spirituality - its own way of approaching God in prayer and in living out the Gospel.
Christian mysticism refers to the development of mystical practices and theory within Christianity. It has often been connected to mystical theology, especially in the Catholic and Eastern Orthodox traditions. The attributes and means by which Christian mysticism is studied and practiced are varied and range from ecstatic visions of the soul's mystical union with God to simple prayerful contemplation of Holy Scripture (i.e., Lectio Divina).
Islam
Five pillars
The Pillars of Islam (arkan al-Islam; also arkan ad-din, "pillars of religion") are five basic
acts in Islam, considered obligatory for all believers. The Quran presents them as a
framework for worship and a sign of commitment to the faith. They are (1) the
shahadah (creed), (2) daily prayers (salat), (3) almsgiving (zakah), (4) fasting during
Ramadan and (5) the pilgrimage to Mecca (hajj) at least once in a lifetime. The Shia
and Sunni sects both agree on the essential details for the performance of these
acts.[32]
Sufism
The best known form of Islamic mystic spirituality is the Sufi tradition (famous through
Rumi and Hafiz) in which a spiritual master or pir transmits spiritual discipline to
students.[33]
Sufism or taṣawwuf (Arabic: تصوّف) is defined by its adherents as the inner, mystical dimension of Islam.[34][35][36] A practitioner of this tradition is generally known as a Sūfī (صُوفِيّ). Sufis believe they are practicing ihsan (perfection of worship) as
revealed by Gabriel to Muhammad,
Worship and serve Allah as you are seeing Him and while you see Him not yet
truly He sees you.
Sufis consider themselves the original true proponents of this pure, original form of Islam. They are strong adherents of the principles of tolerance and peace and oppose any form of violence. Sufis have suffered severe persecution at the hands of their coreligionists, the Wahhabis and the Salafists. In 1843 the Senussi Sufi were forced to flee Mecca and Medina and head to the Sudan and Libya.[37]
Classical Sufi scholars have defined Sufism as "a science whose objective is the
reparation of the heart and turning it away from all else but God".[38] Alternatively, in
the words of the Darqawi Sufi teacher Ahmad ibn Ajiba, "a science through which one
can know how to travel into the presence of the Divine, purify one's inner self from filth,
and beautify it with a variety of praiseworthy traits".[39]
Jihad
Jihad is a religious duty of Muslims. In Arabic, the word jihād translates as a noun
meaning "struggle". There are two commonly accepted meanings of jihad: an inner
spiritual struggle and an outer physical struggle.[40] The "greater jihad" is the inner
struggle by a believer to fulfill his religious duties.[40][41] This non-violent meaning is
stressed by both Muslim[42] and non-Muslim[43] authors.
Al-Khatib al-Baghdadi, an 11th-century Islamic scholar, referenced a statement by the
companion of Muhammad, Jabir ibn Abd-Allah:
The Prophet [...] returned from one of his battles, and thereupon told us, 'You
have arrived with an excellent arrival, you have come from the Lesser Jihad to
the Greater Jihad—the striving of a servant (of Allah) against his desires (holy
war)."[unreliable source?][44][45][note 7]
Asian traditions
Buddhism
Buddhist practices are known as Bhavana, which literally means "development" or
"cultivating"[46] or "producing"[47][48] in the sense of "calling into existence."[49] It is
an important concept in Buddhist praxis (Patipatti). The word bhavana normally
appears in conjunction with another word forming a compound phrase such as citta-bhavana (the development or cultivation of the heart/mind) or metta-bhavana (the
development/cultivation of lovingkindness). When used on its own bhavana signifies
'spiritual cultivation' generally.
Various Buddhist Paths to liberation developed throughout the ages. Best-known is the
Noble Eightfold Path, but others include the Bodhisattva Path and Lamrim.
Hinduism
Three of four paths of spirituality in Hinduism
Hinduism has no traditional ecclesiastical order, no
centralized religious authorities, no governing body, no
prophet(s) nor any binding holy book; Hindus can choose
to be polytheistic, pantheistic, monistic, or atheistic.[50]
Within this diffuse and open structure, spirituality in Hindu
philosophy is an individual experience, and referred to as
kṣaitrajña.[51] It defines spiritual practice as one's
journey towards moksha, awareness of self, the discovery
of higher truths, true nature of reality, and a
consciousness that is liberated and content.[52][53]
Four paths
Hinduism identifies four ways - mārga[54] or yoga[55] - of
spiritual practice.[56] The first way is Jñāna yoga, the way
of knowledge. The second way is Bhakti yoga, the way of
devotion. The third way is Karma yoga, the way of works.
The fourth way is Rāja yoga, the way of contemplation
and meditation.
Jñāna marga is a path often assisted by a guru (teacher)
in one’s spiritual practice.[57] Bhakti marga is a path of
faith and devotion to deity or deities; the spiritual practice
often includes chanting, singing and music - such as in
kirtans - in front of idols, or images of one or more deity,
or a devotional symbol of the holy.[58] Karma marga is
the path of one’s work, where diligent practical work or
vartta (Sanskrit: profession) becomes in itself a spiritual
practice, and work in daily life is perfected as a form of
spiritual liberation and not for its material rewards.[59][60]
Rāja marga is the path of cultivating necessary virtues,
self-discipline, tapas (meditation), contemplation and self-reflection, sometimes with isolation and renunciation of
the world, to a pinnacle state called samādhi.[61][62] This
state of samādhi has been compared to peak
experience.[63]
There is a rigorous debate in Indian literature on relative
merits of these theoretical spiritual practices. For
example, Chandogyopanishad suggests that those who
engage in ritualistic offerings to gods and priests will fail
in their spiritual practice, while those who engage in
tapas will succeed; Svetasvataropanishad suggests that
a successful spiritual practice requires a longing for truth,
but warns of becoming 'false ascetics' who go through the
mechanics of spiritual practice without meditating on the
nature of Self and universal Truths.[64] In the practice of
Hinduism, suggest modern era scholars such as
Vivekananda, the choice between the paths is up to the
individual and a person’s proclivities.[53][65] Other
scholars[66] suggest that these Hindu spiritual practices
are not mutually exclusive, but overlapping. These four
paths of spirituality are also known in Hinduism outside India, such as in Balinese
Hinduism, where it is called Catur Marga (literally: four paths).[67]
Schools and spirituality
Different schools of Hinduism encourage different spiritual practices. In Tantric school
for example, the spiritual practice has been referred to as sādhanā. It involves initiation
into the school, undergoing rituals, and achieving moksha liberation by experiencing
union of cosmic polarities.[68] The Hare Krishna school emphasizes bhakti yoga as
spiritual practice.[69] In Advaita Vedanta school, the spiritual practice emphasizes
jñāna yoga in stages: samnyasa (cultivate virtues), sravana (hear, study), manana
(reflect) and dhyana (nididhyasana, contemplate).[70]
Sikhism
Sikhism considers spiritual life and secular life to be intertwined:[71] "In the Sikh Weltanschauung...the temporal world is part of the Infinite Reality and partakes of its characteristics."[72] Guru Nanak described living an "active, creative, and practical life" of "truthfulness, fidelity, self-control and purity" as being higher than a purely contemplative life.[73]
The 6th Sikh Guru Guru Hargobind reaffirmed that the political/temporal (Miri)
and spiritual (Piri) realms are mutually
coexistent.[74] According to the 9th Sikh
Guru, Tegh Bahadhur, the ideal Sikh
should have both Shakti (power that
resides in the temporal), and Bhakti (spiritual meditative qualities). This was developed
into the concept of the Saint Soldier by the 10th Sikh Guru, Gobind Singh.[75]
According to Guru Nanak, the goal is to attain the "attendant balance of separation-fusion, self-other, action-inaction, attachment-detachment, in the course of daily
life",[76] the polar opposite to a self-centered existence.[76] Nanak talks further about
the one God or Akal (timelessness) that permeates all life[77][78][79][80] and which
must be seen with 'the inward eye', or the 'heart', of a human being.[81]
In Sikhism there is no dogma,[82] priests, monastics or yogis.
African spirituality
In some African contexts, spirituality is considered a belief system that guides the
welfare of society and the people therein, and eradicates sources of unhappiness
occasioned by evil.
Contemporary spirituality
The term "spiritual" is now frequently used in contexts in which the term "religious" was
formerly employed.[28] Contemporary spirituality is also called "post-traditional
spirituality" and "New Age spirituality".[83] Hanegraaf makes a distinction between two
"New Age" movements: New Age in a restricted sense, which originated primarily in
mid-twentieth century England and had its roots in Theosophy and Anthroposophy, and
"New Age in a general sense, which emerged in the later 1970s
...when increasing numbers of people [...] began to perceive a broad similarity
between a wide variety of "alternative ideas" and pursuits, and started to think
of them as part of one "movement"".[84]
Those who speak of spirituality outside of religion often define themselves as spiritual
but not religious and generally believe in the existence of different "spiritual paths,"
emphasizing the importance of finding one's own individual path to spirituality.
According to one 2005 poll, about 24% of the United States population identifies itself
as spiritual but not religious.[web 8]
Characteristics
Modern spirituality is centered on the "deepest values and meanings by which people
live."[85] It embraces the idea of an ultimate or an alleged immaterial reality.[86] It
envisions an inner path enabling a person to discover the essence of his/her being.
Not all modern notions of spirituality embrace transcendental ideas. Secular spirituality
emphasizes humanistic ideas on moral character (qualities such as love, compassion,
patience, tolerance, forgiveness, contentment, responsibility, harmony, and a concern
for others).[87]:22 These are aspects of life and human experience which go beyond a
purely materialist view of the world without necessarily accepting belief in a
supernatural reality or divine being.
Personal well-being, both physical and psychological, is an important aspect of modern
spirituality. Contemporary authors suggest that spirituality develops inner peace and
forms a foundation for happiness. Meditation and similar practices may help any
practitioner cultivate his or her inner life and character.[88][unreliable source?] [89]
Ellison and Fan (2008) assert that spirituality causes a wide array of positive health
outcomes, including "morale, happiness, and life satisfaction."[90] Spirituality has
played a central role in self-help movements such as Alcoholics Anonymous:
...if an alcoholic failed to perfect and enlarge his spiritual life through work and
self-sacrifice for others, he could not survive the certain trials and low spots
ahead....[91]
Spiritual experience
"Spiritual experience" plays a central role in modern spirituality.[92] This notion has
been popularised by both western and Asian authors.[93][94]
William James popularized the use of the term "religious experience" in his The
Varieties of Religious Experience.[93] It has also influenced the understanding of
mysticism as a distinctive experience which supplies knowledge.[web 4]
Wayne Proudfoot traces the roots of the notion of "religious experience" further back to
the German theologian Friedrich Schleiermacher (1768–1834), who argued that
religion is based on a feeling of the infinite. The notion of "religious experience" was
used by Schleiermacher to defend religion against the growing scientific and secular
critique. It was adopted by many scholars of religion, of which William James was the
most influential.[95]
Major Asian influences were Vivekananda[96] and D.T. Suzuki.[92] Swami
Vivekananda popularised a modern syncretistic Hinduism,[97][94] in which the
authority of the scriptures was replaced by an emphasis on personal experience.
[94][98] D.T. Suzuki had a major influence on the popularisation of Zen in the west and
popularized the idea of enlightenment as insight into a timeless, transcendent
reality.[web 9][web 10][22] Another example can be seen in Paul Brunton's A Search in
Secret India, which introduced Ramana Maharshi to a western audience.
Spiritual experiences can include being connected to a larger reality, yielding a more
comprehensive self; joining with other individuals or the human community; with nature
or the cosmos; or with the divine realm.[99]
Spiritual practices
Waaijman discerns four forms of spiritual practices:[100]
1-Somatic practices, especially deprivation and diminishment. The deprivation
purifies the body. Diminishment concerns the repulsement of ego-oriented
impulses. Examples are fasting and poverty.[100]
2-Psychological practices, for example meditation.[101]
3-Social practices, for example the practice of obedience and communal ownership, which reform ego-orientedness into other-orientedness.[101]
4-Spiritual practices. All practices aim at purifying ego-centeredness and directing one's abilities toward the divine reality.[101]
Spiritual practices may include meditation, mindfulness, prayer, the contemplation of
sacred texts, ethical development,[87] and the use of psychoactive substances
(entheogens). Love and/or compassion are often described as the mainstay of spiritual
development.[87]
Within spirituality is also found "a common emphases on the value of thoughtfulness,
tolerance for breadth and practices and beliefs, and appreciation for the insights of
other religious communities, as well as other sources of authority within the social
sciences."[102]
Science
Antagonism
Since the scientific revolution, the relationship of science to religion and spirituality has
developed in complex ways.[103][104] Historian John Hedley Brooke describes wide
variations:
"The natural sciences have been invested with religious meaning, with antireligious implications and, in many contexts, with no religious significance at all."
The popular notion of antagonisms between science and religion[105][106] has
historically originated with "thinkers with a social or political axe to grind" rather than
with the natural philosophers themselves.[104] Though physical and biological
scientists today avoid supernatural explanations to describe reality[107][108][109][note
8], many scientists continue to consider science and spirituality to be complementary,
not contradictory.[110][111]
Holism
During the twentieth century the relationship between science and spirituality has been
influenced both by Freudian psychology, which has accentuated the boundaries
between the two areas by accentuating individualism and secularism, and by
developments in particle physics, which reopened the debate about complementarity
between scientific and religious discourse and rekindled for many an interest in holistic
conceptions of reality.[104]:322 These holistic conceptions were championed by New
Age spiritualists in a type of quantum mysticism that they claim justifies their spiritual
beliefs,[112][113] though quantum physicists themselves on the whole reject such
attempts as being pseudoscientific.[114][115]
Scientific research
Neuroscientists are trying to learn more about how the brain functions during reported
spiritual experiences.[116][117]
The psychology of religion uses a variety of metrics to measure spirituality.[118]
In keeping with a general increase in interest in spirituality and complementary and
alternative treatments, prayer has garnered attention among some behavioral
scientists. Masters and Spielmans[119] have conducted a meta-analysis of the effects
of distant intercessory prayer, but detected no discernible effects.
Metaphysics
Metaphysics is a traditional branch of philosophy concerned with explaining the
fundamental nature of being and the world that encompasses it,[1] although the term is
not easily defined.[2] Traditionally, metaphysics attempts to answer two basic
questions in the broadest possible terms:[3]
What is ultimately there?
What is it like?
A person who studies metaphysics is called a metaphysicist [4] or a metaphysician.[5]
The metaphysician attempts to clarify the fundamental notions by which people
understand the world, e.g., existence, objects and their properties, space and time,
cause and effect, and possibility. A central branch of metaphysics is ontology, the
investigation into the basic categories of being and how they relate to each other.
Another central branch of metaphysics is cosmology, the study of the origin (if it has
had one), fundamental structure, nature, and dynamics of the universe. Some include epistemology as another central branch of metaphysics, but this can be questioned.
Prior to the modern history of science, scientific questions were addressed as a part of
metaphysics known as natural philosophy. Originally, the term "science" (Latin scientia)
simply meant "knowledge". The scientific method, however, transformed natural
philosophy into an empirical activity deriving from experiment unlike the rest of
philosophy. By the end of the 18th century, it had begun to be called "science" to
distinguish it from philosophy. Thereafter, metaphysics denoted philosophical enquiry
of a non-empirical character into the nature of existence.[6] Some philosophers of
science, such as the neo-positivists, say that natural science rejects the study of
metaphysics, while other philosophers of science strongly disagree.
Contents
1 Etymology
2 Origins and nature of metaphysics
3 Central questions
3.1 Being, existence and reality
3.2 Empirical and conceptual objects
3.2.1 Objects and their properties
3.3 Cosmology and cosmogony
3.4 Determinism and free will
3.5 Identity and change
3.6 Mind and matter
3.7 Necessity and possibility
3.8 Religion and spirituality
3.9 Space and time
4 Styles and methods of metaphysics
5 History and schools of metaphysics
5.1 Pre-Socratic metaphysics in Greece
5.2 Socrates and Plato
5.3 Aristotle
5.4 Scholasticism and the Middle Ages
5.5 Rationalism and Continental Rationalism
5.6 British empiricism
5.7 Kant
5.8 Early analytical philosophy and positivism
5.9 Continental philosophy
5.10 Later analytical philosophy
6 Rejections of metaphysics
7 Metaphysics in science
Etymology
The word "metaphysics" derives from the Greek words μετά (metá, "beyond", "upon" or
"after") and φυσικά (physiká, "physics").[7] It was first used as the title for several of
Aristotle's works, because they were usually anthologized after the works on physics in
complete editions. The prefix meta- ("beyond") indicates that these works come "after"
the chapters on physics. However, Aristotle himself did not call the subject of these
books "Metaphysics": he referred to it as "first philosophy." The editor of Aristotle's
works, Andronicus of Rhodes, is thought to have placed the books on first philosophy
right after another work, Physics, and called them τὰ μετὰ τὰ φυσικὰ βιβλία (ta meta ta physika biblia) or "the books that come after the [books on] physics". This was
misread by Latin scholiasts, who thought it meant "the science of what is beyond the
physical". However, once the name was given, the commentators sought to find
intrinsic reasons for its appropriateness. For instance, it was understood to mean "the
science of the world beyond nature" (physis in Greek), that is, the science of the
immaterial. Again, it was understood to refer to the chronological or pedagogical order
among our philosophical studies, so that the "metaphysical sciences" would mean
"those that we study after having mastered the sciences that deal with the physical
world" (St. Thomas Aquinas, "In Lib, Boeth. de Trin.", V, 1).
There is a widespread use of the term in current popular literature which replicates this error, i.e. taking "metaphysical" to mean spiritual or non-physical: thus, "metaphysical healing" means healing by means of remedies that are not physical.[8]
Origins and nature of metaphysics
Although the word "metaphysics" goes back to Aristotelean philosophy, Aristotle
himself credited earlier philosophers with dealing with metaphysical questions. The first
known philosopher, according to Aristotle, is Thales of Miletus, who taught that all
things derive from a single first cause or Arche.
Metaphysics as a discipline was a central part of academic inquiry and scholarly
education even before the age of Aristotle, who considered it "the Queen of Sciences."
Its issues were considered[by whom?] no less important than the other main formal
subjects of physical science, medicine, mathematics, poetics and music. Since the
beginning of modern philosophy during the seventeenth century, problems that were
not originally considered within the bounds of metaphysics have been added to its
purview, while other problems considered metaphysical for centuries are now typically
subjects of their own separate regions in philosophy, such as philosophy of religion,
philosophy of mind, philosophy of perception, philosophy of language, and philosophy
of science.
Central questions
Most positions that can be taken with regard to any of the following questions are
endorsed by one or another notable philosopher. It is often difficult to frame the
questions in a non-controversial manner.
Being, existence and reality
The nature of Being is a perennial topic in metaphysics. For instance, Parmenides
taught that reality was a single unchanging Being. The 20th century philosopher
Heidegger thought previous philosophers had lost sight of the question of Being (qua
Being) in favour of the questions of beings (existing things), so that a return to the
Parmenidean approach was needed. An ontological catalogue is an attempt to list the
fundamental constituents of reality. The question of whether or not existence is a
predicate has been discussed since the Early Modern period, not least in relation to the
ontological argument for the existence of God. Existence, that something is, has been
contrasted with essence, the question of what something is. Reflections on the nature
of the connection and distinction between existence and essence date back to
Aristotle's Metaphysics, and later found one of its most influential interpretations in the
ontology of the eleventh century metaphysician Avicenna (Ibn Sina).[9] Since existence
without essence seems blank, it is associated with nothingness by philosophers such
as Hegel.
Empirical and conceptual objects
Objects and their properties
The world seems to contain many individual things, both physical, like apples, and
abstract, such as love and the number 3; the former objects are called particulars.
Particulars are said to have attributes, e.g. size, shape, color, location, and two
particulars may have some such attributes in common. Such attributes are also termed
Universals or Properties; the nature of these, and whether they have any real existence
and if so of what kind, is a long-standing issue, realism and nominalism representing
opposing views.
Metaphysicians concerned with questions about universals or particulars are interested
in the nature of objects and their properties, and the relationship between the two.
Some, e.g. Plato, argue that properties are abstract objects, existing outside of space
and time, to which particular objects bear special relations. David Armstrong holds that
universals exist in time and space but only at their instantiation and their discovery is a
function of science. Others maintain that particulars are a bundle or collection of
properties (specifically, a bundle of properties they have).
Biological literature contains abundant references to taxa (singular "taxon"), groups like
the mammals or the poppies. Some authors claim (or at least presuppose) that taxa
are real entities, that to say that an animal is included in Mammalia (the scientific name
for the mammal group) is to say that it bears a certain relation to Mammalia, an
abstract object.[10] Advocates of phylogenetic nomenclature, a more nominalistic view,
oppose this reading; in their opinion, calling an animal a mammal is a shorthand way of
saying that it is descended from the last common ancestor of, say, humans and
platypuses.[11]
Cosmology and cosmogony
Metaphysical Cosmology is the branch of metaphysics that deals with the world as the
totality of all phenomena in space and time. Historically, it has had quite a broad scope,
and in many cases was founded in religion. The ancient Greeks drew no distinction
between this use and their model for the cosmos. However, in modern times it
addresses questions about the Universe which are beyond the scope of the physical
sciences. It is distinguished from religious cosmology in that it approaches these
questions using philosophical methods (e.g. dialectics). Cosmogony deals specifically
with the origin of the universe.
Modern metaphysical cosmology and cosmogony try to address questions such as:
-What is the origin of the Universe? What is its first cause? Is its existence
necessary? (see monism, pantheism, emanationism and creationism)
-What are the ultimate material components of the Universe? (see mechanism,
dynamism, hylomorphism, atomism)
-What is the ultimate reason for the existence of the Universe? Does the
cosmos have a purpose? (see teleology)
Determinism and free will
Determinism is the philosophical proposition that every event, including human
cognition, decision and action, is causally determined by an unbroken chain of prior
occurrences. It holds that no random, spontaneous, stochastic, intrinsically mysterious,
or miraculous events occur. The principal consequence of the deterministic claim is
that it poses a challenge to the existence of free will.
The problem of free will is the problem of whether rational agents exercise control over
their own actions and decisions. Addressing this problem requires understanding the
relation between freedom and causation, and determining whether the laws of nature
are causally deterministic. Some philosophers, known as Incompatibilists, view
determinism and free will as mutually exclusive. If they believe in determinism, they will
therefore believe free will to be an illusion, a position known as Hard Determinism.
Proponents range from Baruch Spinoza to Ted Honderich.
Others, labeled Compatibilists (or "Soft Determinists"), believe that the two ideas can
be coherently reconciled. Adherents of this view include Thomas Hobbes and many
modern philosophers such as John Martin Fischer.
Incompatibilists who accept free will but reject determinism are called Libertarians, a
term not to be confused with the political sense. Robert Kane and Alvin Plantinga are
modern defenders of this theory.
Identity and change
The Greeks took some extreme positions on the nature of change: Parmenides denied
that change occurs at all, while Heraclitus thought change was ubiquitous: "[Y]ou
cannot step into the same river twice."
Identity, sometimes called Numerical Identity, is the relation that a "thing" bears to
itself, and which no "thing" bears to anything other than itself (cf. sameness). According
to Leibniz, if some object x is identical to some object y, then any property that x has, y
will have as well. However, it seems, too, that objects can change over time. If one
were to look at a tree one day, and the tree later lost a leaf, it would seem that one
could still be looking at that same tree. Two rival theories to account for the relationship
between change and identity are Perdurantism, which treats the tree as a series of
tree-stages, and Endurantism, which maintains that the tree—the same tree—is
present at every stage in its history.
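Leibniz's criterion can also be stated symbolically. The following LaTeX snippet is an illustrative formalisation of the indiscernibility of identicals (Leibniz's Law), added here for clarity; the second-order notation is a common modern rendering and not part of the original text.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Leibniz's Law (indiscernibility of identicals):
% if x is identical to y, then any property F that x has, y has as well.
\[
  \forall x\,\forall y\,\bigl(x = y \;\rightarrow\; \forall F\,(Fx \leftrightarrow Fy)\bigr)
\]
\end{document}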
Mind and matter
The nature of matter was a problem in its own right in early philosophy. Aristotle
himself introduced the idea of matter in general to the Western world, adapting the term
hyle, which originally meant "lumber." Early debates centered on identifying a single
underlying principle. Water was claimed by Thales, air by Anaximenes, Apeiron (the
Boundless) by Anaximander, fire by Heraclitus. Democritus, in conjunction with his
mentor, Leucippus, conceived of an atomic theory many centuries before it was
accepted by modern science. The grounds offered for the theory, however, were not scientific but just as philosophical as the traditions espoused by Thales and Anaximander.
The nature of the mind and its relation to the body has been seen as more of a problem
as science has progressed in its mechanistic understanding of the brain and body.
Proposed solutions often have ramifications for the nature of mind as a whole. In the seventeenth century, René Descartes proposed substance dualism, a theory in which mind and body are essentially different, with the mind having some of the attributes traditionally assigned to the soul. This creates a conceptual puzzle about how the
two interact (which has received some strange answers, such as occasionalism).
Evidence of a close relationship between brain and mind, such as the Phineas Gage
case, has made this form of dualism increasingly unpopular.
Another proposal discussing the mind-body problem is idealism, in which the material
is sweepingly eliminated in favor of the mental. Idealists, such as George Berkeley,
claim that material objects do not exist unless perceived and only as perceptions. The
"German idealists" such as Fichte, Hegel and Schopenhauer took Kant as their
starting-point, although it is debatable how much of an idealist Kant himself was.
Idealism is also a common theme in Eastern philosophy. Related ideas are
panpsychism and panexperientialism, which say everything has a mind rather than
everything exists in a mind. Alfred North Whitehead was a twentieth-century exponent
of this approach.
Idealism is a monistic theory which holds that there is a single universal substance or
principle. Neutral monism, associated in different forms with Baruch Spinoza and
Bertrand Russell, seeks to be less extreme than idealism, and to avoid the problems of
substance dualism. It claims that existence consists of a single substance that in itself
is neither mental nor physical, but is capable of mental and physical aspects or
attributes – thus it implies a dual-aspect theory.
For the last one hundred years, the dominant metaphysics has without a doubt been
materialistic monism. Type identity theory, token identity theory, functionalism,
reductive physicalism, nonreductive physicalism, eliminative materialism, anomalous
monism, property dualism, epiphenomenalism and emergence are just some of the
candidates for a scientifically informed account of the mind. (While many of these positions are dualisms, none of them is substance dualism.)
Prominent recent philosophers of mind include David Armstrong, Ned Block, David
Chalmers, Patricia and Paul Churchland, Donald Davidson, Daniel Dennett, Fred
Dretske, Douglas Hofstadter, Jerry Fodor, David Lewis, Thomas Nagel, Hilary Putnam,
John Searle, John Smart, Ludwig Wittgenstein, and Fred Alan Wolf.
Necessity and possibility
Metaphysicians investigate questions about the ways the world could have been. David
Lewis, in "On the Plurality of Worlds," endorsed a view called Concrete Modal realism,
according to which facts about how things could have been are made true by other
concrete worlds, just like ours, in which things are different. Other philosophers, such
as Gottfried Leibniz, have dealt with the idea of possible worlds as well. The idea of
necessity is that any necessary fact is true across all possible worlds. A possible fact is
true in some possible world, even if not in the actual world. For example, it is possible
that cats could have had two tails, or that any particular apple could have not existed.
By contrast, certain propositions seem necessarily true, such as analytic propositions,
e.g. "All bachelors are unmarried." The particular example of analytic truth being
necessary is not universally held among philosophers. A less controversial view might
be that self-identity is necessary, as it seems fundamentally incoherent to claim that for
any x, it is not identical to itself; this is known as the law of identity, a putative "first
principle". Aristotle describes the principle of non-contradiction, "It is impossible that the
same quality should both belong and not belong to the same thing . . . This is the most
certain of all principles . . . Wherefore they who demonstrate refer to this as an ultimate
opinion. For it is by nature the source of all the other axioms."
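For readers who prefer a symbolic statement, the law of identity and the principle of non-contradiction can be written as below. This LaTeX sketch is an added illustration; the propositional rendering of Aristotle's principle is a common modern gloss rather than a quotation.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Law of identity: every object is identical to itself.
\[ \forall x\,(x = x) \]
% Principle of non-contradiction: a proposition P and its negation
% cannot both hold of the same thing in the same respect.
\[ \neg\,(P \land \neg P) \]
\end{document}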
Religion and spirituality
Theology is the study of a god or gods and the nature of the divine. Whether there is a
god (monotheism), many gods (polytheism) or no gods (atheism), or whether it is
unknown or unknowable whether any gods exist (agnosticism; apophatic theology),
and whether a divine entity directly intervenes in the world (theism), or its sole function
is to be the first cause of the universe (deism); these questions, along with whether a god or gods and the world are different (as in panentheism and dualism) or identical (as in pantheism), are some of the primary metaphysical questions concerning the philosophy of
religion.
Within the standard Western philosophical tradition, theology reached its peak under
the medieval school of thought known as scholasticism, which focused primarily on the
metaphysical aspects of Christianity. The work of the scholastics is still an integral part
of modern philosophy,[12] with key figures such as Thomas Aquinas still playing an
important role in the philosophy of religion.[13]
Space and time
In Book XI of the Confessions, Saint Augustine of Hippo asked the fundamental
question about the nature of time. A traditional realist position in ontology is that time
and space have existence apart from the human mind. Idealists, including Kant, claim
that space and time are mental constructs used to organize perceptions, or are otherwise unreal.
Suppose that one is sitting at a table, with an apple in front of him or her; the apple
exists in space and in time, but what does this statement indicate? Could it be said, for
example, that space is like an invisible three-dimensional grid in which the apple is
positioned? Suppose the apple, and all physical objects in the universe, were removed
from existence entirely. Would space as an "invisible grid" still exist? René Descartes
and Leibniz believed it would not, arguing that without physical objects, "space" would
be meaningless because space is the framework upon which we understand how
physical objects are related to each other. Newton, on the other hand, argued for an
absolute "container" space. The pendulum swung back to relational space with Einstein
and Ernst Mach.
While the absolute/relative debate, and the realism debate are equally applicable to
time and space, time presents some special problems of its own. The flow of time has
been denied in ancient times by Parmenides and more recently by J. M. E. McTaggart
in his paper The Unreality of Time.
The direction of time, also known as "time's arrow", is also a puzzle, although physics
is now driving the debate rather than philosophy. It appears that fundamental laws are
time-reversible and the arrow of time must be an "emergent" phenomenon, perhaps
explained by a statistical understanding of thermodynamic entropy.
Common sense tells us that objects persist across time, that there is some sense in
which you are the same person you were yesterday, in which the oak is the same as
the acorn, in which you perhaps even can step into the same river twice. Philosophers
have developed two rival theories for how this happens, called "endurantism" and
"perdurantism". Broadly speaking, endurantists hold that a whole object exists at each
moment of its history, and the same object exists at each moment. Perdurantists
believe that objects are four-dimensional entities made up of a series of temporal parts
like the frames of a movie.
Styles and methods of metaphysics
-Rational versus empirical. Rationalism is a method or a theory "in which the criterion of
the truth is not sensory but intellectual and deductive" (Bourke 263). Rationalist
metaphysicians aim to deduce the nature of reality by armchair, a priori reasoning.
Empiricism holds that the senses are the primary source of knowledge about the world.
-Analytical versus systemic. The "system building" style of metaphysics attempts to
answer all the important questions in a comprehensive and coherent way, providing a
theory of everything or complete picture of the world. The contrasting approach is to
deal with problems piecemeal.
-Dogmatic versus critical. Under the scholastic approach of the Middle Ages, a number
of themes and ideas were not open to be challenged. Kant and others thought this
"dogmatism" should be replaced by a critical approach.
-Individual versus collective. Scholasticism and Analytical philosophy are examples of
collaborative approaches to philosophy. Many other philosophers expounded individual
visions.
-Parsimonious versus Adequate. Should a metaphysical system posit as little as
possible, or as much as needed?
-Descriptive versus revisionary. Peter Strawson makes the distinction between
descriptive metaphysics, which sets out to investigate our deepest assumptions, and
revisionary metaphysics, which sets out to improve or rectify them.[14]
History and schools of metaphysics
Pre-Socratic metaphysics in Greece
The first known philosopher, according to Aristotle, is Thales of Miletus. Rejecting
mythological and divine explanations, he sought a single first cause or Arche (origin or
beginning) under which all phenomena could be explained, and concluded that this first
cause was in fact moisture or water. Thales also taught that the world is harmonious,
has a harmonious structure, and thus is intelligible to rational understanding. Other
Miletians, such as Anaximander and Anaximenes, also had a monistic conception of
the first cause.
Another school was the Eleatics, in southern Italy. The group was founded in the early fifth century
BCE by Parmenides, and included Zeno of Elea and Melissus of Samos.
Methodologically, the Eleatics were broadly rationalist, and took logical standards of
clarity and necessity to be the criteria of truth. Parmenides' chief doctrine was that
reality is a single unchanging and universal Being. Zeno used reductio ad absurdum to
demonstrate the illusory nature of change and time in his paradoxes.
Heraclitus of Ephesus, in contrast, made change central, teaching that "all things flow".
His philosophy, expressed in brief aphorisms, is quite cryptic. For instance, he also
taught the unity of opposites.
Democritus and his teacher Leucippus, are known for formulating an atomic theory for
the cosmos.[15] They are considered forerunners of the scientific method.
Socrates and Plato
Socrates is known for his dialectic or questioning approach to philosophy rather than a
positive metaphysical doctrine. His pupil, Plato, is famous for his theory of forms (which
he confusingly places in the mouth of Socrates in the dialogues he wrote to expound
it). Platonic realism (also considered a form of idealism[16]) is considered to be a
solution to the problem of universals; i.e., what particular objects have in common is
that they share a specific Form which is universal to all others of their respective kind.
The theory has a number of other aspects:
-Epistemological: knowledge of the Forms is more certain than mere sensory
data.
-Ethical: The Form of the Good sets an objective standard for morality.
-Time and Change: The world of the Forms is eternal and unchanging. Time
and change belong only to the lower sensory world. "Time is a moving image of
Eternity".
-Abstract objects and mathematics: Numbers, geometrical figures, etc., exist
mind-independently in the World of Forms.
Platonism developed into Neoplatonism, a philosophy with a monotheistic and mystical
flavour that survived well into the early Christian era.
Aristotle
Plato's pupil Aristotle wrote widely on almost every subject, including metaphysics. His
solution to the problem of universals contrasts with Plato's. Whereas Platonic Forms
exist in a separate realm, and can exist uninstantiated in visible things, Aristotelean
essences "indwell" in particulars.
Potentiality and Actuality[17] are principles of a dichotomy which Aristotle used
throughout his philosophical works to analyze motion, causality and other issues.
The Aristotelean theory of change and causality stretches to four causes: the material,
formal, efficient and final. The efficient cause corresponds to what is now known as a
cause simpliciter. Final causes are explicitly teleological, a concept now regarded as
controversial in science. The Matter/Form dichotomy was to become highly influential
in later philosophy as the substance/essence distinction.
Scholasticism and the Middle Ages
Between about 1100 and 1500, philosophy as a discipline was conducted as part of the Catholic Church's teaching system, known as scholasticism. Scholastic philosophy operated within an established framework blending Christian theology with Aristotelean
teachings. Although fundamental orthodoxies could not be challenged, there were
nonetheless deep metaphysical disagreements, particularly over the problem of
universals, which engaged Duns Scotus and Pierre Abelard. William of Ockham is
remembered for his principle of ontological parsimony.
Rationalism and Continental Rationalism
In the early modern period (17th and 18th centuries), the system-building scope of
philosophy is often linked to the rationalist method of philosophy, that is the technique
of deducing the nature of the world by pure reason. The scholastic concepts of
substance and accident were employed.
-Leibniz proposed in his Monadology a plurality of non-interacting substances.
-Descartes is famous for his Dualism of material and mental substances.
-Spinoza believed reality was a single substance of God-or-nature.
British empiricism
British empiricism marked something of a reaction to rationalist and system-building
philosophy, or speculative metaphysics as it was pejoratively termed. The sceptic
David Hume famously declared that most metaphysics should be consigned to the
flames (see below). Hume was notorious among his contemporaries as one of the first
philosophers to openly doubt religion, but is better known now for his critique of
causality. John Stuart Mill, Thomas Reid and John Locke were less sceptical,
embracing a more cautious style of metaphysics based on realism, common sense and
science. Other philosophers, notably George Berkeley were led from empiricism to
idealistic metaphysics.
Kant
Immanuel Kant attempted a grand synthesis and revision of the trends already
mentioned: scholastic philosophy, systematic metaphysics, and skeptical empiricism,
not to forget the burgeoning science of his day. Like the systems builders, he had an
overarching framework in which all questions were to be addressed. Like Hume, who
famously woke him from his 'dogmatic slumbers', he was suspicious of metaphysical
speculation, and also placed much emphasis on the limitations of the human mind.
Kant saw rationalist philosophers as aiming for a kind of metaphysical knowledge he
defined as the synthetic apriori — that is knowledge that does not come from the
senses (it is a priori) but is nonetheless about reality (synthetic). Inasmuch as it is
about reality, it is unlike abstract mathematical propositions (which he terms analytical
apriori), and being apriori it is distinct from empirical, scientific knowledge (which he
terms synthetic aposteriori). The only synthetic apriori knowledge we can have is of
how our minds organise the data of the senses; that organising framework is space
and time, which for Kant have no mind-independent existence, but nonetheless operate
uniformly in all humans. Apriori knowledge of space and time is all that remains of
metaphysics as traditionally conceived. There is a reality beyond sensory data or
phenomena, which he calls the realm of noumena; however, we cannot know it as it is
in itself, but only as it appears to us. He allows himself to speculate that the origins of
God, morality, and free will might exist in the noumenal realm, but these possibilities
have to be set against its basic unknowability for humans. Although he saw himself as
having disposed of metaphysics, in a sense, he has generally been regarded in
retrospect as having a metaphysics of his own.
19th-century philosophy was overwhelmingly influenced by Kant and his successors. Schopenhauer, Schelling, Fichte and Hegel all purveyed their own panoramic versions of German Idealism, Kant's own caution about metaphysical speculation and his refutation of idealism having fallen by the wayside. The idealistic impulse continued into the early
20th century with British idealists such as F. H. Bradley and J. M. E. McTaggart.
Followers of Karl Marx took Hegel's dialectic view of history and re-fashioned it as
materialism.
Early analytical philosophy and positivism
During the period when idealism was dominant in philosophy, science had been
making great advances. The arrival of a new generation of scientifically minded
philosophers led to a sharp decline in the popularity of idealism during the 1920s.
Analytical philosophy was spearheaded by Bertrand Russell and G. E. Moore. Russell
and William James tried to compromise between idealism and materialism with the
theory of neutral monism.
The early to mid 20th century philosophy also saw a trend to reject metaphysical
questions as meaningless. The driving force behind this tendency was the philosophy
of Logical Positivism as espoused by the Vienna Circle.
At around the same time, the American pragmatists were steering a middle course
between materialism and idealism. System-building metaphysics, with a fresh
inspiration from science, was revived by A. N. Whitehead and Charles Hartshorne.
Continental philosophy
The forces that shaped analytical philosophy — the break with idealism, and the
influence of science — were much less significant outside the English speaking world,
although there was a shared turn toward language. Continental philosophy continued in
a trajectory from post Kantianism.
The phenomenology of Husserl and others was intended as a collaborative project for
the investigation of the features and structure of consciousness common to all humans,
in line with Kant's basing his synthetic apriori on the uniform operation of
consciousness. It was officially neutral with regards to ontology, but was nonetheless to
spawn a number of metaphysical systems. Brentano's concept of intentionality would
become widely influential, including on analytical philosophy.
Heidegger, author of Being and Time, saw himself as re-focusing on Being-qua-being,
introducing the novel concept of Dasein in the process. Classing himself an
existentialist, Sartre wrote an extensive study of "Being and Nothingness.
The speculative realism movement marks a return to full blooded realism.
Later analytical philosophy
While early analytic philosophy, under the influence of logical positivism, tended to
reject metaphysical theorizing, it was revived in the second half of the twentieth century.
Philosophers such as David K. Lewis and David Armstrong developed elaborate
theories on a range of topics such as universals, causation, possibility and necessity
and abstract objects. However, the focus of analytical philosophy is generally away
from the construction of all-encompassing systems and towards close analysis of
individual ideas.
Among the developments that led to the revival of metaphysical theorizing was
Quine's attack on the analytic-synthetic distinction, which was generally taken to
undermine Carnap's distinction between existence questions internal to a framework
and those external to it.[18]
The philosophy of fiction, the problem of empty names, and the debate over existence's
status as a property have all risen out of relative obscurity to become central concerns,
while perennial issues such as free will, possible worlds, and the philosophy of time
have had new life breathed into them.[19][20]
Rejections of metaphysics
A number of individuals have suggested that much of metaphysics should be rejected.
In the 18th century, David Hume took an extreme position, arguing that all genuine
knowledge involves either mathematics or matters of fact and that metaphysics, which
goes beyond these, is worthless. He concludes his Enquiry Concerning Human
Understanding with the statement:
If we take in our hand any volume; of divinity or school metaphysics, for
instance; let us ask, Does it contain any abstract reasoning concerning quantity
or number? No. Does it contain any experimental reasoning concerning matter
of fact and existence? No. Commit it then to the flames: for it can contain
nothing but sophistry and illusion.[21]
In the 1930s, A. J. Ayer and Rudolf Carnap endorsed Hume's position; Carnap quoted
the passage above.[22] They argued that metaphysical statements are neither true nor
false but meaningless since, according to their verifiability theory of meaning, a
statement is meaningful only if there can be empirical evidence for or against it. Thus,
while Ayer rejected the monism of Spinoza, noted above, he avoided a commitment to
pluralism, the contrary position, by holding both views to be without meaning.[23]
Carnap took a similar line with the controversy over the reality of the external world.[24]
33 years after Hume's Enquiry appeared, Immanuel Kant published his Critique of Pure
Reason. Though he followed Hume in rejecting much of previous metaphysics, he
argued that there was still room for some synthetic a priori knowledge, concerned with
matters of fact yet obtainable independent of experience. These included fundamental
structures of space, time, and causality. He also argued for the freedom of the will and
the existence of "things in themselves", the ultimate (but unknowable) objects of
experience.
Metaphysics in science
Much recent work has been devoted to analyzing the role of metaphysics in scientific
theorizing. Alexandre Koyré led this movement, declaring in his book Metaphysics and
Measurement, "It is not by following experiment, but by outstripping experiment, that
the scientific mind makes progress."[25] Imre Lakatos maintained that all scientific
theories have a metaphysical "hard core" essential for the generation of hypotheses
and theoretical assumptions.[26] Thus, according to Lakatos, "scientific changes are
connected with vast cataclysmic metaphysical revolutions."[27]
An example from biology of Lakatos' thesis: David Hull has argued that changes in the
ontological status of the species concept have been central in the development of
biological thought from Aristotle through Cuvier, Lamarck, and Darwin. Darwin's
ignorance of metaphysics made it more difficult for him to respond to his critics
because he could not readily grasp the ways in which their underlying metaphysical
views differed from his own.[28]
In physics, new metaphysical ideas have arisen in connection with quantum
mechanics, where subatomic particles arguably do not have the same sort of
individuality as the particulars with which philosophy has traditionally been
concerned.[29] Also, adherence to a deterministic metaphysics in the face of the
challenge posed by the quantum-mechanical uncertainty principle led physicists like
Albert Einstein to propose alternative theories that retained determinism.[30]
In chemistry, Gilbert Newton Lewis addressed the nature of motion, arguing that an
electron should not be said to move when it has none of the properties of motion.[31]
Katherine Hawley notes that the metaphysics even of a widely accepted scientific
theory may be challenged if it can be argued that the metaphysical presuppositions of
the theory make no contribution to its predictive success.[32]
Mysticism
Mysticism is "a constellation of distinctive practices, discourses, texts, institutions,
traditions, and experiences aimed at human transformation, variously defined in
different traditions."[web 1]
The term "mysticism" has western origins, with various, historical determined
meanings.[web 2][web 1] Derived from the Greek μυω, meaning "to conceal",[web 1] it
referred to the biblical, the liturgical and the spiritual or contemplative dimensions in
early and medieval Christianity,[1] and became associated with "extraordinary
experiences and states of mind" in the early modern period.[2]
In modern times, "mysticism" has acquired a limited definition,[web 2] but a broad
application,[web 2] as meaning the aim at the "union with the Absolute, the Infinite, or
God".[web 2] This limited definition has been applied to include a worldwide range of
religious traditions and practices.[web 2]
Since the 1960s, a scholarly debate has been going on in the scientific research of
"mystical experiences" between perennial and constructionist approaches.[3][4]
Contents
1 Etymology
2 Definition
2.1 Spiritual life and re-formation
2.2 Enlightenment
2.3 Mystical experience and union with the Divine
3 Development
3.1 Early Christianity
3.2 Medieval meaning
3.3 Early modern meaning
3.4 Contemporary meaning
4 Mystical experience
4.1 Induction of mystical experiences
4.2 Origins of the term "mystical experience"
4.3 Freud and the Oceanic feeling
4.4 Scientific research of "mystical experiences"
4.4.1 Perennialism versus constructionism
4.4.2 W. James – The Varieties of Religious Experience
4.4.3 Zaehner – Natural and religious mysticism
4.4.4 Stace – extrovertive and introvertive mysticism
4.4.5 Katz – constructionism
4.4.6 Newberg & d'Aquili – Why God Won't Go Away
4.5 Criticism
5 Forms of mysticism within world religions
6 Western mysticism
6.1 Mystery religions
6.2 Christian mysticism
6.3 Jewish mysticism
6.4 Islamic mysticism
7 Eastern mysticism
7.1 Buddhism
7.1.1 Enlightenment
7.1.2 Buddhahood
7.1.3 Absolute and relative
7.1.4 Zen
7.2 Indian mysticism
7.2.1 Hindu mysticism
7.2.1.1 Yoga
7.2.1.2 Vedanta
7.2.2 Tantra
7.2.3 Sikh mysticism
8 Modern mysticism
8.1 Perennial philosophy
8.2 Transcendentalism and Unitarian Universalism
8.3 Theosophical Society
8.4 New Thought
8.5 Orientalism and the "pizza effect"
8.6 The Fourth Way
9 Skepticism
9.1 Schopenhauer
9.2 Marvin Minsky
Etymology
"Mysticism" is derived from the Greek μυω, meaning "I conceal",[web 1] and its
derivative μυστικός, mystikos, meaning 'an initiate'.
Definition
Parsons warns that "what might at times seem to be a straightforward phenomenon
exhibiting an unambiguous commonality has become, at least within the academic
study of religion, opaque and controversial on multiple levels".[5] The definition, or
meaning, of the term "mysticism" has changed throughout the ages.[web 2]
Spiritual life and re-formation
According to Evelyn Underhill, mysticism is "the science or art of the spiritual life."[6] It
is
...the expression of the innate tendency of the human spirit towards complete
harmony with the transcendental order; whatever be the theological formula
under which that order is understood.[7][note 1][note 2]
Parsons stresses the importance of distinguishing between
...episodic experience and mysticism as a process that, though surely
punctuated by moments of visionary, unitive, and transformative encounters, is
ultimately inseparable from its embodied relation to a total religious matrix:
liturgy, scripture, worship, virtues, theology, rituals, practice and the arts.[8]
According to Gellman,
Typically, mystics, theistic or not, see their mystical experience as part of a
larger undertaking aimed at human transformation (See, for example, Teresa of
Avila, Life, Chapter 19) and not as the terminus of their efforts. Thus, in general,
‘mysticism’ would best be thought of as a constellation of distinctive practices,
discourses, texts, institutions, traditions, and experiences aimed at human
transformation, variously defined in different traditions.[web 1][note 3]
McGinn argues that "presence" is more accurate than "union", since not all mystics
spoke of union with God, and since many visions and miracles were not necessarily
related to union. He also argues that we should speak of "consciousness" of God's
presence, rather than of "experience", since mystical activity is not simply about the
sensation of God as an external object, but more broadly about
...new ways of knowing and loving based on states of awareness in which
God becomes present in our inner acts.[11]
Related to this idea of "presence" instead of "experience" is the transformation that
occurs through mystical activity:
This is why the only test that Christianity has known for determining the
authenticity of a mystic and her or his message has been that of personal
transformation, both on the mystic's part and—especially—on the part of those
whom the mystic has affected.[11]
Belzen and Geels also note that mysticism is
...a way of life and a 'direct consciousness of the presence of God' [or] 'the
ground of being' or similar expressions.[12]
Enlightenment
Some authors emphasize that mystical experience involves intuitive understanding and
the resolution of life problems. According to Larson,
A mystical experience is an intuitive understanding and realization of the
meaning of existence – an intuitive understanding and realization which is
intense, integrating, self-authenticating, liberating – i.e., providing a sense of
release from ordinary self-awareness – and subsequently determinative – i.e., a
primary criterion – for interpreting all other experience whether cognitive,
conative, or affective.[13]
And James R. Horne notes:
[M]ystical illumination is interpreted as a central visionary experience in a
psychological and behavioural process that results in the resolution of a personal or
religious problem. This factual, minimal interpretation depicts mysticism as an extreme
and intense form of the insight-seeking process that goes on in activities such as solving
theoretical problems or developing new inventions.[3][note 4][note 6]
Mystical experience and union with the Divine
William James, who popularized the use of the term "religious experience"[note 7] in
his The Varieties of Religious Experience,[17][18][web 1] influenced the understanding
of mysticism as a distinctive experience which supplies knowledge of the
transcendental.[19][web 1] He considered the "personal religion"[20] to be "more
fundamental than either theology or ecclesiasticism",[20] and states:
In mystic states we both become one with the Absolute and we become
aware of our oneness. This is the everlasting and triumphant mystical tradition,
hardly altered by differences of clime or creed. In Hinduism, in Neoplatonism, in
Sufism, in Christian mysticism, in Whitmanism, we find the same recurring note,
so that there is about mystical utterances an eternal unanimity which ought to
make a critic stop and think, and which brings it about that the mystical classics
have, as has been said, neither birthday nor native land.[21]
According to McClenon, mysticism is
The doctrine that special mental states or events allow an understanding of
ultimate truths. Although it is difficult to differentiate which forms of experience
allow such understandings, mental episodes supporting belief in "other kinds of
reality" are often labeled mystical [...] Mysticism tends to refer to experiences
supporting belief in a cosmic unity rather than the advocation of a particular
religious ideology.[web 3]
According to Blakemore and Jennett,
Mysticism is frequently defined as an experience of direct communion with God,
or union with the Absolute,[note 8] but definitions of mysticism (a relatively
modern term) are often imprecise and usually rely on the presuppositions of the
modern study of mysticism — namely, that mystical experiences involve a set of
intense and usually individual and private psychological states [...] Furthermore,
mysticism is a phenomenon said to be found in all major religious
traditions.[web 4][note 9]
Development
Early Christianity
In the Hellenistic world, 'mystical' referred to "secret" religious rituals.[web 1] The use of
the word lacked any direct references to the transcendental.[23] A "mystikos" was an
initiate of a mystery religion.
In early Christianity the term "mystikos" referred to three dimensions, which soon
became intertwined, namely the biblical, the liturgical and the spiritual or
contemplative.[1] The biblical dimension refers to "hidden" or allegorical interpretations
of Scriptures.[web 1][1] The liturgical dimension refers to the liturgical mystery of the
Eucharist, the presence of Christ in the Eucharist.[web 1][1] The third dimension is the
contemplative or experiential knowledge of God.[1]
The link between mysticism and the vision of the Divine was introduced by the early
Church Fathers, who used the term as an adjective, as in mystical theology and
mystical contemplation.[23]
Medieval meaning
This threefold meaning of "mystical" continued in the Middle Ages.[1] Under the
influence of Pseudo-Dionysius the Areopagite the mystical theology came to denote
the investigation of the allegorical truth of the Bible.[1] Pseudo-Dionysius' Apophatic
theology, or "negative theology", exerted a great influence on medieval monastic
religiosity, although it was mostly a male religiosity, since women were not allowed to
study.[24] It was influenced by Neo-Platonism, and very influential in Eastern Orthodox
Christian theology. In western Christianity it was a counter-current to the prevailing
Cataphatic theology or "positive theology". It is best known nowadays in the western
world from Meister Eckhart and John of the Cross.
Early modern meaning
In the sixteenth and seventeenth centuries mysticism came to be used as a
substantive.[23] This shift was linked to a new discourse,[23] in which science and
religion were separated.[25]
Luther dismissed the allegorical interpretation of the Bible, and condemned Mystical
theology, which he saw as more Platonic than Christian.[26] "The mystical", as the
search for the hidden meaning of texts, became secularised, and also associated with
literature, as opposed to science and prose.[27]
Science was also distanced from religion. By the middle of the 17th century, "the
mystical" was increasingly applied exclusively to the religious realm, separating religion
and "natural philosophy" as two distinct approaches to the discovery of the hidden
meaning of God's universe.[28] The traditional hagiographies and writings of the saints
became designated as "mystical", shifting from the virtues and miracles to
extraordinary experiences and states of mind, thereby creating a newly coined
"mystical tradition".[2] A new understanding developed of the Divine as residing within
human, a core essence beyond the varieties of religious expressions.[23]
Contemporary meaning
In the 19th century the meaning of mysticism was considerably narrowed:[web 2]
The competition between the perspectives of theology and science resulted in a
compromise in which most varieties of what had traditionally been called
mysticism were dismissed as merely psychological phenomena and only one
variety, which aimed at union with the Absolute, the Infinite, or God—and
thereby the perception of its essential unity or oneness—was claimed to be
genuinely mystical. The historical evidence, however, does not support such a
narrow conception of mysticism.[web 2]
Under the influence of Perennialism, which was popularised in both the west and the
east by Unitarianism, Transcendentalists and Theosophy, mysticism has acquired a
broader meaning, in which all sorts of esotericism and religious traditions and practices
are joined together.[29][30][18]
The term mysticism has been extended to comparable phenomena in non-Christian
religions,[web 2] where it influenced Hindu and Buddhist responses to colonialism,
resulting in Neo-Vedanta and Buddhist modernism.[30][31]
In contemporary usage, "mysticism" has become an umbrella term for all sorts of
non-rational world views.[32] William Harmless even states that mysticism has become
"a catch-all for religious weirdness".[33] Within the academic study of religion the
apparent "unambiguous commonality" has become "opaque and controversial".[23]
The term "mysticism" is being used in different ways in different traditions.[23] Some
call attention to the conflation of mysticism and linked terms, such as spirituality and
esotericism, and point at the differences between various traditions.[34]
Mystical experience
Many religious and mystical traditions see religious experiences (particularly that
knowledge that comes with them) as revelations caused by divine agency rather than
ordinary natural processes. They are considered real
encounters with God or gods, or real contact with
higher-order realities of which humans are not
ordinarily aware.[35] Nevertheless, the notion of
"religious experience" or "mystical experience" as
marking insight into religious truth is a modern
development.[36]
Induction of mystical experience
Various practices used to induce mystical experiences include:
-Mantras and yantras[note 10]
-Meditation[38]
-Praying[39]
-Music[40]
-Dance, such as:
Sufi whirling[41]
-Yoga, consisting of postures (Asanas), controlled
breathing (Pranayama), and other practices.[42]
-Extreme pain, such as:
Mortification of the flesh[43]
-Profound sexual activity,[44]
-Use of Entheogens, such as:
Ayahuasca (Dimethyltryptamine) [45]
Salvia divinorum (Salvinorin A)[46]
Peyote (Mescaline)[47]
Psilocybe cubensis (Psilocybin)[48]
Amanita muscaria (Muscimol)[49]
cannabis (THC and other compounds)[50]
-Psychological or neurophysiological anomalies, such as:
Profound depression,[51]
bipolar, schizophrenia or other conditions manifesting psychotic spectrum symptoms,[52]
Temporal lobe epilepsy,[53]
Stroke[54]
-Near-death experience[55]
Origins of the term "mystical experience"
The term "mystical experience" has become synonymous with the terms "religious
experience", spiritual experience and sacred experience.[16] A "religious experience" is
a subjective experience which is interpreted within a religious framework.[16] The
concept originated in the 19th century, as a defense against the growing rationalism of
western society.[18] William James popularized the use of the term "religious
experience" in his The Varieties of Religious Experience.[17][18] It has also influenced
the understanding of mysticism as a distinctive experience which supplies knowledge
of the transcendental.[web 1]
Wayne Proudfoot traces the roots of the notion of "religious experience" further back to
the German theologian Friedrich Schleiermacher (1768–1834), who argued that
religion is based on a feeling of the infinite. The notion of "religious experience" was
used by Schleiermacher to defend religion against the growing scientific and secular
critique. It was adopted by many scholars of religion, of which William James was the
most influential.[56]
A broad range of western and eastern movements have incorporated and influenced
the emergence of the modern notion of "mystical experience", such as the Perennial
philosophy, Transcendentalism, Universalism, the Theosophical Society, New Thought,
Neo-Vedanta and Buddhist modernism.[57][58]
Freud and the Oceanic feeling
The understanding of "mysticism" as an experience of unity with the divine is reflected
in a famous comment by Freud on the "oceanic feeling". In response to The Future of
an Illusion (1927) Romain Rolland wrote to Sigmund Freud:
By religious feeling, what I mean—altogether independently of any dogma, any
Credo, any organization of the Church, any Holy Scripture, any hope for
personal salvation, etc.—the simple and direct fact of a feeling of 'the eternal'
(which may very well not be eternal, but simply without perceptible limits, and as
if oceanic). This feeling is in truth subjective in nature. It is a contact.[web 5]
Rolland derived the notion of an "oceanic feeling" from various sources. He was
influenced by the writings of Baruch Spinoza, who criticized religion but retained "the
intellectual love of God". Rolland was also influenced by Indian mysticism, on which he
wrote The Life of Ramakrishna (1929/1931) and The Life of Vivekananda and the
Universal Gospel (1930/1947).[web 5]
In the first chapter of Civilization and Its Discontents (1929/1930) Freud describes this
notion, and then remarks that he doesn't know this feeling himself.[59] He then goes on
to locate this feeling within primary narcissism and the ego ideal. This feeling is later
reduced to a "shrunken residue" under the influence of reality.[web 5]
Ken Wilber argues that Freud erred by confusing pre-ego states with trans-ego
states.[citation needed]
Scientific research of "mystical experiences"
Perennialism versus constructionism
In the 19th century perennialism gained popularity as a model for perceiving similarities
across a broad range of religious traditions.[30] William James, in his The Varieties of
Religious Experience, was highly influential in further popularising this perennial
approach and the notion of personal experience as a validation of religious truths.[19]
Since the 1960s a continuous debate has been going on over "the question of whether
mysticism is a human experience that is the same in all times and places but explained
in many ways, or a family of similar experiences that includes many different kinds, as
represented by the many kinds of religious and secular mystical reports".[3] The first
stance is perennialism or essentialism,[60] while the second stance is social
constructionism or contextualism.[60]
The essentialist model argues that mystical experience is independent of the
sociocultural, historical and religious context in which it occurs, and regards all mystical
experience in its essence to be the same.[60] According to this "common-core thesis",[61]
different descriptions can mask quite similar if not identical experiences:[62]
[P]eople can differentiate experience from interpretation, such that different
interpretations may be applied to otherwise identical experiences".[63]
The contextualist model states that mystical experiences are shaped by the concepts
"which the mystic brings to, and which shape, his experience".[60] What is being
experienced is being determined by the expectations and the conceptual background
of the mystic.[64] Critics of the "common-core thesis" argue that
[N]o unmediated experience is possible, and that in the extreme, language is
not simply used to interpret experience but in fact constitutes experience.[63]
Principal representatives of the perennialist position are Walter Terence Stace,[65] who
distinguishes extrovertive and introvertive mysticism, in response to R. C. Zaehner's
distinction between theistic and monistic mysticism;[4] Huston Smith;[66][67] and Ralph
W. Hood,[68] who conducted empirical research using the "Mysticism Scale", which is
based on Stace's model.[68][note 11] The principal representative of the constructionist
position is Steven T. Katz, who, in a series of publications,[note 12] has made a highly
influential and compelling case for the constructionist approach.[69]
The perennial position is "largely dismissed by scholars",[70] but "has lost none of its
popularity".[71]
William James – The Varieties of Religious Experience
William James' The Varieties of Religious Experience is the classic study on religious
or mystical experience, which influenced deeply both the academic and popular
understanding of "religious experience".[17][18][19][web 1] He popularized the use of
the term "religious experience"[note 13] in his "Varieties",[17][18][web 1] and influenced
the understanding of mysticism as a distinctive experience which supplies knowledge
of the transcendental:[19][web 1]
Under the influence of William James' The Varieties of Religious Experience,
heavily centered on people's conversion experiences, most philosophers'
interest in mysticism has been in distinctive, allegedly knowledge-granting
“mystical experiences.”[web 1]
James emphasized the personal experience of individuals, and describes a broad
variety of such experiences in his The Varieties of Religious Experience.[21] He
considered the "personal religion"[20] to be "more fundamental than either theology or
ecclesiasticism",[20][note 14] and defines religion as
...the feelings, acts, and experiences of individual men in their solitude, so far
as they apprehend themselves to stand in relation to whatever they may
consider the divine.[72]
According to James, mystical experiences have four defining qualities:[73]
1) Ineffability. According to James, the mystical experience "defies expression,
that no adequate report of its content can be given in words".[73]
2) Noetic quality. Mystics stress that their experiences give them "insight into
depths of truth unplumbed by the discursive intellect."[73] James referred to this
as the "noetic" (or intellectual) "quality" of the mystical.[73]
3) Transiency. James notes that most mystical experiences have a short
occurrence, but their effect persists.[73]
4) Passivity. According to James, mystics come to their peak experience not as
active seekers, but as passive recipients.[73]
William James recognised the broad variety of mystical schools and conflicting
doctrines both within and between religions.[21] Nevertheless,
...he shared with thinkers of his era the conviction that beneath the variety
could be carved out a certain mystical unanimity, that mystics shared certain
common perceptions of the divine, however different their religion or historical
epoch.[21]
According to Harmless, "for James there was nothing inherently theological in or about
mystical experience",[74] and felt it legitimate to separate the mystic's experience from
theological claims.[74] Harmless notes that James "denies the most central fact of
religion",[75] namely that religion is practiced by people in groups, and often in
public.[75] he also ignores ritual, the historicity of religious traditions,[75] and theology,
instead emphasizing "feeling" as central to religion.[75]
Zaehner – Natural and religious mysticism
R. C. Zaehner distinguishes three fundamental types of mysticism, namely theistic,
monistic and panenhenic ("all-in-one") or natural mysticism.[4] The theistic category
includes most forms of Jewish, Christian and Islamic mysticism and occasional Hindu
examples such as Ramanuja and the Bhagavad Gita.[4] The monistic type, which
according to Zaehner is based upon an experience of the unity of one's soul,[4][note
15] includes Buddhism and Hindu schools such as Samkhya and Advaita Vedanta.[4]
Nature mysticism seems to refer to examples that do not fit into one of these two
categories.[4]
Zaehner considers theistic mysticism to be superior to the other two categories,
because of its appreciation of God, but also because of its strong moral imperative.[4]
Zaehner is directly opposing the views of Aldous Huxley. Natural mystical experiences
are in Zaehner's view of less value because they do not lead as directly to the virtues of
charity and compassion. Zaehner is generally critical of what he sees as narcissistic
tendencies in nature mysticism.[note 16]
Zaehner has been criticised by a number of scholars for the "theological violence"[4]
which his approach does to non-theistic traditions, "forcing them into a framework
which privileges Zaehner's own liberal Catholicism."[4]
Stace – extrovertive and introvertive mysticism
Zaehner has also been criticised by Walter Terence Stace in his book Mysticism and
Philosophy (1960) on similar grounds.[4] Stace argues that doctrinal differences
between religious traditions are inappropriate criteria when making cross-cultural
comparisons of mystical experiences.[4]
Stace distinguished two types of mystical experience, namely extrovertive and
introvertive mysticism.[4][76] Extrovertive mysticism is an experience of unity within the
world, whereas introvertive mysticism is "an experience of unity devoid of perceptual
objects; it is literally an experience of 'no-thing-ness'".[76] The unity in extrovertive
mysticism is with the totality of objects of perception; the unity in introvertive mysticism
is with a pure consciousness, devoid of objects of perception.[77] Stace's categories of
"introvertive mysticism" and "extrovertive mysticism" are derived from Rudolf Otto's
"mysticism of introspection" and "unifying vision".[77]
According to Hood, the introvertive mystical experience may be a common core to
mysticism independent of both culture and person, forming the basis of a "perennial
psychology".[78] According to Hood,
[E]mpirically, there is strong support to claim that as operationalized from
Stace's criteria, mystical experience is identical as measured across diverse
samples, whether expressed in "neutral language" or with either "God" or
"Christ" references.[79]
According to Hood,
...it seems fair to conclude that the perennialist view has strong empirical support,
insofar as regardless of the language used in the M Scale, the basic structure of the
experience remains constant across diverse samples and cultures. This is a way of
stating the perennialist thesis in measurable terms.[80]
Katz – constructionism
Katz rejects the discrimination between experiences and their interpretations.[4] Katz
argues that it is not the description, but the experience itself which is conditioned by the
cultural and religious background of the mystic.[4] According to Katz, it is not possible
to have pure or unmediated experience.[4][81] In an often-cited quote he states:
There are NO pure (i.e. unmediated) experiences. Neither mystical experience nor
more ordinary forms of experience give any indication, or any ground for believing, that
they are unmediated [...] The notion of unmediated experience seems, if not
self-contradictory, at best empty. This epistemological fact seems to me to be true, because
of the sort of beings we are, even with regard to the experiences of those ultimate
objects of concern with which mystics have had intercourse, e.g., God, Being, Nirvana,
etc.[82][note 17]
Newberg & d'Aquili – Why God Won't Go Away
Andrew B. Newberg and Eugene G. d'Aquili, in their book Why God Won't Go Away:
Brain Science and the Biology of Belief, take a perennial stance, describing their
insights into the relationship between religious experience and brain function.[83]
d'Aquili describes his own meditative experiences as "allowing a deeper, simpler part of
him to emerge", which he believes to be "the truest part of who he is, the part that
never changes."[83] Not contend with personal and subjective descriptions like these,
Newman and d'Aquili have studied the brain-correlates to such experiences. The
scanned the brain blood flow patterns during such moments of mystical transcendence,
using SPECT-scans, to detect which brain areas show heightened activity.[84] Their
scans showed unusual activity in the top rear section of the brain, the "posterior
superior parietal lobe", or the "orientation association area (OAA)" in their own
words.[85] This area creates a consistent cognition of the physical limits of the self.[86]
This OAA shows a sharply reduced activity during meditative states, reflecting a block
in the incoming flow of sensory information, resulting in a perceived lack of physical
boundaries.[87] According to Newberg and d'Aquili,
This is exactly how Robert and generations of Eastern mystics before him have
described their peak meditative, spiritual and mystical moments.[87]
Newberg and d'Aquili conclude that mystical experience correlates to observable
neurological events, which are not outside the range of normal brain function.[88] They
also believe that
...our research has left us no choice but to conclude that the mystics may be on
to something, that the mind’s machinery of transcendence may in fact be a
window through which we can glimpse the ultimate realness of something that
is truly divine.[89][note 18]
Why God Won't Go Away "received very little attention from professional scholars of
religion".[91][note 19][note 20] According to Bulkeley, "Newberg and D'Aquili seem
blissfully unaware of the past half century of critical scholarship questioning
universalistic claims about human nature and experience".[note 21] Matthew Day also
notes that the discovery of a neurological substrate of a "religious experience" is an
isolated finding which "doesn't even come close to a robust theory of religion".[93]
Criticism
The notion of "experience" has been criticised.[36][94][95] Robert Sharf points out that
"experience" is a typical Western term, which has found its way into Asian religiosity via
western influences.[36][note 22] The notion of "experience" introduces a false notion of
duality between "experiencer" and "experienced", whereas the essence of kensho is
the realisation of the "non-duality" of observer and observed.[97][98] "Pure experience"
does not exist; all experience is mediated by intellectual and cognitive activity.[99][100]
The specific teachings and practices of a specific tradition may even determine what
"experience" someone has, which means that this "experience" is not the proof of the
teaching, but a result of the teaching.[16] A pure consciousness without concepts,
reached by "cleaning the doors of perception",[note 23] would be an overwhelming
chaos of sensory input without coherence.[102]
Other critics point out that the stress on "experience" is accompanied by a favoring of
the atomic individual, instead of the shared life of the community. It also fails to distinguish
between episodic experience, and mysticism as a process, that is embedded in a total
religious matrix of liturgy, scripture, worship, virtues, theology, rituals and
practices.[103]
Richard King also points to disjunction between "mystical experience" and social
justice:[104]
The privatisation of mysticism – that is, the increasing tendency to locate the
mystical in the psychological realm of personal experiences – serves to exclude it from
political issues such as social justice. Mysticism thus becomes seen as a personal matter of
cultivating inner states of tranquility and equanimity, which, rather than seeking to
transform the world, serve to accommodate the individual to the status quo through the
alleviation of anxiety and stress.[104]
Forms of mysticism within world religions
The following table briefly summarizes the major forms[citation needed] of
mysticism[citation needed] within world religions and their basic concepts. Inclusion is
based on various definitions of mysticism, namely mysticism as a way of
transformation, mysticism as "enlightenment" or insight, and mysticism as an
experience of union.
Western mysticism
Mystery religions
The Eleusinian Mysteries (Greek: Ἐλευσίνια Μυστήρια) were annual initiation
ceremonies in the cults of the goddesses Demeter and Persephone, held in secret at
Eleusis (near Athens) in ancient Greece.[119] The mysteries began in about 1600 B.C.
in the Mycenean period and continued for two thousand years, becoming a major
festival during the Hellenic era, and later spreading to Rome.[120]
Christian mysticism
The Apophatic theology, or "negative theology", of Pseudo-Dionysius the Areopagite
exerted a great influence on medieval monastic religiosity.[24]
The High Middle Ages saw a
flourishing of mystical practice and
theorization corresponding to the
flourishing of new monastic orders,
with such figures as Guigo II,
Hildegard of Bingen, Bernard of
Clairvaux, the Victorines, all coming
from different orders, as well as the
first real flowering of popular piety
among the laypeople.
The Late Middle Ages saw the clash
between the Dominican and Franciscan schools of thought, which was also a conflict
between two different mystical theologies: on the one hand that of Dominic de Guzmán
and on the other that of Francis of Assisi, Anthony of Padua, Bonaventure, and Angela
of Foligno. This period also saw such individuals as John of Ruysbroeck, Catherine of
Siena and Catherine of Genoa, the Devotio Moderna, and such books as the Theologia
Germanica, The Cloud of Unknowing and The Imitation of Christ.
Moreover, there was the growth of groups of mystics centered around geographic
regions: the Beguines, such as Mechthild of Magdeburg and Hadewijch (among
others); the Rhineland mystics Meister Eckhart, Johannes Tauler and Henry Suso; and
the English mystics Richard Rolle, Walter Hilton and Julian of Norwich. The Spanish
mystics included Teresa of Avila, John of the Cross and Ignatius Loyola.
Later, the Reformation saw the writings of Protestant visionaries such as Emanuel
Swedenborg and William Blake, and the foundation of mystical movements such as the
Quakers.
Catholic mysticism continued into the modern period with such figures as Padre Pio
and Thomas Merton. The Philokalia, an ancient method of Eastern Orthodox mysticism,
was promoted by the twentieth century Traditionalist School. The inspired or
"channeled" work A Course in Miracles represents a blending of non-denominational
Christian and New Age ideas.
Jewish mysticism
Kabbalah is a set of esoteric teachings meant to explain the relationship between an
unchanging, eternal and mysterious Ein Sof (no end) and the mortal and finite universe
(his creation). Inside Judaism, it forms the foundations of mystical religious
interpretation.
Kabbalah originally developed entirely within the realm of Jewish thought. Kabbalists
often use classical Jewish sources to explain and demonstrate its esoteric teachings.
These teachings are thus held by followers in Judaism to define the inner meaning of
both the Hebrew Bible and traditional Rabbinic literature, their formerly concealed
transmitted dimension, as well as to explain the significance of Jewish religious
observances.[121]
Kabbalah emerged, after earlier forms of Jewish mysticism, in 12th- to 13th-century
Southern France and Spain, becoming reinterpreted in the Jewish mystical renaissance
of 16th-century Ottoman Palestine. It was popularised in the form of Hasidic Judaism
from the 18th century onwards. 20th-century interest in Kabbalah has inspired
cross-denominational Jewish renewal and contributed to wider non-Jewish
contemporary spirituality, as well as engaging its flourishing emergence and historical
re-emphasis through newly established academic investigation.
Islamic mysticism
Sufism is a discipline within Islam: it is said to be Islam's inner and mystical
dimension.[122][123][124] Classical Sufi scholars have defined Sufism as
[A] science whose objective is the reparation of the heart and turning it away
from all else but God.[125]
A practitioner of this tradition is nowadays known as a ṣūfī (Arabic: صُوفِيّ), or, in earlier
usage, a dervish. The origin of the word "Sufi" is ambiguous. One understanding is that
Sufi means 'wool-wearer'; wool wearers during early Islam were pious ascetics who
withdrew from urban life. Another explanation of the word "Sufi" is that it means
'purity'.[126]
Sufis generally belong to a Khalqa, a circle or group, led by a Sheikh or Murshid. Sufi
circles usually belong to a Tariqa, literally a path, a kind of lineage, which traces its
succession back to notable Sufis of the past, and often ultimately to the prophet
Muhammed or one of his close associates. The turuq (plural of tariqa) are not enclosed
like Christian monastic orders; rather the members retain an outside life. Membership
of a Sufi group often passes down family lines. Meetings may or may not be segregated
according to the prevailing custom of the wider society. An existing Muslim faith is not
always a requirement for entry, particularly in Western countries.
Sufi practice includes:
-Dhikr, or remembrance (of God), which often takes the form of rhythmic
chanting and breathing exercises.
-Sema, which takes the form of music and dance — the whirling dance of the
Mevlevi dervishes is a form well known in the West.
-Muraqaba or meditation.
-Visiting holy places, particularly the tombs of Sufi saints, in order to absorb
barakah, or spiritual energy.
The aims of Sufism include: the experience of ecstatic states (hal), purification of the
heart (qalb), overcoming the lower self (nafs), the development of extrasensory and
healing powers, extinction of the individual personality (fana), communion with God
(haqiqa), and higher knowledge (marifat). Some sufic beliefs and practices have been
found unorthodox by other Muslims; for instance Mansur al-Hallaj was put to death for
blasphemy after uttering the phrase Ana'l Haqq, "I am the Truth" (i.e. God) in a trance.
Notable classical Sufis include Jalaluddin Rumi, Fariduddin Attar, Saadi Shirazi and
Hafez, all major poets in the Persian language. Al-Ghazzali and Ibn Arabi were
renowned philosophers. Rabia Basri was the most prominent female Sufi.
Sufism first came into contact with the Judeo-Christian world during the Moorish
occupation of Spain. An interest in Sufism revived in non-Muslim countries during the
modern era, led by such figures as Inayat Khan and Idries Shah (both in the UK), Rene
Guenon (France) and Ivan Aguéli (Sweden). Sufism has also long been present in
Asian countries that do not have a Muslim majority, such as India and China.[127]
Eastern mysticism
Buddhism
The main goal in Buddhism is not some sort of "union", but insight into reality, the
cessation of suffering by reaching Nirvana, and Bodhicitta, compassion for the benefit of
all sentient beings.[128] Buddhism has developed several branches and philosophies
throughout its history, and offers various paths to liberation. The classic path is the
Noble Eightfold Path, but others include the Path of Purification, the Bodhisattva path,
Lamrim and subitism.
Enlightenment
A central term in Buddhism is "enlightenment", the "full comprehension of a
situation".[web 7] The English term "enlightenment" has commonly been used to
translate several Sanskrit, Pali,[web 8] Chinese and Japanese terms and concepts,
especially bodhi, prajna, kensho, satori and buddhahood. Bodhi is a Theravada term. It
literally means "awakening" and "understanding". Someone who is awakened has
gained insight into the workings of the mind which keeps us imprisoned in craving,
suffering and rebirth,[web 7] and has also gained insight into the way that leads to
nirvana, the liberation of oneself from this imprisonment. Prajna is a Mahayana term. It
refers to insight into our true nature, which according to Madhyamaka is empty of a
personal essence in the stream of experience. But it also refers to the Tathāgata-garbha
or Buddha-nature, the essential basic-consciousness beyond the stream of
experience. In Zen, kensho means "seeing into one's true nature".[129] Satori is often
used interchangeably with kensho, but refers to the experience of kensho.[129]
Buddhahood
Buddhahood is the attainment of full awakening and becoming a Buddha. According to
the Tibetan Thubten Yeshe,[web 9] enlightenment
[means] full awakening; buddhahood. The ultimate goal of Buddhist practice,
attained when all limitations have been removed from the mind and one's
positive potential has been completely and perfectly realized. It is a state
characterized by infinite compassion, wisdom and skill.[web 10]
Absolute and relative
Various schools of Buddhism discern levels of truth, reflecting a polarity of "absolute"
and "relative" truth. A fully enlightened life asks for the integration of these two levels of
truth in daily life.[130]
-The Two truths doctrine of the Madhyamaka
-The Three Natures of the Yogacara
-Essence-Function, or Absolute-relative in Chinese and Korean Buddhism
-The Trikaya formula, consisting of:
The Dharmakāya or Truth body which embodies the very principle of
enlightenment and knows no limits or boundaries;
The Sambhogakāya or body of mutual enjoyment which is a body of bliss or
clear light manifestation;
The Nirmānakāya or created body which manifests in time and space.[131]
The two truths doctrine states that there is:
-Relative or common-sense truth (Sanskrit saṃvṛti-satya, Pāli sammuti sacca,
Tibetan kun-rdzob bden-pa), which describes our daily experience of a concrete
world, and
-Ultimate truth (Sanskrit, paramārthasatya, Pāli paramattha sacca, Tibetan:
don-dam bden-pa), which describes the ultimate reality as sunyata, empty of
concrete and inherent characteristics.
Zen
The Rinzai-Zen tradition stresses the need for further training after attaining kenshō.
Practice is to be continued to deepen the insight and to express it in daily
life.[132][129][133][134]
According to Hakuin, the main aim of "post-satori practice"[135][136][137] (gogo no
shugyo,[138] or kojo, "going beyond"[139]) is to cultivate the "Mind of
Enlightenment",[140] "benefiting others by giving them the gift of the Dharma
teaching".[141][note 24] According to Yamada Koun, "if you cannot weep with a person
who is crying, there is no kensho".[143]
But one also has to purify oneself by ongoing practice.[144][145] And "experience" has
to be supplemented by intellectual understanding and study of the Buddhist
teachings;[146][147][148] otherwise one remains a zen temma, a "Zen devil".[149]
Finally, these efforts are to result in a natural, effortless, down-to-earth state of being,
the "ultimate liberation", "knowing without any kind of defilement".[150]
To deepen the initial insight of kensho, shikantaza and kōan-study are necessary. This
trajectory of initial insight followed by a gradual deepening and ripening is expressed by
Linji Yixuan in his Three Mysterious Gates, the Four Ways of Knowing of Hakuin,[151]
and the Ten Ox-Herding Pictures[152] which detail the steps on the Path.
Indian mysticism
Hindu mysticism
Hinduism has a number of interlinked ascetic traditions and philosophical schools
which aim at moksha[153] and the acquisition of higher powers.[154] With the onset of
the British colonisation of India, those traditions came to be interpreted in western
terms such as "mysticism", drawing equivalents with western terms and practices.[58]
These western notions were taken over by Indian elites, and popularised as
Neo-Vedanta, in which the notion of "spiritual experience" as validation of "religious
knowledge" plays an essential role.[58][155]
Yoga
Yoga is the physical, mental, and spiritual practices or disciplines which originated in
ancient India with a view to attain a state of permanent peace.[156] The term yoga can
be derived from either of two roots, yujir yoga (to yoke) or yuj samādhau (to
concentrate).[157] The Yoga Sūtras of Patañjali defines yoga as "the stilling of the
changing states of the mind".[158] Yoga has also been popularly defined as "union with
the divine" in other contexts and traditions.[159][160]
Various traditions of yoga are found in Hinduism, Buddhism and
Jainism.[161][162][163][162] In Hinduism, yoga is one of the six āstika ("orthodox")
schools of Hindu philosophy.[164] Yoga is also an important part of Vajrayana and
Tibetan Buddhist philosophy.[165][166][167]
Hatha yoga, the yoga of bodily postures, is widely practised in the west. A popular
summary of the forms of yoga,[citation needed] as popularised in the west by Swami
Vivekananda, comprises:
-karma yoga, based on ethical action.
-bhakti yoga emphasising devotion to deities.
-jnana yoga, the "path of knowledge"
-raja yoga, based on meditation.
In the vedantic and yogic paths, the shishya or aspirant is usually advised to find a
guru, or teacher, who may prescribe spiritual exercises (sadhana) or be credited with the
ability to transmit shakti, divine energy.
Vedanta
Classical Vedanta gives philosophical interpretations and commentaries of the
Upanishads, a vast collection of ancient hymns. Vedanta originally meant the
Upanishads.[168] By the 8th century,[citation needed] it came to mean all philosophical
traditions concerned with interpreting the three basic texts, namely the
Upanishads, the Brahman Sutras and the Bhagavadgita.[168] At least ten schools of
Vedanta are known,[169] of which Advaita Vedanta, Vishishtadvaita, and Dvaita are
the best known.[170]
Advaita Vedanta is a branch of Vedanta which states that there is no difference
between Atman and Brahman. The best-known subschool is Kevala Vedanta or
mayavada as expounded by Adi Shankara. Shankara's interpretation was influenced
by Buddhism.[171][note 25] It was reformulated by Shankara, who systematised the
works of preceding philosophers.[175] In modern times, due to the influence of western
Orientalism and Perennialism on Indian Neo-Vedanta and Hindu nationalism,[176]
Advaita Vedanta has acquired a broad acceptance in Indian culture and beyond as the
paradigmatic example of Hindu spirituality.[176]
Shankara emphasizes anubhava, correct understanding of the sruti,[155] which is
supposed to lead to mukti, liberation from endless cycles of reincarnation.[155] In
modern times, the term anubhava has been reinterpreted by Vivekananda and
Radhakrishnan as meaning "religious experience"[155] or "intuition".[web 6]
Four scriptural passages, the Mahavakyas, or "great sayings" are given special
significance by Shankara, in support of his non-dual interpretation of the Upanishads:
1) prajñānam brahma – "Prajñānam (consciousness) is Brahman" (Aitareya
Upanishad 3.3 of the Rig Veda)
2) ayam ātmā brahma – "This Self (Atman) is Brahman" (Mandukya Upanishad
1.2 of the Atharva Veda)
3) tat tvam asi – "Thou art That" or "Thou art Brahman" (Chandogya
Upanishad 6.8.7 of the Sama Veda)
4) aham brahmāsmi – "I am Brahman", or "I am Divine"[177] (Brhadaranyaka
Upanishad 1.4.10 of the Yajur Veda)
In contrast, Bhedabheda-Vedanta emphasizes that Atman and Brahman are both the
same and not the same,[178] while Dvaita Vedanta states that Atman and God are
fundamentally different.[178]
In modern times, the Upanishads have been interpreted by Neo-Vedanta as being
"mystical".[58] According to Dasupta,
[T]he sages of the Upanishads believed in a supra-conscious experience of
pure self-illumination as the ultimate principle, superior to and higher than any
of our mental states of cognition, willing, or feeling. The nature of this principle
is itself extremely mystical; many persons, no doubt, are unable to grasp its
character. [160]
Contemporary Advaita teachers warn against a rush for superficial "enlightenment
experiences". Jacobs warns that Advaita Vedanta practice takes years of committed
practice to sever the "occlusion"[179] of the so-called "vasanas, samskaras, bodily
sheaths and vrittis", and the "granthi[note 26] or knot forming identification between Self
and mind":[180]
The main Neo-Advaita fallacy ignores the fact that there is an occlusion or
veiling formed by vasanas, samskaras, bodily sheaths and vrittis, and there is a
granthi or knot forming identification between Self and mind, which has to be
severed [...] The Maharshi's remedy to this whole trap is persistent effective
Self-enquiry, and/or complete unconditional surrender of the 'phantom ego' to
Self or God, until the granthi is severed, the vasanas are rendered harmless like
a burned out rope.[181]
And according to Puligandla:
Any philosophy worthy of its title should not be a mere intellectual exercise but
should have practical application in enabling man to live an enlightened life. A
philosophy which makes no difference to the quality and style of our life is no
philosophy, but an empty intellectual construction.[182]
Tantra
Tantra is the name given by scholars to a style of meditation and ritual which arose in
India no later than the fifth century AD.[183] Tantra has influenced the Hindu, Bön,
Buddhist, and Jain traditions and spread with Buddhism to East and Southeast
Asia.[184]
Tantric practice includes visualisation of deities, mantras and mandalas. It can also
include sexual and other (antinomian) practices.[citation needed]
Tantric ritual seeks to access the supra-mundane through the mundane, identifying the
microcosm with the macrocosm.[185] The Tantric aim is to sublimate (rather than
negate) reality.[186] The Tantric practitioner seeks to use prana (energy flowing
through the universe, including one's body) to attain goals which may be spiritual,
material or both.[187]
Sikh mysticism
Mysticism in the Sikh dharm began with
its founder, Guru Nanak, who as a child
had profound mystical experiences.[188]
Guru Nanak stressed that God must be
seen with 'the inward eye', or the 'heart',
of a human being.[189] Guru Arjan, the
fifth Sikh Guru, added religious mystics
belonging to other religions into the holy
scriptures that would eventually become
the Guru Granth Sahib.
In Sikhi there is no dogma[190] but only the search for truth. Sikhs meditate as a
means to progress towards enlightenment; it is devoted meditation simran that enables
a sort of communication between the Infinite and finite human consciousness.[191]
The goal of Sikhi is to be one with
God.[192] For the Sikhs there is no
concentration on the breath but chiefly
the remembrance of God through the
recitation of the name of God. Sikhs are instructed to recite the name of God
(Waheguru) 24 hours a day[193] and surrender themselves to God's presence, often
metaphorized as surrendering themselves to the Lord's feet.[194]
There are no priests, monastics or yogis in the Sikh dharm and these mystic practices
are not limited to an elite few who remove themselves from the world. Rather, Sikhs do
not renounce the world and the participation in ordinary life is considered spiritually
essential to the Sikh.[195][196]
Modern mysticism
Perennial philosophy
The Perennial philosophy (Latin: philosophia perennis),[note 27] also referred to as
"perennialism", is a perspective within the philosophy of religion which views each of
the world’s religious traditions as sharing a single, universal truth on which foundation
all religious knowledge and doctrine has grown.
The term philosophia perennis was first used by Agostino Steuco (1497–1548),[197]
drawing on an already existing philosophical tradition, the most direct predecessors of
which were Marsilio Ficino (1433–1499) and Giovanni Pico della Mirandola (1463–94).
A major proponent in the 20th century was Aldous Huxley, who "was heavily influenced
in his description by Vivekananda's neo-Vedanta and the idiosyncratic version of Zen
exported to the west by D.T. Suzuki. Both of these thinkers expounded their versions of
the perennialist thesis",[198] which they originally received from western thinkers and
theologians.[30]
According to the Perennial Philosophy the mystical experiences in all religions are
essentially the same. It supposes that many, if not all, of the world's great religions
have arisen around the teachings of mystics, including Buddha, Jesus, Lao Tze, and
Krishna. It also sees most religious traditions describing fundamental mystical
experience, at least esoterically.
According to Steindl-Rast, this common core of mystical experience may be repressed
by institutional religion. Conventional religions, by definition, have strong institutional
structures, including formal hierarchies and mandated sacred texts and/or creeds.
Personal experience may be a threat to these structures.[web 11]
Transcendentalism and Unitarian Universalism
Ralph Waldo Emerson (1803–1882) was
a pioneer of the idea of spirituality as a
distinct field.[199] He was one of the
major figures in Transcendentalism, an
early 19th-century liberal Protestant
movement, which was rooted in English
and German Romanticism, the Biblical
criticism of Herder and Schleiermacher,
and the skepticism of Hume.[web 1] The
Transcendentalists emphasised an intuitive, experiential approach to religion.[web 12] Following Schleiermacher,[200] an individual's intuition of truth was taken as the criterion for truth.[web 12] In the late 18th and early 19th century, the first translations of Hindu texts appeared, which were also read by the Transcendentalists and influenced their thinking.[web 12] They also endorsed universalist and Unitarianist ideas, leading to Unitarian Universalism, the idea that there must be truth in other religions as well, since a loving God would redeem all living beings, not just Christians.[web 12][web 13]
Theosophical Society
The Theosophical Society was formed in 1875 by Helena Blavatsky, Henry Steel
Olcott, William Quan Judge and others to advance the spiritual principles and search
for Truth known as Theosophy.[201][note 28] The Theosophical Society has been
highly influential in promoting interest, both in the West and the East, in a great variety of
religious teachings:
"No single organization or movement has contributed so many components to
the New Age Movement as the Theosophical Society [...] It has been the major
force in the dissemination of occult literature in the West in the twentieth
century."[201]
The Theosophical Society searched for 'secret teachings' in Asian religions. It has been
influential on modernist streams in several Asian religions, notably Hindu reform
movements, the revival of Theravada Buddhism, and D.T. Suzuki, who popularized the
idea of enlightenment as insight into a timeless, transcendent reality.[web 14][web
15][57] Another example can be seen in Paul Brunton's A Search in Secret India, which
introduced Ramana Maharshi to a western audience.
New Thought
The New Thought movement is a spiritually focused, philosophically oriented movement. New Thought promotes the ideas that Infinite Intelligence, or
God, is everywhere, spirit is the totality of real things, true human selfhood is divine,
divine thought is a force for good, sickness originates in the mind, and "right thinking"
has a healing effect.[web 16][web 17]
New Thought was propelled along by a number of spiritual thinkers and philosophers
and emerged through a variety of religious denominations and churches, particularly
the Unity Church, Religious Science, and Church of Divine Science.[202] The Home of
Truth, which belongs to the New Thought movement, has, from its inception as the
Pacific Coast Metaphysical Bureau in the 1880s, disseminated the teachings of the
Hindu teacher Swami Vivekananda.[web 18]
According to Ernest Holmes, who belongs to the New Thought movement,
A mystic is not a mysterious person; but is one who has a deep, inner sense of
Life and Unity with the Whole; mysticism and mystery are entirely different
things; one is real while the other may, or may not, be an illusion. There is
nothing mysterious in the Truth, so far as It is understood; but all things, of
course, are mysteries until we understand them.[203]
Orientalism and the "pizza effect"
The interplay between western and eastern notions of religion is an important factor in
the popularisation of the notion of "mystical experience". In the 19th century, when
Asian countries were colonised by western states, there started a process of cultural mimesis.[30][31][18] In this process Western ideas about religion, especially the notion of "religious experience", were introduced in Asian countries by missionaries, scholars and the Theosophical Society, and amalgamated into a new understanding of the Indian and Buddhist traditions. This amalgam was exported back to the west as 'authentic Asian traditions', and acquired great popularity in the west. Due to this western popularity it also gained authority back in India, Sri Lanka and
Japan.[30][31][18]
The best-known representatives of this amalgam tradition are Annie Besant (Theosophical Society), Swami Vivekananda and Sarvepalli Radhakrishnan (Neo-Vedanta), Anagarika Dharmapala, a 19th-century Sri Lankan Buddhist activist who founded the Maha Bodhi Society, and D.T. Suzuki, a Japanese scholar and Zen Buddhist. A synonymous term for this broad understanding is nondualism. This mutual
influence is also known as the pizza effect.
The Fourth Way
The Fourth Way is a term used by George Gurdjieff to describe an approach to self-development he learned over years of travel in the East[204] that combined what he
saw as three established traditional "ways," or "schools" into a fourth way.[205] These
three ways were of the body, mind and emotions. The term "The Fourth Way" was
further developed by P. D. Ouspensky in his lectures and writings. According to this
system, the chief difference between the three traditional schools, or ways, and the
fourth way is that "they are permanent forms which have survived throughout history
mostly unchanged, and are based on religion. Where schools of yogis, monks or fakirs
exist, they are barely distinguishable from religious schools. The fourth way differs in
that it is not a permanent way. It has no specific forms or institutions and comes and
goes controlled by some particular laws of its own."
The Fourth Way mainly addresses the question of people's place in the Universe, their
possibilities for inner development, and transcending the body to achieve a higher state
of consciousness. It emphasizes that people live their lives in a state referred to as
"waking sleep", but that higher levels of consciousness and various inner abilities are
possible.[206] The Fourth Way teaches people how to increase and focus their
attention and energy in various ways, and to minimize daydreaming and
absentmindedness.[207][208] According to this teaching, this inner development in
oneself is the beginning of a possible further process of change, whose aim is to
transform a man into what Gurdjieff taught he ought to be.[209]
Skepticism
Schopenhauer
According to Schopenhauer mysticism is unconvincing:[210]
In the widest sense, mysticism is every guidance to the immediate awareness
of what is not reached by either perception or conception, or generally by any
knowledge. The mystic is opposed to the philosopher by the fact that he begins
from within, whereas the philosopher begins from without. The mystic starts
from his inner, positive, individual experience, in which he finds himself as the
eternal and only being, and so on. But nothing of this is communicable except
the assertions that we have to accept on his word; consequently he is unable to
convince.
—Schopenhauer, The World as Will and Representation, Vol. II, Ch. XLVIII
Marvin Minsky
In The Emotion Machine, Marvin Minsky[211] argues that mystical experiences only
seem profound and persuasive because the mind's critical faculties are relatively
inactive during them:
Meditator: It suddenly seemed as if I was surrounded by an immensely powerful
Presence. I felt that a Truth had been "revealed" to me that was far more
important than anything else, and for which I needed no further evidence. But
when later I tried to describe this to my friends, I found that I had nothing to say
except how wonderful that experience was.
This peculiar type of mental state is sometimes called a "Mystical Experience"
or "Rapture," "Ecstasy," or "Bliss." Some who undergo it call it "wonderful," but a
better word might be "wonderless," because I suspect that such a state of mind
may result from turning so many Critics off that one cannot find any flaws in it.
What might that "powerful Presence" represent? It is sometimes seen as a
deity, but I suspect that it is likely to be a version of some early Imprimer that for
years has been hiding inside your mind. In any case, such experiences can be
dangerous—for some victims find them so compelling that they devote the rest
of their lives to trying to get themselves back to that state again.
Minsky's idea of 'some early Imprimer hiding in the mind' was an echo of Freud's belief
that mystical experience was essentially infantile and regressive, i.e., a memory of
'Oneness' with the mother.
God
God is often conceived as the Supreme Being and principal object of faith.[1] In theism,
God is the creator and sustainer of the universe. In deism, God is the creator (but not
the sustainer) of the universe. In pantheism, God is the universe itself. Theologians
have ascribed a variety of attributes to the many different conceptions of God.
Common among these are omniscience (infinite knowledge), omnipotence (unlimited
power), omnipresence (present everywhere), omnibenevolence (perfect goodness),
divine simplicity, and eternal and necessary existence. Monotheism is the belief in the
existence of one God or in the oneness of God. God has also been conceived as being
incorporeal (immaterial), a personal being, the source of all moral obligation, and the
"greatest conceivable existent".[1] Many notable medieval philosophers and modern
philosophers have developed arguments for and against the existence of God.[2]
There are many names for God, and different names are attached to different cultural
ideas about who God is and what attributes God possesses. In the ancient Egyptian era of
Atenism, possibly the earliest recorded monotheistic religion premised on there being
one "true" Supreme Being and Creator of the Universe,[3] this deity is called Aten.[4] In
the Hebrew Bible "He Who Is," "I Am that I Am", and the "Tetragrammaton" YHVH are
used as names of God, while Yahweh and Jehovah are sometimes used in Christianity as vocalizations of YHVH. In Arabic and other Semitic languages, the name Allah, "Al-El," or "Al-Elah" ("the God") is used. Muslims use a multitude of titular names for
God, while in Judaism it is common to refer to God by the titular names Elohim or
Adonai, the latter of which is believed by some scholars to descend from the Egyptian
Aten.[5][6][7][8][9][10] In Hinduism, Brahman is often considered a monistic deity.[11]
Other religions have names for God, for instance, Baha in the Bahá'í Faith,[12]
Waheguru in Sikhism,[13] and Ahura Mazda in Zoroastrianism.[14]
The many different conceptions of God, and competing claims as to God's
characteristics, aims, and actions, have led to the development of ideas of Omnitheism,
Pandeism,[15][16] or a Perennial philosophy, wherein it is supposed that there is one
underlying theological truth, of which all religions express a partial understanding, and
as to which "the devout in the various great world religions are in fact worshipping that
one God, but through different, overlapping concepts or mental images of him."[17]
Contents
1 Etymology and usage
2 General conceptions
2.1 Oneness
2.2 Theism, deism and pantheism
2.3 Other concepts
3 Non-theistic views of God
3.1 Anthropomorphism
4 Existence of God
5 Specific attributes
5.1 Epitheta
5.2 Gender
5.3 Relationship with creation
6 Theological approaches
7 Distribution of belief in God
Etymology and usage
The earliest written form of the Germanic word God (always, in this usage, capitalized[18]) comes from the 6th-century Christian Codex Argenteus. The English word itself is derived from the Proto-Germanic *ǥuđan. Most linguists[who?] agree that the reconstructed Proto-Indo-European form *ghu-tó-m was based on the root *ghau(ə)-, which meant either "to call" or "to invoke".[19] The Germanic words for God were originally neuter—applying to both genders—but during the process of the Christianization of the Germanic peoples from their indigenous Germanic paganism, the word became a masculine syntactic form.[20]
In the English language, the
capitalized form of God continues to
represent a distinction between
monotheistic "God" and "gods" in
polytheism.[21][22] The English word
"God" and its counterparts in other
languages are normally used for any
and all conceptions and, in spite of
significant differences between
religions, the term remains an English translation common to all. The same holds for
Hebrew El, but in Judaism, God is also given a proper name, the tetragrammaton
(written YHWH), in origin the name of an Edomite or Midianite deity, Yahweh. In many
translations of the Bible, when the word "LORD" is in all capitals, it signifies that the
word represents the tetragrammaton.[23] Allāh (Arabic: الله) is the Arabic term with no plural used by Muslims and Arabic-speaking Christians and Jews meaning "The God" (with a capital G), while "ilāh" (Arabic: إله) is the term used for a deity or a god in
general.[24][25][26] God may also be given a proper name in monotheistic currents of
Hinduism which emphasize the personal nature of God, with early references to his
name as Krishna-Vasudeva in Bhagavata or later Vishnu and Hari.[27]
General conceptions
There is no clear consensus on the nature of God.[28] The Abrahamic conceptions of
God include the monotheistic definition of God in Judaism, the trinitarian view of
Christians, and the Islamic concept of God. The dharmic religions differ in their view of
the divine: views of God in Hinduism vary by region, sect, and caste, ranging from
monotheistic to polytheistic to atheistic. Divinity was recognized by the historical
Buddha, particularly Śakra and Brahma. However, other sentient beings, including
gods, can at best only play a supportive role in one's personal path to salvation.
Conceptions of God in the latter developments of the Mahayana tradition give a more
prominent place to notions of the divine.[citation needed]
Oneness
Monotheists hold that there is only one god,
and may claim that the one true god is
worshiped in different religions under
different names. The view that all theists
actually worship the same god, whether they
know it or not, is especially emphasized in
Hinduism[29] and Sikhism.[30]
In Christianity, most Christians believe in
Trinitarian monotheism, known simply as the
Trinity. The doctrine of the Trinity defines
God as one God in three persons. The
Trinity consists of God the Father, God
the Son (Jesus), and God the Holy
Spirit.[31]
Islam's most fundamental concept is tawhīd
(meaning "oneness" or "uniqueness"). God
is described in the Qur'an as: "Say: He is
Allah, the One and Only; Allah, the Eternal, Absolute; He begetteth not, nor is He
begotten; And there is none like unto Him."[32][33] Muslims repudiate the Christian
doctrine of the Trinity and divinity of Jesus, comparing it to polytheism. In Islam, God is
beyond all comprehension or equal and does not resemble any of his creations in any
way. Thus, Muslims are not iconodules, and are not expected to visualize God.[34]
Henotheism is the belief and worship of a single god while accepting the existence or
possible existence of other deities.[35]
Theism, deism and pantheism
Theism generally holds that God exists realistically, objectively, and independently of
human thought; that God created and sustains everything; that God is omnipotent and
eternal; personal and interacting with the universe through for example religious
experience and the prayers of humans.[36] It holds that God is both transcendent and
immanent; thus, God is simultaneously infinite and in some way present in the affairs of
the world.[37] Not all theists subscribe to all the above propositions, but usually a fair
number of them; cf. family resemblance.[36] Catholic theology holds that God is
infinitely simple and is not involuntarily subject to time. Most theists hold that God is
omnipotent, omniscient, and benevolent, although this belief raises questions about
God's responsibility for evil and suffering in the world. Some theists ascribe to God a
self-conscious or purposeful limiting of omnipotence, omniscience, or benevolence.
Open Theism, by contrast, asserts that, due to the nature of time, God's omniscience
does not mean the deity can predict the future. "Theism" is sometimes used to refer in
general to any belief in a god or gods, i.e., monotheism or polytheism.[38][39]
Deism holds that God is wholly transcendent: God exists, but does not intervene in the
world beyond what was necessary to create it.[37] In this view, God is not
anthropomorphic, and does not literally answer prayers or cause miracles to occur.
Common in Deism is a belief that God has no interest in humanity and may not even
be aware of humanity. Pandeism and Panendeism, respectively, combine Deism with
the Pantheistic or Panentheistic beliefs discussed below.[40][41][16] Pandeism is
proposed to explain, with respect to Deism, why God would create a universe and then abandon it,[42] and, with respect to Pantheism, the origin and purpose of the universe.[42][43]
Pantheism holds that God is the universe and the universe is God, whereas
Panentheism holds that God contains, but is not identical to, the Universe; the
distinctions between the two are subtle.[citation needed] Pantheism is also the view of the Liberal Catholic Church, Theosophy, some views of Hinduism (except Vaishnavism, which believes in panentheism), Sikhism, some divisions of Neopaganism and Taoism, along with many varying denominations and individuals within denominations.
Kabbalah, Jewish mysticism, paints a pantheistic/panentheistic view of God — which
has wide acceptance in Hasidic Judaism, particularly from their founder The Baal
Shem Tov — but only as an addition to the Jewish view of a personal god, not in the
original pantheistic sense that denies or limits persona to God.
Other concepts
Dystheism, which is related to theodicy, is a form of theism which holds that God is
either not wholly good or fully malevolent as a consequence of the problem of evil.
One such example comes from Dostoevsky's The Brothers Karamazov, in which Ivan
Karamazov rejects God on the grounds that he allows children to suffer.[44] Another
example would be Theistic Satanism.[citation needed]
In modern times, some more abstract concepts have been developed, such as process
theology and open theism. The contemporary French philosopher Michel Henry has, however, proposed a phenomenological approach and definition of God as
phenomenological essence of Life.[45]
God has also been conceived as being incorporeal (immaterial), a personal being, the
source of all moral obligation, and the "greatest conceivable existent".[1] These
attributes were all supported to varying degrees by the early Jewish, Christian and
Muslim theologian philosophers, including Maimonides,[46] Augustine of Hippo,[46]
and Al-Ghazali,[2] respectively.
Non-theistic views of God
Nontheism holds that the universe can be explained without any reference to the
supernatural, or to a supernatural being. Some non-theists avoid the concept of God,
whilst accepting that it is significant to many; other non-theists understand God as a
symbol of human values and aspirations. The nineteenth-century English atheist
Charles Bradlaugh declared that he refused to say "There is no God", because "the
word 'God' is to me a sound conveying no clear or distinct affirmation";[47] he said
more specifically that he disbelieved in the Christian God. Stephen Jay Gould proposed
an approach dividing the world of philosophy into what he called "non-overlapping
magisteria" (NOMA). In this view, questions of the supernatural, such as those relating
to the existence and nature of God, are non-empirical and are the proper domain of
theology. The methods of science should then be used to answer any empirical
question about the natural world, and theology should be used to answer questions
about ultimate meaning and moral value. In this view, the perceived lack of any
empirical footprint from the magisterium of the supernatural onto natural events makes
science the sole player in the natural world.[48]
Another view, advanced by Richard Dawkins, is that the existence of God is an
empirical question, on the grounds that "a universe with a god would be a completely
different kind of universe from one without, and it would be a scientific difference."[49]
Carl Sagan argued that the doctrine of a Creator of the Universe was difficult to prove
or disprove and that the only conceivable scientific discovery that could disprove the
existence of a Creator would be the discovery that the universe is infinitely old.[50]
Anthropomorphism
Pascal Boyer argues that while there is a wide array of supernatural concepts found
around the world, in general, supernatural beings tend to behave much like people.
The construction of gods and spirits like persons is one of the best known traits of
religion. He cites examples from Greek mythology, which is, in his opinion, more like a
modern soap opera than other religious systems.[51] Bertrand du Castel and Timothy
Jurgensen demonstrate through formalization that Boyer's explanatory model matches
physics' epistemology in positing not directly observable entities as intermediaries.[52]
Anthropologist Stewart Guthrie contends that people project human features onto nonhuman aspects of the world because it makes those aspects more familiar. Sigmund
Freud also suggested that god concepts are projections of one's father.[53][not in
citation given]
Likewise, Émile Durkheim was one of the earliest to suggest that gods represent an
extension of human social life to include supernatural beings. In line with this
reasoning, psychologist Matt Rossano contends that when humans began living in
larger groups, they may have created gods as a means of enforcing morality. In small
groups, morality can be enforced by social forces such as gossip or reputation.
However, it is much harder to enforce morality using social forces in much larger
groups. Rossano indicates that by including ever-watchful gods and spirits, humans
discovered an effective strategy for restraining selfishness and building more
cooperative groups.[54]
Existence of God
Countless arguments have been proposed in an attempt to prove the existence of
God.[55] Some of the most notable arguments are the Five Ways of Aquinas, the
Argument from Desire proposed by C.S. Lewis, and the Ontological Argument
formulated both by St. Anselm and Descartes.[56] Even among theists, these proofs
are heavily debated. Some, such as the Ontological Argument, are highly controversial
among theists. Aquinas spends a section of his treatise on God refuting St. Anselm's
proof.[57]
St. Anselm's approach was to define God as, "that than which nothing greater can be
conceived". Famed pantheist philosopher Baruch Spinoza would later carry this idea to
its extreme: “By God I understand a being absolutely infinite, i.e., a substance
consisting of infinite attributes, of which each one expresses an eternal and infinite
essence.” For Spinoza, the whole of the natural universe is made of one substance,
God, or its equivalent, Nature.[58] His proof for the existence of God was a variation of
the Ontological argument.[59]
Renowned physicist Stephen Hawking and co-author Leonard Mlodinow state in their
book, The Grand Design, that it is reasonable to ask who or what created the universe,
but if the answer is God, then the question has merely been deflected to that of who
created God. In this view it is accepted that some entity exists that needs no creator,
and that entity is called God.[citation needed] This is known as the first-cause
argument for the existence of God. Both authors claim, however, that it is possible to
answer these questions purely within the realm of science, and without invoking any
divine beings.[60]
Some theologians, such as the scientist and theologian A.E. McGrath, argue that the
existence of God is not a question that can be answered using the scientific
method.[61][62] Agnostic Stephen Jay Gould argues that science and religion are not
in conflict and do not overlap.[63]
There are many philosophical issues concerning the existence of God. Some
definitions of God are nonspecific, while others can be self-contradictory. Arguments
for the existence of God typically include metaphysical, empirical, inductive, and
subjective types, while others revolve around perceived holes in evolutionary theory
and order and complexity in the world.
Arguments against the existence of God typically include empirical, deductive, and
inductive types. Conclusions reached include views that: "God does not exist" (strong
atheism); "God almost certainly does not exist"[49] (de facto atheism[64]); "no one
knows whether God exists" (agnosticism[65]); "God exists, but this cannot be proven or
disproven" (weak theism); and that "God exists and this can be proven" (strong
theism). There are numerous variations on these positions.[citation needed]
Specific attributes
Epitheta
It is difficult to distinguish between proper names and epitheta of God. Throughout the
Hebrew and Christian Bible there are many names for God that portray his nature and
character. One of them is Elohim. Another one is El Shaddai, meaning “God
Almighty”.[66] A third notable name is El Elyon, which means “The Most High God”.[67]
God is described and referred to in the Quran and hadith by certain names or attributes,
the most common being Al-Rahman, meaning "Most Compassionate" and Al-Rahim,
meaning "Most Merciful" (See Names of God in Islam).[68]
Vaishnavism, a tradition in Hinduism, has a list of titles and names of Krishna.
Gender
The gender of God can be viewed as a literal or as an allegorical aspect of a deity who,
in Classical western philosophy, transcends bodily form.[69][70] In polytheistic
religions, the gods are more likely to have literal sexual genders which would enable
them to interact with each other, and even with humans, in a sexual way. In most
monotheistic religions, there is no comparable being for God to relate to in a literal
gender-based way. Thus, in Classical western philosophy the gender of this one-and-only deity is most likely to be an analogical statement of how humans and God
address, and relate to, each other. Namely, God is seen as begetter of the world and
revelation which corresponds to the active (as opposed to feminine receptive) role in
sexual intercourse.[71]
God is usually characterised as male in Biblical sources, except: female in Genesis
1:26-27,[72][73] Psalm 123:2-3, and Luke 15:8-10; a mother in Hosea...
Relationship with creation
Prayer plays a significant role among
many believers. Muslims believe that
the purpose of existence is to worship
God.[74][75] He is viewed as a
personal God and there are no
intermediaries, such as clergy, to
contact God. Prayer often also
includes supplication and asking
forgiveness. God is often believed to
be forgiving. For example, a hadith
states that God would replace a sinless people with one who sinned but still asked for repentance.[76] Christian
theologian Alister McGrath writes that
there are good reasons to suggest that
a "personal god" is integral to the
Christian outlook, but that one has to understand it is an analogy. "To say that God is
like a person is to affirm the divine ability and willingness to relate to others. This does
not imply that God is human, or located at a specific point in the universe."[77]
Adherents of different religions generally disagree as to how to best worship God and
what is God's plan for mankind, if there is one. There are different approaches to
reconciling the contradictory claims of monotheistic religions. One view is taken by
exclusivists, who believe they are the chosen people or have exclusive access to
absolute truth, generally through revelation or encounter with the Divine, which
adherents of other religions do not. Another view is religious pluralism. A pluralist
typically believes that his religion is the right one, but does not deny the partial truth of
other religions. An example of a pluralist view in Christianity is supersessionism, i.e.,
the belief that one's religion is the fulfillment of previous religions. A third approach is
relativistic inclusivism, where everybody is seen as equally right; an example being
universalism: the doctrine that salvation is eventually available for everyone. A fourth
approach is syncretism, mixing different elements from different religions. An example
of syncretism is the New Age movement.
Theological approaches
Theologians and philosophers have ascribed a number of attributes to God, including
omniscience, omnipotence, omnipresence, perfect goodness, divine simplicity, and
eternal and necessary existence. God has been described as incorporeal, a personal
being, the source of all moral obligation, and the greatest conceivable existent.[1]
These attributes were all claimed to varying degrees by the early Jewish, Christian and
Muslim scholars, including St Augustine,[46] Al-Ghazali,[78] and Maimonides.[46]
Many medieval philosophers developed arguments for the existence of God,[2] while
attempting to comprehend the precise implications of God's attributes. Reconciling
some of those attributes generated important philosophical problems and debates. For
example, God's omniscience may seem to imply that God knows how free agents will
choose to act. If God does know this, either their apparent free will is illusory or God's foreknowledge does not imply predestination; and if God does not know it, God may not be omniscient.[79]
However, if free will is, by its essential nature, not predetermined, then the effect of its choices can never be perfectly predicted by anyone, regardless of intelligence and knowledge, although knowledge of the options presented to that will, combined with perfect, infinite intelligence, could be said to provide God with omniscience, if omniscience is defined as knowledge or understanding of all that is.
The last centuries of philosophy have
seen vigorous questions regarding the
arguments for God's existence raised by
such philosophers as Immanuel Kant,
David Hume and Antony Flew, although
Kant held that the argument from
morality was valid. The theist response
has been either to contend, like Alvin
Plantinga, that faith is "properly basic";
or to take, like Richard Swinburne, the
evidentialist position.[80] Some theists
agree that none of the arguments for
God's existence are compelling, but
argue that faith is not a product of
reason, but requires risk. There would
be no risk, they say, if the arguments for
God's existence were as solid as the
laws of logic, a position summed up by
Pascal as: "The heart has reasons
which reason knows not of."[81]
Most major religions hold God not as a
metaphor, but as a being that influences our day-to-day existence. Many believers allow for the existence of other, less powerful spiritual beings, and give them names such as angels, saints, djinns, demons, and devas.[82][83][84][85][86]
Distribution of belief in God
As of 2000, approximately 53% of the
world's population identified with one
of the three primary Abrahamic
religions (33% Christian, 20% Islam,
<1% Judaism), 6% with Buddhism,
13% with Hinduism, 6% with traditional
Chinese religion, 7% with various
other religions, and less than 15% as
non-religious. Most of these religious
beliefs involve a god or gods.[87]
Abrahamic religions beyond Christianity, Islam and Judaism include Baha'i, Samaritanism, the Rastafari movement, Yazidism, and the Unification Church.
Soul
Contents
1 Linguistic aspects
1.1 Etymology
1.2 Semantics
2 Philosophical views
2.1 Socrates and Plato
2.2 Aristotle
2.3 Avicenna and Ibn al-Nafis
2.4 Thomas Aquinas
2.5 Immanuel Kant
2.6 James Hillman
2.7 Philosophy of mind
3 Religious views
3.1 Ancient Near East
3.2 Bahá'í
3.3 Buddhism
3.4 Christianity
3.4.1 Various denominations
3.5 Hinduism
3.6 Islam
3.7 Jainism
3.8 Judaism
3.9 Shamanism
3.10 Sikhism
3.11 Taoism
3.12 Zoroastrianism
3.13 Other religious beliefs and views
3.14 Spirituality, New Age and new religions
3.14.1 Brahma Kumaris
3.14.2 Theosophy
3.14.3 Anthroposophy
3.14.4 Miscellaneous
4 Science
5 Parapsychology
5.1 Weight of the soul
The soul, in many religious, philosophical, psychological, and mythological traditions, is
the incorporeal and, in many conceptions, immortal essence of a person, living thing, or
object.[1] According to some religions, including the Abrahamic religions in most of
their forms, souls — or at least immortal souls capable of union with the divine[2] —
belong only to human beings. For example, the Catholic theologian Thomas Aquinas
attributed "soul" (anima) to all organisms but taught that only human souls are
immortal.[3] Other religions (most notably Jainism and Hinduism) teach that all
biological organisms have souls, and others further still that non-biological entities
(such as rivers and mountains) possess souls. This latter belief is called animism.[4]
Greek philosophers such as Socrates, Plato and Aristotle understood the psyche
(ψυχή) to be crowned with the logical faculty, the exercise of which was the most divine
of human actions. At his defense trial, Socrates even summarized his teachings as
nothing other than an exhortation for his fellow Athenians to excel first in matters of the psyche, since all bodily goods are dependent on such excellence (The Apology, 30a–b). Anima mundi is the concept of a "world soul".
Soul can function as a synonym for spirit, mind, psyche or self.[5]
Linguistic aspects
Etymology
The Modern English word soul is derived from Old English sáwol, sáwel, first attested in the 8th-century poem Beowulf v. 2820 and in the Vespasian Psalter 77.50, and is
cognate with other Germanic and Baltic terms for the same idea, including Gothic
saiwala, Old High German sêula, sêla, Old Saxon sêola, Old Low Franconian sêla, sîla,
Old Norse sála as well as Lithuanian siela. Further etymology of the Germanic word is
uncertain. A more recent suggestion[6] connects it with a root for "binding", Germanic
*sailian (OE sēlian, OHG seilen), related to the notion of being "bound" in death, and
the practice of ritually binding or restraining the corpse of the deceased in the grave to
prevent his or her return as a ghost.
The word is probably an adaptation by early missionaries—particularly Ulfilas, apostle
to the Goths during the 4th century—of a native Germanic concept, which was a
translation of Greek ψυχή psychē "life, spirit, consciousness".
The Greek word is derived from a verb meaning "to cool, to blow" and hence refers to the vital breath, the animating principle in humans and other animals, as opposed to σῶμα (soma), meaning "body". It could refer to a ghost or spirit of the dead in Homer, and to a
more philosophical notion of an immortal and immaterial essence left over at death
since Pindar. Latin anima figured as a translation of ψυχή since Terence. Psychē
occurs juxtaposed to σῶμα, e.g. in Matthew 10:28:
Vulgate: et nolite timere eos qui occidunt corpus animam autem non possunt
occidere sed potius eum timete qui potest et animam et corpus perdere in
gehennam.
Authorized King James Version (KJV) "And fear not them which kill the body,
but are not able to kill the soul: but rather fear Him which is able to destroy both
soul and body in hell."
In the Septuagint (LXX), ψυχή translates Hebrew נפש nephesh, meaning "life, vital
breath" and specifically refers to a mortal, physical life, but is in English variously
translated as "soul, self, life, creature, person, appetite, mind, living being, desire,
emotion, passion"; e.g. in Genesis 1:20:
Vulgate Creavitque Deus cete grandia, et omnem animam viventem atque
motabilem.
KJV "And God created great whales, and every living creature that moveth."
Paul of Tarsus used ψυχή specifically to distinguish between the Jewish notions
of nephesh and ruach (spirit) (also in LXX, e.g. Genesis 1:2 spiritus Dei = "the Spirit
of God").
Semantics
Although the terms soul and spirit are sometimes used interchangeably, soul may
denote a more worldly and less transcendent aspect of a person.[7] According to
psychologist James Hillman, soul has an affinity for negative thoughts and images,
whereas spirit seeks to rise above the entanglements of life and death.[8] The words
soul and psyche can also be treated synonymously, although psyche has more
physical connotations, whereas soul is connected more closely to spirituality and
religion.[9]
Philosophical views
The Ancient Greeks used the same word
for 'alive' as for 'ensouled', indicating that
the earliest surviving western philosophical
view believed that the soul was that which
gave the body life. The soul was
considered the incorporeal or spiritual
'breath' which animates (from the Latin,
anima, cf. animal) the living organism.
Francis M. Cornford quotes Pindar as
saying that the soul sleeps while the limbs
are active, but when one is sleeping, the
soul is active and reveals in many a dream
"an award of joy or sorrow drawing
near."[10]
Erwin Rohde writes that the early pre-Pythagorean belief was that the soul had
no life when it departed from the body, and
retired into Hades with no hope of
returning to a body.[11]
It has been argued that a strict line of causality fails to explain certain phenomena within human experience (such as free will) that have at times been attributed to the soul.
Some metaphysical thinkers believe that the concept of soul can be a solution for the
explanatory gap and the problem of other minds, which suggests that we cannot know
if other people really have consciousness.
Socrates and Plato
Drawing on the words of his teacher Socrates, Plato considered the psyche to be the
essence of a person, being that which decides how we behave. He considered this
essence to be an incorporeal, eternal occupant of our being. As bodies die, the soul is
continually reborn in subsequent bodies. The Platonic soul comprises three parts:
1-the logos, or logistikon (mind, nous, or reason)
2-the thymos, or thumetikon (emotion, or spiritedness, or masculine)
3-the eros, or epithumetikon (appetitive, or desire, or feminine)
Each of these has a function in a balanced, level and peaceful soul.
Aristotle
Aristotle (384 BC – 322 BC) defined the soul or psyche (ψυχή) as the first actuality of a
naturally organized body,[12] but argued against its having a separate existence from
the physical body. In Aristotle's view, the primary activity of a living thing constitutes its
soul; for example, the soul of an eye, if it were an independent organism, would be
seeing (its purpose or final cause).
The various faculties of the soul or psyche, such as nutrition, sensation, movement,
and so forth, when exercised, constitute the "second" actuality, or fulfillment, of the
capacity to be alive. A good example is someone who falls asleep, as opposed to
someone who falls dead; the former can wake up and go about their life, while the latter can no longer do so. Aristotle identified three hierarchical levels of
living things: plants, animals, and people, for which groups he identified three
corresponding levels of soul, or biological activity: the nutritive activity of growth,
sustenance and reproduction which all life shares; the self-willed motive activity and
sensory faculties, which only animals and people have in common; and finally reason,
of which people alone are capable. Aristotle treats of the soul in his work De Anima (On the Soul). Although Aristotle is mostly seen as opposing Plato on the immortality of the soul, there is controversy about the fifth chapter of the third book of De Anima. In that text both interpretations can be argued for: either the soul as a whole is mortal, or a part of it, called the active intellect or active mind, is immortal and eternal.[13] There are commentators on both sides of the controversy, and it is understood that, since no other Aristotelian text addresses this specific point and this part of De Anima is obscure, there will be permanent contestation about its final conclusions.[14]
Avicenna and Ibn al-Nafis
Following Aristotle, the Muslim philosophers Avicenna (Ibn Sina) and Ibn al-Nafis,
further elaborated on the Aristotelian understanding of the soul and developed their
own theories on the soul. They both made a distinction between the soul and the spirit,
and in particular, the Avicennian doctrine on the nature of the soul was influential
among the Scholastics. Some of Avicenna's views on the soul included the idea that
the immortality of the soul is a consequence of its nature, and not a purpose for it to
fulfill. In his theory of "The Ten Intellects", he viewed the human soul as the tenth and
final intellect.
While he was imprisoned, Avicenna wrote his famous "Floating Man" thought
experiment to demonstrate human self-awareness and the substantiality of the soul. He
told his readers to imagine themselves suspended in the air, isolated from all
sensations, which includes no sensory contact with even their own bodies. He argues
that in this scenario one would still have self-consciousness. He thus concludes that
the idea of the self is not logically dependent on any physical thing, and that the soul
should not be seen in relative terms, but as a primary given, a substance. This
argument was later refined and simplified by René Descartes in epistemic terms when
he stated: "I can abstract from the supposition of all external things, but not from the
supposition of my own consciousness."[15]
Avicenna generally supported Aristotle's idea of the soul originating from the heart,
whereas Ibn al-Nafis rejected this idea and instead argued that the soul "is related to
the entirety and not to one or a few organs." He further criticized Aristotle's idea that
every unique soul requires the existence of a unique source, in this case the heart. Ibn
al-Nafis concluded that "the soul is related primarily neither to the spirit nor to any
organ, but rather to the entire matter whose temperament is prepared to receive that
soul," and he defined the soul as nothing other than "what a human indicates by saying
'I'."[16]
Thomas Aquinas
Following Aristotle and Avicenna, St. Thomas Aquinas (1225 – 1274) understood the
soul to be the first actuality of the living body. Consequent to this, he distinguished
three orders of life: plants, which feed and grow; animals, which add sensation to the
operations of plants; and humans, which add intellect to the operations of animals.
Concerning the human soul, his epistemological theory required that, since the knower
becomes what he knows,[17] the soul was definitely not corporeal: for, if it were
corporeal when it knew what some corporeal thing was, that thing would come to be
within it. Therefore, the soul had an operation which did not rely on a bodily organ and
therefore the soul could subsist without the body. Furthermore, since the rational soul
of human beings was a subsistent form and not something made up of matter and
form, it could not be destroyed in any natural process.[18] The full argument for the
immortality of the soul and Thomas's elaboration of Aristotelian theory is found in
Question 75 of the Summa Theologica.
Immanuel Kant
In his discussions of rational psychology Immanuel Kant (1724–1804) identified the
soul as the "I" in the strictest sense and that the existence of inner experience can
neither be proved nor disproved. "We cannot prove a priori the immateriality of the soul,
but rather only so much: that all properties and actions of the soul cannot be cognized
from materiality." It is from the "I", or soul, that Kant proposes transcendental
rationalization, but cautions that such rationalization can only determine the limits of
knowledge if it is to remain practical.[19]
James Hillman
Contemporary psychology is defined as the study of mental processes and behavior.
However, the word "psychology" literally means "study of the soul,"[20] and
psychologist James Hillman, the founder of archetypal psychology, has been credited
with "restoring 'soul' to its psychological sense."[21] Although the words soul and spirit
are often viewed as synonyms, Hillman argues that they can refer to antagonistic
components of a person. Summarizing Hillman's views, author and psychotherapist
Thomas Moore associates spirit with "afterlife, cosmic issues, idealistic values and
hopes, and universal truths", while placing soul "in the thick of things: in the repressed,
in the shadow, in the messes of life, in illness, and in the pain and confusion of
love."[22] Hillman believes that religion—especially monotheism and monastic faiths—
and humanistic psychology have tended to the spirit, often at the unfortunate expense
of soul.[7] This happens, Moore says, because to transcend the "lowly conditions of the
soul ... is to lose touch with the soul, and a split-off spirituality, with no influence from
the soul, readily falls into extremes of literalism and destructive fanaticism."[23]
Hillman's archetypal psychology is in many ways an attempt to tend to the oft-neglected soul, which Hillman views as the "self-sustaining and imagining substrate"
upon which consciousness rests. Hillman described the soul as that "which makes
meaning possible, [deepens] events into experiences, is communicated in love, and
has a religious concern," as well as "a special relation with death."[24] Departing from
the Cartesian dualism "between outer tangible reality and inner states of mind," Hillman
takes the Neoplatonic stance[25] that there is a "third, middle position" in which soul
resides.[26] Archetypal psychology acknowledges this third position by attuning to, and
often accepting, the archetypes, dreams, myths, and even psychopathologies through
which, in Hillman's view, soul expresses itself.
Philosophy of mind
For a contemporary understanding of the soul/mind and the problem concerning its
connection to the brain/body, consider the rejection of Descartes' mind/body dualism by
Gilbert Ryle's ghost-in-the-machine argument,[27] the tenuous unassailability of
Richard Swinburne's argument for the soul,[28] and the advances that have been
made in neuroscience that are steadily undermining the validity of the concept of an
independent soul/mind. The philosophies of mind and of personal identity also
contribute to a contemporary understanding of the mind. The contemporary approach
does not so much attack the existence of an independent soul as render the concept
less relevant. The advances in neuroscience mainly serve to support the mind/brain
identity hypothesis, showing the extent of the correlation between mental states and
physical-brain states. The notion of soul has less explanatory power in a western
world-view which prefers the empirical explanations involving observable and locatable
elements of the brain. Even so, there remain considerable objections to simple-identity
theory. Notably, philosophers such as Thomas Nagel and David Chalmers have argued
that the correlation between physical-brain states and mental states is not strong
enough to support identity theory. Nagel (1974) argues that no amount of physical data
is sufficient to provide the "what it is like" of first-person experience, and Chalmers
(1996) argues for an "explanatory gap" between functions of the brain and phenomenal
experience. On the whole, brain/mind identity theory does poorly in accounting for
mental phenomena of qualia and intentionality. While neuroscience has done much to
illuminate the functioning of the brain, much of subjective experience remains
mysterious.
Religious views
Ancient Near East
In the ancient Egyptian religion, an individual was believed to be made up of various elements, some physical and some spiritual. Similar ideas are found in ancient Assyrian and Babylonian religion. Kuttamuwa, an 8th-century BC royal official from Sam'al, ordered an inscribed stele erected upon his death. The inscription requested that his mourners commemorate his life and his afterlife with feasts "for my soul that is in this stele". It is one of the earliest references to a soul as a separate entity from the body. The 800-pound (360 kg) basalt stele is 3 ft (0.91 m) tall and 2 ft (0.61 m) wide. It was uncovered in the third season of excavations by the Neubauer Expedition of the Oriental Institute in Chicago, Illinois.[29]
Bahá'í
The Bahá'í Faith affirms that "the soul is a sign of God, a heavenly gem whose reality
the most learned of men hath failed to grasp, and whose mystery no mind, however
acute, can ever hope to unravel."[30] Bahá'u'lláh stated that the soul not only continues
to live after the physical death of the human body, but is, in fact, immortal.[31] Heaven
can be seen partly as the soul's state of nearness to God; and hell as a state of
remoteness from God. Each state follows as a natural consequence of individual
efforts, or the lack thereof, to develop spiritually.[32] Bahá'u'lláh taught that individuals
have no existence prior to their life here on earth and the soul's evolution is always
towards God and away from the material world.[32]
Buddhism
Buddhism teaches that all things are in a constant state of flux: all is changing, and no
permanent state exists by itself.[33][34] This applies to human beings as much as to
anything else in the cosmos. Thus, a human being has no permanent self.[35][36]
According to this doctrine of anatta (Pāli; Sanskrit: anātman) – "no-self" or "no soul" –
the words "I" or "me" do not refer to any fixed thing. They are simply convenient terms
that allow us to refer to an ever-changing entity.[37]
The anatta doctrine is not a kind of materialism. Buddhism does not deny the existence
of "immaterial" entities, and it (at least traditionally) distinguishes bodily states from
mental states.[38] Thus, the conventional translation of anatta as "no-soul"[39] can be
confusing. If the word "soul" simply refers to an incorporeal component in living things
that can continue after death, then Buddhism does not deny the existence of the
soul.[40] Instead, Buddhism denies the existence of a permanent entity that remains
constant behind the changing corporeal and incorporeal components of a living being.
Just as the body changes from moment to moment, so thoughts come and go. And
there is no permanent, underlying mind that experiences these thoughts, as in
Cartesianism; rather, conscious mental states simply arise and perish with no "thinker"
behind them.[41] When the body dies, the incorporeal mental processes continue and
are reborn in a new body.[40] Because the mental processes are constantly changing,
the being that is reborn is neither entirely different than, nor exactly the same as, the
being that died.[42] However, the new being is continuous with the being that died – in
the same way that the "you" of this moment is continuous with the "you" of a moment
before, despite the fact that you are constantly changing.[43]
Buddhist teaching holds that a notion of a permanent, abiding self is a delusion that is
one of the causes of human conflict on the emotional, social, and political
levels.[44][45] They add that an understanding of anatta provides an accurate
description of the human condition, and that this understanding allows us to pacify our
mundane desires.
Various schools of Buddhism have differing ideas about what continues after death.[46]
The Yogacara school in Mahayana Buddhism held that there is a store consciousness which continues to exist after death.[47] In some schools, particularly Tibetan Buddhism,
the view is that there are three minds: very subtle mind, which does not disintegrate in
death; subtle mind, which disintegrates in death and which is "dreaming mind" or
"unconscious mind"; and gross mind, which does not exist when one is sleeping.
Therefore, gross mind is less permanent than subtle mind, which in turn does not exist in death.
Very subtle mind, however, does continue, and when it "catches on", or coincides with phenomena, again, a new subtle mind emerges, with its own personality/assumptions/habits, and that entity experiences karma in the current continuum.
Plants were said to be non-sentient,[48] but Buddhist monks should avoid cutting or
burning trees, because some sentient beings rely on them.[49] Some Mahayana
monks said that non-sentient beings such as plants and stones have buddha-nature.[50][51] Some Buddhists have also spoken of plants as having divisible consciousnesses.[52]
Certain modern Buddhists, particularly in Western countries, reject—or at least take an
agnostic stance toward—the concept of rebirth or reincarnation, which they view as
incompatible with the concept of anatta. Stephen Batchelor discusses this issue in his
book, Buddhism Without Beliefs. Others point to research that has been conducted at
the University of Virginia as proof that some people are reborn.[53]
Christianity
Most Christians understand the soul as
an ontological reality distinct from, yet
integrally connected with, the body. Its
characteristics are described in moral,
spiritual, and philosophical terms.
According to a common Christian
eschatology, when people die, their
souls will be judged by God and
determined to spend an eternity in
Heaven or in Hell. Though all branches
of Christianity – Catholics, Eastern
Orthodox, Oriental Orthodox,
Evangelical and mainline Protestants –
teach that Jesus Christ plays a
decisive role in the Christian salvation
process, the specifics of that role and the part played by individual persons or
ecclesiastical rituals and relationships are a matter of wide diversity in official church
teaching, theological speculation and popular practice. Some Christians believe that if
one has not repented of one's sins and trusted in Jesus Christ as Lord and Savior, one
will go to Hell and suffer eternal damnation or eternal separation from God. Variations
also exist on this theme, e.g. some which hold that the unrighteous soul will be
destroyed instead of suffering eternally (Annihilationism). Believers will inherit eternal
life in Heaven and enjoy eternal fellowship with
God. There is also a belief that babies
(including the unborn) and those with cognitive
or mental impairments who have died will be
received into Heaven on the basis of God's
grace through the sacrifice of Jesus. And there
are beliefs in universal salvation and Christian
conditionalism.
Among some Christians, there is uncertainty
regarding whether human embryos have souls,
and at what point between conception and
birth the fetus acquires a soul, consciousness,
and/or personhood. This uncertainty is the
general reasoning behind the Christian belief
that abortion should not be legal.[54][55][56]
Soul as the personality: Some Christians
regard the soul as the immortal essence of a
human – the seat or locus of human will,
understanding, and personality.[citation needed]
Trichotomy of the soul: Augustine, one of western Christianity's most influential early
Christian thinkers, described the soul as "a special substance, endowed with reason,
adapted to rule the body". Some Christians espouse a trichotomic view of humans,
which characterizes humans as consisting of a body (soma), soul (psyche), and spirit
(pneuma).[57] However, the majority of modern Bible scholars point out how spirit and
soul are used interchangeably in many biblical passages, and so hold to dichotomy: the
view that each of us is body and soul. Paul said that the "body wars against" the soul,
and that "I buffet my body", to keep it under control. Philosopher Anthony Quinton said
the soul is a "series of mental states connected by continuity of character and memory,
[and] is the essential constituent of personality. The soul, therefore, is not only logically
distinct from any particular human body with which it is associated; it is also what a
person is". Richard Swinburne, a Christian philosopher of religion at Oxford University,
wrote that "it is a frequent criticism of substance dualism that dualists cannot say what
souls are... Souls are immaterial subjects of mental properties. They have sensations
and thoughts, desires and beliefs, and perform intentional actions. Souls are essential
parts of human beings...".
Origin of the soul: The origin of the soul has provided a vexing question in
Christianity; the major theories put forward include soul creationism, traducianism and
pre-existence. According to creationism, each individual soul is created directly by God,
either at the moment of conception or at some later time (identical twins arise several cell
divisions after conception, but no creationist would deny that they have whole souls).
According to traducianism, the soul comes from the parents by natural generation.
According to the preexistence theory, the soul exists before the moment of conception.
Various denominations
The present Catechism of the Catholic
Church defines the soul as "the
innermost aspect of humans, that
which is of greatest value in them, that
by which they are most especially in
God's image: 'soul' signifies the
spiritual principle in man."[58] All souls
living and dead will be judged by Jesus
Christ when he comes back to earth.
The souls of those who die
unrepentant of serious sins, or in
conscious rejection of God, will at
judgment day be forever in a state
called Hell[citation needed]. The
Catholic Church teaches that the
existence of each individual soul is
dependent wholly upon God: "The doctrine of the faith affirms that the spiritual and
immortal soul is created immediately by God."[59]
Eastern Orthodox and Oriental Orthodox views are somewhat similar, in essence, to
Roman Catholic views although different in specifics. Orthodox Christians believe that
after death, the soul is judged individually by God, and then sent to either Abraham's
Bosom (temporary paradise) or Hades/Hell (temporary torture).[citation needed] At the
Last Judgment, God judges all people who have ever lived. Those that know the Spirit
of God, because of the sacrifice of Jesus, go to Heaven (permanent paradise) whilst
the damned experience the Lake of Fire (permanent torture).[citation needed] The
Orthodox Church does not teach that Purgatory exists.
Protestants generally believe in the soul's existence, but fall into two major camps
about what this means in terms of an afterlife. Some, following Calvin,[60] believe in
the immortality of the soul and conscious existence after death, while others, following
Luther,[61] believe in the mortality of the soul and unconscious "sleep" until the
resurrection of the dead.[62]
Other Christians reject the idea of the immortality of the soul, citing the Apostles'
Creed's reference to the "resurrection of the body" (the Greek word for body is soma
σωμα, which implies the whole person, not sarx σαρξ, the term for flesh or corpse).
They consider the soul to be the life force, which ends in death and will be restored in
the resurrection.[citation needed] Theologian Frederick Buechner sums up this position
in his 1973 book Whistling in the Dark: "...we go to our graves as dead as a doornail
and are given our lives back again by God (i.e., resurrected) just as we were given
them by God in the first place."[citation needed]
Christadelphians believe that we are all created out of the dust of the earth and
became living souls once we received the breath of life based on the Genesis 2
account of humanity's creation. Adam was said to have become a living soul. His body
did not contain a soul, rather his body (made from dust) plus the breath of life together
were called a soul, in other words a living being. They believe that we are mortal and
when we die our breath leaves our body, and our bodies return to the soil. They believe
that we are mortal until the resurrection from the dead when Christ returns to this earth
and grants immortality to the faithful. In the meantime, the dead lie in the earth in the
sleep of death until Jesus comes.[63]
Seventh-day Adventists believe that the main definition of the term "Soul" is a
combination of spirit (breath of life) and body, disagreeing with the view that the soul
has a consciousness or sentient existence of its own.[citation needed] They affirm this
through Genesis 2:7 "And (God) breathed into his nostrils the breath of life; and man
became a living soul."[64] When God united His breath, or spirit with man, man
became a living soul. A living soul is composed of body and spirit.[65] Adventists
believe at death the body returns to dust and life returns to the God who bestowed it.
This belief is expressed in the following quotation from their fundamental beliefs,
"The wages of sin is death. But God, who alone is immortal, will grant eternal life to His
redeemed. Until that day death is an unconscious state for all people..." (Rom. 6:23; 1
Tim. 6:15, 16; Eccl. 9:5, ...)
Jehovah's Witnesses take the Hebrew word nephesh, which is commonly translated as
"soul", to be a person, an animal, or the life that a person or an animal enjoys. They
believe that the Hebrew word ruach (Greek pneuma), which is commonly translated as
"spirit" but literally means "wind", refers to the life force or the power that animates
living things. A person is a breathing creature, a body animated by the "spirit of God",
not an invisible being contained in a body and able to survive apart from that body after
death. Jesus spoke of himself, having life, as having a soul. When he surrendered his
life, he surrendered his soul. John 10:15 reads "just as the Father knows me and I
know the father, and I surrender my soul in behalf of the sheep." This belief that man is
a soul, rather than having a soul, is also in line with the knowledge that Hell (Sheol in
Hebrew and Hades in Greek) represents the common grave with the hope of
resurrection rather than eternal torment in hellfire.[66][67]
Latter-day Saints (Mormons) believe that the spirit and body together constitute the
Soul of Man (Mankind). "The spirit and the body are the soul of man."[68] They believe
that the soul is the union of a pre-existing, God-made spirit[69][70] and a temporal
body, which is formed by physical conception on earth. After death, the spirit continues
to live and progress in the Spirit world until the resurrection, when it is reunited with the
body that once housed it. This reuniting of body and spirit results in a perfect soul that
is immortal and eternally young and healthy.[71]
Hinduism
In Hinduism, the Sanskrit words most
closely corresponding to soul are jiva,
Ātman and "purusha", meaning the
individual self. The term "soul" is
misleading as it implies an object
possessed, whereas self signifies the
subject which perceives all objects.
This self is held to be distinct from the
various mental faculties such as desires, thinking, understanding, reasoning and
self-image (ego), all of which are considered to be part of prakriti (nature).
The three major schools of Hindu
philosophy agree that the atman
(individual self) is related to Brahman
or the Paramatman, the Absolute
Atman or Supreme Self, but they differ
in the nature of this relationship. In
Advaita Vedanta the individual self and the Supreme Self are one and the same. Dvaita
rejects this concept of identity, instead identifying the self as a separate but similar part
of Supreme Self (God), that never loses its individual identity. Visishtadvaita takes a
middle path and accepts the atman as a "mode" (prakara) or attribute of the Brahman.
For an alternative atheistic and dualistic view of the soul in ancient Hindu philosophy,
see Samkhya.
The atman becomes involved in the process of becoming and transmigrating through
cycles of birth and death because of ignorance of its own true nature. The spiritual path
consists of self-realization – a process in which one acquires knowledge of the self
(brahma-jñanam) and, by applying this knowledge through meditation and realization,
returns to the Source, which is Brahman.
The qualities which are common to both Brahman and atman are being (sat),
consciousness (chit), and bliss/love (ananda). Liberation or moksha is liberation from
all limiting adjuncts (upadhis) and the unification with Brahman.
The Mandukya Upanishad verse 7 describes the atman in the following way:
"Not inwardly cognitive, not outwardly cognitive, not both-wise cognitive, not a
cognition-mass, not cognitive, not non-cognitive, unseen, with which there can
be no dealing, ungraspable, having no distinctive mark, non-thinkable, that
cannot be designated, the essence of the assurance of which is the state of
being one with the Self, the cessation of development, tranquil, benign, without
a second (a-dvaita)—[such] they think is the fourth. That is the Self. That should
be discerned."
In Bhagavad Gita 2.20[72] Lord Krishna describes the soul in the following way:
na jayate mriyate va kadacin nayam bhutva bhavita va na bhuyah ajo nityah
sasvato yam purano na hanyate hanyamane sarire
"For the soul there is neither birth nor death at any time. He has not come into
being, does not come into being, and will not come into being. He is unborn,
eternal, ever-existing and primeval. He is not slain when the body is slain."
[Translation by A.C. Bhaktivedanta Swami Prabhupada (Srila Prabhupada)][73]
Srila Prabhupada,[74] a great Vaishnava saint of the modern era, further explains:
"The soul does not take birth there, and the soul does not die...And because the soul
has no birth, he therefore has no past, present or future. He is eternal, ever-existing
and primeval – that is, there is no trace in history of his coming into being."
Since the quality of Aatma is primarily consciousness, all sentient and insentient beings
are pervaded by Aatma, including plants, animals, humans and gods. The difference
between them is the contracted or expanded state of that consciousness. For example,
animals and humans share in common the desire to live, fear of death, desire to
procreate and to protect their families and territory and the need for sleep, but animals'
consciousness is more contracted and has less possibility to expand than does human
consciousness.
When the Aatma becomes embodied it is called birth, when the Aatma leaves a body it
is called death. The Aatma transmigrates from one body to another body based on
karmic [performed deeds] reactions.
In Hinduism, the Sanskrit word most closely corresponding to soul is "Aatma", which
can mean soul or even God. It is seen as the portion of Brahman within us. Hinduism
contains many variant beliefs on the origin, purpose, and fate of the soul. For example,
advaita or non-dualistic conception of the soul accords it union with Brahman, the
absolute uncreated (roughly, the Godhead), in eventuality or in pre-existing fact. Dvaita
or dualistic concepts reject this, instead identifying the soul as a different and
incompatible substance.
There are said to be 25 coverings wrapped around the soul (reference taken from Vaikunta
Varnane, written by Sanyasi Vadiraja Swami): 1. Iccha avarka, 2. Linga deha, 3. Avyakta
Sharira, 4. Avidya Avarna, 5. Karma avarna, 6. Kama avarna, 7. Jeevacchadaka, 8.
Paramacchadaka, 9. Narayana rupa avarna, 10. Vasudeva rupa Avarna, 11.
Sankarshana rupa avarna, 12. Pradhyumna Avarka, 13. Anniruddha avarka, 14.
Anniruddha Sharira, 15. Vasudeva Kavaca, 16. Narayana Kavaca, 17. Anandamaya
kosha, 18. Vignanamaya kosha, 19. Manomaya kosha, 20. Vangmaya kosha, 21.
Shrotrumaya kosha, 22. Chakshurmaya kosha, 23. Pranamaya kosha, 24. Annamaya
kosha, 25. Gross Body.
Islam
According to the Quran, Ruh (Spirit) is a command from Allah (God).
And they ask you, [O Muhammad], about the soul (Rûh). Say, "The soul (Rûh) is of the
affair of my Lord. And mankind have not been given of knowledge except a little."
[Quran 17:85]
Islam teaches that the soul is immortal and eternal. Whatever a person does is
recorded and will be judged at the court of God.
From the Holy Quran Chapter 39 Surah Zumar verse 42:
42 It is Allah that takes the souls at death: and those that die not (He takes their souls)
during their sleep: those on whom He has passed the Decree of death He keeps back
(their souls from returning to their bodies); but the rest He sends (their souls back to
their bodies) for a term appointed. Verily in this are Signs for those who contemplate.
Jainism
In Jainism every living being, from a plant or a bacterium to a human, has a soul, and this
concept forms the very basis of Jainism. The soul (Ātman) is categorized into two types
based on its state of liberation.
1-Liberated Souls – souls which have attained moksha and never become part of
the life cycle again.
2-Non-Liberated Souls – the souls of any living being which are stuck in the life
cycle of four forms: Manushya Gati (human being), Tiryanch Gati (any other
living being), Dev Gati (heaven) and Narak Gati (hell). Until the soul is
liberated from the innumerable cycle of birth and death, it becomes attached to
different types of the above bodies based on the karma of the individual soul.
According to Jainism, there is no beginning or end to the existence of the soul.
It is eternal in nature and changes its form until it attains moksha.
Irrespective of which state the soul is in, it has the same attributes and qualities.
The difference between liberated and non-liberated souls is that these qualities and
attributes are exhibited completely in the case of Siddhas, who have overcome all
karmic bondage, whereas in the case of non-liberated souls they are only partially
exhibited.
The soul (jiva) is differentiated from non-soul or non-living reality (ajiva) that consists of
matter, time, space, medium of motion and medium of rest.[citation needed]
Concerning the Jain view of the soul, Virchand Gandhi said: "...the soul lives its own
life, not for the purpose of the body, but the body lives for the purpose of the soul. If we
believe that the soul is to be controlled by the body then soul misses its power."[75]
Judaism
“The fruit of a righteous man is the tree of life, and the wise man acquires
‫ תֹוׁשָפְנ‬souls.”
—Mishlei, Proverbs 11:30
The Hebrew terms ‫ שפנ‬nephesh (literally "living being"), ‫ חור‬ruach (literally "wind"),
‫ המשנ‬neshama (literally "breath"), ‫ היח‬chaya (literally "life") and ‫ הדיחי‬yechidah (literally
"singularity") are used to describe the soul or spirit. In modern Judaism the soul is
believed to be given by God to a person by his/her first breath, as mentioned in
Genesis, "And the LORD God formed man [of] the dust of the ground, and breathed
into his nostrils the breath of life; and man became a living being." Genesis 2:7.
Judaism relates the quality of one's soul to one's performance of mitzvot and reaching
higher levels of understanding, and thus closeness to God. A person with such
closeness is called a tzadik. Therefore Judaism embraces the nahala, rather than the birthday,[76]
as a festival of remembrance, for only toward the end of life's struggles, tests and
challenges can human souls be judged and credited - b'ezrat hashem - for
righteousness and holiness.[77][78]
“ For I [Hashem] will not contend forever, neither will I be wroth to eternity, when
a spirit from before Me humbles itself, and ‫ ַחּור‬souls [which] I have made. ”
—Nevi'im, Yeshayahu 57:16
Kabbalah and other mystic traditions go into greater detail into the nature of the soul.
Kabbalah separates the soul into five elements, corresponding to the five worlds:
1-Nephesh, related to natural instinct.
2-Ruach, related to emotion and morality.
3-Neshamah, related to intellect and the awareness of God.
4-Chayah, considered a part of God, as it were.
5-Yechidah, also termed the pintele Yid (the "essential [inner] Jew"). This
aspect is essentially one with God.
Kabbalah also proposed a concept of reincarnation, the gilgul. (See also nefesh
habehamit the "animal soul".)
Shamanism
According to Nadya Yuguseva, a shaman from the Altai, "'A woman has 40 souls; men
have just one[.]'"[79]
Sikhism
Sikhism considers the soul (atma) to be part of God (Waheguru). Various hymns cited
from the holy book "Sri Guru Granth Sahib" (SGGS) suggest this belief: "God is in
the Soul and the Soul is in the God."[80] The same concept is repeated at various
pages of the SGGS. For example: "The soul is divine; divine is the soul. Worship Him
with love."[81] and "The soul is the Lord, and the Lord is the soul; contemplating the
Shabad, the Lord is found."[82] The "Atma" or "Soul", according to Sikhism, is an entity
or "spiritual spark" or "light" in the body because of which the body can sustain life. On
the departure of this entity from the body, the body becomes lifeless – no amount of
manipulation of the body can make the person perform any physical action. The soul is
the ‘driver’ in the body. It is the ‘roohu’ or spirit or atma, the presence of which makes
the physical body alive. Many religious and philosophical traditions support the view
that the soul is an ethereal substance – a spirit; a non-material spark – particular to a
unique living being. Such traditions often consider the soul both immortal and innately
aware of its immortal nature, as well as the true basis for sentience in each living being.
The concept of the soul has strong links with notions of an afterlife, but opinions may
vary wildly even within a given religion as to what happens to the soul after death.
Many within these religions and philosophies see the soul as immaterial, while others
consider it possibly material.
Taoism
According to Chinese traditions, every person has two types of soul, called hun and po,
which are respectively yang and yin. Taoism believes in ten souls, sanhunqipo, "three
hun and seven po".[83] The pò is linked to the dead body and the grave, whereas the
hún is linked to the ancestral tablet. A living being that loses any of them is said to suffer
mental illness or unconsciousness, while a dead soul may reincarnate into a disability or
into lower desire realms, or may even be unable to reincarnate.
Other religious beliefs and views
In theological reference to the soul, the
terms "life" and "death" are viewed as
emphatically more definitive than the
common concepts of "biological life" and
"biological death". Because the soul is
said to be transcendent of the material
existence, and is said to have
(potentially) eternal life, the death of the
soul is likewise said to be an eternal
death. Thus, in the concept of divine
judgment, God is commonly said to have
options with regard to the dispensation of
souls, ranging from Heaven (i.e. angels)
to hell (i.e. demons), with various
concepts in between. Typically both
Heaven and hell are said to be eternal, or
at least far beyond a typical human
concept of lifespan and time.
Some transhumanists believe that it will
become possible to perform mind
transfer, either from one human body to another, or from a human body to a computer.
Operations of this type (along with teleportation) raise philosophical questions related
to the concept of the soul.
Spirituality, New Age and new religions
Brahma Kumaris
In Brahma Kumaris, human souls are believed to be incorporeal and eternal. God is
considered to be the Supreme Soul, with maximum degrees of spiritual qualities, such
as peace, love and purity.[84]
Theosophy
In Helena Blavatsky's Theosophy, the soul is the field of our psychological activity
(thinking, emotions, memory, desires, will, and so on) as well as of the so-called
paranormal or psychic phenomena (extrasensory perception, out-of-body experiences,
etc.). However, the soul is not the highest, but a middle dimension of human beings.
Higher than the soul is the spirit, which is considered to be the real self; the source of
everything we call “good”—happiness, wisdom, love, compassion, harmony, peace,
etc. While the spirit is eternal and incorruptible, the soul is not. The soul acts as a link
between the material body and the spiritual self, and therefore shares some
characteristics of both. The soul can be attracted either towards the spiritual or towards
the material realm, being thus the “battlefield” of good and evil. It is only when the soul
is attracted towards the spiritual and merges with the Self that it becomes eternal and
divine.
Anthroposophy
Rudolf Steiner differentiated three stages of soul development, which interpenetrate
one another in consciousness:[85]
1-the "sentient soul", centering on sensations, drives, and passions, with strong
conative (will) and emotional components;
2-the "intellectual" or "mind soul", internalizing and reflecting on outer
experience, with strong affective (feeling) and cognitive (thinking) components;
and
3-the "consciousness soul", in search of universal, objective truths.
Miscellaneous
In Surat Shabda Yoga, the soul is considered to be an exact replica and spark of the
Divine. The purpose of Surat Shabd Yoga is to realize one's True Self as soul
(Self-Realisation), True Essence (Spirit-Realisation) and True Divinity (God-Realisation)
while living in the physical body.
George Gurdjieff in his Fourth Way taught that nobody is ever born with a soul. Rather,
an individual must create a soul by a process of self-remembrance and observation
during the course of their life. Without a soul, Gurdjieff taught that one will "die like a
dog".[citation needed]
Eckankar, founded by Paul Twitchell in 1965, defines Soul as the true self; the inner,
most sacred part of each person.[86]
Science
Science and medicine seek naturalistic accounts of the observable natural world. This
stance is known as methodological naturalism.[87] Much of the scientific study relating
to the soul has involved investigating the soul as an object of human belief, or as a
concept that shapes cognition and an understanding of the world, rather than as an
entity in and of itself.
When modern scientists speak of the soul outside of this cultural context, they
generally treat soul as a poetic synonym for mind. Francis Crick's book, The
Astonishing Hypothesis, for example, has the subtitle, "The scientific search for the
soul". Crick held the position that one can learn everything knowable about the human
soul by studying the workings of the human brain. Depending on one's belief regarding
the relationship between the soul and the mind, then, the findings of neuroscience may
be relevant to one's understanding of the soul. Skeptic Robert T. Carroll suggests that
the concept of a non-substantial substance is an oxymoron, and that the scholarship
done by philosophers based on the assumption of a non-physical entity has not
furthered scientific understanding of the working of the mind.[88]
Daniel Dennett has championed the idea that the human survival strategy depends
heavily on adoption of the intentional stance, a behavioral strategy that predicts the
actions of others based on the expectation that they have a mind like one's own (see
theory of mind). Mirror neurons in brain regions such as Broca's area may facilitate this
behavioral strategy.[89] The intentional stance, Dennett suggests, can be so successful
that people tend to apply it to all aspects of human experience, thus leading to animism
and to other conceptualizations of soul.[90][non-primary source needed]
Jeremy Griffith has defined soul as the human species' instinctive memory of a time
when modern human ancestors lived in a cooperative, selfless state,[91] suggesting
this occurred in pre-Homo (i.e. Australopithecus) hominins.
Parapsychology
Some parapsychologists have attempted to establish by scientific experiment whether
a soul separate from the brain, as more commonly defined in religion rather than as a
synonym of psyche or mind, exists. Milbourne Christopher in his book Search for the
Soul (1979) explained that none of the attempts by parapsychologists have yet
succeeded.[92]
Weight of the soul
In 1901 Dr Duncan MacDougall made weight measurements of patients as they died.
He claimed that there was weight loss of varying amounts at the time of death.[93] His
results have never been successfully reproduced, and are therefore scientifically
meaningless.[94]
Furthermore, an Oregon rancher, Lewis Hollander Jr., attempted to weigh the souls
of animals. According to Hollander, the animals actually gained weight upon death,
instead of losing weight as expected.[95]
Immortality
Immortality is the ability to live forever,
or eternal life.[2] Biological forms have
inherent limitations which medical
interventions or engineering may or
may not be able to overcome. Natural
selection has developed potential
biological immortality in at least one
species, the jellyfish Turritopsis
dohrnii.[3]
Certain scientists, futurists, and
philosophers have theorized about
the immortality of the human body,
and advocate that human immortality
is achievable in the first few decades
of the 21st century, while other
advocates believe that life extension is
a more achievable goal in the short
term, with immortality awaiting further
research breakthroughs into an
indefinite future. Aubrey de Grey, a
researcher who has developed a
series of biomedical rejuvenation
strategies to reverse human aging
(called SENS), believes that his
proposed plan for ending aging may
be implementable in two or three
decades.[4] The absence of aging
would provide humans with biological
immortality, but not invulnerability to death by physical trauma. What form an unending
human life would take, or whether an immaterial soul exists and possesses immortality,
has been a major point of focus of religion, as well as the subject of speculation,
fantasy, and debate.
In religious contexts, immortality is often stated to be among the promises by God (or
other deities) to human beings who show goodness or else follow divine law (cf.
resurrection).
The Epic of Gilgamesh, one of the first literary works, dating back at least to the 22nd
century BC, is primarily a quest of a hero seeking to become immortal.[5]
Wittgenstein, in a notably non-theological interpretation of eternal life, writes in the
Tractatus that, "If we take eternity to mean not infinite temporal duration but
timelessness, then eternal life belongs to those who live in the present."[6]
The atheist philosopher William Godwin asked 'Why may not man one day be
immortal?'[7]
Contents
1 Definitions
1.1 Scientific
1.2 Religious
2 Physical immortality
2.1 Causes of death
2.2 Biological immortality
2.3 Prospects for human biological immortality
2.4 Mystical and religious pursuits of physical immortality
3 Religious views
3.1 Ancient Greek religion
3.2 Buddhism
3.3 Christianity
3.4 Hinduism
3.5 Islam
3.6 Judaism
3.7 Taoism
3.8 Zoroastrianism
4 Ethics of immortality
4.1 Undesirability of immortality
5 Politics
6 Symbols
7 Fiction
Definitions
Scientific
Life extension technologies promise a path to complete rejuvenation. Cryonics holds
out the hope that the dead can be revived in the future, following sufficient medical
advancements. While, as shown with creatures such as hydra and planarian worms, it
is indeed possible for a creature to be biologically immortal, it is not yet known if it is
possible for humans.
Mind uploading is the concept of transference of consciousness from a human brain to
an alternative medium providing the same functionality. Assuming the process to be
possible and repeatable, this would provide immortality to the consciousness, as
predicted by futurists such as Ray Kurzweil.[8]
Religious
The belief in an afterlife is a fundamental tenet of most religions, including Hinduism,
Sikhism, Christianity, Zoroastrianism, Islam, Judaism, and the Bahá'í Faith; however,
the concept of an immortal soul is not. The "soul" itself has different meanings and is
not used in the same way in different religions and different denominations of a religion.
For example, various branches of Christianity have disagreeing views on the soul's
immortality and its relation to the body (cf. Soul (spirit)).
Physical immortality
Physical immortality is a state of life that allows a person to avoid death and maintain
conscious thought. It can mean the unending existence of a person from a physical
source other than organic life, such as a computer. In the early 21st century, physical
immortality remains a goal rather than a current reality. Active pursuit of physical
immortality can be based either on scientific trends, such as cryonics, digital
immortality, breakthroughs in rejuvenation or predictions of an impending technological
singularity, or on spiritual beliefs, such as those held by Rastafarians or
Rebirthers.
Causes of death
There are three main causes of death: aging, disease and trauma.[9]
Aging
Aubrey de Grey, a leading researcher in the
field,[5] defines aging as follows: "a collection
of cumulative changes to the molecular and
cellular structure of an adult organism, which
result from essential metabolic processes, but
which also, once they progress far enough,
increasingly disrupt metabolism, resulting in
pathology and death." The current causes of
aging in humans are cell loss (without
replacement), DNA damage, oncogenic
nuclear mutations and epimutations, cell senescence, mitochondrial mutations,
lysosomal aggregates, extracellular aggregates, random extracellular cross-linking,
immune system decline, and endocrine changes. Eliminating aging would require
finding a solution to each of these causes, a program de Grey calls engineered
negligible senescence. Some researchers have argued that aging is not driven by genes
but by random events: everything in the world changes or ages without being driven by
a purpose, and there is no direct evidence that age changes are governed by a genetic
program. There is also a large body of knowledge indicating that aging is characterized
by the loss of molecular fidelity,[10] which leaves ever less capacity for repair and
turnover and increases vulnerability to pathology and age-associated diseases.
Disease
Disease is theoretically surmountable via technology. In short, it is an abnormal
condition affecting the body of an organism, something the body is not typically
equipped to deal with by its natural makeup.[11] Human understanding of genetics is leading
to cures and treatments for myriad previously incurable diseases. The mechanisms by
which other diseases do their damage are becoming better understood. Sophisticated
methods of detecting diseases early are being developed. Preventative medicine is
becoming better understood. Neurodegenerative diseases like Parkinson's and
Alzheimer's may soon be curable with the use of stem cells. Breakthroughs in cell
biology and telomere research are leading to treatments for cancer. Vaccines are being
researched for AIDS and tuberculosis. Genes associated with type 1 diabetes and
certain types of cancer have been discovered allowing for new therapies to be
developed. Artificial devices attached directly to the nervous system may restore sight
to the blind. Drugs are being developed to treat myriad other diseases and ailments.
Trauma
Physical trauma would remain as a threat to perpetual physical life, even if the
problems of aging and disease were overcome, as an otherwise immortal person would
still be subject to unforeseen accidents or catastrophes. Longevity researchers would
prefer to mitigate the risk of encountering trauma. Taking preventative measures by
engineering inherent resistance to injury is thus relevant, in addition to entirely reactive
measures more closely associated with the paradigm of medical treatment.
The speed and quality of paramedic response remains a determining factor in surviving
severe trauma.[12] A body that could automatically treat itself from severe trauma,
such as speculated uses for nanotechnology, would mitigate this factor. Without
improvements to such things, very few people would remain alive after several tens of
thousands of years purely based on accident rate statistics, much less millions or
billions or more.[citation needed]
Being the seat of consciousness, the brain cannot be risked to trauma if a continuous
physical life is to be maintained. Therefore, it cannot be replaced or repaired in the
same way other organs can. A method of transferring consciousness would be required
for an individual to survive trauma to the brain, and this transfer would have to
anticipate and precede the damage itself.[citation needed]
If there is no limitation on the degree of gradual mitigation of risk then it is possible that
the cumulative probability of death over an infinite horizon is less than certainty, even
when the risk of fatal trauma in any finite period is greater than zero. Mathematically,
this is an aspect of achieving "Actuarial escape velocity".
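As a purely illustrative aside (a minimal numerical sketch with arbitrary example figures,
not drawn from the source): if the per-period risk of fatal trauma keeps shrinking, for
example geometrically, the product of per-period survival probabilities converges to a
value above zero, so the lifetime probability of death stays below certainty even though
every single period carries a nonzero risk. A hypothetical Python sketch:

    # Minimal sketch (illustrative numbers only): the per-period risk of fatal
    # trauma falls geometrically, so the chance of surviving forever stays above zero.
    def survival_probability(initial_risk=0.01, decay=0.5, periods=1000):
        """Probability of surviving `periods` periods when the risk in period n
        is initial_risk * decay**n."""
        survival = 1.0
        for n in range(periods):
            survival *= 1.0 - initial_risk * decay ** n
        return survival

    # Converges to roughly 0.98, i.e. a cumulative death probability of about 2%,
    # well short of certainty, despite a positive risk in every period.
    print(survival_probability())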
Biological immortality
Biological immortality is an absence of
aging, specifically the absence of a
sustained increase in rate of mortality as
a function of chronological age. A cell or
organism that does not experience
aging, or ceases to age at some point, is
biologically immortal.
Biologists have chosen the word
immortal to designate cells that are not
limited by the Hayflick limit, where cells
no longer divide because of DNA
damage or shortened telomeres. The
first and still most widely used immortal
cell line is HeLa, developed from cells
taken from the malignant cervical tumor
of Henrietta Lacks without her consent in 1951. Prior to the 1961 work of Leonard
Hayflick and Paul Moorhead, there was the erroneous belief fostered by Alexis Carrel
that all normal somatic cells are immortal. By preventing cells from reaching
senescence one can achieve biological immortality; telomeres, a "cap" at the end of
DNA, are thought to be the cause of cell aging. Every time a cell divides the telomere
becomes a bit shorter; when it is finally worn down, the cell is unable to split and dies.
Telomerase is an enzyme which rebuilds the telomeres in stem cells and cancer cells,
allowing them to replicate an infinite number of times.[13] No definitive work has yet
demonstrated that telomerase can be used in human somatic cells to prevent healthy
tissues from aging. On the other hand, scientists hope to be able to grow organs with
the help of stem cells, allowing organ transplants without the risk of rejection, another
step in extending human life expectancy. These technologies are the subject of
ongoing research, and are not yet realized.
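The mechanism sketched above can be caricatured with a toy model (made-up numbers,
not drawn from the source): each division removes a fixed stretch of telomere until the
cell drops below a critical length and can no longer divide, whereas a telomerase-expressing
cell restores the lost stretch and keeps dividing. A hypothetical Python sketch:

    # Toy model of telomere shortening (illustrative numbers only): a cell stops
    # dividing once its telomere falls below a critical length; with telomerase,
    # the cap is rebuilt after every division and the limit is never reached.
    def divisions_before_senescence(telomere_length=10000, loss_per_division=100,
                                    critical_length=5000, telomerase=False,
                                    max_divisions=1000):
        divisions = 0
        while telomere_length >= critical_length and divisions < max_divisions:
            divisions += 1
            telomere_length -= loss_per_division
            if telomerase:
                telomere_length += loss_per_division  # telomerase restores the cap
        return divisions

    print(divisions_before_senescence())                 # about 50 divisions, then senescence
    print(divisions_before_senescence(telomerase=True))  # stops only at the max_divisions cap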
Biologically immortal species
Life defined as biologically immortal is still susceptible to causes of death besides
aging, including disease and trauma, as defined above. Notable immortal species
include:
-Turritopsis nutricula, a jellyfish, after becoming a sexually mature adult, can
transform itself back into a polyp using the cell conversion process of
transdifferentiation.[3] Turritopsis nutricula repeats this cycle, meaning that it
may have an indefinite lifespan.[14] Its immortal adaptation has allowed it to
spread from its original habitat in the Caribbean to "all over the world".[15]
-Bacteria (as a colony) – Bacteria reproduce through Binary Fission. A parent
bacterium splits itself into two identical daughter cells. These daughter cells
then split themselves in half. This process repeats, thus making the bacterium
colony essentially immortal. A 2005 PLoS Biology paper[16] suggests that in a
bacterial colony, every particular bacterial cell may be considered to eventually
die since after each division the daughter cells can be identified as the older
and the younger, and the older is slightly smaller, weaker, and more likely to die
than the younger.[17]
-Bristlecone Pines are speculated to be potentially immortal;[citation needed]
the oldest known living specimen is over 5,000 years old.
-Hydra is a genus of simple fresh-water animal possessing radial symmetry.
Hydras are predatory animals belonging to the phylum Cnidaria and the class
Hydrozoa.[18]
Evolution of aging
As the existence of biologically immortal species demonstrates, there is no
thermodynamic necessity for senescence: a defining feature of life is that it takes in
free energy from the environment and unloads its entropy as waste. Living systems can
even build themselves up from seed, and routinely repair themselves. Aging is
therefore presumed to be a byproduct of evolution, but why mortality should be
selected for remains a subject of research and debate. Programmed cell death and the
telomere "end replication problem" are found even in the earliest and simplest of
organisms.[19] This may be a tradeoff between selecting for cancer and selecting for
aging.[20]
Modern theories on the evolution of aging include the following:
-Mutation accumulation is a theory formulated by Peter Medawar in 1952 to
explain how evolution would select for aging. Essentially, aging is never
selected against, as organisms have offspring before the mortal mutations
surface in an individual.
-Antagonistic pleiotropy is a theory proposed as an alternative by George C.
Williams, a critic of Medawar, in 1957. In antagonistic pleiotropy, genes carry
effects that are both beneficial and detrimental. In essence this refers to genes
that offer benefits early in life, but exact a cost later on, i.e. decline and
death.[21]
-The disposable soma theory was proposed in 1977 by Thomas Kirkwood,
which states that an individual body must allocate energy for metabolism,
reproduction, and maintenance, and must compromise when there is food
scarcity. Compromise in allocating energy to the repair function is what causes
the body gradually to deteriorate with age, according to Kirkwood.[22]
Prospects for human biological immortality
Life-extending substances
There are some known naturally occurring and artificially produced chemicals that may
increase the lifetime or life-expectancy of a person or organism, such as
resveratrol.[23][24] Future research might enable scientists to increase the effect of
these existing chemicals or to discover new chemicals (life-extenders) which might
enable a person to stay alive as long as the person consumes them at specified
periods of time.
Scientists believe that boosting the amount or proportion of a naturally forming enzyme,
telomerase, in the body could prevent cells from dying and so may ultimately lead to
extended, healthier lifespans. Telomerase is a protein that helps maintain the
protective caps at the ends of chromosomes.[25] A team of researchers at the Spanish
National Cancer Centre (Madrid) tested the hypothesis on mice. It was found that those
mice which were genetically engineered to produce 10 times the normal levels of
telomerase lived 50% longer than normal mice.[26]
In normal circumstances, without the presence of telomerase, if a cell divides
repeatedly, at some point all the progeny will reach their Hayflick limit. With the
presence of telomerase, each dividing cell can replace the lost bit of DNA, and any
single cell can then divide unbounded. While this unbounded growth property has
excited many researchers, caution is warranted in exploiting this property, as exactly
this same unbounded growth is a crucial step in enabling cancerous growth. If an
organism could renew its body cells indefinitely in this way, it would theoretically stop aging.
Embryonic stem cells express telomerase, which allows them to divide repeatedly and
form the individual. In adults, telomerase is highly expressed in cells that need to divide
regularly (e.g., in the immune system), whereas most somatic cells express it only at
very low levels in a cell-cycle dependent manner.
Technological immortality
Technological immortality is the prospect for much longer life spans made possible by
scientific advances in a variety of fields: nanotechnology, emergency room procedures,
genetics, biological engineering, regenerative medicine, microbiology, and others.
Contemporary life spans in the advanced industrial societies are already markedly
longer than those of the past because of better nutrition, availability of health care,
standard of living and bio-medical scientific advances. Technological immortality
predicts further progress for the same reasons over the near term. An important aspect
of current scientific thinking about immortality is that some combination of human
cloning, cryonics or nanotechnology will play an essential role in extreme life extension.
Robert Freitas, a nanorobotics theorist, suggests tiny medical nanorobots could be
created to go through human bloodstreams, find dangerous things like cancer cells and
bacteria, and destroy them.[27] Freitas anticipates that gene-therapies and
nanotechnology will eventually make the human body effectively self-sustainable and
capable of living indefinitely, short of severe brain trauma. This supports the theory that
we will be able to continually create biological or synthetic replacement parts to replace
damaged or dying ones.
Cryonics
Cryonics, the practice of preserving organisms (either intact specimens or only their
brains) for possible future revival by storing them at cryogenic temperatures where
metabolism and decay are almost completely stopped, can serve as a 'pause' for those
who believe that life extension technologies will not develop sufficiently within their
lifetime. Ideally, cryonics would allow clinically dead people to be brought back in the
future after cures to the patients' diseases have been discovered and aging is
reversible. Modern cryonics procedures use a process called vitrification which creates
a glass-like state rather than freezing as the body is brought to low temperatures. This
process reduces the risk of ice crystals damaging the cell structure, which would be
especially detrimental to cell structures in the brain, whose fine arrangement gives rise to
the individual's mind.
Mind-to-computer uploading
One idea that has been advanced involves uploading an individual's personality and
memories via direct mind-computer interface. The individual's memory may be loaded
to a computer or to a newly born baby's mind. The baby will then grow with the
previous person's individuality, and may not develop its own personality. Extropian
futurists like Moravec and Kurzweil have proposed that, thanks to exponentially
growing computing power, it will someday be possible to upload human consciousness
onto a computer system, and live indefinitely in a virtual environment. This could be
accomplished via advanced cybernetics, where computer hardware would initially be
installed in the brain to help sort memory or accelerate thought processes.
Components would be added gradually until the person's entire brain functions were
handled by artificial devices, avoiding sharp transitions that would lead to issues of
identity. After this point, the human body could be treated as an optional accessory and
the mind could be transferred to any sufficiently powerful computer. Another possible
mechanism for mind upload is to perform a detailed scan of an individual's original,
organic brain and simulate the entire structure in a computer. What level of detail such
scans and simulations would need to achieve to emulate consciousness, and whether
the scanning process would destroy the brain, is still to be determined.[28] Whatever
the route to mind upload, persons in this state would then be essentially immortal, short
of loss or traumatic destruction of the machines that maintained them. Futurists cited by
Time, as well as Dmitry Itskov, head of the 2045 Initiative, predict that this technology
will be available by 2045.
Cybernetics
Transforming a human into a cyborg can include brain implants or extracting a human
mind and placing it in a robotic life-support system. Even replacing biological organs
with robotic ones could increase life span (e.g., pacemakers), and, depending on the
definition, many technological upgrades to the body, like genetic modifications or the
addition of nanobots, would qualify an individual as a cyborg. Such modifications would
make one impervious to aging and disease and theoretically immortal unless killed or
destroyed.
Evolutionary immortality
Another approach, developed by biogerontologist Marios Kyriazis, holds that human
biological immortality is an inevitable consequence of evolution. As the natural
tendency is to create progressively more complex structures,[29] there will be a time
(Kyriazis claims this time is now[30]) when evolution of a more complex human brain
will proceed faster via a process of developmental singularity[31] rather than through
Darwinian evolution. In other words, the evolution of the human brain as we know it will
cease and there will be no need for individuals to procreate and then die. Instead, a
new type of development will take over, in the same individual who will have to live for
many centuries in order for the development to take place. This intellectual
development will be facilitated by technology such as synthetic biology, artificial
intelligence and a technological singularity process.
Mystical and religious pursuits of physical immortality
Many Indian fables and tales include
instances of metempsychosis—the ability to
jump into another body—performed by
advanced Yogis in order to live a longer life.
There are also entire Hindu sects devoted to
the attainment of physical immortality by
various methods, namely the Naths and the
Aghoras.
Long before modern science made such
speculation feasible, people wishing to
escape death turned to the supernatural
world for answers. Examples include
Chinese Taoists[citation needed] and the
medieval alchemists and their search for the
Philosopher's Stone, or more modern
religious mystics, who believed in the
possibility of achieving physical immortality
through spiritual transformation.
Individuals claiming to be physically immortal
include Comte de Saint-Germain; in 18th
century France, he claimed to be centuries
old, and people who adhere to the Ascended
Master Teachings are convinced of his
physical immortality.
An Indian saint known as Vallalar claimed to have achieved immortality before
disappearing forever from a locked room in 1874.[32]
Rastafarians believe in physical immortality as a part of their religious doctrines. They
believe that after God has called the Day of Judgment they will go to what they
describe as Mount Zion in Africa to live in freedom forever. They avoid the term
"everlasting life" and deliberately use "ever-living" instead.
Another group that believes in physical immortality are the Rebirthers, who believe that
by following the connected breathing process of rebirthing and the spiritual purification
practices with earth, water, fire and mind, they can physically live forever.
Religious views
The world's major religions hold a number of perspectives on spiritual immortality, the
unending existence of a person from a nonphysical source or in a nonphysical state
such as a soul. However, any doctrine in this area can mislead without a prior definition of
"soul". Another problem is that "soul" is often confused and used synonymously or
interchangeably with "spirit".
As late as 1952, the editorial staff of the Syntopicon found in their compilation of the
Great Books of the Western World, that "The philosophical issue concerning
immortality cannot be separated from issues concerning the existence and nature of
man's soul."[33] Thus, the vast majority of speculation regarding immortality before the
21st century was regarding the nature of the afterlife.
In both Western and Eastern religions, the spirit is an energy or force that transcends
the mortal body and returns to the spirit realm, whether to enjoy heavenly bliss, to suffer
eternal torment in hell, or to re-enter the cycle of life, directly or indirectly depending on
the tradition.
Ancient Greek religion
In ancient Greek religion, immortality originally always included an eternal union of
body and soul, as can be seen in Homer, Hesiod, and various other ancient texts. The
soul was considered to have an eternal existence in Hades, but without the body the
soul was considered dead. Although almost everybody had nothing to look forward to
but an eternal existence as a disembodied dead soul, a number of men and women
were considered to have gained physical immortality and been brought to live forever in
either Elysium, the Islands of the Blessed, heaven, the ocean or literally right under the
ground. Among these were Amphiaraus, Ganymede, Ino, Iphigenia, Menelaus, Peleus,
and a great part of those who fought in the Trojan and Theban wars. Some were
considered to have died and been resurrected before they achieved physical
immortality. Asclepius was killed by Zeus only to be resurrected and transformed into a
major deity. Achilles, after being killed, was snatched from his funeral pyre by his divine
mother Thetis, resurrected, and brought to an immortal existence in either Leuce, the
Elysian plains, or the Islands of the Blessed. Memnon, who was killed by Achilles,
seems to have received a similar fate. Alcmene, Castor, Heracles, and Melicertes
were also among the figures sometimes considered to have been resurrected to
physical immortality. According to Herodotus' Histories, the 7th century BC sage
Aristeas of Proconnesus was first found dead, after which his body disappeared from a
locked room. Later he was found not only to have been resurrected but to have gained
immortality.[34]
The philosophical idea of an immortal soul was a belief first appearing with either
Pherecydes or the Orphics, and most importantly advocated by Plato and his followers.
This, however, never became the general norm in Hellenistic thought. As may be
witnessed even into the Christian era, not least by the complaints of various
philosophers over popular beliefs, many or perhaps most traditional Greeks maintained
the conviction that certain individuals were resurrected from the dead and made
physically immortal and that others could only look forward to an existence as
disembodied and dead, though everlasting, souls. The parallel between these
traditional beliefs and the later resurrection of Jesus was not lost on the early
Christians, as Justin Martyr argued: "when we say ... Jesus Christ, our teacher, was
crucified and died, and rose again, and ascended into heaven, we propose nothing
different from what you believe regarding those whom you consider sons of Zeus." (1
Apol. 21).[35]
Buddhism
Buddhism teaches that there is a cycle of birth, death, and rebirth and that the process
is according to the qualities of a person's actions. This constant process of becoming
ceases at the fruition of Bodhi (enlightenment) at which a being is no longer subject to
causation (karma) but enters into a state that the Buddha called amata
(deathlessness).
According to the philosophical premise of the Buddha, the initiate to Buddhism who is
to be "shown the way to Immortality (amata)",[36] wherein liberation of the mind
(cittavimutta) is effectuated through the expansion of wisdom and the meditative
practices of sati and samādhi, must first be educated away from his former
ignorance-based (avijja) materialistic proclivities in that he "saw any of these forms,
feelings, or this body, to be my Self, to be that which I am by nature".
Thus, desiring a soul or ego (ātman) to be permanent is a prime consequence of
ignorance, itself the cause of all misery and the foundation of the cycle of rebirth
(saṃsāra). Form and consciousness being two of the five skandhas, or aggregates of
ignorance[citation needed], Buddhism teaches that physical immortality is neither a
path to enlightenment, nor an attainable goal[citation needed]: even the gods which
can live for eons eventually die. Upon enlightenment, the "karmic seeds" (saṅkhāras or
sanskaras) for all future becoming and rebirth are exhausted. After biological death an
arhat, or buddha, enters into parinirvana, a state of deathlessness due to the absence
of rebirth, which resulted from cessation of wanting.
Christianity
Christian theology holds that Adam and Eve
lost physical immortality for themselves and
all their descendants in the Fall of Man,
though this initial "imperishability of the
bodily frame of man" was "a preternatural
condition".[37]
Christians who profess the Nicene Creed
believe that every dead person (whether
they believed in Christ or not) will be
resurrected from the dead, and this belief is
known as Universal resurrection.
Bible passages like 1 Corinthians 15 are
interpreted as teaching that the resurrected
body will, like the present body, be both
physical (but a renewed and non-decaying
physical body) and spiritual.
Contrary to common belief, there is no
biblical support for "soul immortality" as such
in the New Testament (see Soul in the
Bible). The theme in the Bible is
"resurrection life", which imparts immortality,
not a "soul" remaining after death.
Luther and others rejected Calvin's idea of "soul immortality". Specific imagery of
resurrection into immortal form is found in the Pauline letters:
Behold, I shew you a mystery; We shall not all sleep, but we shall all be
changed,
In a moment, in the twinkling of an eye, at the last trump: for the trumpet shall
sound, and the dead shall be raised incorruptible, and we shall be changed.
For this corruptible must put on incorruption, and this mortal must put on
immortality.
So when this corruptible shall have put on incorruption, and this mortal shall
have put on immortality, then shall be brought to pass the saying that is written,
Death is swallowed up in victory.
O death, where is thy sting? O grave, where is thy victory?
The sting of death is sin; and the strength of sin is the law.
But thanks be to God, which giveth us the victory through our Lord Jesus Christ.
Therefore, my beloved brethren, be ye stedfast, unmoveable, always abounding
in the work of the Lord, forasmuch as ye know that your labour is not in vain in
the Lord.
—1 Corinthians 15:51–58
In Romans 2:6–7 Paul declares that God "will render to every man according to his
deeds: To them who by patient continuance in well doing seek for glory and honour
and immortality, eternal life", but then in Romans 3 warns that no one will ever meet
this standard with their own power but that Jesus did it for us.
Born-again Christians believe
that after the Last Judgment,
those who have been "born
again" will live forever in the
presence of God, and those who
were never "born again" will be abandoned to never-ending
consciousness of guilt, separation from God, and
punishment for sin. Eternal
death is depicted in the Bible as
a realm of constant physical and
spiritual anguish in a lake of fire,
and a realm of darkness away
from God. Some see the fires of
Hell as a theological metaphor,
representing the inescapable
presence of God endured in
absence of love for God; others
suggest that Hell represents
complete destruction of both the
physical body and of spiritual
existence.
N.T. Wright, a theologian and former Bishop of Durham, has said many people forget
the physical aspect of what Jesus promised. He told Time: "Jesus' resurrection marks
the beginning of a restoration that he will complete upon his return. Part of this will be
the resurrection of all the dead, who will 'awake', be embodied and participate in the
renewal. John Polkinghorne, a physicist and a priest, has put it this way: 'God will
download our software onto his hardware until the time he gives us new hardware to
run the software again for ourselves.' That gets to two things nicely: that the period
after death (the Intermediate state) is a period when we are in God's presence but not
active in our own bodies, and also that the more important transformation will be when
we are again embodied and administering Christ's kingdom."[38] This kingdom will
consist of Heaven and Earth "joined together in a new creation", he said.
Roman Catholicism
Catholic Christians teach that there is a supernatural realm called Purgatory where
souls who have died in a state of grace but have yet to expiate venial sins or temporal
punishments due to past sins are cleansed before they are admitted into
Heaven.[citation needed] The Catholic Church also professes a belief in the
resurrection of the body. It is believed that, before the Final Judgement, the souls of all
who have ever lived will be reunited with their resurrected body.[citation needed] In the
case of the righteous, this will result in a glorified body which can reside in Heaven.
The damned, too, shall reunite body and soul, but shall remain eternally in Hell.
Seventh-day Adventists
Seventh-day Adventists believe that only God has immortality, and when a person dies,
death is a state of unconscious sleep until the resurrection. They base this belief on
biblical texts such as Ecclesiastes 9:5 which states "the dead know nothing", and 1
Thessalonians 4:13–18 which contains a description of the dead being raised from the
grave at the second coming.
"And the LORD God formed man of the dust of the ground, and breathed into
his nostrils the breath of life; and man became a living soul." (cf. Gen 2:7)
The text of Genesis 2:7 clearly states that God breathed into the formed man the
"breath of life" and man became a living soul. He did not receive a living soul; he
became one. The New King James Bible states that "man became a living being".
According to the Scriptures, only man received life in this way from God. Because of
this, man is the only living creature to have a soul.
"And out of the ground the Lord God formed every beast of the field ... wherein
is the breath of life." (cf. Genesis 2:19, 7:15)
"Both man and beast ... have all one breath, so that a man hath no
preeminence above the beast."(cf. Ecclesiastes 3:19)
Of the many references to soul and spirit in the Bible, never once is either the soul or
the spirit declared to be immortal, imperishable or eternal. Indeed only God has
immortality (1 Timothy 1:17; 6:16). Adventists teach that the resurrection of the
righteous will take place at the second coming of Jesus, at which time they will be
restored to life and taken to reside in Heaven.
Jehovah's Witnesses
Jehovah's Witnesses believe the word soul (nephesh or psykhe) as used in the Bible is
a person, an animal, or the life a person or animal enjoys. Hence, the soul is not part of
man, but is the whole man—man as a living being. Thus, when a person or animal
dies, the soul dies, and death is a state of non-existence, based on Psalms 146:4,
Ezekiel 18:4, and other passages.[39] Hell (Hades or Sheol) is not a place of fiery
torment, but rather the common grave of humankind, a place of
unconsciousness.[40][41]
After the final judgment, it is expected that the righteous will receive eternal life and live
forever on an Earth turned into a paradise. Another group, referred to as "the little
flock" of 144,000 people, will receive immortality and go to heaven to rule as Kings and
Priests. Jehovah's Witnesses make the distinction that those with "eternal life" can die
though they do not succumb to disease or old age, whereas immortal ones cannot die
by any cause.[42] They teach that Jesus was the first to be rewarded with heavenly
immortality, but that Revelation 7:4 and Revelation 14:1, 3 refer to a literal number
(144,000) of additional people who will become "self-sustaining", that is, not needing
anything outside themselves (food, sunlight, etc.) to maintain their own life.[43]
Church of Jesus Christ of Latter-day Saints (Mormonism)
In Latter-day Saint (Mormon) theology,
the spirit and the body constitute the
human soul. Whereas the human body
is subject to death on earth, they
believe that the spirit never ceases to
exist and that one day the spirits and
bodies of all mankind will be reunited
again. This doctrine stems from their
belief that the resurrection of Jesus
Christ grants the universal gift of
immortality to every human being.
Members of the Church of Jesus Christ
of Latter-day Saints also believe that,
prior to their mortal birth, individuals
existed as men and women in a
spiritual state. That period of life is referred to as the first estate or the Pre-existence.
Latter-day Saints cite Biblical scriptures, such as Jeremiah 1:5, as an allusion to the
concept that mankind had a preparation period prior to mortal birth: "Before I formed
thee in the belly I knew thee; and before thou camest forth out of the womb I sanctified
thee, and I ordained thee a prophet unto the nations".[44] Joseph Smith, Jr., the
founder of the Latter Day Saint movement, provided a description of the afterlife based
upon a vision he received, which is recorded within the Church of Jesus Christ of
Latter-day Saints' canonical writings entitled Doctrine and Covenants.[45] According to
the 76th section of the LDS scripture, the afterlife consists of three degrees or
kingdoms of glory, called the Celestial Kingdom, the Terrestrial Kingdom, and the
Telestial Kingdom. Other Biblical scriptures speak of varying degrees of glory, such as
1 Corinthians 15:40-41: "There are also celestial bodies, and bodies terrestrial: but the
glory of the celestial is one, and the glory of the terrestrial is another. There is one glory
of the sun, and another glory of the moon, and another glory of the stars: for one star
differeth from another star in glory."
The few who do not inherit any degree of glory (though they are resurrected) reside in
a state called outer darkness, which, though not a degree of glory, is often discussed in
this context. Only those known as the "Sons of Perdition" are condemned to this state.
Other Christian beliefs
The doctrine of conditional immortality states that the human soul is naturally mortal,
and that immortality is granted by God as a gift. The doctrine is a "significant minority
evangelical view" that has "grown within evangelicalism in recent years".[46]
Some sects who hold to the doctrine of baptismal regeneration also believe in a third
realm called Limbo, which is the final destination of souls who have not been baptised,
but who have been innocent of mortal sin. Souls in Limbo include unbaptised infants
and those who lived virtuously but were never exposed to Christianity in their lifetimes.
Christian Scientists believe that sin brought death, and that death will be overcome with
the overcoming of sin.
Hinduism
Hinduism propounds that every living being, whether human or animal, has a body and
a soul (consciousness), and that the bridge between the two is the mind (a mixture of
both). An imbalance among these three components can result in illness and 'death'.
'Death' as we know it is the ceasing of the body to function; the soul, which is immortal,
must then migrate to another body and occupy another mind, thereby creating
consciousness there, whether human or animal, depending upon the 'karma' or 'past
deeds' done in the previous physical body or bodies and life or lives.
Hindus believe in an immortal soul
which is reincarnated after death.
According to Hinduism, people repeat a
process of life, death, and rebirth in a
cycle called samsara. If they live their
life well, their karma improves and their
station in the next life will be higher, and
conversely lower if they live their life
poorly. Eventually, after many lifetimes
of perfecting its karma, the soul is freed
from the cycle and lives in perpetual
bliss. There is no eternal torment in
Hinduism, temporal existence being
harsh enough, although if a soul
consistently lives very evil lives, it could
work its way down to the very bottom of
the cycle. Punarjanma means the birth
of a person that pays for all the karma
of previous lives in this birth.[citation
needed]
Sri Aurobindo states that the Vedic and the post-Vedic rishis (such as Markandeya)
attained physical immortality, which includes the ability to change one's shape at will,
and create multiple bodies simultaneously in different locations.[citation needed]
There are explicit renderings in the Upanishads alluding to a physically immortal state
brought about by purification and sublimation of the five elements that make up the body.
For example, in the Shvetashvatara Upanishad (Chapter 2, Verse 12), it is stated:
"When earth, water, fire, air and akasa arise, that is to say, when the five attributes of
the elements, mentioned in the books on yoga, become manifest then the yogi's body
becomes purified by the fire of yoga and he is free from illness, old age and death."
The above phenomenon is possible when the soul reaches enlightenment while the
body and mind are still intact, an extreme rarity, and can only be achieved through the
utmost dedication, meditation and consciousness.
Certain peculiar practices
The Aghoris of India consume human flesh in pursuit of immortality and supernatural
powers. They call themselves gods, and according to them they punish sinners by
rewarding them with death on their way to immortality. Today, however, they consume
only the flesh of people who are already dead, and only of those who wished to be
treated this way upon death. Brahmins look down upon them for their fascination with
the physical form rather than with the immortal soul, and for their complete disregard of
the vegetarianism propagated by Hinduism, to the point of consuming human flesh,
albeit from the already dead.[47] They distinguish themselves from other Hindu sects
and priests by their alcoholic and cannibalistic rituals.[48]
Another view of immortality is traced to the Vedic tradition by the interpretation of
Maharishi Mahesh Yogi:
That man indeed whom these (contacts) do not disturb, who is even-minded in
pleasure and pain, steadfast, he is fit for immortality, O best of men.[49]
To Maharishi Mahesh Yogi, the verse means, "Once a man has become established in
the understanding of the permanent reality of life, his mind rises above the influence of
pleasure and pain. Such an unshakable man passes beyond the influence of death and
in the permanent phase of life: he attains eternal life ... A man established in the
understanding of the unlimited abundance of absolute existence is naturally free from
existence of the relative order. This is what gives him the status of immortal life."[49]
Islam
And they say [non-believers in Allah],
"There is not but our worldly life; we die
and live (i.e., some people die and
others live, replacing them) and nothing
destroys us except time." And when Our
verses are recited to them as clear
evidences, their argument is only that
they say, "Bring [back] our forefathers, if
you should be truthful." Say, "Allah
causes you to live, then causes you to
die; then He will assemble you for the
Day of Resurrection, about which there
is no doubt," but most of the people do
not know. (Quran, 45:24–26)
Muslims believe that everyone will be
resurrected after death. Those who
believed in Islam and led an evil life will
undergo correction in Jahannam (Hell)
but once this correction is over, they are
admitted to Jannat (Paradise) and attain
immortality.[citation needed] Infidels, on
the other hand, and those who committed
unforgivable evil will never leave Hell.
Some individuals will therefore never
taste Heaven.
(Quran, 2:28) "How can ye reject the faith in Allah?- seeing that ye were
without life, and He gave you life; then will He cause you to die, and will again
bring you to life; and again to Him will ye return."
Muslims believe that the present life is a trial in preparation for the next realm of
existence.
He says [man says], "Who will give life to bones while they are
disintegrated?" Say, "He will give them life who produced them the first time; and He is,
of all creation, Knowing." [It is Allah] He who made for you from the green tree, fire, and
then from it you ignite. Is not He who created the heavens and the earth Able to create
the likes of them? Yes, [it is so]; and He is the Knowing Creator. (Quran, 36:78–81)
But those who disbelieve say, "The Hour (i.e., the Day of Judgment) will not come to
us." Say, "Yes, by my Lord, it will surely come to you. [Allah is] the Knower of the
unseen." Not absent from Him is an atom's weight within the heavens or within the
earth or [what is] smaller than that or greater, except that it is in a clear register – That
He may reward those who believe and do righteous deeds. Those will have forgiveness
and noble provision. But those who strive against Our verses [seeking] to cause failure
(i.e., to undermine their credibility) – for them will be a painful punishment of foul
nature. (Quran, 34:3–5)
Judaism
In both Judaism and Christianity, there is no biblical support for "soul immortality" as
such. The focus is on attaining resurrection life after death on the part of the believers.
Judaism claims that the righteous dead will be resurrected in the Messianic age with
the coming of the messiah. They will then be granted immortality in a perfect world.
The wicked dead, on the other hand, will not be resurrected at all. This is not the only
Jewish belief about the afterlife. The Tanakh is not specific about the afterlife, so there
are wide differences in views and explanations among believers.
The Hebrew Bible speaks about Sheol (שאול), originally a synonym of the grave, the
repository of the dead or the cessation of existence until the Resurrection. This doctrine
of resurrection is mentioned explicitly only in Daniel 12:1–4 although it may be implied
in several other texts. New theories arose concerning Sheol during the intertestamental
literature. Some Hellenistic Jews postulated that the soul (nefesh, נפש) was really
immortal and that Sheol was actually a destination of the dead awaiting the
Resurrection, a syncretic form of Platonic Philosophy. By the 2nd century BC, Jews
who accepted the Oral Torah had come to believe that those in Sheol awaited the
resurrection either in Paradise (in the bosom of Abraham) or in Torment (Tartarus).
Taoism
It is repeatedly stated in Lüshi Chunqiu that death is unavoidable.[50] Henri Maspero
noted that many scholarly works frame Taoism as a school of thought focused on the
quest for immortality.[51] Isabelle Robinet asserts that Taoism is better understood as
a way of life than as a religion, and that its adherents do not approach or view Taoism
the way non-Taoist historians have done.[52] In the Tractate of Actions and their
Retributions, a traditional teaching, spiritual immortality can be granted to people who
do a certain amount of good deeds and live a simple, pure life. A list of good deeds and
sins is tallied to determine whether or not a mortal is worthy. Spiritual immortality in
this definition allows the soul to leave the earthly realms of afterlife and go to pure
realms in the Taoist cosmology.[53]
Zoroastrianism
Zoroastrians believe that on the fourth day after death, the human soul leaves the body
and the body remains as an empty shell. Souls would go to either heaven or hell; these
concepts of the afterlife in Zoroastrianism may have influenced Abrahamic religions.
The word immortal is derived from the month "Amurdad", meaning "deathless" in
Persian, in the Iranian calendar (near the end of July). The month of Amurdad or
Ameretat is celebrated in Persian culture as ancient Persians believed the "Angel of
Immortality" won over the "Angel of Death" in this month.[54]
Ethics of immortality
The possibility of clinical immortality raises a host of medical, philosophical, and
religious issues and ethical questions. These include persistent vegetative states, the
nature of personality over time, technology to mimic or copy the mind or its processes,
social and economic disparities created by longevity, and survival of the heat death of
the universe.
Undesirability of immortality
The doctrine of immortality is essential to many of the world's religions. Narratives from
Christianity and Islam assert that immortality is not desirable to the unfaithful:
The poor man died and was carried away by the angels to be with Abraham.
The rich man also died and was buried. In Hades, where he was being
tormented, he looked up and saw Abraham far away with Lazarus by his side.
He called out, 'Father Abraham, have mercy on me, and send Lazarus to dip
the tip of his finger in water and cool my tongue; for I am in agony in these
flames.' But Abraham said, 'Child, remember that during your lifetime you
received your good things, and Lazarus in like manner evil things; but now he is
comforted here, and you are in agony. Besides all this, between you and us a
great chasm has been fixed, so that those who might want to pass from here to
you cannot do so, and no one can cross from there to us.'
—Luke 16:22–26 NIV Translation
Those who are wretched shall be in the Fire: There will be for them therein
(nothing but) the heaving of sighs and sobs: They will dwell therein for all the
time that the heavens and the earth endure, except as thy Lord willeth: for thy
Lord is the (sure) accomplisher of what He planneth. And those who are
blessed shall be in the Garden: They will dwell therein for all the time that the
heavens and the earth endure, except as thy Lord willeth: a gift without break.
—The Qur'an, 11:106–108
The modern mind has addressed the undesirability of immortality. Science fiction writer
Isaac Asimov commented, "There is nothing frightening about an eternal dreamless
sleep. Surely it is better than eternal torment in Hell and eternal boredom in Heaven."
Physical immortality has also been imagined as a form of eternal torment, as in Mary
Shelley's short story "The Mortal Immortal", the protagonist of which witnesses
everyone he cares about dying around him. Jorge Luis Borges explored the idea that
life gets its meaning from death in the short story "The Immortal": an entire society,
having achieved immortality, finds that time has become infinite and so loses all
motivation for any action. In his book "Thursday's Fictions", and the stage and film
adaptations of it, Richard James Allen tells the story of a woman named Thursday who
tries to cheat the cycle of reincarnation to get a form of eternal life. At the end of this
fantastical tale, her son, Wednesday, who has witnessed the havoc his mother's quest
has caused, forgoes the opportunity for immortality when it is offered to him.[55]
Likewise, the novel Tuck Everlasting depicts immortality as "falling off the wheel of life",
portraying it as a curse rather than a blessing.
University of Cambridge philosopher Simon Blackburn, in his essay "Religion and
Respect," writes, ". . . things do not gain meaning by going on for a very long time, or
even forever. Indeed, they lose it. A piece of music, a conversation, even a glance of
adoration or a moment of unity have their allotted time. Too much and they become
boring. An infinity and they would be intolerable."
Politics
Although scientists state that radical life extension, delaying and stopping aging are
achievable,[56] there are still no international or national programs focused on stopping
aging or on radical life extension. In 2012 in Russia, and then in the United States,
Israel and the Netherlands, pro-immortality political parties were launched. They aimed
to provide political support for anti-aging and radical life extension research and
technologies, to promote the transition to the next step of radical life extension, life
without aging and, finally, immortality, and to make access to such technologies
available to most currently living people.[57]
Symbols
There are numerous symbols representing immortality.
The ankh is an Egyptian symbol of life that holds connotations of immortality when
depicted in the hands of the gods and pharaohs, who were seen as having control over
the journey of life. The Möbius strip in the shape of a trefoil knot is another symbol of
immortality. Symbolic representations of infinity or the life cycle are often used to
represent immortality, depending on the context in which they are placed. Other
examples include the Ouroboros, the Chinese fungus of longevity, the ten kanji, the
phoenix, the peacock in Christianity,[58] and the colors amaranth (in Western culture)
and peach (in Chinese culture).
Fiction
Immortal species abound in fiction, especially in fantasy literature.
Shamanism
Shamanism (SHAH-mən or SHAY-mən) is a practice that involves a practitioner
reaching altered states of consciousness in order to encounter and interact with the
spirit world and channel these transcendental energies into this world.[2] A shaman is a
person regarded as having access to, and influence in, the world of benevolent and
malevolent spirits, who typically enters into a trance state during a ritual, and practices
divination and healing.[3]
The term "shamanism" is currently often used[by whom?] as an umbrella term referring
to a variety of spiritual practices, although it was first applied to the ancient religion of
the Turks and Mongols, as well as those of the neighboring Tungusic and
Samoyedic-speaking peoples. The word "shaman" originates from the Evenk language
(Tungusic)
of North Asia and was introduced to the west after Russian forces conquered the
shamanistic Khanate of Kazan in 1552. Upon learning more about religious traditions
across the world, western scholars also described similar magico-religious practices
found within the indigenous religions of other parts of Asia, Africa, Australasia and the
Americas as shamanism. Various historians[who?] have argued that shamanism also
played a role in many of the pre-Christian religions of Europe, and that shamanic
elements may have survived in popular culture right through to the Early Modern
period. Various[which?] archaeologists and historians of religion have also suggested
that shamanism may have been a dominant pre-religious practice for humanity during
the Palaeolithic.
Mircea Eliade writes, "A first definition of this complex phenomenon, and perhaps the
least hazardous, will be: shamanism = 'technique of religious ecstasy'."[4] Shamanism
encompasses the premise that shamans are intermediaries or messengers between
the human world and the spirit worlds. Shamans are said to treat ailments/illness by
mending the soul. Alleviating traumas affecting the soul/spirit restores the physical
body of the individual to balance and wholeness. The shaman also enters supernatural
realms or dimensions to obtain solutions to problems afflicting the community.
Shamans may visit other worlds/dimensions to bring guidance to misguided souls and
to ameliorate illnesses of the human soul caused by foreign elements. The shaman
operates primarily within the spiritual world, which in turn affects the human world. The
restoration of balance results in the elimination of the ailment.[4]
Shamanic beliefs and practices have attracted the interest of scholars from a wide
variety of disciplines, including anthropologists, archaeologists, historians, religious
studies scholars and psychologists.