Subcortical Structures
-hidden from view under the cortex are the subcortical parts
-one of these parts is the thalamus: acts as a relay station for nearly all the sensory info going to the
cortex
-directly underneath the thalamus is the hypothalamus: structure that plays a crucial role in controlling
motivated behaviors such as eating, drinking, and sexual activity
-around the thalamus and hypothalamus is another set of interconnected structures that form the limbic
system
-included here is the amygdala and close by is the hippocampus, both located underneath the cortex in
the temporal lobe (essential for learning and memory)
-the amygdala responds to the presentation of fearful faces; a more active amygdala is associated with a better memory advantage for emotional events
Lateralization
-parts of the brain come in pairs – hippocampus and amygdala on left and right side
-same is true for cerebral cortex – temporal cortex in the left and right hemisphere
-even though the left-side and right-side structures have the same shape and pattern of connections, there are differences in function between them
-the functioning of one side is closely integrated with that of the other side, this integration is made
possible by the commissures: thick bundles of fibers that carry info back and forth between the 2
hemispheres
-largest commissure is the corpus callosum
- “split-brain patient” – severed corpus callosum (a last resort in treating epilepsy) – research with these patients has taught us about specialized functions in the left and right hemispheres – left hemisphere = language, right hemisphere = spatial judgment
-2 hemispheres are not cerebral competitors – instead, the hemispheres pool their specialized capacities to
produce a seamlessly integrated single mental self
1. Motor Areas
-tissue crucial to organizing and controlling bodily movements
-primary projection areas contain the departure points (in the primary motor projection areas, located at the rearmost edge of the frontal lobe) for signals leaving the cortex and controlling muscle movements, and the arrival points (in the primary sensory projection areas) for info coming from the sense organs (eyes, ears …)
-evidence for these areas comes from studies involving mild electrical currents – they produce movements that show a pattern of contralateral control, with stimulation to the left hemisphere leading to movements on the right side of the body and vice versa
-in human brain, the map that constitutes the motor projection area is
located on a strip of tissue toward the rear of the frontal lobe
-areas of the body that we can move with great precision (fingers and lips) have a lot of cortical area devoted to them, while areas over which we have less control (shoulders and back) have less cortical coverage
2. Sensory Areas
-tissue essential for organizing and analyzing the info we receive from senses
-info arriving from the skin senses is projected to a region in the parietal lobe just
behind the motor projection area labeled the “somatosensory area”
-stimulation of the parts of the brain responsible for certain senses will cause the patient to experience those sensations
-in the somatosensory area, each part of the body’s surface is represented by its own region on the cortex – adjacent areas of the body are typically represented by nearby areas of the brain
-sensitive areas of the body have more cortical space
-also evidence of contralateral connections – somatosensory area in the left hemisphere receives its info
from the right side of the body (visual projection area – both brain hemispheres receive info from both
eyes but left hemisphere receives info corresponding to right side of visual field, 60% of nerve fibers
from each ear send info to opposite side of brain)
3. Association Areas
-association cortex – 75% of the cerebral cortex
-lesions in the frontal lobe produce apraxias – disturbances in the initiation or organization of voluntary action
-lesions in the occipital cortex or in the rearmost part of the parietal lobe lead to agnosias – disruptions in
the ability to identify familiar objects – they usually affect 1 modality only so a person can recognize a
fork through touch but not sight
-other lesions (usually in the parietal lobe) produce neglect syndrome – the individual ignores half of the visual world – the patient will shave only half of their face, eat from only half of the plate, and when reading will read only half of each word
-aphasia – lesions in areas near the lateral fissure (deep groove that separates the frontal and temporal
lobes) can result in disruptions to language capacities
-damage to the frontmost part of the frontal lobe, the prefrontal area, causes problems in planning and implementing strategies, problems in inhibiting one’s own behaviors, reliance on habit in inappropriate situations, and confusion between reality and imagination
Brain Cells
The Synapse
-when a neuron has been sufficiently stimulated it releases a minute quantity of neurotransmitter – the
molecules of this substance drift across the tiny gap between neurons and latch on to the dendrites of the
adjacent cell – if the dendrites receive enough of this substance, the next neuron will fire and the signal
will be sent along to other neurons
-the neurons don’t touch each other directly
-the end of the axon, plus the gap, plus the receiving membrane of the next neuron is called a synapse
-the space between the neurons is the synaptic gap, the bit of the neuron that releases the transmitter into
the gap is the presynaptic membrane, and the bit of the neuron on the other side of the gap affected by
the neurotransmitter is the postsynaptic membrane
-when the neurotransmitters arrive at the postsynaptic membrane, they cause changes in this membrane
that enable certain ions to flow into and out of the postsynaptic cell
-if the ionic flows are large enough, they trigger a response in the postsynaptic cell – if the incoming signal reaches the postsynaptic cell’s threshold, then the cell fires; that is, it produces an action potential – a signal that moves down its axon, which in turn causes the release of neurotransmitters at the next synapse, causing the next cell to fire
-neurons depend on 2 different forms of information flow – communication from one neuron to the next mediated by a chemical signal, and communication from one end of the neuron to the other by an electrical signal created by the flow of ions in and out of the cell
-the postsynaptic neuron’s initial response can vary in size
-if the input reaches the postsynaptic neuron’s firing threshold, there is no variability in the response; if a signal is sent, it is always the same magnitude – this is the all-or-none law
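The all-or-none law can be sketched as a toy calculation (a minimal illustration, not a biophysical model; the threshold and spike values here are made up):

```python
# Illustrative sketch of all-or-none firing (hypothetical numbers): a
# neuron sums its synaptic inputs and fires a fixed-size action
# potential only if the total reaches its threshold.

THRESHOLD = 1.0  # hypothetical firing threshold
SPIKE = 1.0      # output is fixed in size: all-or-none

def neuron_output(synaptic_inputs):
    """Return a full-size spike if summed input reaches threshold, else nothing."""
    total = sum(synaptic_inputs)
    if total >= THRESHOLD:
        return SPIKE  # same magnitude no matter how far over threshold
    return 0.0

print(neuron_output([0.3, 0.4]))       # below threshold: no spike
print(neuron_output([0.6, 0.6]))       # above threshold: full spike
print(neuron_output([0.9, 0.9, 0.9]))  # far above threshold: still the same spike
```

However far the summed input exceeds the threshold, the output spike is the same size; the input only determines whether the cell fires, not how big the signal is.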
-the brain relies on many different neurotransmitters – some stimulate subsequent neurons, some inhibit them; some help with learning and memory, others regulate arousal in the brain; some influence motivation and emotion
-synaptic gap is 20-30 nanometers across (a lot smaller than human hair diameter)
-the pattern of neurons feeding each other info makes it possible for them to compare signals and adjust their responses according to the signals arriving at different inputs – this communication is adjustable, and the strength of a synaptic connection can be altered by experience, which is crucial in learning and the storage of knowledge
The Photoreceptors
-reflected light is what launches the process of visual perception
-some of this light hits the front surface of the eyeball, passes through the cornea and the lens, and then
hits (produces a sharply focused image on) the retina, the light sensitive tissue that lines the back of the
eyeball
-adjustments of this process are made by the muscle surrounding the lens – when the muscle tightens, the lens bulges, creating the proper shape for focusing images of nearby objects; when the muscle relaxes, the lens returns to a flatter shape, allowing proper focus for faraway objects
-there are 2 types of photoreceptors (specialized neural cells that respond directly to incoming light) on
the retina:
1. rods – sensitive to very low levels of light and so help us move around in semi-darkness, but color-blind – they can distinguish different levels of light but cannot discriminate one hue from another
2. cones – less sensitive and therefore need more incoming light to operate, but sensitive to color differences; more precisely, there are 3 different types of cones, each with its own pattern of sensitivities to different wavelengths; you perceive color by comparing the outputs of these three cone types – patterns of firing across the three cone types correspond to different perceived hues
-the ability to see detail is referred to as acuity, and it is much higher in cones than in rods
-we point our eyes toward a target when we wish to perceive it in detail, positioning our eyes so that the target’s image falls onto the fovea – the very center of the retina, where cones far outnumber rods; as a result, this is the area with the greatest acuity
-in portions of the retina more distant from the fovea, rods predominate, and well out into the periphery there are no cones at all – this explains why we are better able to see very dim light out of the corner of our eyes (stars example – don’t look directly at very dim stars)
Lateral Inhibition
-rods and cones (photoreceptors) stimulate bipolar cells, which in turn excite ganglion cells
-the ganglion cells are uniformly spread across the retina, but all of their axons converge to form the bundle of nerve fibers that we call the optic nerve – the nerve tract that leaves the eyeball and carries info to the brain
-this info is sent to the lateral geniculate nucleus (LGN) in the thalamus and from there info is
transmitted to the primary projection area for vision in the occipital lobe
-optic nerve is not just a cable that conducts signals from one site to another – the cells that link the retina
to the brain are already engaged in the task of analyzing the visual input
-lateral inhibition is an example of this – activity in cell B is decreased by lateral inhibition from the cells beside it, leading to a moderate level of activity in cell B, while a cell on the edge of the retina, cell C, that is inhibited by only one cell beside it receives less inhibition; cell B and cell C initially receive the same input, but cell C is less inhibited and so fires more strongly than cell B
-lateral inhibition causes a process called edge enhancement – exaggerated contrast at the edges, which highlights an object’s shape, crucial for determining what the object is
-thus, analysis of an image begins immediately, in the eyeball
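The edge-enhancement point can be illustrated with a toy one-dimensional sketch (the inhibition fraction and the input values are hypothetical, chosen only to show the pattern):

```python
# Toy sketch of lateral inhibition (illustrative numbers): each cell's
# output is its own input minus a fraction of each neighbor's input.
# Cells near a bright/dim border lose less (or more) to inhibition than
# interior cells, so the contrast at the border is exaggerated.

INHIBITION = 0.2  # hypothetical fraction of each neighbor's input subtracted

def lateral_inhibition(inputs):
    """Apply neighbor-based inhibition across a 1-D row of cells."""
    outputs = []
    for i, x in enumerate(inputs):
        left = inputs[i - 1] if i > 0 else 0.0
        right = inputs[i + 1] if i < len(inputs) - 1 else 0.0
        outputs.append(x - INHIBITION * (left + right))
    return outputs

# A bright region (10s) next to a dim region (2s):
stimulus = [10, 10, 10, 2, 2, 2]
print(lateral_inhibition(stimulus))
```

In the output, the bright cell just inside the border fires more strongly than the interior bright cells (it is inhibited by one bright and one dim neighbor rather than two bright ones), while the dim cell just past the border is pushed lower than the other dim cells, so the step at the edge is exaggerated. The first cell, with only one inhibiting neighbor at all, plays the role of cell C in the notes.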
Form Perception
-gestalt psychology – the perceptual whole is often different from the sum of its parts
-Jerome Bruner – “beyond the information given” - described the ways that our perception of a stimulus
differs or goes beyond the stimulus itself
-the Necker cube – example of a reversible figure – it can be perceived in different ways
-figure/ground organization – determining what is the depicted object (the figure) and what is the ground (vase or 2 faces example)
Constancy
-perceptual constancy – we perceive the constant properties of objects in the world (their size, shape, and so on) even though the sensory info we receive about these attributes changes whenever our viewing circumstances change
-size constancy – correctly perceiving the sizes of objects in the world despite the changes in retinal-
image size created by changes in viewing distance
-shape constancy – correctly perceive the shapes of objects despite changes in the retinal image created
by shifts in your viewing angle
-brightness constancy – correctly perceiving the brightness of an object whether it is illuminated by dim light or strong sun
Unconscious Inference
-size constancy may be achieved by focusing on unchanging relationships (object in relation to
background – comparison objects) rather than the images themselves
-Helmholtz – there is a simple inverse relationship between distance and retinal image size – if an object doubles its distance from the viewer, the size of its image is reduced to half; if the distance is tripled, the image size is a third of its initial size – he believed we are constantly calculating this through unconscious inference
Illusions
-tabletops – we adjust for the apparent viewing angles, and this adjustment causes the illusion
-monster illusion – misperceive the depth relationship and take this faulty info into account in interpreting
the shapes
-contrast effect (brightness illusion) – a square surrounded by dark squares looked darker but is the same shade of grey as a square surrounded by white squares (a shadow also creates unconscious inference)
Binocular Cues
-perception of distance depends on distance cues – features of the stimulus that indicate an object’s position
-an important cue comes from binocular disparity – the difference between the 2 eyes’ views, this can
induce the perception of depth even when no other distance cues are present
Monocular Cues
-we can also perceive depth with one eye closed meaning that there are also depth cues that depend only
on what each eye sees by itself – these are monocular distance cues
-how much adjustment of the lens is needed is itself used as a cue – more for near objects, less for far
-pictorial cues – are used by artists to create an impression of depth on a flat surface
-interposition – the blocking of your view of one object by some other object (man and mail box
example)
-linear perspective – the name for the pattern in which parallel lines seem to converge as they get farther
and farther from the viewer
-changes in texture gradient provide important info about spatial arrangements
Word Recognition
-evidence shows that object recognition begins with detecting simple features and once this has occurred
separate mechanisms are needed to put the features together into complete objects
Degree of Well-Formedness
-it is easier to recognize the letter E if it appears in context
-however, there is no context effect if presented with a string like “HZYE”
-words like “FIKE” or “LAFE” even though not English words are easy to read and do produce a context
effect
-we can evaluate whether a 2- or 3-letter string is well formed – how well it conforms to the usual spelling patterns of English
Making Errors
-you are somehow using your knowledge of spelling patterns when you look at and recognize the words you encounter – you have an easier time with letter strings that conform to these patterns
-the errors that occur in this process are quite systematic – there is a strong tendency to misread less-common letter sequences as if they were more-common patterns – for example, “TPUM” as “TRUM”
Ambiguous Inputs
-context does not allow you to see more; it allows you to make more use of what you see – the most familiar bigram detector will be triggered
Recognition Errors
-the downside of this is that we will perceive something like “CQRN” as “CORN” – the confusion will be sorted out at the bigram level, and the primed CO detector will respond wrongly
-network is biased to frequent letter combinations – but helps more than hurts
Distributed Knowledge
-the network’s knowledge is not locally represented anywhere; it is not stored in a particular location or built into a specific process
-we need to look at the relationship between the CO-detector’s and CQ-detector’s levels of priming, and at how this relationship leads to one detector being more influential than the other
-the knowledge about bigram frequencies is distributed knowledge – it is represented in a fashion that’s
distributed across the network and is detectable only if we consider how the entire network functions
-the actual mechanisms of the feature net involve neither inferences nor knowledge (in the conventional sense), and activity is locally determined – each detector is influenced by just the detectors feeding into it (the network merely acts as if it knows the rules)
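The idea that the “knowledge” lives only in the relationship between detectors can be sketched with a two-detector toy network (all activation values are hypothetical):

```python
# Toy two-detector sketch of distributed knowledge (hypothetical numbers):
# each bigram detector has a resting activation level reflecting how often
# that bigram has been encountered. An ambiguous input adds the same
# bottom-up evidence to both detectors, so the detector with the higher
# resting level -- the more frequent bigram -- wins.

resting_level = {"CO": 0.6, "CQ": 0.1}  # frequent vs. rare bigram (illustrative)

def respond(bottom_up_evidence):
    """Return the bigram detector with the higher total activation."""
    totals = {bigram: rest + bottom_up_evidence
              for bigram, rest in resting_level.items()}
    return max(totals, key=totals.get)

# An ambiguous squiggle supports CO and CQ equally, yet CO's head start
# from its higher resting level decides the outcome:
print(respond(bottom_up_evidence=0.3))  # prints CO
```

Neither detector “knows” that CO is a common English bigram; each one just sums its inputs. The preference only emerges from comparing the two resting levels, i.e., from how the network as a whole behaves.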
Recognition by Components
-recognizing objects other than print (three dimensional)
-recognition by components (RBC) model – includes an intermediate level of detectors sensitive to
geons (geometric ions) which serve as basic building blocks of all the objects we recognize – the alphabet
from which all objects are constructed
-geons are simple shapes such as cylinders, cones, and blocks
-we need (at most) 3 dozen different geons to describe every object in the world (just as 26 letters in
alphabet) – they can be combined in various ways – in a top-of relation, or a side-connected relation, and
so on
-RBC uses a hierarchy of detectors (lowest to highest): feature detectors respond to edges, curves, vertices, and so on; these in turn activate the geon detectors; the geons then activate geon assemblies, which represent the relations between geons (top-of or side-connected); these assemblies activate the object model – a representation of the complete, recognized object
-geons can be identified from any angle of view so recognition based on geons is viewpoint-independent
-as a result, this model can recognize an object even if much of it is hidden from view
Holistic Recognition
-face recognition does not depend on an inventory of face parts
-instead it seems to depend on holistic perception of the face
-recognition depends on complex relationships created by the face’s overall configuration – the spacing of the eyes relative to the length of the nose, the height of the forehead relative to the width of the face, and so on
-features can’t be considered one by one – features matter by virtue of the relationships and
configurations they create
-evidence for this comes from the composite effect (combining the top half of one face with the bottom half of another)
Selective Attention
Dichotic Listening
-participants wore headphones and heard one input in the left ear and a different input in the right ear
-participants were instructed to pay attention to one of these inputs – called the attended channel – and
told to simply ignore the message in the other ear – the unattended channel
-to make sure the participants were paying attention, they were given a task called shadowing – requiring them to repeat what they heard in the attended channel
-if asked about the unattended channel after shadowing, participants have no idea what it said – not even sure whether it was coherent speech, a jumble of words, or gibberish
-however, they are not completely oblivious to the unattended channel – they noticed whether it contained human speech, musical instruments, or silence
-can report whether the speaker was male or female, had a high or low voice, or was speaking loudly or
softly – physical attributes are heard but oblivious to semantic content
Selective Priming
-you can literally prepare yourself for perceiving by priming the relevant detectors – aka you somehow
“reach into” the network and deliberately activate detectors you think will soon be needed
-once the detectors have been primed in this fashion they will be on high alert and ready to fire
-there is a limit to how much priming you can do
-two types of priming are studied by comparing response times in the neutral, primed, and misled conditions
-priming is observed even in the absence of expectations
-priming the wrong detector has no effect
-explaining the costs and benefits – 2 types of primes – stimulus based and expectation based
-with high-validity primes, responses in the misled condition were slower than responses in the neutral condition (priming the wrong detector takes something away from the other detectors)
-limited capacity system – priming one detector uses some of the limited supply of activation needed, if
there was an unlimited supply this effect would not occur (Posner & Snyder)
-not enough resources
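The limited-capacity account of these costs and benefits can be sketched as a toy model (the budget size, the bias, and the activation-to-response-time link are all assumed for illustration):

```python
# Toy sketch of limited-capacity priming (hypothetical numbers): there is
# a fixed budget of activation. In the neutral condition it is spread
# evenly; with an expectation-based prime, extra activation goes to the
# expected detector and is taken away from the others. Response time is
# assumed to fall as a detector's pre-activation rises.

BUDGET = 1.0  # total activation available (fixed: the limited capacity)

def response_time(activation):
    """Hypothetical link: more pre-activation -> faster response."""
    return 1.0 / activation

def allocate(n_detectors, expected=None, bias=0.6):
    """Split the fixed budget; an expected detector gets a larger share."""
    if expected is None:
        return [BUDGET / n_detectors] * n_detectors
    share = [(BUDGET * (1 - bias)) / (n_detectors - 1)] * n_detectors
    share[expected] = BUDGET * bias
    return share

neutral = allocate(4)
primed = allocate(4, expected=0)  # the target is the expected detector
misled = allocate(4, expected=1)  # the prime pointed at the wrong detector

print(response_time(neutral[0]))  # neutral RT
print(response_time(primed[0]))   # faster: benefit of a valid prime
print(response_time(misled[0]))   # slower: cost of a misleading prime
```

Because the budget is fixed, the benefit in the primed condition is necessarily paid for by a cost in the misled condition; with an unlimited supply of activation, the misled condition would be no slower than neutral, which is the Posner & Snyder logic in the notes.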
Attention as a Spotlight
-visual attention can be compared to a spotlight that can shine anywhere in the visual field – the beam
marks the region of space for which you are prepared and inputs within it can be processed more
efficiently (refers to movements of attention not movement of eyes)
-benefits of attention can occur before any eye movement
-control of attention depends on a network of brain sites in the frontal cortex and parietal cortex
-the orienting system is responsible for shifting attention, the alerting system is responsible for achieving and maintaining an alert state in the brain, and the executive system controls voluntary actions
Feature Binding
-the suggestion that an unlimited capacity to prime all detectors would be better is wrong
-this would promote an information overload
-your limited capacity helps you by allowing you only a manageable flow of stimulation
-need attention to perceive a unified object – distracted participants correctly catalogue the features that
are in view but fail to bundle the features in the right way
Divided Attention
-the effort to divide your focus between multiple tasks or multiple inputs
The Specificity of Resources
-two relatively similar tasks – reading a book and listening to a lecture – both involve the use of language, so it seems plausible that they have similar resource requirements; when you try to do them at the same time, they are likely to compete for resources, making this sort of multitasking difficult
-two very different tasks – knitting and listening to a lecture – are unlikely to interfere with each other as
they have distinct resource requirements
Executive Control
-the mind’s executive control – the mechanism that sets goals and priorities, chooses strategies, and controls the sequence of cognitive processes
-executive control is needed whenever you want to avoid interference from habits supplied by memory or
habits triggered by situational cues
-it works to maintain the desired goal in mind and inhibit automatic responses
-people who have diminished executive control (damage to the prefrontal cortex, frontal lobe damage) show perseveration error – a tendency to produce the same response over and over even when it’s plain that the task requires a change in the response
-these patients also show a pattern of goal neglect – failing to organize their behavior in a way that moves them toward their goals
Practice
-situation is different for a novice driver than an experienced driver
Automaticity
-once a task is well practiced, you can lose the option of controlling your own mental operations –
practice enables many mental processes to go forward untouched by control mechanisms with the result
that these processes are now uncontrollable
-tasks that have been frequently practiced can achieve a state of automaticity – we distinguish between controlled tasks, which are typically novel or continuously vary in their demands, and automatic tasks, which are typically highly familiar and do not require great flexibility
-the downside to automatic tasks – they are not governed by the mind’s control mechanisms and can act as if they are “mental reflexes”
-an example involves an effect known as Stroop interference – participants are shown words and asked to name the color of the ink out loud while the words themselves are color names – this is extremely difficult because of the strong tendency to read the printed words