Mem and Cog Study Guide - Quiz 1
2. Methodology:
• Introspectionism: Uses introspection and subjective reports.
• Gestalt Psychology: Employs observation, phenomenological methods, and experimentation
on perception.
• Behaviorism: Relies on controlled experiments and objective observation of behavior, grounded in the idea that behavior is learned through interactions with the environment.
4. Criticisms:
• Introspectionism: Criticized for subjectivity and lack of empirical rigor.
• Gestalt Psychology: Criticized for lack of precise measurement and vague concepts.
• Behaviorism: Criticized for neglecting cognitive and emotional aspects of behavior.
What is a TOTE unit?
A TOTE unit, which stands for Test-Operate-Test-Exit, is a conceptual framework used in
cognitive psychology to describe the process of goal-directed behavior. It was introduced by
George A. Miller, Eugene Galanter, and Karl H. Pribram in their 1960 book, “Plans and the
Structure of Behavior.” The TOTE unit represents a hierarchical model for how organisms achieve
goals through a series of operations and checks.
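To make the loop concrete, here is a minimal Python sketch of a TOTE unit. The hammering scenario is the classic example from Miller, Galanter, and Pribram; the function names and numeric values are our own illustration.

```python
# A minimal sketch of a TOTE (Test-Operate-Test-Exit) unit.
def tote(test, operate, state, max_cycles=100):
    """Test-Operate-Test-Exit: act until the test passes, then exit."""
    for _ in range(max_cycles):
        if test(state):          # Test: has the goal state been reached?
            return state         # Exit: goal achieved
        state = operate(state)   # Operate: act to reduce the discrepancy
    return state                 # Safety stop if the goal is never reached

# Classic example: hammer a nail until it is flush (height 0).
final = tote(test=lambda height: height <= 0,    # Test: is the nail flush?
             operate=lambda height: height - 1,  # Operate: one hammer stroke
             state=5)                            # Start with the nail 5 units out
print(final)  # 0 -- the unit exits only once the test is satisfied
```

The point the code mirrors is that behavior is organized around repeated tests against a goal, not a fixed chain of stimulus-response links.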
Algorithms
Definition:
• An algorithm is a step-by-step, systematic procedure that guarantees a solution to a problem if
followed correctly. Algorithms are precise and unambiguous, providing a clear path from the
problem’s initial state to its goal state.
Characteristics:
• Structured: Algorithms follow a specific sequence of operations.
• Repeatable: They can be consistently applied to similar problems to achieve the same results.
• Reliable: Given the same inputs, an algorithm will always produce the same output.
Examples in Problem-Solving:
• Mathematics: Long division, formulas for solving quadratic equations, and algorithms used in
computer science (like sorting algorithms) are examples of mathematical algorithms.
• Everyday Tasks: A recipe for baking a cake is an algorithm that, if followed precisely, guarantees
a successful outcome.
Advantages:
• Accuracy: Algorithms provide accurate and reliable solutions, assuming no errors in their
execution.
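As an illustration, here is a minimal sketch of one of the algorithms mentioned above, the quadratic formula. It follows a fixed, unambiguous sequence of steps and, given the same inputs, always produces the same output.

```python
import math

def solve_quadratic(a, b, c):
    """Step-by-step procedure for the real roots of ax^2 + bx + c = 0."""
    disc = b * b - 4 * a * c        # Step 1: compute the discriminant
    if disc < 0:
        return []                   # No real roots exist
    root = math.sqrt(disc)          # Step 2: square root of the discriminant
    return [(-b + root) / (2 * a),  # Step 3: apply the formula for both roots
            (-b - root) / (2 * a)]

print(solve_quadratic(1, -3, 2))  # [2.0, 1.0] -- same input, same output, every time
```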
Heuristics
Definition:
• A heuristic is a mental shortcut or rule of thumb that simplifies problem-solving. Heuristics are fast and practical but, unlike algorithms, do not guarantee a correct solution.
Characteristics:
• Efficient: They provide quick and practical solutions, often through trial and error or educated guesses.
• Flexible: Heuristics can be adapted to different situations and contexts.
Examples in Problem-Solving:
• Trial and Error: Trying different solutions and eliminating those that do not work.
• Means-Ends Analysis: Breaking a problem down into smaller subproblems and solving each one to reduce the difference between the current state and the goal state (a minimal sketch follows this list).
• Availability Heuristic: Making decisions based on information that is most readily available or
recent in memory.
• Representativeness Heuristic: Assessing the likelihood of an event based on how closely it
resembles a typical case or prototype.
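The means-ends analysis sketch promised above: a toy problem in which the state is a single number, each operator is a possible move, and the solver repeatedly applies whichever operator most reduces the difference between the current state and the goal state. The operators and values are invented for illustration.

```python
# A minimal means-ends analysis sketch on a toy numeric problem.
def means_ends(state, goal, operators):
    """Repeatedly apply the operator that most shrinks the current-to-goal gap."""
    while state != goal:
        best = min(operators, key=lambda op: abs(op(state) - goal))
        if abs(best(state) - goal) >= abs(state - goal):
            break                # No operator reduces the gap: heuristics can fail
        state = best(state)      # Solve the subproblem: take the best step
    return state

# Hypothetical operators: big step (+10), small steps (+1, -1).
ops = [lambda s: s + 10, lambda s: s + 1, lambda s: s - 1]
print(means_ends(state=3, goal=25, operators=ops))  # 25
```

Note how the solver commits to whichever move looks best locally; like all heuristics, this is fast but can stall short of the goal when the gap must temporarily grow.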
Advantages:
• Speed: Heuristics allow for quick decision-making and problem-solving.
• Cognitive Economy: They reduce the mental effort required to solve problems.
• Practicality: Heuristics are useful in situations where an exhaustive search is impractical or
impossible.
Disadvantages:
• Biases: Heuristics can lead to cognitive biases and errors in judgment, such as overconfidence
or stereotyping.
• Inaccuracy: They do not guarantee a correct solution and may result in suboptimal outcomes.
Describe overconfidence, belief perseverance, the availability heuristic, and confirmation bias
Each of these cognitive biases can significantly impact how we process information and make
decisions:
• Overconfidence: Leads to risk-taking and planning errors due to an inflated sense of one’s
abilities and knowledge.
• Belief Perseverance: Prevents individuals from updating their beliefs in the face of new
evidence, leading to persistent misconceptions.
• Availability Heuristic: Causes judgments to be influenced by what is most readily brought to
mind, rather than a thorough assessment of all relevant information.
• Confirmation Bias: Results in the selective gathering and interpretation of evidence that
supports existing beliefs, ignoring contrary information.
Understanding these biases helps in developing strategies to mitigate their effects, leading to
more rational and informed decision-making.
Describe the attenuation process
The attenuation process, proposed by Anne Treisman, is a model of attention that refines Donald
Broadbent’s filter theory. Instead of completely blocking unattended stimuli, the attenuation
process reduces their strength, allowing for a more flexible and dynamic handling of information.
Key Concepts:
• Selective Attention: The brain prioritizes certain information based on relevance, using an
attenuator that weakens, but does not entirely block, unattended signals.
• Attenuation Mechanism: Attended messages pass through with full strength, while unattended
messages are weakened but still processed to some extent.
• Thresholds: Different stimuli have different detection thresholds. Salient or important stimuli,
even if attenuated, can exceed these thresholds and capture attention.
• Top-Down Influence: Higher cognitive processes, such as expectations and prior knowledge,
influence what information gets prioritized.
Example - Cocktail Party Effect:
• At a noisy party, you can focus on a conversation (attended message) but still notice if someone
mentions your name nearby (unattended message) because it has a low threshold and is highly
relevant to you.
Comparison:
• Broadbent’s Filter Model: Proposes a strict, all-or-nothing filter that completely blocks unattended information.
• Treisman’s Attenuation Model: Replaces the all-or-nothing filter with an attenuator: unattended information is weakened rather than blocked, so salient, low-threshold stimuli can still reach awareness.
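To make the contrast concrete, here is a minimal numeric sketch of the attenuation idea. The channels, signal strengths, thresholds, and attenuation factor are all invented; the point is only that unattended input is weakened rather than blocked, so low-threshold stimuli such as your own name can still break through.

```python
# A minimal sketch of Treisman-style attenuation with made-up numbers.
ATTENUATION = 0.3   # Assumed factor: unattended channels are weakened, not blocked

def perceived(signals, attended):
    """Attenuate unattended channels; report stimuli that exceed their threshold."""
    noticed = []
    for channel, (stimulus, strength, threshold) in signals.items():
        if channel != attended:
            strength *= ATTENUATION      # Attenuate rather than eliminate
        if strength >= threshold:        # Important stimuli have low thresholds
            noticed.append(stimulus)
    return noticed

signals = {
    "left":  ("friend's story", 1.0, 0.5),  # Attended conversation
    "right": ("stock prices",   1.0, 0.8),  # Unattended, high threshold: missed
    "back":  ("your name",      1.0, 0.2),  # Unattended, low threshold: noticed
}
print(perceived(signals, attended="left"))
# ["friend's story", 'your name'] -- the cocktail party effect
```

Under Broadbent’s strict filter, by contrast, the unattended channels would simply be dropped and “your name” could never be noticed.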
Summary
• Iconic Memory: Visual sensory memory, very brief duration, essential for visual continuity.
• Echoic Memory: Auditory sensory memory, longer duration, crucial for processing and
understanding speech and sounds.
Both forms of memory play vital roles in how we perceive and interact with the world, providing
the necessary brief storage to process and make sense of sensory information.
Describe the Stroop Task and major findings from this type of research
The Stroop Task
Definition: The Stroop Task is a psychological test that demonstrates the interference in reaction times
when performing a task that requires attention and cognitive control. It was first introduced by John Ridley
Stroop in 1935.
Procedure:
• Classic Stroop Task: Participants are presented with a list of color words (e.g., “red,” “blue,” “green”)
printed in different-colored inks. The task is to name the color of the ink, not the word itself. For example,
if the word “red” is printed in blue ink, the correct response is “blue.”
• Congruent Condition: The color of the ink matches the color word (e.g., the word “red” printed in red
ink).
• Incongruent Condition: The color of the ink does not match the color word (e.g., the word “red” printed
in blue ink).
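As a minimal sketch of how such trials might be generated in an experiment script (the color set and function names are assumptions, not Stroop’s original materials):

```python
import random

COLORS = ["red", "blue", "green"]

def make_trial(congruent):
    """Return (word, ink_color, correct_response) for one Stroop trial."""
    word = random.choice(COLORS)
    if congruent:
        ink = word                                             # Word matches ink
    else:
        ink = random.choice([c for c in COLORS if c != word])  # Word/ink conflict
    return word, ink, ink    # The correct response is always the ink color

random.seed(1)
for condition in (True, False):
    word, ink, answer = make_trial(congruent=condition)
    print(f"word={word!r} ink={ink!r} -> correct response: {answer!r}")
```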
Findings:
• Stroop Effect: The main finding is that participants take significantly longer to name the ink color in the
incongruent condition compared to the congruent condition. This delay is known as the Stroop Effect.
1. Interference Effect:
• The Stroop Effect highlights the cognitive interference that occurs when the brain processes conflicting
information. The automatic process of reading the word interferes with the task of naming the ink color,
leading to slower reaction times and increased errors in the incongruent condition.
2. Automaticity of Reading:
• The task demonstrates that reading is an automatic process for literate individuals, meaning that reading
the word happens without conscious effort and cannot be easily suppressed. This automaticity conflicts
with the task of color naming, which requires more controlled processing.
3. Cognitive Control and Executive Function:
• The Stroop Task is used to study cognitive control and executive function. It requires the participant to inhibit an automatic response (reading the word) and instead perform a less automatic task (naming the ink color). Performance on the Stroop Task is often used to assess the functioning of the prefrontal cortex, which is involved in these executive processes.
4. Selective Attention:
• The task illustrates the role of selective attention, where participants must focus on one aspect of the
stimulus (ink color) while ignoring another (word meaning). The ability to manage this selective attention
is crucial for cognitive control.
5. Clinical Applications:
• The Stroop Task is used in clinical settings to assess cognitive function in various populations, including individuals with ADHD, schizophrenia, and other conditions affecting executive function. It helps in identifying impairments in cognitive control and attentional processes.
6. Variations of the Stroop Task:
• Researchers have developed various versions of the Stroop Task to explore different aspects of cognitive processing. For example:
• Emotional Stroop Task: Uses emotionally charged words to study the effect of emotional content on
cognitive interference.
• Numerical Stroop Task: Involves numbers printed in different quantities to explore numerical cognition
and interference.
• Spatial Stroop Task: Uses spatial location words presented in conflicting spatial locations to study spatial
processing.
7. Neuroscientific Findings:
• Neuroimaging studies using the Stroop Task have shown that the anterior cingulate cortex (ACC) and
prefrontal cortex are heavily involved in resolving the conflict and managing the interference between
competing information.
Summary
The Stroop Task is a widely used psychological test that demonstrates the cognitive interference caused
by conflicting information. Major findings from Stroop Task research include:
• Interference Effect: Longer reaction times and more errors in the incongruent condition due to cognitive
interference.
• Automaticity of Reading: Reading is an automatic process that interferes with the task of color naming.
• Cognitive Control and Executive Function: The task assesses the ability to inhibit automatic responses
and exercise cognitive control.
• Selective Attention: Highlights the role of selective attention in managing conflicting information.
• Neuroscientific Insights: Involvement of the ACC and prefrontal cortex in managing interference and
cognitive control.
Overall, the Stroop Task provides valuable insights into the mechanisms of attention, cognitive control,
and the automaticity of reading, making it a cornerstone in cognitive psychology research.
Week 2
Compare and contrast bottom-up and top-down processing
Comparison and Contrast
1. Source of Information:
• Bottom-Up Processing: Starts with raw sensory input and builds up to perception.
• Top-Down Processing: Starts with cognitive processes, such as expectations and prior knowledge, which
influence the interpretation of sensory input.
2. Direction of Information Flow:
• Bottom-Up Processing: Information flows from sensory receptors to higher cognitive processes.
• Top-Down Processing: Information flows from higher cognitive processes to interpret sensory input.
3. Role of Experience:
• Bottom-Up Processing: Does not rely on prior experience or knowledge; it is purely stimulus-driven.
• Top-Down Processing: Heavily relies on prior experience, knowledge, and context to shape perception.
4. Cognitive Load:
• Bottom-Up Processing: May be slower and more resource-intensive as it involves piecing together
sensory data.
• Top-Down Processing: Can be faster and more efficient as it uses shortcuts based on expectations and
prior knowledge.
5. Examples in Perception:
• Bottom-Up Processing: Recognizing a new type of fruit by analyzing its color, shape, and texture without
prior knowledge of it.
• Top-Down Processing: Quickly recognizing a partially obscured stop sign based on its familiar shape and
color.
6. Applications:
• Bottom-Up Processing: Important in situations where the sensory information is novel or unfamiliar,
such as in the early stages of learning or encountering new environments.
• Top-Down Processing: Crucial in familiar contexts, enabling quick and efficient processing, such as
reading or navigating known environments.
Integrated Approach
• Interaction: In real-life perception, bottom-up and top-down processing often work together. For
example, while reading, you use bottom-up processing to recognize letters and words, and top-down
processing to understand the meaning based on context and prior knowledge.
• Flexibility: The brain dynamically integrates both processes to adapt to different situations, enhancing
accuracy and efficiency in perception and cognition.
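One simple way to make this interaction concrete is to score each candidate interpretation by combining bottom-up sensory evidence with a top-down prior expectation, as in the sketch below. The words and numbers are invented; this illustrates the general idea, not any specific published model.

```python
# Combining bottom-up evidence with top-down expectations (invented numbers).
def recognize(evidence, prior):
    """Score each candidate word by evidence * expectation; pick the best."""
    scores = {word: evidence[word] * prior[word] for word in evidence}
    return max(scores, key=scores.get)

# Bottom-up: a smudged word fits "cat" and "cot" almost equally well.
evidence = {"cat": 0.50, "cot": 0.48}
# Top-down: in "The ___ chased the mouse", "cat" is far more expected.
prior = {"cat": 0.90, "cot": 0.10}
print(recognize(evidence, prior))  # cat -- context resolves the ambiguous input
```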
Summary
Bottom-Up Processing: Data-driven; perception is built up from raw sensory input without relying on prior knowledge.
Top-Down Processing: Concept-driven; expectations, context, and prior knowledge shape the interpretation of sensory input.
Understanding the interplay between bottom-up and top-down processing provides a comprehensive
view of how we perceive and interpret the world, highlighting the complexity and adaptability of human
cognition.
Describe the concept of flow in the ambient optic array
1. Ambient Optic Array: The ambient optic array is the structured pattern of light available in the environment, which varies depending on the observer’s position and movement. It encompasses all the visual information present in a scene.
2. Optic Flow: Describes the continuous change in the pattern of light as an observer moves through the environment. It provides crucial information about the relative motion of the observer and the environment.
3. Components of Optic Flow:
• Radial Flow: When moving forward, objects in the visual field appear to radiate outwards from
a central point (focus of expansion).
• Lamellar Flow: When moving sideways, objects in the visual field appear to stream past in parallel, moving opposite to the observer’s direction of motion.
• Rotational Flow: When rotating, the entire visual field appears to rotate around a central point.
4. Information from Optic Flow:
• Direction of Movement: The pattern of optic flow helps determine the direction of the
observer’s movement.
• Speed of Movement: The rate of change in the optic flow pattern indicates the speed at which
the observer is moving.
• Distance and Depth: Objects closer to the observer produce faster optic flow, while distant
objects produce slower optic flow, aiding in depth perception.
5. Applications in Perception:
• Navigating Through the Environment: Optic flow provides real-time feedback that helps in
navigating and avoiding obstacles.
• Balance and Coordination: Visual information from optic flow assists in maintaining balance
and coordinating movements.
• Driving and Piloting: In activities like driving or flying, optic flow information is crucial for
controlling speed and direction.
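A minimal sketch of radial flow under forward motion, assuming an idealized pinhole camera moving along its line of sight: flow vectors point away from the focus of expansion (the image center), and nearer surfaces produce faster flow, which is exactly the depth cue described above. Values and units are illustrative.

```python
# Radial optic flow for forward translation through a static scene.
def radial_flow(x, y, depth, speed):
    """Image-plane motion of a point at (x, y) with the given scene depth:
    the vector points away from the focus of expansion at (0, 0), and its
    magnitude grows with observer speed and shrinks with distance."""
    return (x * speed / depth, y * speed / depth)

# Two points in the same visual direction, one near and one far.
print(radial_flow(x=0.2, y=0.1, depth=2.0,  speed=1.0))  # (0.1, 0.05)   fast flow
print(radial_flow(x=0.2, y=0.1, depth=20.0, speed=1.0))  # (0.01, 0.005) slow flow
```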
Summary
The concept of flow in the ambient optic array highlights the importance of dynamic visual
information in understanding and interacting with the environment. It explains how changes in
the pattern of light, as we move, provide critical cues for perceiving direction, speed, distance,
and depth, thereby facilitating effective navigation and interaction within our surroundings. This
ecological approach underscores the active nature of perception, where movement and
environmental interaction are integral to obtaining and interpreting sensory information.
Describe the four main steps of Marr’s theory of perception
David Marr’s theory of perception outlines a computational approach involving four main steps:
1. Raw Primal Sketch: Initial extraction of basic visual features such as edges and contrasts.
2. Full Primal Sketch: Organization and grouping of these features into more structured
representations.
3. 2½-D Sketch: Viewer-centered representation incorporating depth and surface orientation.
4. 3-D Model Representation: Abstract, object-centered representation that allows for consistent
object recognition and manipulation.
This hierarchical approach to visual processing emphasizes the transformation of sensory input
into increasingly abstract and complex representations, enabling the brain to interpret and
understand the visual world.
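As a schematic sketch of this pipeline, the code below computes only the first stage (edge extraction for the raw primal sketch) on a toy image and leaves the later stages as commentary, since full implementations of Marr’s model are research-scale; the threshold and image are assumptions.

```python
import numpy as np

def raw_primal_sketch(image):
    """Stage 1: extract basic features -- local intensity changes (edges)."""
    gy, gx = np.gradient(image.astype(float))  # Intensity gradients
    return np.hypot(gx, gy) > 0.25             # Keep strong contrast changes

image = np.zeros((4, 4))
image[:, 2:] = 1.0                   # Toy image: a vertical light/dark boundary
print(raw_primal_sketch(image).astype(int))  # 1s mark the detected boundary

# The full primal sketch would then group these edge fragments into contours,
# the 2.5-D sketch would add viewer-centered depth and surface orientation,
# and the 3-D model would yield an object-centered description.
```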
Describe the path of light from the eye to the brain
1. Entry into the Eye:
• Cornea → Aqueous humor → Pupil → Lens → Vitreous humor → Retina.
2. Detection in the Retina:
• Light is detected by photoreceptors (rods and cones) → Signals are transmitted to bipolar cells
→ Then to ganglion cells.
3. Transmission to the Brain:
• Ganglion cell axons form the optic nerve → Optic chiasm (where fibers from the nasal half of each retina cross) → Optic tract → Lateral geniculate nucleus (LGN) of the thalamus.
• LGN → Optic radiations → Primary visual cortex (V1) in the occipital lobe → Further processing in secondary visual areas.
This pathway ensures that light is transformed into neural signals and processed to produce visual
perception, allowing us to interpret and interact with our environment.
Compare face recognition and object recognition
• Face Recognition: A specialized process relying on holistic and configural processing, primarily supported by the fusiform face area (FFA), and tuned for fine distinctions among highly similar exemplars (individual faces).
• Object Recognition: More general process involving feature-based processing, primarily supported by the lateral occipital complex (LOC). It involves categorizing a wide variety of objects and making both within-category distinctions (different types of the same object) and between-category distinctions (different types of objects).
Within-Category Distinction:
• Example in Face Recognition: Identifying different individuals within the category of “faces.”
• Holistic Processing: Within-category distinctions for faces rely heavily on holistic processing and
sensitivity to subtle differences in features and configurations.
• Expertise: Extensive practice and exposure improve our ability to make fine-grained distinctions within
the face category.
• Example in Object Recognition: Differentiating between different types of chairs or different models of
cars.
• Detailed Feature Analysis: Requires detailed analysis of specific features and configurations unique to
each object.
Between-Category Distinction:
• General Feature Differences: Relies on recognizing general differences in shape, size, function, and other
basic features between categories.
Describe Biederman’s non-accidental properties
Definition: Non-accidental properties (NAPs) are visual features that reliably indicate the presence of particular geometric shapes regardless of the observer’s viewpoint. They are termed “non-accidental” because their appearance does not depend on the specific angle or distance from which the object is viewed.
1. Curvature:
• Example: The difference between a cylinder (curved surface) and a rectangular block (straight edges).
2. Parallelism:
• Example: Parallel lines on a cylinder indicate its cylindrical shape, while parallel edges on a cube indicate
its rectangular form.
3. Co-termination:
• Example: The converging lines at the corners of a pyramid or the way lines meet at the vertices of a
cube.
4. Symmetry:
• Example: The bilateral symmetry of a human face or the radial symmetry of a starfish.
5. Collinearity:
• Example: The points along the straight edge of a box or table top line up, signaling a straight contour rather than an accident of viewpoint.
6. Parallel Curves:
• Example: The parallel curves of a bottle or the rounded sides of an hourglass shape.
Role in Object Recognition:
• Viewpoint Invariance: NAPs allow for recognition of objects from different angles and perspectives because these properties remain constant even when the viewpoint changes.
• Geon Identification: Geons, or geometric ions, are the basic building blocks of objects in Biederman’s theory, and NAPs help identify them. For example, a cylinder is identified by its parallelism and curvature.
• Robust Recognition: By relying on NAPs, the human visual system can reliably recognize objects even under challenging conditions, such as partial occlusion, varying lighting, and different orientations.
Examples:
• Cylinder: Recognized by its curved surface (curvature), parallel lines along its length (parallelism), and the straight lines of its ends (collinearity).
• Cube: Identified by its straight edges (collinearity), parallel edges on each face (parallelism), and the way lines converge at its vertices (co-termination).
• Mug: Distinguished by the symmetry of its handle, the parallel curves of its body, and the curvature of its rim.
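As a small illustration, two of these properties, parallelism and collinearity, can be checked for 2-D line segments in a few lines; the tolerance and example segments are assumptions.

```python
import math

def direction(seg):
    """Orientation of a segment in [0, pi), ignoring which end is first."""
    (x1, y1), (x2, y2) = seg
    return math.atan2(y2 - y1, x2 - x1) % math.pi

def parallel(a, b, tol=0.01):
    """Parallelism: same orientation -- a property that survives viewpoint change."""
    return abs(direction(a) - direction(b)) < tol

def collinear(a, b, tol=0.01):
    """Collinearity: parallel AND lying along one and the same line."""
    joining = (a[0], b[0])           # Segment from a's start to b's start
    return parallel(a, b, tol) and parallel(a, joining, tol)

edge1 = ((0, 0), (0, 2))             # Two opposite edges of a rectangle
edge2 = ((1, 0), (1, 2))
print(parallel(edge1, edge2))        # True  -- signals a rectangular surface
print(collinear(edge1, edge2))       # False -- parallel but offset
```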
Summary
Biederman’s non-accidental properties are essential visual features that remain consistent across
different viewpoints and are critical for object recognition. These properties include curvature,
parallelism, co-termination, symmetry, collinearity, and parallel curves. They help the visual system
identify the basic components or geons that make up objects, allowing for robust and efficient recognition
regardless of the observer’s perspective.
Compare Capgras syndrome and prosopagnosia
3. Neural Mechanisms:
• Capgras Syndrome: Believed to involve a disconnection between facial recognition areas and emotional processing centers.
• Prosopagnosia: Typically involves damage or dysfunction in the fusiform face area (FFA), which is critical for face recognition.
4. Associated Conditions:
• Capgras Syndrome: Often occurs in conjunction with psychiatric disorders like schizophrenia or
neurological conditions like dementia.
• Prosopagnosia: Can be congenital or result from brain injury, particularly affecting the fusiform
gyrus.
5. Overt vs. Covert Recognition:
• Overt Face Recognition: Assessed through explicit tests, such as asking individuals to identify or recall faces.
• Covert Face Recognition: Assessed through implicit measures, such as physiological responses (e.g., skin conductance) or behavioral cues.
Summary
• Neural Mechanisms: Involves regions like the Fusiform Face Area (FFA) and Occipital Face Area
(OFA), which are specialized for face processing.
• Implications: Supports the expertise hypothesis and has applications in clinical studies and face
recognition technology.
Understanding the inversion effect provides valuable insights into the unique cognitive and
neural mechanisms underlying face recognition, emphasizing the importance of holistic and
configural processing in this specialized perceptual skill.