CC Unit 4
UNIT IV
What Is Inductive Generalization?
Definition: The process of drawing general conclusions from specific instances.
Example:
You see three white swans → You generalize: "All swans are white"
Inductive generalization is central to:
• Human cognition
• Scientific reasoning
• Machine learning
1. Classical Logic
Core Idea:
While not inductive itself, classical logic forms the backdrop. Inductive inferences are not valid in strict deductive logic (their conclusions are uncertain), so formal models of induction try to overcome that limitation.
2. Bayesian Models
Core Idea:
Generalization is probabilistic inference: What is the probability that a hypothesis is true given the
observed data?
Formula:
Bayes’ Theorem:
P(H|D) = P(D|H) · P(H) / P(D)
Implication:
Bayesian models formalize how prior beliefs and new evidence interact to form generalizations.
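To make the formula concrete, here is a minimal Python sketch of the swan example; the prior and likelihoods are invented numbers chosen only to show how evidence shifts belief toward the generalization.

```python
# Minimal sketch of Bayes' theorem for the swan example (all numbers
# are made-up, chosen only to illustrate the update).
p_h = 0.5                 # P(H): prior belief that "all swans are white"
p_d_given_h = 1.0         # P(D|H): chance of seeing 3 white swans if H is true
p_d_given_not_h = 0.3     # P(D|~H): chance of the same data if H is false

# P(D) by the law of total probability
p_d = p_d_given_h * p_h + p_d_given_not_h * (1 - p_h)

# Bayes' theorem: P(H|D) = P(D|H) * P(H) / P(D)
p_h_given_d = p_d_given_h * p_h / p_d
print(round(p_h_given_d, 3))   # 0.769 -> the evidence strengthens the generalization
```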
3. The Size Principle
Core Idea:
Smaller hypotheses (those that cover fewer possibilities) are more likely if they explain the data well.
Example:
If all 3 observed animals are dalmatians, the data are better explained by "dalmatians" than by "all dogs" or "all animals".
4. Algorithmic Learning Theory
Core Idea:
Studies whether a learner can identify the correct rule (grammar, function, etc.) given finite or infinite data.
Example:
Can a learner converge on the rule for even numbers if given the examples: 2, 4, 6...?
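A minimal sketch of such a learner: it enumerates candidate rules and keeps the first one consistent with all examples seen so far. The candidate list and its ordering are illustrative assumptions, not part of the theory itself.

```python
# Sketch of Gold-style identification in the limit: the learner enumerates
# candidate rules and conjectures the first one consistent with the data.
candidates = [
    ("even numbers",      lambda x: x % 2 == 0),
    ("multiples of 4",    lambda x: x % 4 == 0),
    ("positive integers", lambda x: x > 0),
]

def conjecture(examples):
    # Return the first enumerated rule consistent with every example seen.
    for name, rule in candidates:
        if all(rule(x) for x in examples):
            return name
    return "no consistent rule"

stream = [2, 4, 6, 8]
for i in range(1, len(stream) + 1):
    print(stream[:i], "->", conjecture(stream[:i]))
# Already with [2] the guess is "even numbers" and it never changes:
# the learner has converged (identified the rule in the limit).
```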
5. Shepard's Similarity Model
Core Idea:
The likelihood of generalizing to a new instance depends on its similarity to observed examples; Shepard's universal law says generalization decays exponentially with psychological distance, g(d) = e^(−k·d).
Applications:
• Categorization, perception
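A small sketch of the exponential law, with an illustrative decay rate k:

```python
import math

# Sketch of Shepard's universal law: generalization strength decays
# exponentially with psychological distance from a trained example.
# The distances and the decay rate k are illustrative values.
k = 1.0
for distance in [0.0, 0.5, 1.0, 2.0]:
    g = math.exp(-k * distance)   # g(d) = e^(-k*d)
    print(f"distance {distance}: generalization {g:.2f}")
# Nearby items (small d) inherit the response almost fully; distant ones barely do.
```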
6. Minimum Description Length (MDL)
Core Idea:
Prefer the simplest model that explains the data — in line with Occam's Razor. Total cost = bits to describe the model + bits to describe the data given the model.
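A toy sketch of the MDL comparison; the bit counts are invented solely to show the trade-off:

```python
# Sketch of the MDL idea: total cost = bits to describe the model
# + bits to describe the data given the model. The bit counts below are
# invented for illustration; real MDL derives them from code lengths.
models = {
    "line (2 params)":           {"model_bits": 20,  "data_bits": 120},
    "degree-9 poly (10 params)": {"model_bits": 100, "data_bits": 60},
}
for name, c in models.items():
    total = c["model_bits"] + c["data_bits"]
    print(f"{name}: {total} bits")
# The line wins (140 < 160): a simpler model that still explains the data
# is preferred -- Occam's Razor made quantitative.
```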
Comparison Table
Model | Origin | Core Idea | Applications
Shepard's Similarity Model | Psychometrics | Similarity guides generalization | Categorization, perception
Algorithmic Learning Theory | Computability | Identifiability in the limit | Theoretical computer science
Applications
• Cognitive Science: Explains how humans learn categories, concepts, and causal rules.
• AI & ML: Underlies models like Naive Bayes, decision trees, and neural networks.
• Education & Psychology: Helps design better teaching methods and assessments.
Summary
Formal models of inductive generalization attempt to explain how systems (human or artificial) go from specific data to general rules; each model offers a different perspective on that process.
Formal Models of Causality, Categorization, and Similarity
1. Causality
Formal Models:
• Bayesian Networks: Probabilistic graphical models that represent causal relationships between variables. Key in modeling how humans infer causality from observations.
o Example: Judea Pearl’s work on do-calculus and structural causal models (SCMs).
• Granger Causality: Mostly used in time-series analysis to determine whether one variable predicts
another.
• Counterfactual Models: Analyze “what if” scenarios to model human reasoning about alternate
outcomes.
2. Categorization
Formal Models:
• Prototype Theory (Rosch): Categories are formed around an idealized "average" or central
tendency.
Applications:
• Object recognition
3. Similarity
Formal Models:
• Tversky's Contrast Model: Similarity as a function of shared and distinctive features (asymmetric and context-sensitive): S(A, B) = θ·f(A ∩ B) − α·f(A − B) − β·f(B − A).
Applications:
• Case-based reasoning
• Conceptual clustering
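The contrast model is easy to state in code. A minimal sketch with invented feature sets and weights; the asymmetry comes from choosing α ≠ β:

```python
# Sketch of Tversky's contrast model over feature sets:
# S(A, B) = theta*|A ∩ B| - alpha*|A - B| - beta*|B - A|
# Weights and features are illustrative; alpha != beta makes it asymmetric.
def tversky(a, b, theta=1.0, alpha=0.8, beta=0.2):
    common = len(a & b)      # shared features
    a_only = len(a - b)      # distinctive features of A
    b_only = len(b - a)      # distinctive features of B
    return theta * common - alpha * a_only - beta * b_only

pony  = {"four legs", "mane", "small"}
horse = {"four legs", "mane", "large", "ridden"}
print(round(tversky(pony, horse), 2))   # 0.8
print(round(tversky(horse, pony), 2))   # 0.2 -> "a pony is like a horse"
                                        # is stronger than the reverse
```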
Modern cognitive systems often integrate these models to simulate or support human-like reasoning:
Imagine a cognitive system that assists doctors by diagnosing diseases based on patient symptoms.
Step 1: Categorization
The system needs to determine what kind of condition a patient might have.
• Input: A patient has symptoms like fever, cough, and shortness of breath.
• Process: The system compares these symptoms to known categories (e.g., flu, COVID-19,
pneumonia).
• Model Used: A Bayesian model of categorization computes the probability of each disease
category given the observed symptoms.
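A minimal naive-Bayes sketch of this step; the diseases, priors, and symptom likelihoods are invented for illustration, not medical data:

```python
# Minimal naive-Bayes sketch of Step 1 (categorization). All numbers
# are invented for illustration only.
priors = {"flu": 0.5, "covid": 0.3, "pneumonia": 0.2}
likelihood = {  # P(symptom | disease)
    "flu":       {"fever": 0.8, "cough": 0.7, "short_breath": 0.1},
    "covid":     {"fever": 0.7, "cough": 0.8, "short_breath": 0.5},
    "pneumonia": {"fever": 0.6, "cough": 0.6, "short_breath": 0.7},
}
observed = ["fever", "cough", "short_breath"]

scores = {d: priors[d] for d in priors}
for d in scores:
    for s in observed:
        scores[d] *= likelihood[d][s]          # naive independence assumption
total = sum(scores.values())
posterior = {d: round(v / total, 3) for d, v in scores.items()}
print(posterior)   # covid comes out most probable for these made-up numbers
```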
Step 2: Similarity
The system then looks for similar past cases to refine its decision or suggest a treatment.
• Model Used: Exemplar-based similarity model compares the current patient to a database of
previous patients.
• Distance Metric: Euclidean or cosine similarity on a feature space (e.g., symptom severity, age,
comorbidities).
If the current patient is highly similar to past COVID-19 patients who responded well to Treatment X, that
recommendation is surfaced.
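A small sketch of this retrieval step, using cosine similarity over an invented three-feature space (symptom severity, age, comorbidities):

```python
import math

# Sketch of Step 2 (similarity): cosine similarity between the current
# patient and past cases. Feature values are invented for illustration.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

current = [0.9, 0.45, 0.2]   # [severity, age/100, comorbidity score]
past_cases = {
    "covid case, responded to Treatment X": [0.85, 0.5, 0.25],
    "flu case, responded to rest":          [0.6, 0.2, 0.0],
}
for label, features in past_cases.items():
    print(f"{label}: {cosine(current, features):.3f}")
# The most similar past case (here, the covid one) drives the suggestion.
```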
Step 3: Causality
Now, the system reasons about causal relationships to avoid bad decisions.
• Model Used: a causal model (e.g., a structural causal model) distinguishes treatments that cause recovery from those that are merely associated with it.
This avoids recommending treatments that merely correlate with recovery but don’t cause it.
Integration
• Categorization helps narrow down what disease class we're dealing with.
• Causality ensures that the recommendations are not just correlative, but grounded in why
something works.
Analogical Reasoning
Analogy maps knowledge from a familiar source to an unfamiliar target — e.g., "an atom is like a tiny solar system: electrons orbit the nucleus as planets orbit the sun."
This helps us reason about something unfamiliar (atoms) by comparing it to something more familiar (solar systems).
• When faced with a new problem, an intelligent system can retrieve a past problem that has a
similar structure and transfer the solution.
Applications
• Case-Based Reasoning (CBR) systems: Solve new problems by adapting solutions from previous
similar cases.
• Automated Theorem Provers: Use analogies between mathematical structures to generate proofs.
• AI tutoring systems: Use analogies to explain abstract concepts (e.g., electricity as water flow).
• Design and creativity tools: Help generate innovative ideas by analogy (e.g., bio-inspired design:
Velcro from burrs).
• Structure-Mapping Theory (Gentner): analogy maps relational structure from a familiar source onto a target.
Example mapping (the heart as a pump):
Heart → Pump
Blood → Water
Blood vessels → Pipes
• MAC/FAC model — a two-stage retrieval process:
o MAC (Many Are Called): Uses quick, surface-level similarity to filter possible analogs.
o FAC (Few Are Chosen): Uses deeper structural matching to pick the best analog.
• Connectionist models: Use parallel distributed processing (neural networks) to find and evaluate analogical mappings.
Example:
• Analogical Source: A general wants to capture a fortress without destroying the roads.
• Solution: Break the army into small groups that converge from multiple directions.
• Target Problem: A doctor must destroy a tumor with rays without damaging the healthy tissue around it.
The doctor applies the same idea: use low-intensity rays from multiple angles so they converge on the tumor but don’t harm surrounding tissue.
This is a classic example from Gick & Holyoak (1980s) showing how analogies guide creative problem
solving.
• ACT-R: Uses analogy to model human memory and problem solving by retrieving similar past
experiences.
MAC/FAC
MAC/FAC is a two-stage model for retrieving analogies from memory, based on structure-mapping theory.
Stage 1: MAC (Many Are Called)
• This stage uses a fast, shallow match to retrieve candidate analogs from long-term memory.
• The system compares the surface features (e.g., objects, labels, roles) using vector-based
similarity.
• Goal: Narrow down thousands of possible analogs to a small shortlist (maybe 5–10).
Analogy: Like Googling based on keywords — fast but not always deep.
Stage 2: FAC (Few Are Chosen)
• Uses the Structure-Mapping Engine (SME) to compare relational structure between source and target domains.
• The FAC stage picks the analog that best preserves relationships and systematicity (interconnected
relations).
Analogy: Like a human deeply analyzing a few results from the search to pick the best match.
The human mind can't afford to do deep comparisons on every past memory—it’s computationally
expensive.
MAC/FAC offers a cognitively plausible solution: filter cheaply on surface features first, then spend the expensive structural comparison on only a few candidates (see the sketch below).
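A toy sketch of the two stages; the stored cases, feature vectors, and relation labels are invented, and the structural score is a simple set overlap standing in for a full SME match:

```python
# Toy sketch of the MAC/FAC pipeline. All vectors and relation labels
# are invented; a real system would run SME in the FAC stage.
memory = {
    "fluid flow":            {"surface": [1, 1, 0], "relations": {"gradient_drives_flow"}},
    "electrical resistance": {"surface": [1, 0, 1], "relations": {"gradient_drives_flow", "resistance_limits_flow"}},
    "planetary motion":      {"surface": [0, 0, 1], "relations": {"attraction_causes_orbit"}},
}
probe = {"surface": [1, 0, 1],
         "relations": {"gradient_drives_flow", "resistance_limits_flow"}}

# MAC: cheap dot-product over surface features -> shortlist of 2.
mac = sorted(memory,
             key=lambda k: -sum(a * b for a, b in
                                zip(memory[k]["surface"], probe["surface"])))[:2]

# FAC: deeper match -- overlap of relational structure, shortlist only.
best = max(mac, key=lambda k: len(memory[k]["relations"] & probe["relations"]))
print(mac, "->", best)   # shortlist, then 'electrical resistance' wins structurally
```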
Example Scenario
Imagine a cognitive assistant trying to help with a new engineering problem involving heat transfer.
• MAC Stage: Scans memory and finds past cases involving fluid flow, electrical resistance, and
thermal conduction (all share surface-level terms like “flow” or “transfer”).
• FAC Stage: Evaluates which of these domains shares deep structural relations with the new
problem.
o It might find that electrical resistance is structurally most similar (Ohm’s law and Fourier’s
law are analogous).
The assistant now suggests an analogical solution based on electrical systems to solve a heat transfer
problem.
Cognitive Realism
• MAC/FAC models how people intuitively retrieve analogies: we don’t immediately go for the best
analog—we approximate first, then refine.
Concepts are mental representations that categorize objects, events, or ideas based on shared features.
Children gradually move from simple perceptual groupings to more abstract and symbolic categories.
• Object permanence develops (understanding that things continue to exist even when out of sight)
• Overgeneralization is common (e.g., calling all four-legged animals "dog")
• Can sort and classify with multiple criteria (e.g., by color and size)
1. Prototype Theory
• Categories form around an idealized "average" or most typical member (Rosch)
2. Exemplar Theory
• Concepts are stored as collections of remembered instances; new items are categorized by comparison to these stored exemplars
3. Theory-Theory
• Children act like "little scientists", forming and testing intuitive theories
o E.g., They may believe “things that move are alive” and refine this with experience
• Vygotsky adds the importance of social interaction and language in shaping conceptual
development
Process | Role | Example
Analogical Reasoning | Mapping relationships across domains | "The heart is like a pump"
Executive Function | Memory and attention help in forming and refining categories | Inhibiting wrong labels like "cow" for a horse
• Children with ASD (Autism Spectrum Disorder) may form concepts differently, focusing on details rather than overall category structure
• Those with language delays may struggle with abstraction and category formation
Educational Implications
Understanding how children build concepts helps in designing teaching methods, supporting language development, and building AI systems inspired by human cognition.
Summary
Children build concepts gradually, drawing on:
• Sensory experiences
• Language and social interaction
• Intuitive theory-building and revision
What is ACT-R?
ACT-R is a cognitive architecture — a theory and simulation framework of how the human mind works. It
was developed by John R. Anderson and colleagues at Carnegie Mellon University. The goal of ACT-R is to
understand and simulate the underlying mechanisms of human cognition, such as learning, problem-
solving, decision-making, and memory.
ACT-R aims to model the way people think, using a modular system that mimics how our brains process
and store information.
Core Architecture
1. Modular Structure
• ACT-R consists of specialized modules (e.g., visual, manual/motor, declarative memory, goal), each handling one kind of processing.
Each module has a buffer, which serves as a temporary memory store — similar to short-term memory.
2. Memory Systems
a. Declarative Memory
• Stores factual knowledge as chunks (e.g., "2 + 2 = 4").
b. Procedural Memory
• Stores skills as production rules (IF–THEN pairs).
• Rules are matched against the current situation and fire when their conditions are met.
3. Production System
• Only one rule fires at a time — making ACT-R a serial processor at the core.
4. Buffers
Think of buffers as communication windows between the central processor and the modules.
• The procedural module reads from these buffers and uses that data to trigger rules.
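A toy sketch of this match-and-fire cycle in Python; the buffers, rule names, and the arithmetic goal are invented, and real ACT-R productions are written in ACT-R's own rule language rather than Python:

```python
# Toy sketch of an ACT-R-style cycle: rules read buffer contents and
# exactly one matching rule fires per cycle (serial core).
buffers = {"goal": "add 2 3", "retrieval": None}

def rule_request_fact(b):
    # IF the goal is to add and nothing is retrieved, THEN request the fact.
    if b["goal"] == "add 2 3" and b["retrieval"] is None:
        b["retrieval"] = "2 + 3 = 5"     # request answered by declarative memory
        return True
    return False

def rule_answer(b):
    # IF the fact has arrived, THEN produce the answer.
    if b["goal"] == "add 2 3" and b["retrieval"] == "2 + 3 = 5":
        b["goal"] = "answer 5"
        return True
    return False

rules = [rule_request_fact, rule_answer]
for cycle in range(3):
    for rule in rules:
        if rule(buffers):                # first matching rule fires...
            break                        # ...and only one per cycle
    print(cycle, buffers)
```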
ACT-R reproduces human performance data such as:
• Reaction times
• Learning curves
• Activation-based retrieval: information becomes easier to retrieve the more often it's used.
Example — typing a word: declarative memory retrieves the word's spelling, and production rules send each letter to the motor module's buffer for execution.
Applications
• Human-Computer Interaction (HCI) (to predict how users interact with systems)
Brain-Inspired Design
• ACT-R's components are mapped onto brain regions. For example: the procedural system corresponds to the basal ganglia, and the buffers to cortical areas.
Summary Table
Component | Description
Modules | Specialized processors (visual, motor, declarative, goal)
Buffers | Temporary stores linking modules to the production system
Declarative memory | Facts stored as chunks
Procedural memory | Skills stored as production rules
Production system | Matches rules against buffers; one rule fires per cycle
What is SOAR?
SOAR (pronounced "soar") is a general cognitive architecture designed to model intelligent behavior —
reasoning, learning, and problem-solving — in a way that mimics human cognition.
Originally developed in the early 1980s by John Laird, Allen Newell, and Paul Rosenbloom, SOAR is built
around the idea of a unified theory of cognition — a framework that can explain and simulate a broad
spectrum of mental activities, from perception to decision-making.
SOAR primarily uses procedural memory — its core reasoning mechanism is built around production rules.
• When conditions in working memory match, the corresponding actions are applied.
Example (schematic): IF the goal is "find object X" AND no location is known, THEN propose an operator to search the next room.
4. Decision Cycle
Each cycle: input → elaboration (all matching rules fire) → operator proposal → decision (select one operator) → application → output.
5. Subgoaling (Impasses)
When SOAR lacks the knowledge to select an operator, an impasse occurs and a subgoal is created to resolve it.
6. Learning: Chunking
• After solving a subgoal, SOAR stores a new rule (a chunk) that allows it to solve similar problems
faster in the future.
Example:
• First time: "If X happens, figure out how to do Y" → forms a subgoal.
• Next time: the learned chunk fires directly, with no subgoal needed.
7. Symbolic Representation
SOAR is a symbolic system — it manipulates symbols rather than using numerical weights (like neural
networks).
Worked Example (schematic):
1. Goal: find object X.
2. No rule says where X is → impasse → a subgoal is created.
3. Within the subgoal, SOAR searches room A (not found), then room B (found).
4. The subgoal is resolved and the result returned.
5. SOAR learns a new rule: “If searching for object X, check room B first.”
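A toy sketch of that learning pattern; the goal string and the "slow" search are invented stand-ins for SOAR's actual subgoaling machinery:

```python
# Toy sketch of chunking: when no rule answers a goal, solve it the slow
# way (a stand-in for subgoal search) and store a new rule for next time.
rules = {}   # goal -> answer: the learned "chunks"

def solve_slowly(goal):
    # Stand-in for subgoaling, e.g., checking rooms one by one.
    return "room B" if goal == "find object X" else None

def soar_step(goal):
    if goal in rules:
        return rules[goal], "rule fired"
    answer = solve_slowly(goal)          # impasse -> subgoal
    rules[goal] = answer                 # chunk the result as a new rule
    return answer, "subgoal + chunk learned"

print(soar_step("find object X"))   # ('room B', 'subgoal + chunk learned')
print(soar_step("find object X"))   # ('room B', 'rule fired') -- faster path
```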
Applications
• Robotics
• Cognitive modeling
Summary Table
Feature | SOAR | ACT-R
Parallelism | Single production fires per cycle | Single production, with modular buffers
What is OpenCog?
OpenCog is an open-source cognitive architecture aimed at achieving AGI — intelligence that is flexible,
adaptable, and human-like across many domains.
Originally developed by Ben Goertzel and colleagues, OpenCog integrates multiple AI approaches:
• Symbolic reasoning
• Neural networks
• Probabilistic logic
• Evolutionary learning
OpenCog is a modular system with several core components working together. Here's a breakdown:
1. AtomSpace (Knowledge Store)
• Knowledge is stored as atoms (nodes and links) in a hypergraph called the AtomSpace.
• Each atom has a truth value (confidence & strength) and attention value.
Think of it as a giant, structured memory: a semantic network that stores facts, goals, actions, etc.
Example:
InheritanceLink(Dog, Animal)
EvaluationLink(color, Dog, Brown)
This means "a Dog is an Animal" and "Dog has the color Brown".
o "If most mammals have hearts, and dogs are mammals → dogs probably have hearts."
3. MOSES (Meta-Optimizing Semantic Evolutionary Search)
• Evolves small programs and rules from data (evolutionary learning).
Example: Given data about plant growth, MOSES could evolve a rule like "IF sunlight is high AND water is adequate THEN growth is high" (illustrative).
5. Attention Allocation
Allocates limited processing resources to the most important atoms, using their attention values.
6. OpenPsi / CogEmotions
Models motivation, emotions, and drives (inspired by Psi theory and psychology).
Example (PLN inference): given
InheritanceLink(Dog, Mammal)
InheritanceLink(Mammal, Animal)
PLN can infer InheritanceLink(Dog, Animal), with a confidence derived from the premises.
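A simplified stand-in for this inference in Python; the link strengths and the way confidences are combined are illustrative, not PLN's actual formulas:

```python
# Simplified stand-in for PLN-style inference over inheritance links:
# chain Dog->Mammal and Mammal->Animal, multiplying confidences.
# Strengths are invented; real PLN formulas are more involved.
links = {("Dog", "Mammal"): 0.99, ("Mammal", "Animal"): 0.98}

def infer(a, c):
    # Look for a middle concept b with links a->b and b->c.
    for (x, b), s1 in links.items():
        if x == a and (b, c) in links:
            return s1 * links[(b, c)]    # naive confidence combination
    return None

print(infer("Dog", "Animal"))   # 0.9702 -> InheritanceLink(Dog, Animal) inferred
```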
OpenCog powers aspects of Sophia, the humanoid robot from Hanson Robotics.
Use Cases
• General AI research
• Educational AI tutors
• Commonsense reasoning
Summary
Component | Description
AtomSpace | Hypergraph memory of atoms with truth and attention values
PLN | Probabilistic reasoning over the AtomSpace
MOSES | Evolutionary program/rule learning
Attention Allocation | Focuses processing resources on important atoms
OpenPsi / CogEmotions | Motivation, emotion, and drives
What is CopyCat?
CopyCat is a cognitive model developed in the late 1980s and 1990s by Douglas Hofstadter, Melanie
Mitchell, and others at Indiana University.
It was built to explore how humans perceive patterns, make analogies, and exhibit flexible, context-
sensitive reasoning — especially in ambiguous or creative tasks.
CopyCat is not focused on problem-solving like SOAR or OpenCog, but on fluid, emergent thinking —
modeling how people "see" analogies.
CopyCat was designed for a very specific but powerful type of problem:
Example: If abc changes to abd, what does ijk change to? Most people answer ijl.
Why? Because:
• In abc → abd, the last letter c was changed to d (i.e., its successor).
• Applying the same abstract rule to ijk turns k into l.
Answering this requires:
• Pattern recognition
• Contextual judgment
Architecture Overview
CopyCat’s architecture is inspired by the mind’s fluidity, using a society of agents metaphor, not a central
decision-maker.
Core Components:
1. Workspace
This is like short-term memory — a visual/structural layout of the strings (e.g., abc, abd, ijk) and any
discovered groupings, mappings, or relationships.
2. Slipnet
A network of concepts (e.g., "successor", "same", "opposite") with spreading activation; closely related concepts can "slip" into one another under pressure.
3. Codelets
Small, simple agents that each perform one local task — noticing a relation, building or breaking a structure. No master planner — just many small interactions that lead to emergent understanding.
4. Coderack
• Uses a probabilistic selection: codelets with higher urgency or relevance are more likely to run.
5. Temperature
A global measure of how organized the workspace is: high temperature means little structure and more random exploration; low temperature means a coherent interpretation is forming.
Problem:
abc → abd
ijk → ?
Step-by-step:
1. The Workspace is seeded with the strings abc, abd, and ijk.
2. Codelets explore abc → abd: notice “c” changed to “d” → "successor" concept activated in the Slipnet.
3. Mapping codelets align abc with ijk; the rule "replace the rightmost letter with its successor" is applied to ijk, yielding ijl.
And just like a human, it could have considered other weird options (like changing all letters), but based
on context, it settles on the simplest elegant transformation.
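For contrast with the emergent architecture, the final rule itself is trivial to write down; this toy sketch mimics only the transformation, not how CopyCat finds it:

```python
# Toy version of the transformation CopyCat settles on here:
# "replace the rightmost letter with its successor." This mimics only the
# rule, not CopyCat's emergent codelet architecture.
def successor_rule(s):
    return s[:-1] + chr(ord(s[-1]) + 1)

print(successor_rule("abc"))  # abd (re-deriving the source change)
print(successor_rule("ijk"))  # ijl (applying the same rule to the target)
# Note: on "xyz" this naive rule breaks (z has no successor) -- exactly
# the trickiness of the next example.
```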
Other Example
abc → abd
xyz → ?
Now the mapping is trickier — some people answer xya, or xy{next of z}.
CopyCat can explore several of these mappings; which one it settles on depends on which Slipnet concepts become active and how the temperature evolves.
Applications of CopyCat
Though it's not a general-purpose AI, CopyCat is inspirational and foundational for work on fluid concepts, emergent perception, and analogy-making in AI.
Summary
Component | Description
Workspace | Short-term structures built over the letter strings
Slipnet | Concept network with spreading activation and slippage
Codelets | Small competing agents that build or break structures
Coderack | Probabilistic pool of waiting codelets
Temperature | Global measure of disorder that guides randomness
What are Memory Networks?
Memory Networks (introduced by Weston, Chopra, and Bordes at Facebook AI Research) are neural models equipped with an explicit, addressable memory.
Their purpose is to store and retrieve facts and reason over them to produce answers — especially in question-answering (QA) and dialogue systems.
Think of them as a hybrid between neural nets and a searchable memory bank.
Core Idea
Memory Networks pair a neural controller with a large external memory: the network learns what to store, which memories to attend to for a given question, and how to turn the retrieved content into an answer.
Architecture Overview
Component | Description
Input feature map (I) | Converts raw input (e.g., a sentence) into an internal representation
Generalization (G) | Updates the memory given the new input
Output feature map (O) | Selects the relevant memories given the question
Response (R) | Converts the output features into the final answer
Flow in a QA Task
Example:
Input (Story):
1. Mary went to the kitchen.
2. John went to the garden.
Question:
Where is Mary?
Step-by-Step:
1. Store Story: each sentence is encoded and written to a memory slot.
2. Input Question: "Where is Mary?" is encoded the same way.
3. Memory Addressing:
o The model attends to the most relevant memories (e.g., sentence 1).
4. Extract Answer:
"kitchen"
Story:
1. Joe picked up the milk.
2. Joe travelled to the office.
3. Joe left the milk.
Question: Where is the milk?
Reasoning Path:
• Find who last held the milk (Joe), then find where Joe went (the office).
• Answer: office
This is a form of transitive inference, and it’s a big deal in NLP tasks!
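A minimal sketch of the attention step at the heart of this flow; the hand-made embedding vectors stand in for the learned ones:

```python
import math

# Minimal sketch of memory-network attention: score each memory against
# the question embedding, softmax the scores, read out the winner.
# The vectors below are hand-made stand-ins for learned embeddings.
memories = {
    "Mary went to the kitchen.": [1.0, 0.1, 0.0],
    "John went to the garden.":  [0.0, 0.2, 1.0],
}
question = [0.9, 0.2, 0.1]    # embedding of "Where is Mary?"

scores = {m: sum(a * b for a, b in zip(v, question)) for m, v in memories.items()}
exps = {m: math.exp(s) for m, s in scores.items()}
total = sum(exps.values())
attention = {m: round(e / total, 3) for m, e in exps.items()}
print(attention)
print(max(attention, key=attention.get))   # the Mary/kitchen memory wins
```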
Limitations of the original model:
• Required supervision at each step (e.g., which memory lines are relevant)
Memory Networks were evaluated using bAbI, a set of synthetic QA tasks like:
• Two/three-step reasoning
• Yes/No questions
Applications
• Reading Comprehension
• Knowledge-based reasoning
Comparison with other architectures:
Aspect | Memory Networks | Attention-based models | RNNs/LSTMs
Memory | Explicit, structured memory | Implicit attention over input | Implicit memory via state
Reasoning Steps | Multi-hop possible | Limited unless externally guided | Sequential reasoning only
Summary
Component | Role
Memory | Explicit store of facts (one slot per sentence)
I / G | Encode inputs and write them to memory
O | Attend to the memories relevant to the question
R | Produce the final answer