
CCS337-COGNITIVE SCIENCE UNIT-IV

UNIT IV INFERENCE MODELS OF COGNITION 6

Generative Models – Conditioning – Causal and statistical dependence – Conditional dependence – Data Analysis – Algorithms for Inference.

PART-A

1. What are Generative Models?


Generative models are machine learning models that create new data similar
to the data they were trained on. They are a type of artificial intelligence (AI) that
use neural networks to learn patterns in data and generate new content.
2. What are Discriminative models?

Discriminative models are used in supervised learning tasks in which the


labels or categories of the data are known. Many discriminative models are
classifiers that attempt to identify the relationships between features and labels and
then assign class labels to new data based on the conditional probability of those
labels.

3. Differentiate Generative and Discriminative models.


• In general, Generative models can generate new data instances and
Discriminative models discriminate between different kinds of data instances.

More formally, given a set of data instances X and a set of labels Y:

• Generative models capture the joint probability p(X, Y), or just p(X) if there are
no labels.

• Discriminative models capture the conditional probability p(Y | X).

4. Define Clustering models.

Clustering models are used in unsupervised learning tasks to group records


within a data set into clusters. They can identify similar items and also learn what
separates those items from other groups in the dataset.

5. Define Predictive models.

Predictive models process historical data to make predictions about future


events using machine learning and statistical analysis. They are often used to help
business leaders make data-driven decisions. Predictive models also power

predictive text services, facial recognition software, fraud detection and supply
chain management solutions.

6. List the Types of generative models.

The following are prominent types of generative models:

o Generative adversarial network (GAN)


o Variational autoencoders (VAEs)

o Autoregressive models

o Bayesian networks

o Diffusion models

7. Define Generative adversarial network (GAN).


• This model is based on ML and deep neural networks.
• In a GAN, two neural networks -- a generator and a discriminator -- compete against each other to provide more accurate predictions and realistic data.
8. Define Variational autoencoders (VAEs).

VAEs are generative models based on neural network autoencoders, which


are composed of two separate neural networks -- encoders and decoders. They are among the most efficient and practical methods for developing generative models.

9. Define Bayesian networks with example.

• Bayesian networks are graphical models that depict probabilistic relationships


between variables. They excel in situations where understanding cause and
effect is vital. For instance, in medical diagnostics, a Bayesian network can
effectively assess the probability of a disease based on observed symptoms.

10. List the Benefits of generative models.


Generative models offer the following advantages, which make them valuable in
various applications:
• Data augmentation
• Data distribution
• Anomaly detection
• Flexibility

• Cost optimization
• Handling of missing data

11. List the Challenges of generative models.


Generative models provide several advantages, but they also have the
following drawbacks:
• Computational requirements
• Quality of generated outputs
• Lack of interpretability
• Overfitting
• Security
• Black box nature
• Mode collapse

12. Define Inference models of cognition


Inference models of cognition refer to theoretical frameworks that attempt to
explain how the human mind processes information and makes decisions. These
models focus on the cognitive processes involved in drawing conclusions, making
judgments, and forming beliefs based on available information.

13. Define Conditioning in Inference Models of Cognition


Conditioning in inference models of cognition refers to how cognitive
systems, including the brain, learn from experiences and use these experiences to
make predictions or inferences about the world. These models often focus on how
humans and animals adapt their behavior based on past events, either by
reinforcement or association.
14. Define Bayesian Inference with a neat diagram.
• Bayesian inference models suggest that the human mind operates based on
principles of probabilistic reasoning.
• Individuals update their beliefs and make decisions by combining prior
knowledge or beliefs with new evidence, following Bayes' theorem.


15. Define Heuristics and Biases with an example.


• Heuristics are mental shortcuts or rules of thumb that people use to make
judgments and decisions quickly and efficiently.
• Biases refer to systematic deviations from rational or optimal decision-
making, which can arise from the use of heuristics.
• Examples of heuristics and biases include the availability heuristic, the
representativeness heuristic, and the anchoring and adjustment heuristic.

16. Difference between Heuristics and Biases.

Heuristics are the mental shortcuts themselves: fast strategies that trade some accuracy for speed and effort. Biases are the systematic errors in judgment that can arise when those shortcuts are applied inappropriately. In short, a heuristic is a process for reaching a judgment, while a bias is a predictable deviation in the outcome.

17. What is Trial and Error?

• Trial and error is another type of heuristic in which people use a number of
different strategies to solve something until they find what works. Examples
of this type of heuristic are evident in everyday life.


• People use trial and error when playing video games, finding the fastest
driving route to work, or learning to ride a bike (or any new skill).

18. Define Causal Reasoning.


Causal reasoning is the process of identifying the relationship between a
cause and its effect. It's a fundamental cognitive process that's used in many areas
of life, including learning, decision making, and regulating emotions.

19. Define Causal Dependence.

Probabilistic programs encode knowledge about the world in the form of


causal models, and it is useful to understand how their function relates to their
structure by thinking about some of the intuitive properties of causal relations.
Causal relations are local, modular, and directed.

20. Define Statistical Dependence.

One often hears the warning, “correlation does not imply causation”. By
“correlation” we mean a different kind of dependence between events or functions—
statistical dependence.

21. What are the four types of dependencies in causal inference?

There are four types of dependencies in causal inference. They are as follows:

➢ Unconditionally Independent

➢ Unconditionally Dependent

➢ Conditionally Independent

➢ Conditional dependence

22. Define Data Analysis.


Data analysis is the process of examining data to find patterns and trends,
and to draw conclusions. It can help organizations make better decisions, improve
efficiency, and predict future events.


23. List the steps of Data Analysis.


1) Data Collection and Preprocessing
2) Exploratory Data Analysis
3) Statistical Inference
4) Model Evaluation and Comparison
5) Validation and Generalization
6) Sensitivity Analysis and Robustness


PART-B

1. Explain in detail about Generative Models.


Generative Models

Generative models are machine learning models that create new data similar
to the data they were trained on. They are a type of artificial intelligence (AI) that
use neural networks to learn patterns in data and generate new content.

Working of Generative Models


• Generative models work by identifying patterns and distributions in their
training data and then applying those findings to the generation of new data
based on user inputs. The training process teaches the model to recognize the
joint probability distributions of features in the training dataset. Then, the model
draws on what it has learned to create new data samples that are similar to its
training data.
• Generative models are typically trained with unsupervised learning techniques: they are fed a mass of unlabeled data and sort through it by themselves.
The models figure out the distribution of the data, which is how they cultivate
the internal logic they then use to create new data.
• During training, the model applies a loss function to measure the gap between
real-world outcomes and the model’s predictions. The goal of training is to
minimize the loss function, bringing generated outputs as close to reality as
possible.
• Content generation is a probabilistic process. Generative models do
not know things in the same way that humans do. Rather, a generative model
uses complicated mathematical equations to predict the most likely output
based on the rules it learned during training.

Generative models versus other model types

Generative models attempt to generate new data of a certain class.


Discriminative models separate items into known groups, while clustering models
figure out how to group items in a dataset. Predictive models make estimations
about future occurrences or states based on historical data.

Discriminative models

• Discriminative models are used in supervised learning tasks in which the


labels or categories of the data are known. Many discriminative models are

classifiers that attempt to identify the relationships between features and labels
and then assign class labels to new data based on the conditional probability of
those labels.
• For example, a discriminative model trained to differentiate between images of
fish and birds can guess whether images are more likely to be fish or birds.
Image recognition, a type of classification in machine learning, is a common
application for discriminative models.
• While generative models and discriminative models have distinct differences,
they often work together, such as in a generative adversarial network (GAN).
• In general, Generative models can generate new data instances and Discriminative models discriminate between different kinds of data instances.

Figure 4.1 Difference between Discriminative and Generative models

More formally, given a set of data instances X and a set of labels Y:

• Generative models capture the joint probability p(X, Y), or just p(X) if there are
no labels.

• Discriminative models capture the conditional probability p(Y | X).

• Refer figure 4.1 for the difference between Discriminative and Generative models
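As a toy illustration (the numbers here are made up), suppose each image is reduced to a single feature X ∈ {fins, wings} and the label is Y ∈ {fish, bird}, with joint probabilities p(fins, fish) = 0.45, p(fins, bird) = 0.05, p(wings, fish) = 0.05 and p(wings, bird) = 0.45. A generative model learns this full joint table and can sample new (X, Y) pairs from it; a discriminative model only needs the conditional, e.g. p(fish | fins) = 0.45 / (0.45 + 0.05) = 0.9, which is enough to classify an image but not to generate one.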

Clustering models

• Clustering models are used in unsupervised learning tasks to group records within
a data set into clusters. They can identify similar items and also learn what
separates those items from other groups in the dataset.

• Clustering models lack prior knowledge of the items in the dataset, including
knowledge of how many groups there might be. A market researcher might use a
clustering model to identify buyer personas within their target demographics.


Predictive models

• Predictive models process historical data to make predictions about future events
using machine learning and statistical analysis. They are often used to help
business leaders make data-driven decisions. Predictive models also power
predictive text services, facial recognition software, fraud detection and supply
chain management solutions.

Types of generative models

The following are prominent types of generative models:

o Generative adversarial network (GAN)


o Variational autoencoders (VAEs)

o Autoregressive models

o Bayesian networks

o Diffusion models

Generative adversarial network (GAN)

• This model is based on ML and deep neural networks. In it, two neural networks -- a generator and a discriminator -- compete against each other to provide more accurate predictions and realistic data.


Figure 4.2 Generative adversarial network training method

• Figure 4.2 shows the Generative adversarial network training method.


• A GAN is an unsupervised learning technique that makes it possible to
automatically find and learn different patterns in input data.
• One of its main uses is image-to-image translation, which can change daylight
photos into nighttime photos.
• GANs are also used to create incredibly lifelike renderings of a variety of objects,
people and scenes that are challenging for even a human brain to identify as
fake.

Variational autoencoders (VAEs)

• Similar to GANs, VAEs are generative models based on neural network


autoencoders, which are composed of two separate neural networks -- encoders
and decoders. They are among the most efficient and practical methods for developing generative models.

• A Bayesian inference-based probabilistic graphical model, VAE seeks to


understand the underlying probability distribution of the training data so that
it can quickly sample new data from that distribution. In VAEs, the encoders

aim to represent data more effectively, whereas the decoders regenerate the
original data set more efficiently. Popular applications of VAEs include anomaly
detection for predictive maintenance, signal processing and security
analytics applications.

Autoregressive models

• Autoregressive models predict future values based on historical values and can
easily handle a variety of time-series patterns. These models predict the future
values of a sequence based on a linear combination of the sequence's past
values.

• Autoregressive models are widely used in forecasting and time series analysis,
such as stock prices and index values. Other use cases include modeling and
forecasting weather patterns, forecasting demand for products using past sales
data and studying health outcomes and crime rates.
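Formally, an autoregressive model of order p, written AR(p), expresses this linear combination as

x_t = c + φ1·x_(t-1) + φ2·x_(t-2) + ... + φp·x_(t-p) + ε_t

where c is a constant, the coefficients φ1, ..., φp are learned from past observations, and ε_t is a random noise term.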

Bayesian networks

• Bayesian networks are graphical models that depict probabilistic relationships


between variables. They excel in situations where understanding cause and
effect is vital. For instance, in medical diagnostics, a Bayesian network can
effectively assess the probability of a disease based on observed symptoms.

Diffusion models

• Diffusion models create data by progressively introducing noise and then


learning to reverse this process.

• They're instrumental in understanding how phenomena evolve and are


particularly useful for analyzing situations such as the spread of rumors
in social networks or the transmission of infectious diseases within a
population.

Benefits of generative models

Generative models offer the following advantages, which make them valuable in
various applications:


• Data augmentation. Generative models can augment data sets by


creating synthetic data, which is valuable when real-world labeled
data is scarce. This improves the training of other ML models.
• Data distribution. Generative models provide insights into the
underlying distribution of the data. By modeling how data is
generated, they can help researchers and practitioners understand
the relationships and dependencies within the data, leading to better
decision-making and analysis.
• Anomaly detection. Generative models detect anomalies by learning
the distribution of normal data during the training process. They
generate new data points based on this distribution and flag any
significant deviations as anomalies. This approach effectively
identifies unusual events without needing labeled anomaly examples,
making it useful for fraud detection and equipment monitoring
applications.
• Flexibility. Generative models can be applied to various learning
scenarios, such as unsupervised, semi-supervised and supervised
learning, making them adaptable to a wide range of tasks.
• Cost optimization. Generative models reduce manual production
and research costs across industries by automating content creation.
For example, in manufacturing, generative models optimize designs,
simulate production processes and predict maintenance needs, which
cuts down on time, resources and operational costs.
• Handling of missing data. Generative models are effective in
handling incomplete data sets by inferring missing values based on
the learned distribution, enhancing analyses and predictions.

Challenges of generative models

Generative models provide several advantages, but they also have the following
drawbacks:
• Computational requirements. Generative AI systems often require a large
amount of data and computational power, which some organizations might find
prohibitively expensive and time-consuming.
• Quality of generated outputs. Generated outputs from generative models
might not always be accurate or free of errors. This could be caused by several

things, including a shortage of data, inadequate training or an overly


complicated model.
• Lack of interpretability. It might be challenging to comprehend how
predictions are being made by generative AI models, as these models can be
opaque and complicated. Ensuring the model is making impartial and fair
decisions can be challenging at times.
• Overfitting. Overfitting can occur in generative models, resulting in poor
generalization performance and incorrectly generated samples. It happens when
a model is unable to generalize and instead fits too closely to the training data
set. This can happen for a variety of reasons, including the training data set
being too small and lacking enough data samples to adequately represent all
potential input data values.
• Security. Generative AI systems can be used to disseminate false information
or propaganda by generating realistic and convincing fake videos, images and
text.
• Black box nature. Generative models, especially those based on deep learning,
often operate as black boxes, making it difficult to understand their decision-
making processes. This lack of interpretability can hinder trust and adoption in
critical applications, such as healthcare or finance, where understanding the
rationale behind generated outputs is crucial.
• Mode collapse. Mode collapse occurs when a generative model, such as a GAN,
fails to capture the full diversity of the training data. Instead, it becomes stuck
generating a limited set of similar outputs, often referred to as modes. This can
lead to a lack of variety and creativity in the generated content.

2. What is Inference models of cognition? Explain in detail about Conditioning


in Inference Models of Cognition.
Inference models of cognition
Inference models of cognition refer to theoretical frameworks that attempt to
explain how the human mind processes information and makes decisions. These
models focus on the cognitive processes involved in drawing conclusions, making
judgments, and forming beliefs based on available information.
Conditioning in Inference Models of Cognition
Conditioning in inference models of cognition refers to how cognitive systems,
including the brain, learn from experiences and use these experiences to make
predictions or inferences about the world. These models often focus on how humans

and animals adapt their behavior based on past events, either by reinforcement or
association.
In the context of cognitive science and artificial intelligence, conditioning can be
divided into two primary types:
1. Bayesian Inference:
• Bayesian inference models suggest that the human mind operates based on
principles of probabilistic reasoning.
• Individuals update their beliefs and make decisions by combining prior
knowledge or beliefs with new evidence, following Bayes' theorem.
• This allows for flexible and adaptive decision-making, where beliefs can be
revised as new information becomes available.
• Example of Bayesian inference with a prior distribution, a posterior
distribution, and a likelihood function as shown in the figure 4.3. The
prediction error is the difference between the prior expectation and the peak
of the likelihood function (i.e., reality). Uncertainty is the variance of the
prior. Noise is the variance of the likelihood function.

Figure 4.3 Example of Bayesian inference with a prior distribution, a posterior distribution, and a likelihood function.
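For reference, Bayes' theorem itself states

P(H | E) = P(E | H) · P(H) / P(E)

where P(H) is the prior probability of hypothesis H, P(E | H) is the likelihood of the evidence E under H, P(E) is the overall probability of the evidence, and P(H | E) is the posterior probability after observing E. In the figure's terms, the posterior combines the prior with the likelihood, and the prediction error measures how far the likelihood's peak falls from the prior expectation.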

2. Heuristics and Biases


• Heuristics are mental shortcuts or rules of thumb that people use to make
judgments and decisions quickly and efficiently.
• Biases refer to systematic deviations from rational or optimal decision-
making, which can arise from the use of heuristics.
• Examples of heuristics and biases include the availability heuristic, the
representativeness heuristic, and the anchoring and adjustment heuristic.


Types of Heuristics

There are many different kinds of heuristics.

Availability

• The availability heuristic involves making decisions based upon how easy it
is to bring something to mind. When you are trying to make a decision, you
might quickly remember a number of relevant examples.
• For example, imagine you are planning to fly somewhere on vacation. As you
are preparing for your trip, you might start to think of a number of recent
airline accidents. You might feel like air travel is too dangerous and decide
to travel by car instead. Because those examples of air disasters came to
mind so easily, the availability heuristic leads you to think that plane crashes
are more common than they really are.

Familiarity

• The familiarity heuristic refers to how people tend to have more favorable
opinions of things, people, or places they've experienced before as opposed
to new ones. In fact, given two options, people may choose something they're
more familiar with even if the new option provides more benefits.

Representativeness

• The representativeness heuristic involves making a decision by comparing


the present situation to the most representative mental prototype. When
you are trying to decide if someone is trustworthy, you might compare
aspects of the individual to other mental examples you hold.
• A soft-spoken older woman might remind you of your grandmother, so you
might immediately assume she is kind, gentle, and trustworthy. However,
this is an example of a heuristic bias, as you can't know whether someone is trustworthy based on their age alone.

Affect

The affect heuristic involves making choices that are influenced by an


individual's emotions at that moment. For example, research has shown that
people are more likely to see decisions as having benefits and lower risks
when in a positive mood.

Anchoring

The anchoring bias involves the tendency to be overly influenced by the first
bit of information we hear or learn. This can make it more difficult to
consider other factors and lead to poor choices. For example, anchoring bias
can influence how much you are willing to pay for something, causing you
to jump at the first offer without shopping around for a better deal.


Scarcity

Scarcity is a heuristic principle in which we view things that are scarce or


less available to us as inherently more valuable. Marketers often use the
scarcity heuristic to influence people to buy certain products. This is why
you'll often see signs that advertise "limited time only," or that tell you to "get
yours while supplies last."

Trial and Error

• Trial and error is another type of heuristic in which people use a number of
different strategies to solve something until they find what works. Examples
of this type of heuristic are evident in everyday life.
• People use trial and error when playing video games, finding the fastest
driving route to work, or learning to ride a bike (or any new skill).

3. Dual-Process Theory
• Dual-process theory proposes that there are two distinct cognitive systems
involved in decision-making and reasoning.

System 1 (Fast, Intuitive, Automatic Thinking)

• Operates quickly and effortlessly.


• Based on intuition, instincts, and heuristics (mental shortcuts).
• Requires little cognitive effort and is often subconscious.
• Examples: Recognizing faces, reacting to danger, completing common
phrases, making snap judgments.


System 2 (Slow, Analytical, Deliberate Thinking)

• Requires conscious effort and logical reasoning.


• Involves careful evaluation, critical thinking, and problem-solving.
• Used in complex decision-making and tasks requiring focused attention.
• Examples: Solving a math problem, planning a trip, evaluating arguments.
• Example: Driving a car -- System 1 lets an experienced driver steer and brake without consciously thinking through each action, as shown in figure 4.4.

Figure 4.4 Dual-Process Theory


4. Mental Models:

Figure 4.5 Mental Models


• Mental models are internal representations of the world that individuals use
to understand and reason about their environment. Refer figure 4.5.
• These models are formed based on prior knowledge, experiences, and beliefs,
and they guide how people interpret and make sense of new information.


• Mental models can be incomplete, biased, or inaccurate, leading to errors in


judgment and decision-making.
5. Causal Reasoning:
• Causal reasoning is the process of identifying the relationship between a
cause and its effect. It's a fundamental cognitive process that's used in many
areas of life, including learning, decision making, and regulating emotions.
• Causal reasoning involves understanding the relationships between causes
and effects, and using this knowledge to make inferences and predictions.
• Individuals often rely on causal models to explain and understand the world
around them, and these models can influence their decision-making and
problem-solving.
• Biases and errors in causal reasoning, such as the illusion of causality or
the tendency to overestimate the strength of causal relationships, can lead
to flawed inferences and decisions.
➢ Inference models of cognition provide valuable insights into the cognitive
processes underlying human decision-making and problem-solving.
➢ By understanding these models, researchers and practitioners can develop more
effective strategies for improving decision making, reducing cognitive biases, and
enhancing human performance in various domains.

3. Explain in detail about Causal and statistical dependence.

In the context of cognitive inference models, "causal dependence" refers to a true cause-and-effect relationship between two variables, meaning that one variable directly influences the other. "Statistical dependence", by contrast, simply indicates a correlation between variables: knowing the value of one variable provides information about the other, but does not necessarily imply a causal link. In short, "correlation does not equal causation."
Causal Dependence
• Probabilistic programs encode knowledge about the world in the form of causal
models, and it is useful to understand how their function relates to their
structure by thinking about some of the intuitive properties of causal relations.
• Causal relations are local, modular, and directed.
• They are modular in the sense that any two arbitrary events in the world are
most likely causally unrelated, or independent.


• If they are related, or dependent, the relation is only very weak and liable to be
ignored in our mental models.
• Causal structure is local in the sense that many events that are related are not
related directly:
• They are connected only through causal chains of several steps, a series of
intermediate and more local dependencies.
• And the basic dependencies are directed: when we say that A causes B, it means
something different than saying that B causes A.
• The causal influence flows only one way along a causal relation—we expect that
manipulating the cause will change the effect, but not vice versa—
but information can flow both ways—learning about either event will give us
information about the other.
• What does it mean to believe that A depends causally on B?
• Viewing cognition through the lens of probabilistic programs, the most basic
notions of causal dependence are in terms of the structure of the program and
the flow of evaluation (or “control”) in its execution.
• Expression A causally depends on expression B if it is necessary to evaluate B
in order to evaluate A. (More precisely, expression A depends on expression B if
it is ever necessary to evaluate B in order to evaluate A.)

Example

• For instance, in this program A depends on B but not on C (the final expression
depends on both A and C):
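A minimal WebPPL sketch consistent with this description (the flip weights are illustrative, not from the original program) is:

var C = flip()
var B = flip()
var A = B ? flip(0.1) : flip(0.4) // evaluating A requires first evaluating B
A || C // the final expression uses both A and C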

• Note that causal dependence order is weaker than a notion of ordering in time—
one expression might happen to be evaluated before another in time (for
instance C before A), but without the second expression requiring the first. (This
notion of causal dependence is related to the notion of flow dependence in the
programming language literature.)
• For example, consider a simpler variant of our medical diagnosis scenario:
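A WebPPL sketch consistent with the variables discussed below (the probabilities are illustrative) is:

var smokes = flip(0.2)
var lungDisease = (smokes && flip(0.1)) || flip(0.001)
var cold = flip(0.02)
var cough = (cold && flip(0.5)) || (lungDisease && flip(0.5)) || flip(0.001)
var fever = (cold && flip(0.3)) || flip(0.01)
display([cough, fever])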


• Here, cough depends causally on both lungDisease and cold, while fever depends causally on cold but not lungDisease.
• Cough depends causally on smokes but only indirectly: although cough does not call smokes directly, in order to evaluate whether a patient coughs, we first have to evaluate the expression lungDisease, which must itself evaluate smokes.
• We haven’t made the notion of “direct” causal dependence precise: do we want
to say that cough depends directly on cold, or only directly on the
expression (cold && flip(0.5)) || ...?
• This can be resolved in several ways that all result in similar intuitions.
• For instance, we could first re-write the program into a form where each
intermediate expression is named (called A-normal form) and then say direct
dependence is when one expression immediately includes the name of another.


Detecting Dependence Through Intervention

• The causal dependence structure is not always immediately clear from examining a program, particularly where there are complex function calls.

• Another way to detect (or, according to some philosophers such as Jim Woodward, to define) causal dependence is more operational, in terms of "difference making": if we manipulate A, does B tend to change? By manipulate here we don't mean an assumption in the sense of condition.

• Instead we mean actually edit, or intervene on, the program in order to make an
expression have a particular value independent of its (former) causes.

• If setting A to different values in this way changes the distribution of values of B,


then B causally depends on A.
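A minimal WebPPL sketch of such an intervention, reusing the illustrative A, B, C model from above (viz is available in the in-browser WebPPL editor), is:

var BdoA = function(Aval) {
  return Infer({model: function() {
    var C = flip()
    var A = Aval // intervention: A is set directly, its former causes are ignored
    var B = A ? flip(0.1) : flip(0.4)
    return B
  }})
}
viz(BdoA(true)) // distribution of B when we force A = true
viz(BdoA(false)) // distribution of B when we force A = false

If the two visualized distributions differ, B causally depends on A.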


Statistical Dependence

o One often hears the warning, “correlation does not imply causation”. By
“correlation” we mean a different kind of dependence between events or
functions—statistical dependence.
o A and B are statistically dependent, if learning information about A tells us
something about B, and vice versa.
o In the language of webppl: using condition to make an assumption about A
changes the value expected for B.
o Statistical dependence is a symmetric relation between events referring to
how information flows between them when we observe or reason about them.
(If conditioning on A changes B, then conditioning on B also changes A.
Why?)
o The fact that we need to be warned against confusing statistical and causal
dependence suggests they are related, and indeed, they are.
o In general, if A causes B, then A and B will be statistically dependent. (One
might even say the two notions are “causally related”, in the sense that
causal dependencies give rise to statistical dependencies.)
o Diagnosing statistical dependence using condition is similar to diagnosing
causal dependence through intervention.
o Condition on various values of the possible statistical dependent, here A, and
see whether it changes the distribution on the target, here B:
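A matching WebPPL sketch, using condition on the same illustrative model instead of intervening on it, is:

var BcondA = function(Aval) {
  return Infer({model: function() {
    var C = flip()
    var A = flip()
    var B = A ? flip(0.1) : flip(0.4)
    condition(A === Aval) // observe A rather than setting it
    return B
  }})
}
viz(BcondA(true))
viz(BcondA(false))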


o Because the two distributions on B (when we have different information


about A) are different, we can conclude that B statistically depends on A.
o Do the same procedure for testing if A statistically depends on B.
o How is this similar (and different) from the causal dependence between these
two?
o As an exercise, make a version of the above medical example to test the
statistical dependence between cough and cold.
o Verify that statistical dependence holds symmetrically for events that are
connected by an indirect causal chain, such as smokes and coughs.
o Correlation is not just a symmetrized version of causality.
o Two events may be statistically dependent even if there is no causal chain
running between them, as long as they have a common cause (direct or
indirect).
o That is, two expressions in a WebPPL program can be statistically dependent
if one calls the other, directly or indirectly, or if they both at some point in
their evaluation histories refer to some other expression (a “common cause”).

4. Explain in detail about Conditional dependence in the context of cognitive


inference models.

There are four types of dependencies in causal inference. They are as follows:

i. Unconditionally Independent

ii. Unconditionally Dependent


iii. Conditionally Independent

iv. Conditional dependence

Unconditionally Independent

o When two variables, X and Y, are unconditionally independent, there is no direct


or indirect relationship between them, regardless of any other variables.
o Example:

• X: Number of books in a library


• Y: Distance from the Earth to the moon

X and Y are unconditionally independent because the number of books in a library has no connection to the distance from the Earth to the moon. These two variables are completely unrelated in any context.
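Formally, X and Y are unconditionally independent when P(X, Y) = P(X) · P(Y) for every pair of values, i.e., learning X tells us nothing about the distribution of Y.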

Unconditionally Dependent

o When two variables, X and Y, are unconditionally dependent, it means they have
a direct relationship without considering any other variables.
o Example:

• X: Amount of time studied


• Y: Exam score

In this scenario, X and Y are unconditionally dependent because the amount


of time studied directly impacts the exam score, more study time generally leads to
better exam scores.

Conditionally Independent

o Two variables, X and Y, are conditionally independent given a third variable, Z, if


knowing Z makes X and Y independent of each other.
o In other words, once you know Z, X provides no additional information about Y, and
vice versa.
o Example:


• X: Number of umbrellas sold


• Y: Number of raincoats sold
• Z: Weather condition (rainy or not)

In this case, X and Y are conditionally independent given Z. If we know the


weather condition (Z), knowing the number of umbrellas sold (X) doesn’t tell us
anything new about the number of raincoats sold (Y) because both are directly
influenced by whether it’s raining or not.
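Formally, X and Y are conditionally independent given Z when P(X, Y | Z) = P(X | Z) · P(Y | Z), or equivalently P(X | Y, Z) = P(X | Z): once Z is known, Y carries no extra information about X.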

Conditional dependence

In the context of cognitive inference models, "conditional dependence" refers to a situation where the relationship between two cognitive variables (like deciding on an answer and the time taken to respond) is only apparent when considering a third variable, often referred to as a "context" or "conditioning" variable. Knowing the value of this third variable significantly impacts how the two initial variables relate to each other; essentially, the connection between the first two variables depends on the state of the third.

Two variables, X and Y, are conditionally dependent if their relationship


depends on a third variable, Z. This means that the connection between X and Y is
influenced by the value of Z.

Example:

• X: Number of ice creams sold


• Y: Number of people at the beach
• Z: Temperature

Here, X and Y are conditionally dependent on Z. On a hot day (when Z is


high), both ice cream sales and beach attendance increase. Thus, the number of ice
creams sold and the number of people at the beach are related, but this relationship
hinges on the temperature.
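This common-cause pattern can be written as a small WebPPL-style sketch (the variable names and probabilities are purely illustrative):

var model = function() {
  var hot = flip(0.3) // Z: hot day (the common cause)
  var iceCreams = hot ? flip(0.9) : flip(0.2) // X: high ice cream sales
  var beachCrowd = hot ? flip(0.8) : flip(0.1) // Y: crowded beach
  return {iceCreams: iceCreams, beachCrowd: beachCrowd}
}
viz(Infer({model: model}))
// iceCreams and beachCrowd co-vary, but the link runs entirely through hot (Z)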

Truth Table

To further clarify, the following table summarizes the different types of dependency and independency between variables X, Y, and Z, with short explanations drawn from the examples above:
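(Reconstructed from the examples in this answer.)

Relationship | Example variables | Explanation
Unconditionally independent | X: books in a library, Y: Earth-moon distance | No connection in any context
Unconditionally dependent | X: time studied, Y: exam score | Direct relationship; no third variable needed
Conditionally independent | X: umbrellas sold, Y: raincoats sold, Z: weather | Given Z, X adds no information about Y
Conditionally dependent | X: ice creams sold, Y: beach attendance, Z: temperature | The X-Y relationship hinges on Z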


5. Explain in detail about Data Analysis.

Data Analysis

Data analysis is the process of examining data to find patterns and trends,
and to draw conclusions. It can help organizations make better decisions, improve
efficiency, and predict future events.

➢ Data Collection and Preprocessing:


• Researchers studying inference models of cognition often collect data from
various sources, such as experiments, surveys, or observational studies.
• This data may include measures of cognitive processes, decision-making,
problem-solving, or other relevant variables.
• Preprocessing the data, such as handling missing values, outliers, or
transforming variables, is an important step before conducting further
analysis.
➢ Exploratory Data Analysis:
• Exploratory data analysis techniques are used to gain a better understanding
of the data and identify patterns, trends, or relationships.
• This may involve visualizations, such as scatter plots, histograms, or
correlation matrices, to explore the structure of the data.
• Exploratory analysis can help researchers formulate hypotheses and identify
potential factors that may influence cognitive processes.


➢ Statistical Inference:
• Inference models of cognition often rely on statistical techniques to test
hypotheses and draw conclusions about the underlying cognitive
mechanisms.
• Common statistical methods used in this context include regression
analysis, analysis of variance (ANOVA), structural equation modeling, and
Bayesian inference.
• These techniques allow researchers to examine the relationships between
variables, test the significance of effects, and evaluate the fit of theoretical
models to the observed data.
➢ Model Evaluation and Comparison:
• Researchers may compare different inference models of cognition by
evaluating their ability to explain the observed data.
• This may involve measures of model fit, such as R-squared, Akaike
Information Criterion (AIC), or Bayesian Information Criterion (BIC), to
assess the trade-off between model complexity and explanatory power.
• Comparing the performance of competing models can help identify the most
suitable theoretical frameworks for understanding cognitive processes.
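For reference, the two information criteria penalize complexity as follows:

AIC = 2k - 2·ln(L̂)
BIC = k·ln(n) - 2·ln(L̂)

where k is the number of model parameters, n is the number of observations, and L̂ is the maximized likelihood of the model; lower values indicate a better trade-off between fit and complexity.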
➢ Validation and Generalization:
• To ensure the reliability and generalizability of the inference models,
researchers often conduct validation studies, such as cross-validation or out-
of-sample testing.
• This helps assess the model's ability to make accurate predictions or
inferences on new, unseen data, rather than just fitting the original dataset.
• Successful validation can increase confidence in the model's ability to
capture the underlying cognitive mechanisms.
➢ Sensitivity Analysis and Robustness:
• Researchers may perform sensitivity analyses to understand how
changes in the model's parameters or assumptions affect the inferences
drawn.
• This can help identify the critical factors that drive the model's behavior
and assess the robustness of the conclusions.
• Robust inference models are less sensitive to minor variations in the data
or modeling assumptions, making them more reliable for practical
applications.


• By leveraging data analysis techniques, researchers can develop and


refine inference models of cognition, test hypotheses, and gain insights
into the cognitive processes that underlie human decision-making,
problem-solving, and reasoning.
• The integration of data analysis and inference models is crucial for
advancing our understanding of the complex and dynamic nature of
human cognition.

6. Explain in detail about Algorithms for Inference.


Algorithms for Inference
"Algorithms for inference" in cognitive science refers to the computational
models and methods used to understand how the human mind makes inferences
or draws conclusions based on incomplete information, essentially describing the
step-by-step processes that the brain might use to reason and interpret data, often
relying on probabilistic frameworks like Bayesian inference to account for
uncertainty; it's a way to study cognitive processes by creating algorithms that
mimic how humans make sense of the world around them using limited
information.

Some of the key algorithms and techniques used in inference models of cognition:

1. Bayesian Inference:
• Bayesian inference is a powerful framework for modeling cognitive processes, as it allows for the integration of prior knowledge and new evidence to update beliefs.
• Algorithms such as Markov chain Monte Carlo (MCMC) methods, variational inference, and belief propagation are commonly used to perform Bayesian inference in cognitive models.
• These algorithms enable the computation of posterior probabilities and the exploration of complex, high-dimensional parameter spaces.
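As a concrete illustration (a minimal sketch, not part of the syllabus material), the WebPPL language used elsewhere in this unit can run MCMC-based Bayesian inference in a few lines; here we infer a coin's weight from hypothetical data of 7 heads in 10 flips:

var posterior = Infer({method: 'MCMC', samples: 10000, model: function() {
  var p = uniform(0, 1) // prior over the coin weight
  observe(Binomial({n: 10, p: p}), 7) // likelihood: 7 heads observed in 10 flips
  return p // quantity whose posterior we want
}})
display(expectation(posterior)) // posterior mean, roughly 0.67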
2. Neural Networks and Deep Learning:
• Neural networks and deep learning models have become increasingly popular
in cognitive science, as they can capture complex, nonlinear relationships
between variables.
• Algorithms like backpropagation, convolutional neural networks, and
recurrent neural networks have been applied to model various cognitive
processes, such as perception, language processing, and decision-making.


• These models can learn representations from data and make inferences in a flexible and data-driven manner.
3. Probabilistic Graphical Models:
• Probabilistic graphical models, such as Bayesian networks and Markov
random fields, provide a framework for representing and reasoning about the
dependencies between variables in cognitive models.
• Algorithms like belief propagation, junction tree, and variational inference
are used to perform inference and learning in these graphical models.
• Graphical models can capture causal relationships, handle uncertainty, and
provide interpretable representations of cognitive processes.
4. Reinforcement Learning:
• Reinforcement learning algorithms, such as Q-learning, policy gradients, and
temporal difference learning, have been used to model how individuals learn
and make decisions through trial-and-error interactions with their
environment.
• These algorithms can capture the dynamic, goal-oriented nature of cognitive
processes and explain how individuals adapt their behavior based on
feedback and rewards.
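For example, the Q-learning update rule adjusts the estimated value Q(s, a) of taking action a in state s after receiving reward r and arriving in state s':

Q(s, a) ← Q(s, a) + α·[r + γ·max_a' Q(s', a') - Q(s, a)]

where α is the learning rate, γ is the discount factor, and the bracketed quantity is the temporal-difference (prediction) error.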
5. Symbolic Reasoning:
• Symbolic reasoning approaches, such as logic-based systems and rule-based models, have been used to represent and reason about cognitive processes in a more explicit, rule-driven manner.
• Algorithms like theorem proving, constraint satisfaction, and logic
programming have been applied in this context to model high-level cognitive
abilities, such as problem-solving, planning, and reasoning.
6. Hybrid Approaches:
• Many modern inference models of cognition combine multiple algorithms and
techniques, leveraging the strengths of different approaches.
• For example, hybrid models may integrate Bayesian inference with neural
networks or combine symbolic reasoning with reinforcement learning.
• These hybrid approaches can capture the richness and complexity of human
cognition, drawing on the complementary strengths of various computational
frameworks.


• The choice of algorithms and techniques used in inference models of


cognition depends on the specific research questions, the nature of the
cognitive processes being studied, and the available data.
• Researchers often explore and compare the performance of different
algorithms to identify the most suitable approaches for modeling and
understanding human cognition.

Prepared by Ms.M.Nithya, AP/AI&DS
