Statistics & Psychology

Introduction

Mean, Median, Mode, and Range:


● Mean, median, mode, and range are essential descriptive statistics that provide insights
into the central tendency and spread of a dataset.
● The mean is calculated by summing all values and dividing by the total count.
● The median is the middle value when the data is arranged in ascending order.
● Mode refers to the most frequently occurring value.
● Range is the difference between the maximum and minimum values in the dataset.

First Quartile (Q1 - Lower Quartile):


● The first quartile, denoted as Q1, marks the boundary below which 25% of the data
points in the dataset fall.
● It represents the 25th percentile of the data.
● Finding Q1 involves arranging the dataset in ascending order and identifying the value at
the 25th percentile.

Second Quartile (Q2 - Median):


● Q2, also known as the median, divides the dataset into two equal halves.
● It represents the 50th percentile, indicating that half of the data points fall below this
value and half above.
● The median is located at the center of the dataset when arranged in ascending order.

Third Quartile (Q3 - Upper Quartile):


● Q3 signifies the 75th percentile, indicating that 75% of the data points fall below this
value.
● It's also referred to as the upper quartile.
● Calculating Q3 involves identifying the value that marks the 75th percentile of the
dataset after arranging it in ascending order.

Outliers and Interquartile Range (IQR):


● Outliers are data points significantly different from the rest of the dataset.
● The interquartile range (IQR) is a robust measure of variability, calculated as the
difference between the third quartile (Q3) and the first quartile (Q1).
● Outliers can be identified using the IQR method by determining upper and lower limits,
beyond which data points are considered outliers.

Box and Whisker Plot:


● A box and whisker plot visually represents the five-number summary of a dataset,
including the minimum, first quartile, median, third quartile, and maximum.
● It provides a concise summary of the distribution and helps identify potential outliers.

Skewness:
● Skewness measures the asymmetry of the data distribution.
● A box and whisker plot can provide visual cues for identifying skewness in the data,
which is crucial for understanding its distributional characteristics.

Dot Plot and Cumulative Relative Frequency:


● A dot plot offers a simple yet effective visual representation of individual data points
along a number line.
● Cumulative relative frequency tables provide insights into the distribution of data points
relative to certain values, aiding in the calculation of percentiles.
● Overall, the video equips viewers with the necessary tools to analyze and interpret data,
laying the foundation for more advanced statistical analysis. Through clear explanations
and practical examples, it fosters a deeper understanding of key statistical concepts
essential for data analysis and decision-making.

Statistics is a multifaceted field that can be broadly categorized into two main branches:
descriptive statistics and inferential statistics. Each serves distinct purposes in analyzing and
interpreting data.

Descriptive Statistics:
● Descriptive statistics focuses on organizing, summarizing, and presenting data in a
meaningful way.
● Various methods are employed, including graphical representations such as bar graphs,
histograms, pie charts, and line graphs, as well as tabular formats like frequency tables.
● The shape of the data distribution, whether symmetrical, skewed to the right, or skewed
to the left, provides valuable insights into the dataset's characteristics.
● Measures of central tendency, such as mean, median, and mode, offer summary
statistics that represent the typical value of the dataset.
● Measures of spread, including range, variance, and standard deviation, quantify the
dispersion or variability within the dataset.

Inferential Statistics:
● Inferential statistics involves using sample data to draw inferences or make predictions
about a larger population.
● It addresses questions that cannot be practically answered by examining an entire
population, necessitating the use of samples.
● By drawing conclusions from a sample, inferential statistics allow researchers to make
generalizations about the population.
● Confidence intervals provide a range of values within which the population parameter of
interest is likely to fall with a specified level of confidence.
● The larger the sample size, the greater the precision and confidence in the conclusions
drawn from the sample data.
Mean, Median, Mode, and Range
Mean:
● The mean is commonly known as the average of a set of numbers.
● To calculate the mean, sum up all the numbers in the dataset and then divide by the total
count of numbers.
● For instance, in the dataset {12, 7, 14, 5, 7, 11, 9}, the mean is calculated as (12
+ 7 + 14 + 5 + 7 + 11 + 9) / 7 ≈ 9.29.

Median:
● The median represents the middle value when the numbers in the dataset are arranged
in ascending order.
● If there are an even number of values, the median is the mean of the two middle
numbers.
● For example, in the dataset {5, 7, 7, 9, 11, 12, 14}, the median is 9.

Mode:
● The mode is the value that appears most frequently in the dataset.
● For example, in the dataset {6, 14, 8, 5, 3, 11, 8}, the mode is 8.
● A dataset in which no value repeats has no mode.

Range:
● Range quantifies the dispersion of a dataset by measuring the difference between the
highest and lowest values.
● It provides a simple but effective measure of variability.
● For instance, in the dataset {6, 14, 8, 5, 3, 11, 9}, the range is calculated as 14 - 3 = 11.

Kristen has received scores on the first four chemistry exams and aims to achieve an average
score of 90 across all five exams.

Deriving the Equation:

● An equation is formulated to determine the missing score needed to achieve the desired
average.
● This equation is typically expressed as:
● Missing score = (Number of exams × Desired average) − (Sum of existing scores)

In this context, the number of exams refers to the total number of tests (in this case, five,
because we include the test that Kristen is yet to take), the desired average is the target
average score (90), and the sum of existing scores represents the cumulative total of scores
already obtained on the first four exams.
For instance, if Kristen's scores on the first four exams are 85, 88, 92, and 89, she would need
to score 96 on the fifth exam to achieve an average of 90 across all exams. This calculation
ensures she reaches her desired average.

Finding Weighted Mean of a List of Numbers:


● One can find the weighted mean by multiplying each data point by its corresponding
weight and then dividing the sum of these products by the sum of the weights.
● This method allows for a more accurate representation of the average, considering the
varying significance of each data point.
● x = data point
● w = weight of that data point

$$\bar{x}_w = \frac{\sum (wx)}{\sum w}$$
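A short Python sketch of the weighted mean formula; the data points and weights below are illustrative:

```python
# Weighted mean: sum(w*x) / sum(w).
x = [90, 80, 70]       # data points
w = [0.5, 0.3, 0.2]    # corresponding weights

weighted_mean = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
print(weighted_mean)   # (45 + 24 + 14) / 1.0 = 83.0
```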

Geometric Mean:
For example, consider a set of values representing growth rates of a certain species over
multiple years: 1.1, 1.05, 1.2, 1.15. To find the geometric mean, you would multiply these
values together and then take the fourth root:

$$G = \left(\prod_{i=1}^{n} x_i\right)^{1/n}$$

G = geometric mean
n = number of values in the series
x_i = the i-th value of the series

There are several scenarios where using the geometric mean instead of the arithmetic mean is
preferable due to the nature of the data or the specific context of the analysis:

The geometric mean is more appropriate when dealing with quantities that have a multiplicative
relationship rather than an additive one. For example, growth rates, investment returns, or ratios
often exhibit multiplicative behavior. Using the arithmetic mean in such cases would distort the
interpretation of the data because it assumes an additive relationship.
In datasets where ratios between values are more meaningful than the differences between
values themselves, the geometric mean provides a more consistent representation. For
instance, if analyzing changes in percentages or rates over time, the geometric mean preserves
the consistency of these ratios.

The geometric mean is less sensitive to extreme values or outliers compared to the arithmetic
mean. Since it involves multiplication rather than addition, extreme values have less influence
on the final result. This makes the geometric mean more robust in datasets with outliers.

(The Arithmetic Mean between ‘80’ and ‘90’ is 85, whereas the geometric mean is 84.85)
(The Arithmetic Mean in series {0.3, 0.4, 0.35, 0.36, 0.99} is 0.48, whereas the geometric mean
is 0.432)
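A small Python sketch that reproduces the geometric-mean examples above (math.prod requires Python 3.8+):

```python
import math

def geometric_mean(values):
    # nth root of the product of the values
    product = math.prod(values)
    return product ** (1 / len(values))

print(geometric_mean([1.1, 1.05, 1.2, 1.15]))          # growth-rate example
print(geometric_mean([80, 90]))                        # ~84.85 vs arithmetic 85
print(geometric_mean([0.3, 0.4, 0.35, 0.36, 0.99]))    # ~0.432 vs arithmetic 0.48
```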

Harmonic Mean:
The harmonic mean is another type of average, like the arithmetic and geometric means, used
to summarize a set of numbers. However, it is distinct from the others in that it is particularly
useful when dealing with rates or ratios.

Mathematically, the harmonic mean is calculated as:

$$H = \frac{n}{\sum \frac{1}{x}}$$
In simpler terms, to find the harmonic mean, you divide the number of values by the sum of the
reciprocals of each value.

The harmonic mean is more appropriate when dealing with rates, ratios, or other values that are
inversely proportional. It accurately reflects the average rate of change or efficiency in situations
where the impact of smaller values is significant.

[Figure: inverse proportion]
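A brief Python sketch using the standard library's harmonic_mean; the two-leg speed example is an illustrative assumption:

```python
from statistics import harmonic_mean  # standard library

# n divided by the sum of reciprocals; classic use: average speed over
# equal distances travelled at different rates.
speeds = [60, 40]                      # km/h over two equal legs
print(harmonic_mean(speeds))           # 48.0, not the arithmetic 50

# Equivalent manual computation:
n = len(speeds)
print(n / sum(1 / v for v in speeds))  # 48.0
```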
Root Mean Square:
The root mean square (RMS) is a statistical measure that calculates the square root of the
arithmetic mean of the squares of a set of values. It is widely used in various fields, including
physics, engineering, signal processing, and statistics, to quantify the magnitude or "average"
magnitude of a set of values, particularly when dealing with varying magnitudes or fluctuations
over time.

$$RMS = \sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}$$
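A minimal Python sketch of the RMS formula with illustrative values:

```python
import math

def rms(values):
    # square root of the mean of the squared values
    return math.sqrt(sum(v * v for v in values) / len(values))

print(rms([1, -1, 2, -2]))  # sqrt((1 + 1 + 4 + 4) / 4) = sqrt(2.5) ~ 1.581
```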

Standard Deviation
The standard deviation is a measure of the dispersion or spread of a set of values around the
mean. It quantifies the amount of variation or deviation from the mean value of the dataset.
Standard Deviation σ from set x with n amount of values present can be calculated with the
following formula:

$$\sigma = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(x_i - \mu)^2}$$

i indexes the values of x
μ is the arithmetic mean of dataset x

So, for each value of x, subtract μ and square the result.
Repeat this for all n values.
Add the squared differences together.
Divide by n (multiply by 1/n).
Finally, take the square root of this result.
The method above will suffice in cases where direct proportion is found in a dataset; however, in
cases of inverse proportion one must use the following:

$$\hat{\mu} = \frac{1}{n}\sum \frac{1}{x}$$

$$\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}\left(\frac{1}{x_i} - \hat{\mu}\right)^2$$

$$\sigma = \sqrt{\frac{\hat{\sigma}^2}{n\hat{\mu}^4}}$$
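A Python sketch of both variants, assuming the reconstructed formulas above; the dataset reuses the earlier mean example:

```python
import math

def stdev_direct(x):
    # population standard deviation: sqrt of the mean squared deviation
    mu = sum(x) / len(x)
    return math.sqrt(sum((xi - mu) ** 2 for xi in x) / len(x))

def stdev_inverse(x):
    # inverse-proportion variant from the notes: work on reciprocals,
    # then rescale by the reciprocal mean (a sketch of the formulas above)
    n = len(x)
    mu_hat = sum(1 / xi for xi in x) / n
    var_hat = sum((1 / xi - mu_hat) ** 2 for xi in x) / n
    return math.sqrt(var_hat / (n * mu_hat ** 4))

data = [12, 7, 14, 5, 7, 11, 9]
print(stdev_direct(data))
print(stdev_inverse(data))
```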

For Review:
Arithmetic Mean is best used in cases where the dataset has a direct proportion.
Harmonic Mean is best used in cases where the dataset has an inverse proportion.
Tables, Charts, and Graphs
Frequency Tables:
Frequency tables, also known as frequency distributions, are a way to organize and summarize
categorical or numerical data by displaying the frequency or count of each distinct value in the
dataset. They provide a clear and concise representation of the distribution of values, making it
easier to understand the overall pattern or spread of the data.

{2,4,4,7,4,7,2,8,2,2,2} | n = 11

x   frequency (f)
8   1
7   2
4   3
2   5

The first step in creating a frequency table is to identify all the unique values present in the
dataset. For categorical data, these are the distinct categories or groups, while for numerical
data, they are the distinct values or intervals.

We can also calculate relative frequency and add it to the chart, where n is the length of dataset x,
and f is the frequency of the target data point:

$$rF = \frac{f}{n} \quad\Rightarrow\quad \sum(rF) = 1$$

x   f   rF
8   1   0.09…
7   2   0.18…
4   3   0.27…
2   5   0.45…
To verify, the sum of all relative frequencies should be approximately 1: Σ(rF) = 1
Additionally, we can add cumulative relative frequencies to the table. This is useful for allowing
us to identify Q1 (≤0.25), Q2 (≤0.5, >0.25), Q3 (≤0.75, >0.5), and Maximum Percentile (≤1,
>0.75). It is best to sort a table by ascending frequency or x for this application to work.

$$CrF_k = \sum_{i=1}^{k} rF_i$$

CrF adds all relative frequency values from the first row down to the target row k. If you
were to calculate CrF for the 3rd value in a table, you would add the relative frequencies of the
1st, 2nd, and 3rd values, but not any value after that. The final value in a table will always be 1.

x   f   rF      CrF
8   1   0.09…   0.09…
7   2   0.18…   0.27…
4   3   0.27…   0.54…
2   5   0.45…   1
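A Python sketch that rebuilds the table above, computing f, rF, and CrF:

```python
from collections import Counter

data = [2, 4, 4, 7, 4, 7, 2, 8, 2, 2, 2]
n = len(data)

counts = Counter(data)
cumulative = 0.0
# sorted descending by x, to match the table above
for x, f in sorted(counts.items(), reverse=True):
    rF = f / n
    cumulative += rF            # cumulative relative frequency (CrF)
    print(x, f, round(rF, 4), round(cumulative, 4))
```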

IQR Outliers:
IQR Outliers are assessed by being outside of a specified range, given by the IQR, Q1, and Q3
values respectively; which are found like this:

$$IQR = Q3 - Q1$$

$$\text{Outliers} = \left\{x < Q1 - \tfrac{3}{2}IQR\right\} \cup \left\{x > Q3 + \tfrac{3}{2}IQR\right\}$$
To calculate the first quartile (Q1) and third quartile (Q3), you need to find the median of two
different subsets of your dataset. Here's how to calculate Q1 and Q3:

Order the Dataset: First, arrange your dataset in ascending order from smallest to largest.

Find the Median (Q2): Determine the median (Q2) of the dataset. If the dataset has an odd
number of values, the median is the middle value. If the dataset has an even number of values,
the median is the average of the two middle values.

Split the Dataset:

Split the dataset into two halves:
● For Q1: select the lower half of the dataset, excluding the overall median if the dataset size is odd.
● For Q3: select the upper half of the dataset, excluding the overall median if the dataset size is odd.

Calculate Q1: Q1 is the median of the lower half of the dataset.

● If the dataset size is odd, Q1 is the median of the values below the overall median (Q2).
● If the dataset size is even, Q1 is the median of the first half of the values.

Calculate Q3: Q3 is the median of the upper half of the dataset.

● If the dataset size is odd, Q3 is the median of the values above the overall median (Q2).
● If the dataset size is even, Q3 is the median of the second half of the values.

In summary, to calculate Q1 and Q3:


● For Q1, find the median of the lower half of the dataset.
● For Q3, find the median of the upper half of the dataset.

This process ensures that Q1 marks the upper boundary of the first 25% of the data and Q3 marks
the lower boundary of the last 25%, dividing the dataset into four equal parts.
***If a dataset has an even number of values (no single central value), the median is calculated by
taking the arithmetic mean of the two numbers closest to the center; this also applies to Q1 and Q3.
A weighted mean should be used if the position falls at a proportion other than one half (e.g., at
0.25, the lower value is assigned a weight of 1 and the higher value a weight of 0.5). The median
value, Q1, and Q3 positions are given by:

$$Q1 = x_{\frac{n+1}{4}}$$

$$Q2 = x_{\frac{n+1}{2}}$$

$$Q3 = x_{\frac{3(n+1)}{4}}$$
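A Python sketch of the median-of-halves method and the 1.5·IQR outlier rule described above; the dataset (with an injected 40) is illustrative:

```python
import statistics

def quartiles(data):
    # median-of-halves method; excludes the overall median
    # from both halves when n is odd
    s = sorted(data)
    n = len(s)
    half = n // 2
    lower = s[:half]
    upper = s[half + 1:] if n % 2 else s[half:]
    return statistics.median(lower), statistics.median(s), statistics.median(upper)

def iqr_outliers(data):
    q1, _, q3 = quartiles(data)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in data if x < lo or x > hi]

data = [2, 6, 4, 8, 9, 8, 4, 5, 3, 7, 3, 8, 40]
print(quartiles(data), iqr_outliers(data))  # 40 is flagged as an outlier
```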

Box-and-Whisker Plots:
● Identify Quartiles
● The dataset is divided into quartiles: Q1, Q2 (the median), and Q3. Q1 represents the
first quartile (25th percentile), Q2 is the second quartile (50th percentile or median), and
Q3 is the third quartile (75th percentile).
● A box is drawn from Q1 to Q3, representing the middle 50% of the data (interquartile
range or IQR). The length of the box indicates the spread of the middle 50% of the data.
● A line is drawn inside the box to represent the median (Q2) of the dataset. This line
divides the box into two equal parts.
● Lines, known as whiskers, extend from the edges of the box to the minimum and
maximum values of the dataset, excluding any outliers. The length of the whiskers
indicates the range of the data outside the middle 50%.
● Outliers, which are data points that fall outside the whiskers, are represented as
individual points or dots.
● The box itself represents the interquartile range (IQR), showing where the middle 50% of
the data lies.
● The median line shows the central tendency of the dataset.
● The whiskers show the range of the data, excluding outliers.
● Outliers are identified as individual points outside the whiskers.
● Box plots are particularly useful for comparing distributions between different groups or
datasets and identifying any skewness, symmetry, or presence of outliers. They offer a
concise visual summary of the key characteristics of the data distribution.
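As a quick illustration of the plot just described, a box-and-whisker plot can be drawn in one call with matplotlib (assumed installed); the data are illustrative:

```python
import matplotlib.pyplot as plt  # assumes matplotlib is installed

data = [2, 6, 4, 8, 9, 8, 4, 5, 3, 7, 3, 8, 40]

# Draws the box (Q1-Q3), the median line, whiskers at 1.5*IQR,
# and plots points beyond the whiskers as outlier markers.
plt.boxplot(data, vert=False)
plt.title("Box-and-whisker plot")
plt.show()
```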
Symmetric Distribution:
● A symmetric distribution is one where the data values are evenly distributed around the
center, resulting in a balanced shape.
● In a symmetric distribution, the mean, median, and mode are all located at the same
point, and the distribution can be divided into two equal halves by a vertical line passing
through the center. (Mode = Median = Mean)
● The classic example of a symmetric distribution is the normal distribution (bell curve),
where the data are evenly distributed around the mean, resulting in a symmetrical
shape.

Right-skewed Distribution (Positively Skewed):


● A right-skewed distribution, also known as positively skewed distribution, is
characterized by a long right tail.
● In a right-skewed distribution, the majority of data values are concentrated on the left
side (lower values), while the tail extends towards the right side (higher values).
● In a right-skewed distribution, the mean is typically greater than the median, and the
mode may be less than the median.
● Examples of data that may exhibit a right-skewed distribution include income
distributions (where a few individuals have very high incomes), response times in a
service system (where most responses are quick, but a few take much longer), and
exam scores (where a few students score exceptionally high).
Left-skewed Distribution (Negatively Skewed):
● A left-skewed distribution, also known as negatively skewed distribution, is characterized
by a long left tail.
● In a left-skewed distribution, the majority of data values are concentrated on the right
side (higher values), while the tail extends towards the left side (lower values).
● In a left-skewed distribution, the mean is typically less than the median, and the mode
may be greater than the median.
● Examples of data that may exhibit a left-skewed distribution include the distribution of
age at retirement (where most individuals retire at older ages, but a few retire much earlier),
scores on an easy exam (where most students score high, but a few score very low), and age at
death in developed countries (where most deaths occur at older ages, with a tail of earlier
deaths).

Bimodality:
Bimodality in statistics refers to a distribution that has two distinct peaks or modes. In other
words, it describes a dataset or distribution where there are two prominent groups or clusters of
data points, each with its own central tendency.
● In a bimodal distribution, there are two distinct peaks or modes, which represent the most
frequent or common values in the dataset.
● These peaks are separated by a trough or valley, indicating a clear separation between the
two groups of data points.

Bimodality can occur in distributions that are symmetric or asymmetric. In symmetric bimodal
distributions, the peaks are of equal height and the distribution is mirrored around a central
point. In asymmetric bimodal distributions, the peaks may have different heights and the
distribution may be skewed.
Kurtosis (β):
● Kurtosis measures the peakedness or flatness of a distribution. It indicates whether the
tails of a distribution are more or less extreme (i.e., have more or fewer outliers) than
those of a normal distribution.
● Positive excess kurtosis indicates a distribution with heavier tails than a normal distribution
(leptokurtic), meaning the peak is higher.
● Negative excess kurtosis indicates a distribution with lighter tails than a normal distribution
(platykurtic), meaning the peak is lower.
● An excess kurtosis of zero indicates a distribution with tails similar to those of a normal
distribution (mesokurtic). Note that the raw moment formula below equals 3 for a normal
distribution, so excess kurtosis is β − 3.

$$\beta = \frac{\sum_{i=1}^{n}(x_i - \bar{x})^4}{n\sigma^4}$$
Population Mean (μ):
● The population mean, often denoted by the Greek letter μ (mu), represents the average
value of a variable in an entire population.
● It is calculated by summing up all the values in the population and dividing by the total
number of values.
● The population mean provides a precise measure of the central tendency of the entire
population.
Sample Mean (x̄):
● The sample mean, often denoted by x̄, represents the average value of a variable in
a sample drawn from a population.
● It is calculated by summing up all the values in the sample and dividing by the total
number of values in the sample.
● The sample mean is an estimate of the population mean and is used to infer or estimate
characteristics of the population from which the sample was drawn.

Skewness (γ):
● Skewness is a measure of the asymmetry of the probability distribution of a real-valued
random variable about its mean. It quantifies the extent to which the distribution differs
from a symmetric distribution, such as the normal distribution.
● Positive Skewness indicates a Right-Skewed distribution, Negative Skewness indicates
Left-Skewing, and a Skewness of zero indicates symmetric distribution:
$$\gamma = \frac{\frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^3}{\left(\frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2\right)^{3/2}}$$
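A Python sketch computing both shape measures from the formulas above (population moments):

```python
def skewness(x):
    # gamma: third standardized moment (population form, as above)
    n, mu = len(x), sum(x) / len(x)
    m2 = sum((xi - mu) ** 2 for xi in x) / n
    m3 = sum((xi - mu) ** 3 for xi in x) / n
    return m3 / m2 ** 1.5

def kurtosis(x):
    # beta: fourth standardized moment; ~3 for a normal distribution
    n, mu = len(x), sum(x) / len(x)
    m2 = sum((xi - mu) ** 2 for xi in x) / n
    m4 = sum((xi - mu) ** 4 for xi in x) / n
    return m4 / m2 ** 2

data = [2, 6, 4, 8, 9, 8, 4, 5, 3, 7, 3, 8, 9]
print(skewness(data), kurtosis(data))
```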
Variance
Standard Deviation %:
For calculating standard deviation percentage, where μ is the mean, and σ is the standard
deviation:

$$\sigma_\% = \frac{\sigma}{\mu}$$
MAD:
Mean Absolute Deviation (MAD) is the average distance between the points of a dataset and the
mean of the dataset. The formula is below, where x is the dataset, x̄ is the sample mean, and n is
the dataset length:

$$MAD = \frac{\sum_{i=1}^{n}|x_i - \bar{x}|}{n}$$
The MAD provides a measure of dispersion that is less sensitive to outliers compared to the
standard deviation. It is particularly useful when dealing with positive/negative-skewed
distributions or datasets containing extreme values.

MAD and variance are closely related pieces of data and should often be examined together.
Variance is σ²:

$$\sigma^2 = \frac{\sum_{i=1}^{n}(x_i - \mu)^2}{n}$$
Standard Deviation may also be used along with these values.

Standard deviation is generally used in place of MAD because it accounts for large fluctuations
and volatility changes, whereas MAD does not. MAD may be better suited for applications
where there may be incomplete, false, or misleading parts of a dataset, because it compresses
the range of outliers in the calculation. MAD is best for interpolating volatility, standard deviation
is best for complete datasets.
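A Python sketch comparing MAD and variance on the same illustrative dataset:

```python
def mad(x):
    # mean absolute deviation from the mean
    mu = sum(x) / len(x)
    return sum(abs(xi - mu) for xi in x) / len(x)

def variance(x):
    # population variance: mean squared deviation from the mean
    mu = sum(x) / len(x)
    return sum((xi - mu) ** 2 for xi in x) / len(x)

data = [2, 6, 4, 8, 9, 8, 4, 5, 3, 7, 3, 8, 9]
print(mad(data), variance(data))  # MAD is less sensitive to the extremes
```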
Margin of Error:
To calculate the margin of error using the standard normal distribution (Z-distribution), we can
use the formula for the standard error of the mean. The margin of error represents the range
within which we expect the true population parameter to lie with a certain level of confidence.

$$Err = \pm Z_{\alpha/2}\,\frac{\sigma}{\sqrt{n}}$$

$$\alpha = \left(\prod_{i=1}^{n} \frac{(x_i + x_{i-1} + x_{i+1}) - 3\mu}{\mu}\right)^{\frac{1}{2n}}$$

i.e.
In the dataset {2, 6, 4, 8, 9, 8, 4, 5, 3, 7, 3, 8, 9}, n = 13, where α ≈ 0.1 and μ = 5.846, Err
would be $\pm Z_{0.1/2}\,\frac{2.19}{\sqrt{13}}$, or Err = ±1.60567262623.

This is because ≈Z is found from α with the following formula:

$$Z_\alpha = \frac{\alpha - \mu}{\sigma}$$

We can further express this as a percentage of the mean:


$$\pm\left|\frac{Err}{\mu}\cdot 100\right|$$

The final equation gives us the margin of error from the mean as a percentage. The margin of
error for the previously listed dataset is ±27.5% at 95% confidence (α = 0.05). The acceptable
margin of error for a dataset should be between 4% and 8% for the data to be considered
usable. To lower the margin of error, increasing the sample size and lowering the confidence
interval (raising α) will be effective. The base confidence interval formula should only be used
as a benchmark value, not an absolute. The confidence interval may be raised if the error
percentage is less than the 4% - 8% range.

Geometrically, this can be represented as a continuous integral, with upper and lower limits
based on the variables used in the calculation of ≈Z:

$$Z_\alpha = \frac{\alpha - \mu}{\sigma} = \int_{\mu}^{\alpha} \frac{1}{\sigma}\,dx, \qquad \lim_{x\to\infty} f(x) = \alpha, \qquad \lim_{x\to-\infty} f(x) = \mu$$

Mathematically, the accepted range can be represented as:

$$\frac{Z_{\alpha/2}\,\sigma/\sqrt{n}}{\mu} < 0.02 \;\Rightarrow\; \alpha > \alpha_\top$$

$$0.08 < \frac{Z_{\alpha/2}\,\sigma/\sqrt{n}}{\mu} \;\Rightarrow\; \alpha < \alpha_\top$$

$$0 < \alpha < 1,\;\; 0.02 < \frac{Z_{\alpha/2}\,\sigma/\sqrt{n}}{\mu} < 0.08 \;\Rightarrow\; Err,\,\alpha = \exists!\,\top$$
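A Python sketch of the standard-error form Err = ±Z_(α/2)·σ/√n, assuming SciPy for the critical z value; α is taken here as a given input rather than derived from the product formula above:

```python
import math
from scipy.stats import norm  # assumes SciPy is available

def margin_of_error(data, alpha=0.05):
    # Err = z_(alpha/2) * sigma / sqrt(n), as a value and as % of the mean
    n = len(data)
    mu = sum(data) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / n)
    z = norm.ppf(1 - alpha / 2)          # two-sided critical z value
    err = z * sigma / math.sqrt(n)
    return err, abs(err / mu) * 100

data = [2, 6, 4, 8, 9, 8, 4, 5, 3, 7, 3, 8, 9]
print(margin_of_error(data, alpha=0.05))
```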
Clinical Statistics
Correlation:
Correlational research designs serve to explore the extent of association between two variables
without implying causation. These designs are instrumental in various fields, such as
psychology, sociology, and education, allowing researchers to investigate relationships between
different factors. Here are some key points regarding correlational research designs and
correlation coefficients:

Purpose of Correlational Research Designs:


● Correlational research designs aim to assess the relationship between two variables.
● They are used to explore how changes in one variable relate to changes in another
variable.

Examples of Correlational Research Questions:


● Investigating the correlation between creativity and academic performance.
● Examining the relationship between hours studied and exam scores.
● Analyzing the association between class attendance and grades.
● Exploring the correlation between marital satisfaction and parenting behavior.

Understanding Correlations:
● Correlations are typically represented by Pearson's correlation coefficient (denoted as r),
which ranges from -1 to 1.
● A positive correlation ( r>0 ) indicates that as one variable increases, the other variable
also tends to increase.
● A zero correlation ( r=0 ) implies no systematic relationship between the variables.
● A negative correlation ( r<0 ) suggests that as one variable increases, the other variable
tends to decrease.

Interpreting Correlation Coefficients:


● It's essential to understand that correlation does not imply causation.
● While correlation identifies associations between variables, it does not indicate a
cause-and-effect relationship.
● Misleading headlines and anecdotes often misinterpret correlations as causation,
leading to the fallacy of assuming causation from correlation.

Limitations of Correlational Research:


● Correlational studies cannot establish the direction of causality.
● Other variables, known as confounding variables, may influence the relationship
between the variables under study.
● Causation requires experimental manipulation and control over variables, which
correlational research designs lack.
The formula for Pearson's correlation coefficient r is this, where x is the first variable, y is the
second, and x̄, ȳ are the mean for their respective variable:

$$r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2 \cdot \sum_{i=1}^{n}(y_i - \bar{y})^2}}$$

In this case, 0.5 ≤ |r| ≤ 1 would indicate a strong correlation, r ≅ 0 would indicate no
correlation, and 0 < |r| < 0.5 would indicate a weak correlation.
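A Python sketch of Pearson's r; the hours/scores pairs are illustrative:

```python
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = math.sqrt(sum((xi - mx) ** 2 for xi in x) *
                    sum((yi - my) ** 2 for yi in y))
    return num / den

hours = [1, 2, 3, 4, 5]
scores = [52, 58, 65, 70, 74]
print(pearson_r(hours, scores))  # close to +1: strong positive correlation
```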

Reliability:
Reliability is a crucial concept in research and statistics, referring to the consistency and stability
of measurement. It ensures that the results obtained from a measurement tool or procedure are
dependable and trustworthy. In this context, there are four main types of reliability:

● Test-retest reliability assesses the consistency of scores obtained from the same
measurement tool or test administered to the same group of individuals at two different
points in time.
● A strong positive correlation between the scores obtained at time one and time two
indicates good test-retest reliability.
● It is particularly useful when measuring constructs or variables that are expected to
remain stable over time.

● Parallel forms reliability, also known as alternate or equivalent forms reliability, compares
two different versions or forms of the same test or measurement tool.
● It evaluates whether the scores obtained from different versions of the test are consistent
and positively correlated.
● This type of reliability is essential when researchers want to minimize practice effects or
ensure that multiple forms of a test are equally effective.

● Inter-rater reliability assesses the degree of agreement between two or more
independent raters or observers who evaluate the same set of data, such as behaviors,
performances, or responses.
● It measures the consistency in judgments or ratings made by different raters.
● Inter-rater reliability can be calculated using various statistical methods, including
percentage agreement or Cohen's kappa coefficient.
● Internal consistency evaluates the extent to which the items or questions within a
measurement tool are measuring the same underlying construct or attribute.
● It ensures that all items are contributing to the measurement of the intended construct
and are not measuring unrelated factors.
● Cronbach's alpha is a commonly used statistic to assess internal consistency, with
values closer to 1 indicating greater reliability of the scale.

Cronbach’s Alpha (ᾱ):


● Cronbach's alpha ranges from 0 to 1.
● A value closer to 1 indicates higher internal consistency reliability, suggesting that the
items in the scale are highly correlated with each other.
● Typically, a Cronbach's alpha value of 0.70 or higher is considered acceptable for
research purposes, although the threshold may vary depending on the context.
● Formula is as follows, where ĉ is the covariance.

$$\bar{\alpha} = \frac{(n_x + n_y)\,\hat{c}}{\sigma^2 + (n_x + n_y - 1)\,\hat{c}}$$

$$\hat{c} = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{n_x + n_y}$$
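For a runnable sketch, the common item-variance form of Cronbach's alpha is shown below; the two-set covariance form above is a closely related formulation, and the item responses here are illustrative:

```python
def cronbach_alpha(items):
    # items: list of equal-length score lists, one per test item.
    # Item-variance form: alpha = k/(k-1) * (1 - sum(var_i) / var_total).
    k = len(items)
    n = len(items[0])

    def var(v):
        m = sum(v) / len(v)
        return sum((vi - m) ** 2 for vi in v) / len(v)

    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))

# Three items answered by five respondents (illustrative data):
items = [[4, 5, 3, 4, 5], [3, 5, 3, 4, 4], [4, 4, 2, 4, 5]]
print(cronbach_alpha(items))
```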
Hypothesis Testing:
The process begins with formulating two competing hypotheses: the null hypothesis (H0) and
the alternative hypothesis (H1 or Ha).
The null hypothesis (H0) typically represents the status quo or no effect, while the alternative
hypothesis (H1 or Ha) represents the effect we are interested in detecting.
For example, in a study testing the effectiveness of a new drug, the null hypothesis might state
that the drug has no effect, while the alternative hypothesis might state that the drug has a
significant effect.
Sampling Error:
Sampling error refers to the discrepancy between a sample statistic and the true population
parameter it represents.
It arises due to random variability in the sampling process and affects the accuracy of estimates
derived from sample data.
Sampling error is inevitable and is managed through appropriate sampling techniques and
statistical analysis methods.
Predictions and Standard of Evidence:
Specifying the level of significance (α) and choosing an appropriate test statistic to assess the
hypotheses. Significance can be determined based on a successful margin of error calculation
(as explained earlier), and is then halved, with each half placed on one side of a distribution.
Anything that occurs beyond these points (±α/2) represent improbabilities. Placement is made
based on what area of the distribution is equivalent to a α/2 ratio of the mode.

Chi-Square (χ2):
The chi-square test statistic (χ2) is calculated based on the observed frequencies in the
contingency table and the expected frequencies under the assumption of independence. The
formula for the chi-square test statistic depends on the type of chi-square test being conducted.
For the test of independence, the formula involves comparing observed and expected
frequencies in each cell of the contingency table. Another useful formula for determining the
effectiveness of chi-squares is the degrees of freedom, calculated below, along with the
chi-square formula, from the following table:

Voting Numbers vs. Expected

           Party A   Party B   Party C   Party D   Party E
Observed   30        14        34        45        57
Expected   20        20        30        40        60

$$df = (R - 1)(C - 1)$$

Where R is the number of rows in a chart, and C is the number of columns. The df for the table
above is 4.

$$\chi^2_\alpha = \sum_{i=1}^{n} \frac{(O_i - E_i)^2}{E_i}$$

Where O_i is each observed frequency in the chart, and E_i is the corresponding expected
frequency. The α is the probability value found from the margin of error calculation, or is
approximated in the critical chi-square value table below, along with the df:
        α0.95   α0.90   α0.80   α0.70   α0.50   α0.30   α0.20   α0.10   α0.05   α0.01   α0.001
df(1)   0.004   0.02    0.06    0.15    0.46    1.07    1.64    2.71    3.84    6.64    10.83
df(2)   0.10    0.21    0.45    0.71    1.39    2.41    3.22    4.60    5.99    9.21    13.82
df(3)   0.35    0.58    1.01    1.42    2.37    3.66    4.64    6.25    7.82    11.34   16.27
df(4)   0.71    1.06    1.65    2.20    3.36    4.88    5.99    7.78    9.49    13.28   18.47
df(5)   1.14    1.61    2.34    3.00    4.35    6.06    7.29    9.24    11.07   15.09   20.52
df(6)   1.63    2.20    3.07    3.83    5.35    7.23    8.56    10.64   12.59   16.81   22.46
df(7)   2.17    2.83    3.82    4.67    6.35    8.38    9.80    12.02   14.07   18.48   24.32
df(8)   2.73    3.49    4.59    5.53    7.34    9.52    11.03   13.36   15.51   20.09   26.12
df(9)   3.32    4.17    5.38    6.39    8.34    10.66   12.24   14.68   16.92   21.67   27.88
df(10)  3.94    4.86    6.18    7.27    9.34    11.78   13.44   15.99   18.31   23.21   29.59
||||| Acceptable Range

Using this data, we can conclude that the χ² of our table is ≈ 8.108, which is less than
the chi-square critical value at α0.05 but more than the chi-square critical value at α0.1,
which means 0.05 < α < 0.1.

We can narrow this further by using Err; verifying that the theoretical prediction method we
used on this dataset works with ~91.5% certainty, and with an expanded population, would
have ~7.6% margin for error (because in this instance, n is all numbers in the expected row
added together, since each vote is a datapoint in itself).
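A Python sketch reproducing the χ² statistic for the voting table, using SciPy's chi-squared survival function for the p-value (computed manually rather than with scipy.stats.chisquare, since the observed and expected totals differ here):

```python
import numpy as np
from scipy.stats import chi2  # assumes SciPy is available

observed = np.array([30, 14, 34, 45, 57])
expected = np.array([20, 20, 30, 40, 60])

stat = ((observed - expected) ** 2 / expected).sum()  # ~8.108
df = len(observed) - 1                                # 4, matching the df above
p = chi2.sf(stat, df)                                 # survival function = 1 - CDF
print(stat, p)  # p falls between 0.05 and 0.10, matching the table lookup
```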

Rejection of the Null Hypothesis:


The process of statistical decision-making involves the consideration of potential errors that can
occur. There are two main types of errors: Type I and Type II errors.

1. Type I Error:
● A Type I error occurs when a researcher rejects the null hypothesis when it is actually
true.
● In other words, it is a false alarm where the researcher mistakenly identifies a significant
effect where none exists.
● The probability of committing a Type I error is denoted by alpha (α), which is typically set
at a predetermined level such as 0.05.

2. Type II Error:
● A Type II error occurs when a researcher fails to reject the null hypothesis when there is
actually a real effect.
● This error occurs when the researcher misses a significant effect that is present in the
population.
● The probability of a Type II error is denoted by beta (β).

3. Statistical Power:
● Statistical power is the ability of a statistical test to correctly detect a real effect when it
exists.
● It is equal to 1 minus the probability of a Type II error (1 − β).
● High statistical power indicates a greater likelihood of correctly identifying a real effect,
thus minimizing the risk of missing important findings.
4. Correct Decisions:
● In addition to errors, there are two correct decisions that can be made in statistical
hypothesis testing:
● Rejecting the null hypothesis when it is actually false (true positive).
● Failing to reject the null hypothesis when it is actually true (true negative).
● While researchers aim to identify significant effects (rejecting the null hypothesis), it is
also important to recognize situations where there is no significant effect (failing to reject
the null hypothesis).

T-Tests:
The focus of this topic is on the one-sample t-test and Cohen's d effect size, both of which are
fundamental concepts in statistical analysis. Here's a breakdown of the key points:

One-Sample t-test:
● The one-sample t-test is a statistical test used to determine whether the mean of a
sample differs significantly from a known or hypothesized population mean.
● It is commonly employed when the population standard deviation is unknown, making it
impractical to use the one-sample z-test.
● The test statistic for the one-sample t-test follows a t-distribution and is calculated by
comparing the sample mean to the population mean, taking into account the sample size
and sample standard deviation (s).

$$t = \frac{\bar{x} - \mu}{s/\sqrt{n}}$$
Difference from One-Sample z-test:
● The one-sample z-test is similar to the one-sample t-test but requires knowledge of the
population standard deviation.
● While the one-sample z-test may be more powerful under certain conditions, the
one-sample t-test is preferred when the population standard deviation is unknown, which
is often the case in practice.
Cohen's d Effect Size:
● Cohen's d is a measure of effect size that quantifies the magnitude of the difference
between two groups or conditions.
● It is particularly useful in comparing means across groups, providing insight into the
practical significance of the observed difference.
● Cohen's d is calculated by taking the difference between the means of two groups and
dividing it by the pooled standard deviation.

$$d_t = \frac{\bar{x} - \mu}{s} \qquad d_z = \frac{\bar{x} - \mu}{\sigma}$$

$$z = \frac{\bar{x} - \mu}{\sigma/\sqrt{n}} \qquad s = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2}$$
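A Python sketch of the one-sample t-test (via SciPy) and Cohen's d; note SciPy uses the n−1 sample standard deviation, and the hypothesized mean below is an illustrative assumption:

```python
import math
from scipy.stats import ttest_1samp  # assumes SciPy is available

sample = [2, 6, 4, 8, 9, 8, 4, 5, 3, 7, 3, 8, 9]
mu0 = 5  # hypothesized population mean (illustrative)

t_stat, p_value = ttest_1samp(sample, mu0)

n = len(sample)
xbar = sum(sample) / n
s = math.sqrt(sum((x - xbar) ** 2 for x in sample) / (n - 1))  # sample sd
d = (xbar - mu0) / s                                           # Cohen's d
print(t_stat, p_value, d)
```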

T-Test for correlation:


A t-test for a correlation coefficient is a statistical test used to determine if the correlation
between two variables in a sample is significantly different from zero, indicating whether there is
a statistically significant linear relationship between the variables in the population from which
the sample was drawn. Assume H₁: ρ ≠ 0 and H₀: ρ = 0:

$$t_r = \frac{r\sqrt{n_{xy} - 2}}{\sqrt{1 - r^2}}$$
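A minimal Python sketch of the t statistic for a correlation coefficient; the r and n values are illustrative:

```python
import math

def t_for_r(r, n):
    # t = r * sqrt(n - 2) / sqrt(1 - r^2), with n - 2 degrees of freedom
    return r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

print(t_for_r(0.6, 20))  # ~3.18; compare against the t critical value, df = 18
```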
Basic Linear Regressions:
Linear regression is a statistical method used to model the relationship between a dependent
variable and one or more independent variables by fitting a linear equation to the observed data.
It is commonly used for predictive analysis, understanding the direction and strength of
relationships between variables, and making predictions based on new data. Calculation is very
similar to calculating correlation coefficient:

$$Y' = bX + a$$

$$b = \frac{\sum_{i=1}^{n} x_i y_i - \frac{\sum_{i=1}^{n} x_i \sum_{i=1}^{n} y_i}{n}}{\sum_{i=1}^{n} x_i^2 - \frac{\left(\sum_{i=1}^{n} x_i\right)^2}{n}} \qquad a = \frac{\sum_{i=1}^{n} y_i - b\sum_{i=1}^{n} x_i}{n}$$

***Depending on context, linear regressions may be referred to as "trendlines" or something to
that effect.
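A Python sketch of the least-squares slope and intercept from the formulas above, reusing the illustrative hours/scores data:

```python
def linear_regression(x, y):
    # least-squares slope b and intercept a for Y' = bX + a
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    b = (sxy - sx * sy / n) / (sxx - sx ** 2 / n)
    a = (sy - b * sx) / n
    return b, a

hours = [1, 2, 3, 4, 5]
scores = [52, 58, 65, 70, 74]
b, a = linear_regression(hours, scores)
print(b, a)  # predicted score: Y' = b*hours + a
```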

Chaos Theory
Strange Attractors:
Unlike fixed points or limit cycles, which are relatively simple and predictable, strange attractors
are more complex and exhibit chaotic behavior. They are called "strange" because their
structure is typically intricate and non-repeating. Despite their complexity, strange attractors still
exert a degree of control over the system's behavior, pulling it toward certain regions of phase
space. Strange attractors typically have several defining characteristics:
● They are fractal in nature, meaning they exhibit self-similar patterns at different scales.
● They are sensitive to initial conditions, leading to unpredictable behavior known as
chaos.
● They have a bounded yet non-periodic structure, meaning trajectories never repeat
exactly but are confined to a certain region of phase space.

Co-Product notation:
Co-products denote the combining of two sets, without a change in any data points, within a
specified range. This can be written like this, where x and y are two different datasets:

$$P = x \amalg y$$

Graphically, this can be represented as f(y) overlaid on f(x). P, in this instance, is a coproduct
of the following:

$$P = f\!\left(\frac{1}{1 + \left|\frac{x - 0.1}{0.5}\right|^{1.7}}\right) \amalg f\!\left(\frac{1}{0.5 + \left|\frac{x - 0}{0.5}\right|^{2.33}}\right)$$
The Logistic Map Bifurcation, Bifurcation Variants:
The logistic map bifurcation diagram is a graphical representation that illustrates the behavior of
the logistic map as the parameter ȓ varies. It's one of the most iconic images in chaos theory,
providing insights into the complex and often unpredictable behavior of nonlinear dynamical
systems.

In the logistic map equation $x_{n+1} = \hat{r}x_n(1 - x_n)$, the parameter $\hat{r}$ represents
the growth rate of the population. By varying $\hat{r}$ from 0 to 4 and iterating the equation for
each value of $\hat{r}$, we can observe how the population dynamics change.

To create a bifurcation diagram, we typically start with a range of values for ȓ (e.g., from 0 to 4)
and an initial population value x0. For each value of ȓ, we iterate the logistic map equation a
large number of times, discarding a certain number of initial iterations to allow the system to
reach a steady state. Then, we plot the resulting population values on the vertical axis against
the corresponding values of r on the horizontal axis.
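A Python sketch of this construction, assuming NumPy and matplotlib are available; the sweep resolution and iteration counts are illustrative choices:

```python
import numpy as np
import matplotlib.pyplot as plt  # assumes matplotlib is installed

r_values = np.linspace(0, 4, 2000)   # sweep the growth-rate parameter
x = 0.5 * np.ones_like(r_values)     # initial population x0

for _ in range(500):                 # discard transient iterations
    x = r_values * x * (1 - x)

points_r, points_x = [], []
for _ in range(100):                 # record the steady-state values
    x = r_values * x * (1 - x)
    points_r.append(r_values.copy())
    points_x.append(x.copy())

plt.plot(np.concatenate(points_r), np.concatenate(points_x), ",k")
plt.xlabel("r")
plt.ylabel("x")
plt.show()
```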

As r increases, the logistic map can exhibit different types of behavior:


Stable Equilibrium: For small values of $\hat{r}$, the population converges to a single stable
equilibrium value.

Periodic Oscillations: As $\hat{r}$ increases, the system can undergo a period-doubling
cascade, where the population oscillates between two values, then four, then eight, and
so on, exhibiting periodic behavior.

Chaotic Behavior: At a certain critical value of $\hat{r}$, the system undergoes a period-doubling
bifurcation and enters a regime of chaotic behavior. In this regime, the population values
appear to be random and exhibit sensitive dependence on initial conditions.

The bifurcation diagram visually captures these different regimes of behavior. It typically shows
a sequence of bifurcation points where the system transitions from stable equilibria to periodic
oscillations, and eventually to chaotic behavior. The diagram often resembles a branching
tree-like structure, with intricate patterns emerging as ȓ increases. The formula for ȓ in its
complete form is this, where i is the precision exponent (recommend i=3):

$$\hat{r} = \frac{[0 \ldots 4\cdot 10^{i}]}{10^{i}}$$

We can now rewrite the formula like this:

$$f\!\left(\frac{Z_{\alpha/2}\,\sigma/\sqrt{n}}{\mu}\right) = \hat{r}x(1 - x)$$
We can now represent the plot deviation like this:

$$(x, y) = (\hat{r}, f(x)) \qquad (\hat{r}, f_1(x)) = bX + a$$

$$H_a = \hat{r},\; \Big(f_2 \coprod_{N\in I} f_N\Big)$$

Extrapolation range assuming f(0.08); the range value can be written as:

$$\int_{-\infty}^{\infty} \left[f_1 \coprod_{N\in I} f_N\right]_{[f_1 \ldots f_N]} - \int_{-\infty}^{\infty} f_1$$

Finally, for a complete non-null hypothesis (H1), we need to create variations to amalgamate
into the range.

$$f_b = \hat{r}x\cdot\sin(1 - x) \qquad f_c = \hat{r}x\cdot\sin^{-1}(1 - x)$$

$$f_d = \hat{r}x\cdot\log(1 - x) \qquad f_e = \hat{r}x(1 - x)^2$$

$$f_f = \hat{r}x(1 - x),\;\lim_{x\to -1} f(x) = 1 \qquad f_g = \hat{r}x(1 - x)$$

$$H_1 = \Big[H_a \amalg H_b \amalg H_c \amalg H_d \amalg H_e \amalg H_f \amalg H_g\Big]_{\cdot\,\in I}$$
Ψφ Psychology
………………………………………………………………………………………………………………..

PART I ⸺Neurophysiology

Statistical Notes on Dualism and Monism in Psychology


Introduction to Dualism and Monism:
● Dualism posits that the mind and body are distinct entities.
● Monism asserts that all mental phenomena arise from physical processes in the body.
Dualism (Descartes):
● Descartes advocated for dualism, believing the mind to be non-material and separate from the
physical body.
● He argued that the mind controls the body.
Monism:
● Monism proposes that all mental experiences stem from physiological processes in the body.
● It suggests that there is no separate entity like the mind or soul apart from the body.
Contemporary Psychological Perspective:
● Most psychologists today align with monism rather than dualism.
● Dualism persists in discussion due to the intuitive sense of separation between mind and body.
● However, the mind and conscious state are evidently linked solely to physiological processes.
● Descartes' brilliance was no match for the folly of intuition.
Critique of Intuition:
● Descartes's dualism is discussed despite being refuted, highlighting the fallibility of intuition in
understanding the mind.
● Emphasis is placed on the need for scientific measurements and observations in psychological
inquiry.

Neuron Anatomy:
The physiology of neurons involves understanding the structure and function of these specialized
cells that form the basic units of the nervous system. Neurons are responsible for transmitting
electrical and chemical signals throughout the body, facilitating communication between different
parts of the nervous system and enabling various bodily functions, including perception, movement,
and cognition. Anatomy is as follows:

[Σ] SOMA — Main bulbous body of the neuron, houses nucleus and genetic material, as well as:
[i] Smooth ER (endoplasmic reticulum) — Metabolizes compounds, typically lipids (but
can also metabolize certain drugs), and serves as a Ca²⁺ ion storage and release vector,
which can trigger neurotransmitter release.
[ii] Mitochondrion — Synthesizes Adenosine Triphosphate from Adenosine Diphosphate
and inorganic phosphate using energy released from electron transport to drive the
reaction. Also drives Redox Homeostasis (prevents oxidation).
[iii] Microtubules — Microtubules serve as tracks for the movement of various cellular
cargo, including vesicles, proteins, and organelles, along the length of neuronal processes.
This intracellular transport, known as axonal transport, is crucial for delivering essential
materials to distant regions of the neuron, such as synaptic terminals.
[iv] Membrane — In neurons, the term "membrane" typically refers to the lipid bilayer
that surrounds the cell and its various compartments.
[v] Nucleolus — Synthesizes rRNA (Ribosomal RNA)
[vi] Golgi Apparatus — Modifies, sorts, and packages proteins and lipids used in
neurotransmitter production and secretion.
[vii] Ribosomes & Polyribosomes — Protein Synthesis and Processing
[viii] Rough ER — Synthesizes proteins and lipids destined for the cell membrane.

[Δ] DENDRITE — Dendrites are specialized protrusions or branches extending from the cell
body (soma) of a neuron. They serve as the primary site for receiving incoming signals from
other neurons or sensory receptors and are crucial for integrating and processing this information
within the neuron. Dendrites play a fundamental role in neuronal communication and information
processing in the nervous system.

[A] AXON — The axon is a long, slender projection extending from the cell body (soma) of a
neuron, specialized for conducting electrical impulses, known as action potentials, away from the
cell body toward other neurons, muscles, or glands. Axons are essential for transmitting
information within the nervous system and are crucial for neuronal communication and function.
Axons are made up of the following parts:
[i] Axon Hillock — The axon hillock is the region of the neuron located between the cell
body and the beginning of the axon. It serves as the site of action potential initiation,
where electrical signals generated by synaptic inputs are integrated. The axon hillock
contains a high density of voltage-gated sodium channels, which are responsible for
initiating and propagating action potentials.
[ii] Node of Ranvier — The Node of Ranvier is crucial for the propagation of action
potentials along myelinated axons, a process known as saltatory conduction. Action
potentials are brief electrical impulses that travel along the length of the axon, allowing
neurons to communicate with one another. At the Node of Ranvier, the axon membrane is
not covered by myelin, allowing for the rapid exchange of ions between the extracellular
and intracellular environments. This regenerates the depolarizing (+) charge along the axon,
allowing the action potential to propagate efficiently.
[iii] Axon terminals, also known as synaptic terminals or boutons, are specialized
structures found at the ends of axons in neurons. They play a crucial role in transmitting
signals, known as action potentials, from one neuron to another or to target cells such as
muscles or glands. Axon terminals are the key sites of synaptic transmission, where
communication between neurons occurs at synapses.

[M] MYELIN SHEATH — The myelin sheath is a protective and insulating layer that surrounds
the axons of many neurons in the nervous system. One of the primary functions of the myelin
sheath is to insulate the axon, preventing leakage of electrical currents and allowing for more
efficient propagation of action potentials. The myelin sheath enables a process called saltatory
conduction, in which action potentials "jump" from one node of Ranvier to the next along the
length of the myelinated axon. At the nodes of Ranvier, where the axon membrane is exposed,
action potentials are regenerated, allowing for rapid and efficient propagation of electrical
impulses. Inside the axon, which the myelin sheath covers, are the following:
[i] Microfilaments are involved in axonal transport, the process by which cellular
components, organelles, and molecules are transported along the length of the axon.
Microfilaments play a key role in regulating neuronal morphology and cytoskeletal
dynamics. Actin filaments undergo dynamic assembly and disassembly processes,
allowing neurons to undergo structural changes in response to developmental cues,
environmental signals, and synaptic activity.
[ii] Microtubules are essential for axonal transport, the process by which cellular
components, organelles, and molecules are transported along the length of the axon.
Molecular motors, such as kinesins and dyneins, move cargo vesicles and other cellular
materials along microtubule tracks.

[Ψ] SYNAPSE — Synapses are specialized junctions that allow neurons to communicate with
each other and with other cells, such as muscles or glands, in the nervous system. Synapses play
a fundamental role in transmitting information, integrating signals, and coordinating the activity
of neurons within neural circuits. They are the basic building blocks of neuronal communication
and underlie all aspects of brain function and behavior. Synapses can be classified into two main
types based on their mode of neurotransmission: chemical synapses and electrical synapses.
Chemical synapses are the most common type of synapse and involve the release and diffusion of
neurotransmitter molecules across the synaptic cleft. Electrical synapses, also known as gap
junctions, involve direct electrical coupling between the presynaptic and postsynaptic neurons via
gap junction channels, allowing for rapid and synchronized transmission of electrical signals.
Synapses are found in 3 locations on a neuron:
[i] Axodendritic (Axon terminal connected to Dendrite or another Neuron).
[ii] Axosomatic (Axon terminal connected to Soma of another Neuron), which generally
carry out an inhibitory response by hyperpolarizing the affected neuron, stopping the
received action potential from propagating.
[iii] Axoaxonic (Axon terminal connected to Axon of another Neuron).
Neuron Types:
I. Bipolar neurons are characterized by having two distinct processes or extensions emanating
from the cell body: one dendrite and one axon. These neurons are commonly found in specialized
sensory organs such as the retina of the eye, the olfactory epithelium in the nose, and the inner
ear.

II. Unipolar neurons have a single process extending from the cell body, which later
branches into two distinct processes: one that functions as a dendrite and another
as an axon. These neurons are primarily found in the peripheral nervous system (PNS),
especially in sensory ganglia such as the dorsal root ganglia. Unipolar neurons transmit sensory
information from peripheral sensory receptors, such as those for touch, temperature, and pain, to
the central nervous system (CNS).

III. Multipolar neurons (see anatomy, top left) possess multiple dendrites and a single axon emerging
from the cell body. They are the most common type of neuron in the central nervous system
(CNS), where they serve various functions, including sensory processing, motor control, and
interneuronal communication.

IV. Pyramidal neurons are a specific type of multipolar neuron found predominantly in the
cerebral cortex, particularly in regions such as the neocortex. They are named for their
characteristic pyramid-shaped cell bodies and are known for their extensive dendritic
arborization and long axons. Pyramidal neurons have a distinctive triangular or pyramid-shaped
cell body, with a single dendrite extending upward toward the brain's surface and
multiple basal dendrites radiating outward horizontally, with one single axon.

V. Purkinje neurons, also known as Purkinje cells, are a unique type of neuron found in the
cerebellum. These neurons have a distinctive morphology characterized by a large, flask-shaped
cell body with multiple branching dendrites that extend horizontally in a plane parallel to the
surface of the cerebellum. The dendritic arborization of Purkinje neurons forms an elaborate
tree-like structure, with numerous dendritic spines protruding from the dendritic branches.
Neurotransmitters:
I. ACETYLCHOLINE (C7H16NO2+); Acetylcholine is involved in numerous functions, including
muscle contraction, autonomic nervous system regulation (e.g., heart rate, digestion), attention,
arousal, learning, and memory. It is the primary neurotransmitter released by motor neurons at the
neuromuscular junction, where it stimulates muscle contraction. Decomposed by Cholinesterase.

II. DOPAMINE (C8H11NO2); Dopamine plays essential roles in reward and motivation, movement
control, mood regulation, attention, learning, and reinforcement. It is involved in the regulation of
various brain circuits implicated in reward-seeking behavior, pleasure, and addiction.
Dysregulation of dopamine signaling is associated with disorders such as Parkinson's disease and
schizophrenia. Decomposed by Catechol-O-methyltransferase and Monoamine oxidase A.

III. SEROTONIN (C10H12N2O); Serotonin modulates mood, emotion, sleep-wake cycles, appetite,
aggression, and pain perception. It is involved in regulating mood states, anxiety, stress response,
and social behavior. Alterations in serotonin levels or signaling are associated with mood
disorders such as depression and anxiety disorders. Decomposed by Monoamine oxidase A.

IV. Γ-AMINOBUTYRIC ACID (C4H9NO2); GABA is the primary inhibitory neurotransmitter in the
central nervous system (CNS). It regulates neuronal excitability, inhibits excessive neuronal firing,
and plays a crucial role in maintaining the balance between excitation and inhibition in neural
circuits. GABAergic dysfunction is implicated in conditions such as anxiety disorders, epilepsy,
and sleep disorders. Decomposed by GABA-Transaminase.

V. GLUTAMATE (C5H9NO4); Glutamate is the primary excitatory neurotransmitter in the CNS and
plays fundamental roles in synaptic transmission, learning, memory, and synaptic plasticity. It
activates glutamatergic receptors, including NMDA receptors and AMPA receptors, and is
involved in the modulation of neuronal excitability and synaptic strength. Decomposed by
Glutamate Dehydrogenase 1, Glutamate Dehydrogenase 2, Glutamate-ammonia ligase, and
Glutaminase.

VI. NOREPINEPHRINE (C8H11NO3) // EPINEPHRINE (C9H13NO3); NE and epinephrine are


catecholamine neurotransmitters that play roles in arousal, attention, stress response, mood
regulation, and cardiovascular function. They are involved in the "fight or flight" response and
regulate physiological responses to stress, such as increased heart rate and blood pressure.
Decomposed by Monoamine oxidase A.

VII. ENDORPHIN (α: C77H120N18O26S, β: C158H251N39O46S, γ: C83H131N19O27S) //
ENKEPHALIN (C28H37N5O7); Endorphins and enkephalins are endogenous opioid
neurotransmitters that regulate pain perception, mood, and reward. They act as natural painkillers
and are involved in mediating the analgesic effects of stress, exercise, and certain drugs.
Decomposed by aminopeptidase N and neutral endopeptidase.

*Inhibiting decomposing enzymes will lengthen and strengthen neurotransmitter effects (i.e. Monoamine
oxidase inhibitors [MAOI] allow serotonin and dopamine to rebind to receptor gated sodium channels on
postsynaptic membranes, allowing neurons stimulated by serotonin and dopamine to be activated more
frequently than normal).
Gross Neuroanatomy:

I. The FRONTAL LOBE is involved in higher cognitive functions, executive control, motor
planning, decision-making, reasoning, problem-solving, attention, and social behavior. It houses
the primary motor cortex, which controls voluntary movements, and the prefrontal cortex, which
plays a critical role in complex cognitive processes, personality, emotional regulation, and social
behavior.

II. The PARIETAL LOBE processes sensory information from the body, including touch,
temperature, pain, and proprioception (awareness of body position). It contains the primary
somatosensory cortex, which receives and interprets tactile sensations from the skin, muscles, and
joints. The parietal lobe is also involved in spatial perception, spatial awareness, navigation, and
attention to stimuli in the environment.

III. The TEMPORAL LOBE is primarily associated with auditory processing, language
comprehension, memory consolidation, and visual processing. It contains the primary auditory
cortex, which receives and processes auditory information from the ears, as well as regions
involved in language comprehension (Wernicke's area) and memory formation (hippocampus).
The temporal lobe is also involved in visual recognition and object perception.

IV. The OCCIPITAL LOBE is dedicated to visual processing and interpretation. It contains the
primary visual cortex (striate cortex), which receives visual information from the eyes via the
optic nerves and processes visual stimuli such as shapes, colors, and motion. The occipital lobe is
involved in visual perception, object recognition, spatial processing, and visual memory.

V. The INSULAR LOBE is located deep within the lateral sulcus and is involved in various
functions, including gustation (taste perception), visceral sensations, emotional processing,
empathy, and autonomic regulation. It plays roles in interoceptive awareness, emotional
awareness, and the integration of sensory and emotional information.
PART II⸺Behavior & Conditioning

Human behavioral biology is a field of study that examines how biological processes, including
genetics, neurobiology, and endocrinology, influence human behavior. It seeks to understand how
biological factors interact with environmental and social factors to shape the way individuals think, feel,
and act. Biological processes occurring in our bodies can indeed affect our behavior in various ways. For
example, hormones such as cortisol and adrenaline, which are released in response to stress, can influence
our mood, decision-making, and social interactions. Similarly, neurotransmitters like serotonin and
dopamine play crucial roles in regulating emotions and motivation, impacting how we perceive and
respond to the world around us. Chronic pain and other medical conditions can also directly affect the nervous
system. Conversely, what's happening in our brains can have profound effects on our bodies. The
brain controls and coordinates many bodily functions, including heart rate, digestion, and immune
response, through complex neural networks and communication pathways. Changes in brain activity,
whether due to external stimuli or internal processes, can trigger physiological responses that manifest as
changes in behavior. The interconnectedness between biological processes and behavior highlights the
intricate relationship between the body and mind. These connections emphasize the importance of
considering both biological and psychological factors when studying human behavior and developing
interventions for mental health issues or behavioral disorders. Understanding how these systems interact
can provide valuable insights into human nature and inform approaches to promoting well-being and
addressing health challenges.

Behavioral evolution — in both animals and humans — is the process by which behaviors change
over time in response to environmental pressures, including competition for resources, predation, and
mate selection. In animals, behaviors evolve through mechanisms such as natural selection, where
individuals with advantageous behaviors are more likely to survive and reproduce, passing those
behaviors on to future generations. Similarly, in humans, behaviors can evolve through cultural evolution,
where learned behaviors are transmitted socially and may confer reproductive or survival advantages.
(Brev. Humans exhibit natural selection and elimination through both natural and constructed competitive
mechanisms). This cultural transmission can lead to rapid changes in behavior within human populations,
often independent of genetic evolution.

Analysis of behavior is often similar to game theory in that it involves studying how individuals
make decisions in strategic situations where the outcome depends not only on their actions but also on the
actions of others. Game theory provides a framework for understanding how individuals behave when
faced with choices that involve trade-offs between competing interests. Similarly, behavioral analysis
seeks to understand the underlying motivations and strategies driving individual and collective behaviors
in various contexts. The formation of games, whether in the context of animal behavior or human
interactions, can be seen as a microscale application of the same techniques used to design large social
systems. Games involve a set of rules and interactions between players, each seeking to achieve their own
objectives. Similarly, social systems, such as economies, governments, and institutions, are composed of
individuals or groups interacting within a framework of rules and incentives. By studying games and their
dynamics, researchers can gain insights into how larger social systems function and how they might be
designed or optimized to achieve specific goals or outcomes.
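
To make the game-theoretic framing concrete, the following is a minimal sketch of a two-player game as a
payoff matrix. The payoff numbers are illustrative assumptions (a standard Prisoner's Dilemma setup), not
values drawn from any study:

    # Hypothetical payoffs: (row player's payoff, column player's payoff)
    PAYOFFS = {
        ("cooperate", "cooperate"): (3, 3),
        ("cooperate", "defect"): (0, 5),
        ("defect", "cooperate"): (5, 0),
        ("defect", "defect"): (1, 1),
    }

    def best_response(opponent_move):
        # The row player's payoff-maximizing reply to a fixed opponent move.
        return max(("cooperate", "defect"),
                   key=lambda mine: PAYOFFS[(mine, opponent_move)][0])

    print(best_response("cooperate"))  # -> defect
    print(best_response("defect"))     # -> defect

Under these assumed numbers, defection maximizes the row player's payoff against either opponent move,
even though mutual cooperation would leave both players better off; this is the kind of trade-off between
competing interests described above.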
The following are types of evolutionary selection relevant to the development of social systems
and behaviors:

[I] Individual selection is a fundamental concept in evolutionary biology. It suggests that
individuals possessing genes that confer traits advantageous for survival and reproduction are
more likely to pass on those genes to the next generation. Essentially, natural selection acts on the
level of the individual organism, favoring traits that increase an individual's fitness—the ability to
survive and reproduce in a given environment. Over time, traits that enhance an individual's
reproductive success become more prevalent in the population.

[II] Kin selection expands upon the concept of individual selection by considering the role of
genetic relatedness in altruistic behavior. It posits that individuals may altruistically help their
relatives reproduce, even at the expense of their own reproduction, because they share genetic
similarities with their relatives. This behavior can be explained by the notion of inclusive fitness,
which includes both an individual's own reproductive success and the reproductive success of
relatives who share copies of the same genes. By assisting relatives in reproducing, individuals
increase the likelihood that their shared genes will be passed on to future generations.

[III] Reciprocal altruism is a form of cooperation observed in social animals where individuals
help others with the expectation of receiving help in return at some future time. Unlike kin
selection, which relies on genetic relatedness, reciprocal altruism is based on the expectation of
future benefits. Individuals engage in reciprocal altruism when they recognize that cooperating
with others can lead to mutual gains over time. This concept is often illustrated through repeated
interactions among individuals, where trust and cooperation can develop as a result of reciprocal
exchanges. Reciprocal altruism can be advantageous in environments where individuals
encounter the same individuals repeatedly and have the opportunity to remember past interactions
and adjust their behavior accordingly.
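
One rough way to see why reciprocal altruism can be favored is to iterate the same game. The sketch below
(reusing the same assumed payoffs as the earlier matrix) pits a tit-for-tat reciprocator, which cooperates
first and then mirrors its partner's last move, against an unconditional defector:

    PAYOFFS = {("cooperate", "cooperate"): (3, 3), ("cooperate", "defect"): (0, 5),
               ("defect", "cooperate"): (5, 0), ("defect", "defect"): (1, 1)}

    def tit_for_tat(partner_history):
        # Cooperate first, then mirror the partner's previous move.
        return "cooperate" if not partner_history else partner_history[-1]

    def always_defect(partner_history):
        return "defect"

    def play(strat_a, strat_b, rounds=10):
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            a, b = strat_a(hist_b), strat_b(hist_a)  # each sees the other's past moves
            pay_a, pay_b = PAYOFFS[(a, b)]
            score_a, score_b = score_a + pay_a, score_b + pay_b
            hist_a.append(a)
            hist_b.append(b)
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))      # (30, 30): mutual cooperation
    print(play(always_defect, always_defect))  # (10, 10): mutual defection
    print(play(tit_for_tat, always_defect))    # (9, 14): defection pays once, then stalls

Pairs of reciprocators outscore pairs of defectors over repeated encounters, which matches the requirement
noted above that individuals meet repeatedly and remember past interactions.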

Punctuated equilibrium is a theory in evolutionary biology proposed by paleontologists Stephen Jay
Gould and Niles Eldredge in the 1970s. It suggests that evolutionary change occurs in relatively brief
periods of rapid change (punctuation), separated by longer periods of stasis (equilibrium). In contrast to
the traditional view of evolution as a slow and gradual process, punctuated equilibrium proposes that
species often remain stable for long periods without significant evolutionary change. During these periods
of stasis, species may undergo minor adaptations or variations, but overall, they exhibit relatively little
morphological or genetic change. According to punctuated equilibrium, major evolutionary changes occur
relatively rapidly and are associated with events such as environmental upheavals, habitat changes, or the
colonization of new ecological niches. These rapid bursts of evolutionary change can lead to the
emergence of new species or the evolution of novel traits within existing species. In respect to behavior,
this theory constitutes that behavior moves through periods of punctuation and equilibrium, that is
brought about based on the volatility of an individual's circumstance. Additionally, the emergence of new
behavior, thought, and thought patterns in the brain is caused by an internalized process of natural
selection directed and regulated by the association between existing thought patterns and corresponding
negative outcomes. (ex. Say there is an optimist who is approached by a close friend about a new
business the friend plans to build. The friend asks the optimist to invest in the startup, and the
optimist does, putting in a significant amount of money despite others cautioning that it would be
best to err on the side of caution. The friend's business eventually collapses, and the
optimist loses all the money put in, say about 75% of the optimist’s savings. Due to the immense
variability brought on by a large investment proposition, and the prolonged negative circumstance caused
by the action to invest, the optimist may be inclined to adopt the behavioral archetype of a pessimist as a
response, to prevent a future loss like this from occurring. Even if there were other factors beyond optimistic
and trusting behavior that contributed to the losses, the optimist will be inclined to remove/replace
whatever behavior is immediately evident to have been responsible for the loss. Conversely, if the
investment had been a success, the optimist is inclined to become more optimistic, even if ultimately, the
risk taken by the decision was unchanged. And theoretically, if the optimist had absolute certainty of
failure or success after the investment was made, there would be less behavioral change when the
loss/profit occurred). In essence, the elimination and selection of behaviors is based on confirmation
under uncertainty, rather than solely on punishment and reward. Punctuated equilibrium can be measured in
dopamine release during exposure to events that the participant considers unpredictable.
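
The uncertainty-scaled behavioral change described here can be loosely formalized, in the spirit of
associability-gated learning models (e.g., Pearce-Hall), as an update whose size grows with surprise. The
following is a speculative sketch of that reading, with all parameter values invented for illustration:

    def update(value, outcome, expected, rate=0.3):
        # Change scales with surprise: unpredicted outcomes punctuate the equilibrium.
        surprise = abs(outcome - expected)
        return value + rate * surprise * (outcome - value)

    optimism = 0.8  # hypothetical prior trust in risky investments
    print(update(optimism, outcome=1.0, expected=1.0))  # 0.8: fully expected, no change
    print(update(optimism, outcome=0.0, expected=1.0))  # ~0.56: surprising loss, sharp drop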

[Brev.; Behavior appears and fades in relation to the punishment/reward dynamic of operant conditioning,
mimicry of classical conditioning, and the certainty/uncertainty dynamic presented through Punctuated
Equilibrium. Human behavior, on both the societal and individual level, functions as a microcosm of
natural selection in relation to what is advantageous for survival, for those we consider kin, and for those
who reciprocate altruism.]

Behavior and behavioral conditioning are biopsychosocial phenomena, influenced and dictated by
both macro and micro external mechanisms, as discussed previously, but also influenced heavily by the
internal process of the psyche. The mechanisms of the psyche can be broken down as follows:

(i) Conscious: Mental processing we are aware of.
(ii) Subconscious: Mental processing we are unaware of but that can be accessed when required
(short/long-term memory).
(iii) Unconscious: Mental processing that we can never access.
(iv) Collective Unconscious: Certain mental processing we can never access that is shared by a
macro-collective: all humans, all primates, or even all mammalian species.

I. OUTER WORLD refers to the external reality or environment in which an individual exists. It
contrasts with the "inner world," which consists of the individual's thoughts, feelings, memories,
and unconscious processes. The outer world provides the raw material upon which the psyche
operates and interacts. It serves as a mirror reflecting aspects of the individual's inner world, such
as archetypes, symbols, and unconscious contents.

II. PERSONA (Conscious) refers to the social mask or role that an individual presents to the outside
world. It represents the public face or image that an individual constructs to interact with society,
fulfill social expectations, and adapt to various roles and situations. The persona is derived from
the Latin word for "mask," emphasizing its function as a facade or disguise. The persona is not
fixed or static but rather dynamic and flexible, adapting to different social roles, environments,
and circumstances. Individuals may develop multiple personas, each tailored to specific social
contexts or roles, such as the professional persona at work, the parental persona at home, or the
social persona in social settings.

III. EGO (Conscious) is one of the fundamental components of the psyche, representing the
conscious mind or the center of an individual's awareness and identity. Unlike Freud's
conceptualization of the ego as primarily concerned with mediating between the demands of the
id and the superego, Jung's understanding of the ego is broader and includes the totality of
conscious experience. The ego is responsible for organizing perceptions, thoughts, feelings, and
memories into a coherent sense of self, as well as for maintaining a sense of continuity and
identity over time. It acts as the focal point of conscious awareness, enabling individuals to
interact with the external world and make decisions based on rational thought and personal
values.

IV. COMPLEX (Personal Unconscious) refers to a cluster of emotionally charged thoughts, feelings,
memories, and perceptions that are organized around a common theme or pattern. Complexes are
formed through personal experiences, particularly those that are emotionally significant or
traumatic, and they can have a profound influence on an individual's thoughts, behaviors, and
relationships.

V. SHADOW (Collective Unconscious) represents the unconscious aspect of the personality that
contains repressed or suppressed qualities, traits, desires, and impulses that are deemed
unacceptable or incompatible with the conscious self-image. It is one of the most fundamental
and complex concepts in Jungian theory, playing a crucial role in the process of
individuation—the journey toward wholeness and self-realization. The shadow is formed through
the process of socialization and the internalization of cultural norms, values, and expectations.
From an early age, individuals learn to suppress certain aspects of themselves that are considered
undesirable or unacceptable by society or their own ego ideal. These may include qualities such
as aggression, selfishness, sexuality, vulnerability, or creativity, depending on cultural and
personal influences.

VI. ARCHETYPE (Collective Unconscious) is a fundamental and universal symbol or motif that is
inherited from humanity's collective unconscious. Archetypes are deeply rooted in the human
psyche and manifest in myths, legends, fairy tales, religious symbols, dreams, and cultural
narratives across different cultures and time periods. Carl Jung proposed the concept of
archetypes as part of his theory of the collective unconscious—the deeper layer of the psyche
shared by all human beings. Archetypes are symbolic patterns or prototypes that represent basic
human experiences, emotions, motivations, and themes. They are often associated with
fundamental aspects of the human condition, such as birth, death, love, power, and
transformation. Archetypes embody universal truths and patterns of human behavior that
transcend individual differences and cultural boundaries.

VII. SOUL IMAGE (Collective Unconscious) refers to a symbolic representation of the self or psyche
that reflects the deepest and most essential aspects of an individual's identity. Also known as the
"imago" or "anima/animus," the soul image embodies the totality of the unconscious psyche,
encompassing both the conscious ego and the unconscious aspects of the self. The soul image is a
complex and multifaceted archetype that manifests in various forms and symbols within dreams,
fantasies, myths, and cultural motifs. In its masculine form, known as the animus, it represents the
masculine qualities and potentials within the psyche of a woman. In its feminine form, known as
the anima, it represents the feminine qualities and potentials within the psyche of a man. Jung
believed that the soul image serves as a guide and mediator between the conscious ego and the
unconscious depths of the psyche. It acts as a bridge between the conscious and unconscious
realms, facilitating communication, integration, and transformation. Through engagement with
the soul image, individuals can access deeper layers of the psyche, gain insight into their
innermost desires and fears, and embark on the journey of individuation—the process of
becoming whole and realizing one's full potential. The soul image usually manifests as an exact
version of your physical self.

VIII. INNER WORLD refers to the realm of the psyche that lies beyond conscious awareness. It
encompasses the vast landscape of thoughts, feelings, memories, fantasies, dreams, and
unconscious processes that shape an individual's subjective experience of reality. The inner world
is contrasted with the "outer world," which consists of the external environment and social
interactions. Carl Jung proposed the concept of the inner world as part of his theory of the psyche,
which includes both conscious and unconscious aspects. According to Jung, the inner world is
populated by various psychic contents, including archetypes, complexes, symbols, and
unconscious dynamics, which influence conscious thoughts, behaviors, and perceptions. One of
the central tenets of Jungian psychology is the idea of the collective unconscious, a deeper layer
of the psyche shared by all human beings. The collective unconscious contains
archetypes—universal symbols and motifs that represent fundamental aspects of the human
experience. These archetypal images and patterns manifest in dreams, myths, fairy tales, and
cultural narratives, reflecting deeper psychological truths and dynamics. The inner world is also
shaped by personal experiences, memories, and unconscious complexes—clusters of emotionally
charged thoughts, feelings, and memories organized around a common theme or pattern. These
personal unconscious contents are often repressed or forgotten but continue to influence
conscious awareness through dreams, fantasies, and symbolic expressions.
IX. SELF (Conscious, Unconscious, Collective Unconscious) represents the totality of the
psyche—the conscious and unconscious aspects of the individual's personality, identity, and
experience. It is the central organizing principle of the psyche, serving as a unifying force that
integrates conscious awareness with the deeper layers of the unconscious. The self is distinct from
the ego, which represents the conscious mind and the center of an individual's awareness and
identity. While the ego is concerned with everyday tasks, perceptions, and interactions with the
external world, the self encompasses a broader and deeper sense of identity that includes both
conscious and unconscious aspects of the psyche. One of the key features of the self is its
symbolic representation as the "archetypal image of wholeness." According to Jung, the self
manifests as a symbolic idea or motif that represents the individual's potential for integration,
balance, and completeness. It embodies the union of opposites—the conscious and unconscious,
the masculine and feminine, the light and dark—as well as the synthesis of various psychological
elements and functions.

[The following is a transcript of Dr. Ana Yudin’s lecture on the Jungian Framework of Repression]

“I want to talk today about The Lies we tell ourselves about who we are. Specifically in today's
chat I'll be taking a Jungian perspective; meaning inspired by the teachings of Carl Jung, a Swiss
psychiatrist who broke off from Freud back in the day. To give you a simplified version of how Jung saw
the structure of the mind: like Freud he believed that the psyche was split up into conscious and
unconscious, and that over the course of socialization (as we grew up and matured) our ego developed,
which started to shun aspects of the self that weren't societally or familially accepted, things that
weren't socially acceptable. And as the ego started shunning those things into the unconscious, it also
started forming something called The Persona. The Mask we wear publicly. Who we want to show people
that we are. And then the very opposite of that Persona, everything that is ‘ego-dystonic’ meaning
everything that our ego doesn't like about us, gets personified into something called ‘The Shadow Self’.
So you have, consciously, the Persona and the ego working very hard to keep you ‘ego-syntonic’. And
you have under that, its very opposite, the Shadow Self, the unconscious Realm of the psyche. The Self,
according to a Jungian perspective, is the totality of all of it. It is the Persona and the Shadow, it's the
conscious and the unconscious, the ‘ego-syntonic’ and the ‘ego-dystonic’. The self is wholeness, the self
is integration. It's not just what we want to be, it's also everything we want very hard not to be! It's both
Shadow and
light.

Think about it this way: let's imagine that you're taking some sort of personality test, and it's
trying to test a specific trait: like whether you're more quiet or loud. So you have on one end if you're
100% quiet and you have on the other end if you're 100% loud, and you may like to think of yourself as
somewhere on that linear Spectrum, you might like to think that you are 82% quiet, 18% loud. What you
don't understand is that you are the entire Spectrum. You are both quiet and loud. That is the self, that is
the totality of you: because you are also sometimes what you wish not to be. And everything that you
wish not to be gets personified inside you, in the unconscious. You are the whole Spectrum; not a point on
it. So keeping this in mind, what are the lies that we tell ourselves about our identities?
One is that, when they're children, we start to tell kids, "You're so this, you're so that," and they start to
become these self-fulfilling prophecies. We are, in part, shaping who the children become and who their
shadows become. Let's say, for instance, that you're a mother and you have a kid who is very talented at
writing. You tell them, "Hey, you're so good at writing; you're awesome at that, little guy." The child is
going to internalize that and keep telling themselves that over the course of their life. They're going to
start to identify with being a good writer. As time goes on, if sometimes they brush up against a difficult
assignment in school where they don't do so well or encounter an English teacher that doesn't really
resonate with them, what do you think is going to happen? The shadow self gets triggered. The part of
them that lingers in the dark, everything that they wish not to be, everything that they think that they're
not, rears its head. When that happens, it's very difficult to have self-compassion because the ego wants to
be completely aligned with its values. The ego doesn't want to be the exact opposite of its values. So, we
have to be very careful, especially when speaking with children, because they're like sponges; they absorb
everything. We have to be very mindful of the things we tell them about who they are because those
stories can become very limiting. They need to be able to choose—almost like a tabula rasa—they need to
be the ones to decide what they can and cannot do.
Another way that we tell lies is by telling stories about who we are. We like to think of ourselves
as holding certain clusters of traits. For instance, I like to think of myself as a tidy person, a person
attentive to detail, and a person who is eloquent. I don't think of myself as somebody particularly good at
math. I don't think of myself as a particularly extroverted person. In this day and age, we see so many of
these limiting narratives, particularly among young people who want very badly to believe that there is
something pathological with them, who want there to be something wrong with them psychologically or
sometimes even physically. They say, "No, I can't be organized because I have ADHD," or "No, I can't be
socially savvy because I have autism spectrum disorder," or "No, I can't turn in my homework because I
have POTS." The narratives we tell about ourselves are always limiting in some way, but some are a lot
more limiting than others. Now, no one's denying that you may have ADHD, ASD, or POTS. The issue is
not in the diagnosis; the issue is in how you identify yourself. When you identify yourself with the
conscious aspects of you, you're not seeing yourself fully; you're not seeing the totality of the self. You’re
not seeing that, yeah, maybe sometimes you struggle to be socially savvy, but other times you're quite
good at getting intimate with people. The stories that we tell about ourselves are sometimes overly
negative or overly positive. In this example, when they're negative and say, "No, I can't do this because
I'm this way," it's still the ego getting its needs met. The ego, on some level, wants to be reassured; it
wants you and the people around you to reassure it so that you can feel good about yourself. The self,
with a capital S, doesn't care about those things. The self with a capital S understands that sometimes it
struggles with those things and sometimes not, and neither of them defines the self.

Now, other times the ego tells an overly positive story. It tells us that we're the greatest person in
the world; we're so good at all these different traits. I mean, there's a reason why, in almost every single
field, people tend to believe that they are better than average at what they do, which mathematically just
doesn't add up. I mean, I said I'm not good at math, but even I understand that doesn't add up. When we
tell ourselves these idealized versions of who we are, when we over-identify with a person without
realizing that it's not really who we fully are, then, when we come into contact with somebody who
activates a part of our shadows, we can be very critical and judgmental and hard on them. We can start
pointing fingers and saying, "Well, you shouldn't do that because that's not according to my values." You
know what they say: when you point one finger, you've got three others pointing back at you. We don't
like to see aspects of ourselves that we've shunned, either in ourselves or in other people. When we see
them in other people, it reminds us that we have that capability within ourselves, and we instantly want to
distract from that. We instantly want to start pointing fingers and assigning blame. For example, let's say
that you always want to be a very kind person, and you notice someone expressing their anger in a very
blunt way. You might feel tempted in that moment to say, "Wow, I don't think they should do that; that
wasn't very nice of them," because really, what you're saying in that moment is, "I have that capability
inside me too. I sometimes overreact to things when I'm angry too, and I feel a lot of shame about that.
So, in order to repress that side of me, I'm going to shame the person in front of me." When in reality,
when we come into contact with someone who activates our shadows, what we should do instead is lean
into it and wonder, "Where is this judgment coming from? What part of my shadow is this activating?
What is it that I feel ashamed of now about myself?" Because if I were to fully do the shadow work and
integrate that into the totality of myself, I wouldn't be so shocked that this person is overreacting a little
bit in the way they’re expressing their anger. I would understand that sometimes it happens. There is
anger just as there is kindness. Look, nobody's perfect at this. Nobody is free of judgment or free of
criticizing other people who are falling short of their values. It's a constant process. You constantly have
to implement this as you go about your life in order to gain more insight about yourself and integrate the
unconscious along with the conscious.

Another lie that we tell ourselves is that we can be perfect, that the persona can somehow
fine-tune itself to be beyond reproach. The brighter the light shines, the darker the shadow is going to be,
right? You don't really see shadows on overcast days when it's not very bright. The brighter you try to
make your persona, the darker your shadow is going to be as well. So, if you try to idealize yourself or
burn yourself out with perfectionism, you're going to have a monster buried under your porch. The goal is
not self-esteem. The goal is not, "Let me try to be the best person that I could possibly be." The goal is
self-compassion towards both the persona and the shadow, acknowledging that both are part of who I am.

So, what is a healthier alternative to telling ourselves these lies about who we are? We can stop thinking
of ourselves in such limited terms, in terms of, "I have these traits, and I am not good at these things." We
can instead start to acknowledge a more integrated, whole version of ourselves. We can acknowledge our
shadows; we can get to know them a little bit. We can practice compassion for them, and we can try not to
label ourselves because labels are inherently limiting. Masks can be fun—you know, I love Halloween; I
love dressing up for the occasion. Masks are an opportunity to play around with different personas, and
that is in part healthy. You should experiment with different ways of being, different traits that you want
to tinker with, and you should also have an intact ego. If you don't have an intact ego, that is essentially
psychosis; you don't see the difference between yourself and the environment. We need an ego to
function. The goal is not ego death. The goal is merely self-compassion towards the full self and not
limiting ourselves. Play around with different masks, different personas, but don't treat them like chains
that have to keep us stuck in one place forever.

It's also healthier to have compassion for the totality of others—to not label them, to not say, "This is a
bad person," or "This person is cruel," and instead see that sometimes they're cruel, sometimes they're not.
Nobody is all of one thing. Everybody is one thing and its polar opposite at once. When you notice
yourself feeling judgment towards other people, lean into it. Lean into what it means about yourself, not
about that person. I think you'll find that a lot of our identities are essentially illusions created by our ego.
And again, that's not all bad; we need the ego to survive. It's healthy to have an ego; just don't let it get to
a point where it's starting to impact your life.”

(i.) Conditioning operandi


● Operant Conditioning
○ Vicarious Conditioning
○ Counterconditioning
● Classical Conditioning
○ Second Order Conditioning
● Cognitive Conditioning
● Imprinting

(ii.) Classical Conditioning


In the early part of the 20th century, Russian physiologist Ivan Pavlov (1849–1936) was studying
the digestive system of dogs when he noticed an interesting behavioral phenomenon: The dogs
began to salivate when the lab technicians who normally fed them entered the room, even though the dogs
had not yet received any food. Pavlov realized that the dogs were salivating because
they knew that they were about to be fed; the dogs had begun to associate the arrival of the
technicians with the food that soon followed their appearance in the room. With his team of researchers,
Pavlov began studying this process in more detail. He conducted a series of experiments in which, over a
number of trials, dogs were exposed to a sound immediately before receiving food. He systematically
controlled the onset of the sound and the timing of the delivery of the food, and recorded the amount of
the dogs’ salivation. Initially the dogs salivated only when they saw or smelled the food, but after several
pairings of the sound and the food, the dogs began to salivate as soon as they heard the sound. The
animals had learned to associate the sound with the food that followed.

Pavlov had identified a fundamental associative learning process called classical conditioning.
Classical conditioning refers to learning that occurs when a neutral stimulus (e.g., a tone) becomes
associated with a stimulus (e.g., food) that naturally produces a behavior. After the association is learned,
the previously neutral stimulus is sufficient to produce the behavior.
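
Pavlov's acquisition curve is commonly described with the Rescorla-Wagner rule, a later textbook model of
classical conditioning (not something Pavlov himself used). A minimal sketch, with illustrative parameter
values:

    def rescorla_wagner(trials=10, alpha_beta=0.3, lam=1.0):
        # V tracks how strongly the tone predicts food; each pairing closes a
        # fraction (alpha_beta) of the remaining prediction error (lam - V).
        v, curve = 0.0, []
        for _ in range(trials):
            v += alpha_beta * (lam - v)
            curve.append(round(v, 3))
        return curve

    print(rescorla_wagner())  # [0.3, 0.51, 0.657, ...] rising toward 1.0

Each pairing closes a fixed fraction of the remaining prediction error, reproducing the negatively
accelerated learning curve seen across repeated tone-food trials.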

(iii.) Operant Conditioning


In classical conditioning the subject learns to associate new stimuli with natural, biological
responses such as salivation or fear. The subject does not learn something new but rather begins to
perform an existing behavior in the presence of a new signal. Operant conditioning, on the other hand,
is learning that occurs based on the consequences of behavior and can involve the learning of new actions.
Operant conditioning occurs when a dog rolls over on command because it has been praised for doing so
in the past, when a schoolroom bully threatens his classmates because doing so allows him to get his way,
and when a child gets good grades because her parents threaten to punish her if she doesn’t. In operant
conditioning the organism learns from the consequences of its own actions.
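
Learning from the consequences of one's own actions can be sketched as a simple action-value update, a
minimal reinforcement-learning analogue of operant conditioning. The action names, reward values, and
rates below are illustrative assumptions:

    import random

    values = {"roll_over": 0.0, "ignore": 0.0}   # estimated value of each action
    REWARD = {"roll_over": 1.0, "ignore": 0.0}   # hypothetical: praise follows rolling over

    def choose():
        # Mostly exploit the best-valued action, occasionally explore.
        if random.random() < 0.1:
            return random.choice(list(values))
        return max(values, key=values.get)

    for _ in range(100):
        action = choose()
        # The consequence of the organism's own action updates its value estimate.
        values[action] += 0.1 * (REWARD[action] - values[action])

    print(values)  # "roll_over" dominates once praise is repeatedly delivered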
(iv.) Cognitive Conditioning
Cognitive conditioning refers to a process of learning that involves internal mental processes,
rather than just simple stimulus-response relationships seen in classical or operant conditioning. It
emphasizes how cognitive factors like thoughts, expectations, and perceptions play a crucial role in how
we learn and modify behavior. Unlike the more mechanistic views of classical and operant conditioning,
cognitive conditioning considers the learner as an active processor of information. Cognitive conditioning
incorporates the idea that learning involves forming schemas (mental frameworks) and categorizing new
information into these frameworks. Humans and animals learn by linking new experiences to pre-existing
knowledge, not just by forming new associations from scratch. Essentially: Cognitive conditioning is the
changing of associations and perceptions of reinforcement/punishment (i.e. a change of mindset).
Cognitive conditioning is essentially the development of a conditioned response due to creating internal
reinforcements and punishments, done with the intention of isolating the most beneficial behavior from a
set of possible conditioned behaviors. Rather than directly influencing a behavior, it is planting thoughts,
ideas, and conditioned responses to slowly tip the scales of cognition away from an undesired behavior
and toward a desired one. This is based on the idea that behaviors experience their own cycle of natural
selection (punctuated equilibrium), and that the subconscious selects ‘permissible’ behaviors based on
what is seen as most beneficial in the current environment.
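
Read computationally, and this is a loose, speculative sketch of the description above rather than an
established model, cognitive conditioning amounts to editing the internal payoffs attached to behaviors so
that ordinary value-based selection then tips toward the desired one. All names and numbers below are
invented for illustration:

    # Speculative sketch: a "mindset change" edits internal payoffs, and the
    # usual value-based selection then drifts toward the newly favored behavior.
    internal_reward = {"smoke_break": 0.8, "short_walk": 0.2}

    def preferred(rewards):
        return max(rewards, key=rewards.get)

    print(preferred(internal_reward))        # -> smoke_break

    # Cognitive reframing: attach internal punishment/reinforcement to thoughts
    # about each behavior, rather than manipulating external consequences.
    internal_reward["smoke_break"] -= 0.7    # vividly associate cost with smoking
    internal_reward["short_walk"] += 0.5     # rehearse the benefit of walking

    print(preferred(internal_reward))        # -> short_walk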

(v.) Imprinting
Imprinting is a type of rapid, early-life learning that occurs during a specific and often critical
period of development in certain animals, particularly birds and mammals. It is a form of attachment and
recognition learning that usually happens within a limited time window after birth or hatching, where the
young animal forms a strong bond with the first moving object it encounters, often a parent. Imprinting is
unique because it occurs quickly and tends to be irreversible once established. Imprinting occurs during a
specific, limited time frame, known as a critical period or sensitive period, shortly after birth or hatching.
This is a biologically determined phase when the young animal is most receptive to forming attachments.
In humans, the critical period for general behavioral (limbic) imprinting is 43-49 weeks after conception,
whereas the critical period for sexual imprinting is 10-13 years of age. The critical period for
psychosexual imprinting is generally closer to 10 or 11 years of age, due to the overlap of pubertal limbic
development and neuroplasticity.
(vi.) Vicarious Conditioning
Vicarious conditioning is a type of learning that occurs by observing the experiences of others,
rather than through direct personal experience. This concept is central to observational learning and social
learning theory, primarily developed by psychologist Albert Bandura. Vicarious conditioning happens
when an individual witnesses someone else being rewarded or punished for a behavior, leading them to
either adopt or avoid that behavior themselves. This is learning "by proxy" or "vicariously" through
others.

(vii.) Counterconditioning
Counterconditioning is a behavioral technique used to replace an unwanted or negative
conditioned response with a new, more desirable response by pairing the trigger for the undesirable
behavior with a positive stimulus. This method is particularly useful in treating fears, phobias, and anxiety
disorders by associating the anxiety-inducing stimulus with a more relaxed or pleasant experience,
effectively counteracting the original conditioned response.

(viii.) Second Order Conditioning


Second-order conditioning (also called higher-order conditioning) is a form of classical
conditioning in which a previously conditioned stimulus (CS) is used as a basis for learning a new
association with another neutral stimulus. Essentially, a second neutral stimulus is paired with the
conditioned stimulus to elicit the same conditioned response (CR) without the need for the original
unconditioned stimulus (US). This process allows a chain of associations to form, where new stimuli can
become conditioned even without directly being associated with the unconditioned stimulus, extending
the influence of classical conditioning to more complex behaviors.

[i.e. Pavlov walking over to the bell to ring it prompts a salivation response; the subject (dog)
has recognized a secondary pattern: approaching the bell means ringing the bell, which means access
to food].
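
The chaining can be sketched with the same error-correcting update used in the classical-conditioning
example: in phase two, the bell's acquired strength stands in for the unconditioned stimulus. Parameter
values are assumptions:

    def condition(initial, reinforcer_strength, trials=8, rate=0.3):
        # Same error-correcting update as the classical-conditioning sketch.
        v = initial
        for _ in range(trials):
            v += rate * (reinforcer_strength - v)
        return v

    v_bell = condition(0.0, 1.0)         # phase 1: bell paired with food
    v_approach = condition(0.0, v_bell)  # phase 2: approaching paired with the bell only
    print(round(v_bell, 2), round(v_approach, 2))  # 0.94 0.89: the chain transfers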

(ix.) Resistance
Resistance to conditioning refers to the difficulty or failure of an individual (or animal) to acquire
a new conditioned response through conditioning. Despite repeated attempts to establish an association
between a stimulus and a response, some individuals may show limited or no change in behavior. This
resistance can occur for a variety of reasons, including biological, cognitive, emotional, and
environmental factors. If an individual has been exposed to a neutral stimulus multiple times without any
significant outcome (i.e., the stimulus is not followed by reinforcement or punishment), they may develop
a resistance to associating that stimulus with a new response. This is known as latent inhibition. The more
familiar the stimulus, the harder it is to form new associations with it. Some behaviors and stimuli are
more easily conditioned than others due to evolutionary factors. Organisms are biologically predisposed
to learn certain associations more readily because these associations have survival value. In contrast to
biological preparedness, contrary biological predispositions (or contra-preparedness) can make
conditioning difficult. Certain stimuli and responses may go against an organism’s natural instincts,
making conditioning harder.
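
Latent inhibition is often modeled as reduced associability (a smaller effective learning rate) for a
familiar, pre-exposed stimulus. A toy sketch under that assumption, with invented parameter values:

    def acquire(alpha, trials=10, lam=1.0, beta=0.3):
        # Lower associability (alpha) models a pre-exposed, familiar stimulus.
        v = 0.0
        for _ in range(trials):
            v += alpha * beta * (lam - v)
        return round(v, 3)

    print(acquire(alpha=1.0))  # 0.972: novel stimulus conditions quickly
    print(acquire(alpha=0.2))  # 0.461: pre-exposure (latent inhibition) slows learning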
PART III⸺Logic and Ethics

The Philosophy of Logic


The branch of philosophy known as philosophy of logic examines the nature and applications of
logic. It looks into the philosophical issues that logic raises, such as the assumptions frequently
implicit in theories of logic and how those theories are applied. This entails defining logic and figuring out
how various logical systems relate to one another. It covers the nature of the foundational ideas that logic
uses as well as how logic relates to other academic fields. A popular description of philosophical logic is
that it is the branch of the philosophy of logic that examines the application of logical techniques to
philosophical issues, frequently in the form of elaborated logical systems such as modal logic. As the field
that studies the consistency and completeness of formal logical systems, metalogic is strongly associated
with the philosophy of logic. Scholarly literature contains a variety of descriptions of logic's nature. Many
people define logic as the study of the rules of reasoning, sound reasoning, legitimate inference, or logical
truth. It's a formal science that looks at how conclusions flow from premises in a topic-neutral way,
meaning it doesn't care about the particular issue being discussed. Examining the similarities and
differences between different logical formal systems and non-logical formal systems is one way to
investigate the nature of logic. In this regard, it is important to take into account whether or not the formal
system in question is complete and if it is consistent with basic logical intuitions.

Depending on whether one defines logic as the study of logical truth or of valid inference, different
conceptions of logic can be distinguished from one another. A further distinction among conceptions of logic
is whether the criteria of valid inference and logical truth are specified in terms of syntax
(proof-theoretically) or semantics (model-theoretically).

Set Theory
Set theory, from a logic perspective, is a branch of mathematical logic that deals with the concept
of sets, collections of objects considered as units. It forms the foundation for much of modern
mathematics by providing a way to formalize and reason about collections, relations, and the properties of
objects. In its most basic sense, set theory allows us to think about how objects (or "elements") relate to
each other through membership in sets and how sets themselves interact.

Concept I: Sets
A set is a collection of distinct objects, considered as an object in its own right. Sets are usually
denoted by capital letters (e.g., A, B, C), and the objects within a set are called its elements. An
element is an object that belongs to a set. If an object 𝑥 is a member of a set A, we write “𝑥 ∈ 𝐴”;
if it is not a member, we write “𝑥 ∉ 𝐴”. [i.e. Let “𝐴 = {1, 2, 3}”; then “1 ∈ 𝐴” and “4 ∉ 𝐴”]. The
empty set is the set which contains no elements, denoted by “∅”. A subset relation holds when all
elements of one set are also elements of another set: if all elements of set A are present in set B, we
write “𝐴 ⊆ 𝐵”. If “𝐴 ⊆ 𝐵” but “𝐴 ≠ 𝐵”, we call set A a ‘proper subset’ of B, denoted “𝐴 ⊂ 𝐵”.
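
These definitions map directly onto a language with a built-in set type. A quick sketch in Python, with
example values chosen to mirror the notation above:

    A = {1, 2, 3}
    B = {1, 2, 3, 4}
    EMPTY = set()        # the empty set ∅ (note: {} creates a dict, not a set)

    print(1 in A)        # membership, 1 ∈ A        -> True
    print(4 not in A)    # non-membership, 4 ∉ A    -> True
    print(A <= B)        # subset, A ⊆ B            -> True
    print(A < B)         # proper subset, A ⊂ B     -> True
    print(EMPTY <= A)    # ∅ is a subset of any set -> True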
Concept II: Logical Connectives
Operations in set theory often mirror the logical connectives (AND, OR, NOT, etc.) used in logic;
these are denoted with different symbols. The union (“𝐴 ∪ 𝐵”) is the set of elements that are in either A
or B, or in both. In logic: “𝑥 ∈ (𝐴 ∪ 𝐵) ⇔ (𝑥 ∈ 𝐴) ∨ (𝑥 ∈ 𝐵)”. This corresponds to the logical
disjunction (OR).
Brev. “∨” = “∪” = “OR”
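
The union/disjunction correspondence can be checked mechanically; the sketch below verifies, for a handful
of sample values, that x ∈ (A ∪ B) holds exactly when (x ∈ A) ∨ (x ∈ B) does:

    A, B = {1, 2, 3}, {3, 4}
    union = A | B  # A ∪ B = {1, 2, 3, 4}

    for x in range(6):
        # x ∈ (A ∪ B) holds exactly when (x ∈ A) ∨ (x ∈ B) holds.
        assert (x in union) == ((x in A) or (x in B))

    print(union)  # {1, 2, 3, 4}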
