Group 4: Generating Continuous Random Variables
Submitted to:
Dr. Md. Murad Hossain
Assistant Professor, Department of Statistics
BSMRSTU, Gopalganj
F_X(x) = P{X ≤ x}
       = P{F⁻¹(U) ≤ x}

Now since F is a distribution function it follows that F(x) is a monotone increasing function of x, and so the inequality "a ≤ b" is equivalent to the inequality "F(a) ≤ F(b)". Hence, applying F inside the probability, we see that

F_X(x) = P{F(F⁻¹(U)) ≤ F(x)}
       = P{U ≤ F(x)}
       = F(x),   since U is uniform on (0, 1)
The above proposition thus shows that we can generate a random variable X from the continuous distribution function F by generating a random number U and then setting X = F⁻¹(U).
Example: If X is an exponential random variable with rate 1, then its distribution function is given by

F(x) = 1 − e^(−x)

Let F(x) = u. Then

u = F(x) = 1 − e^(−x)
1 − u = e^(−x)

Taking logarithms,

x = −log(1 − u)

Hence we can generate an exponential random variable with parameter 1 by generating a random number U and then setting

X = F⁻¹(U) = −log(1 − U)
Generate random samples
● Generate a uniform random variable U between 0 and 1.
● Set X = −log(1 − U).
Example: If X is an exponential random variable with rate λ, then its distribution function is given by

F(x) = 1 − e^(−λx)

Hence, an exponential random variable X with rate λ has mean 1/λ. Let F(x) = u. Then

u = F(x) = 1 − e^(−λx)
1 − u = e^(−λx)

Taking logarithms,

λx = −log(1 − u)
x = −(1/λ) log(1 − u)

Hence we can generate an exponential with parameter λ (mean 1/λ) by generating a random number U and then setting

X = F⁻¹(U) = −(1/λ) log(1 − U)

Since U ~ Uniform[0,1], 1 − U is also Uniform[0,1], so we may equivalently set

X = −(1/λ) log U
Generate random samples
● Generate a uniform random variable U between 0 and 1.
● Set X = −(1/λ) log(1 − U), or equivalently X = −(1/λ) log U.
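As a concrete illustration, here is a minimal Python sketch of these two steps (the function name and the sample rate are ours, chosen for illustration):

```python
import math
import random

def exponential_inverse_transform(lam: float) -> float:
    """Generate an Exponential(lam) variate by inverse transform."""
    u = random.random()                  # U ~ Uniform(0, 1)
    return -math.log(1.0 - u) / lam      # X = F^(-1)(U) = -(1/lam) log(1 - U)

# Example: five samples with rate lambda = 2
print([exponential_inverse_transform(2.0) for _ in range(5)])
```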
Exercise 1: Give a method for generating a random variable having density function

f(x) = e^x / (e − 1),   0 ≤ x ≤ 1

Solution: To generate a random variable with this density function, we can use the inverse transform method.

The given PDF is

f(x) = e^x / (e − 1),   0 ≤ x ≤ 1

Therefore the CDF is

F(x) = ∫₀ˣ f(t) dt = ∫₀ˣ e^t / (e − 1) dt = (e^x − 1) / (e − 1),   0 ≤ x ≤ 1
Let U be a Uniform(0,1) random variable and set F(x) = u. Then

U = (e^x − 1) / (e − 1)
U(e − 1) = e^x − 1
e^x = U(e − 1) + 1
x = ln(U(e − 1) + 1)

Generate Random Samples
To generate a random variable X with the given f(x):
1. Generate a uniform random variable U between 0 and 1.
2. Set X = ln(U(e − 1) + 1).
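A minimal Python sketch of this recipe (the function name is ours):

```python
import math
import random

def sample_exercise1() -> float:
    """Inverse transform for f(x) = e^x / (e - 1) on [0, 1]."""
    u = random.random()
    return math.log(u * (math.e - 1.0) + 1.0)   # X = ln(U(e-1) + 1)

print([round(sample_exercise1(), 3) for _ in range(5)])
```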
Example: Give a method to generate a random variable having density function

f(x) = (x − 2)/2      if 2 ≤ x ≤ 3
f(x) = (2 − x/3)/2    if 3 ≤ x ≤ 6

Solution: To generate a random variable with this density function, we can use the inverse transform method.

On the first piece the PDF is

f₁(x) = (x − 2)/2   for 2 ≤ x ≤ 3

and the corresponding portion of the CDF is

F₁(x) = ∫₂ˣ (t − 2)/2 dt = (x − 2)²/4

Let U be a Uniform(0,1) random variable and set F₁(x) = u. Then, for u ≤ 1/4 (note that F₁(3) = 1/4),

U = (x − 2)²/4
2√U = x − 2
x = 2√U + 2
On the second piece, for 3 ≤ x ≤ 6, the CDF is

F(x) = 1/4 + ∫₃ˣ (2 − t/3)/2 dt = x − x²/12 − 2

Setting U = F(x) and solving the quadratic x² − 12x + 12(2 + U) = 0 for the root lying in [3, 6] gives

x = 6 − 2√(3(1 − U))

Generate Random Samples
To generate a random variable X with the given f(x):
1. Generate a uniform random variable U between 0 and 1.
2. If U ≤ 1/4, set X = 2√U + 2; otherwise set X = 6 − 2√(3(1 − U)).
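A minimal Python sketch combining both branches (the function name is ours):

```python
import math
import random

def sample_piecewise_density() -> float:
    """Inverse transform for f(x) = (x-2)/2 on [2,3] and (2 - x/3)/2 on [3,6]."""
    u = random.random()
    if u <= 0.25:                                  # first piece: F(x) = (x-2)^2/4
        return 2.0 + 2.0 * math.sqrt(u)
    return 6.0 - 2.0 * math.sqrt(3.0 * (1.0 - u))  # second piece: F(x) = x - x^2/12 - 2

print([round(sample_piecewise_density(), 3) for _ in range(5)])
```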
Example: Give a method for generating a random variable having density function

f(x) = x + 1/2,   0 ≤ x ≤ 1

Solution: To generate a random variable with this density function, we can use the inverse transform method.

The PDF is

f(x) = x + 1/2,   0 ≤ x ≤ 1

Then the CDF is

F(x) = ∫₀ˣ (t + 1/2) dt = (x² + x)/2

Let U be a Uniform(0,1) random variable and set F(x) = u. Then

U = (x² + x)/2
2U = x² + x
x² + x − 2U = 0

Solving this quadratic and taking the root in [0, 1],

x = (−1 + √(1 + 8U)) / 2
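A minimal Python sketch (the function name is ours):

```python
import math
import random

def sample_linear_density() -> float:
    """Inverse transform for f(x) = x + 1/2 on [0, 1]:
    solve x^2 + x - 2U = 0 and take the root in [0, 1]."""
    u = random.random()
    return (-1.0 + math.sqrt(1.0 + 8.0 * u)) / 2.0

print([round(sample_linear_density(), 3) for _ in range(5)])
```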
Part-2
Rejection Method:
Proposal distribution:
A proposal distribution g(x) in rejection sampling is a probability distribution used to generate candidate values for a target distribution f(x). The proposal distribution should be simple to sample from and should be scaled by a constant c so that it envelops f(x), allowing us to accept or reject samples based on how well they match the target distribution.
Assumptions:
1. Proposal Distribution: There exists a probability density g(x) from which it is easy to sample.
2. Constant c: There is a constant c ≥ 1 such that f(x)/g(x) ≤ c for all x. This ensures that f(x) is always bounded above by c · g(x).
Figure 5.1. The rejection method for simulating a random variable X having density function f.
Explanation:
A candidate Y is generated from g and is accepted when U ≤ f(Y)/(c · g(Y)); this ratio is the probability of accepting the candidate sample Y. If Y lies in a region where f(x) is high relative to g(x), it is more likely to be accepted, so the accepted samples have f(x) as their distribution.
The Rejection Method has the following properties:
1. Density Matching: The distribution of accepted samples X has the target density f(x).
2. Geometric Iteration: The number of iterations required follows a geometric distribution with success probability 1/c.
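In code, the method can be written once and reused for any (f, g, c) satisfying the assumptions above. A minimal Python sketch (names are ours; Example 1 below supplies a concrete f, g, and c for it):

```python
import random
from typing import Callable

def rejection_sample(f: Callable[[float], float],
                     g: Callable[[float], float],
                     sample_g: Callable[[], float],
                     c: float) -> float:
    """Rejection method: accept Y ~ g when U <= f(Y) / (c * g(Y))."""
    while True:
        y = sample_g()                  # candidate from the proposal
        u = random.random()             # uniform for the acceptance test
        if u <= f(y) / (c * g(y)):      # accepted with probability f/(c*g)
            return y
```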
1. Exponential Distribution:
For the exponential distribution defined as:
g(x) = λe^(−λx);   x ≥ 0
• Generate a uniform random number U between 0 and 1.
• Transform U to obtain Y: Y = −(1/λ) ln(U)
2. Beta Distribution:
For the Beta distribution defined as:
g(x) = x^(α−1) (1 − x)^(β−1) / B(α, β);   for x ∈ [0,1]
• Transform to obtain Y: a simple valid recipe is Jöhnk's method; generate U₁ and U₂, set Y = U₁^(1/α) / (U₁^(1/α) + U₂^(1/β)), and accept Y only when U₁^(1/α) + U₂^(1/β) ≤ 1.
3. Gamma Distribution:
For the Gamma distribution (with integer shape k) defined as:
g(x) = λ^k x^(k−1) e^(−λx) / Γ(k);   for x ≥ 0
• Generate Y as the sum of k independent exponentials: Y = X₁ + … + X_k, where
Xᵢ ~ Exponential(λ);   i = 1, 2, …, k
4. Normal Distribution:
For the Normal distribution defined as:
g(x) = (1/√(2πσ²)) e^(−(x−μ)²/(2σ²))
4. Financial Modeling: Assists in simulating risk variables with empirical or complex
distributions.
5. Engineering & Reliability: Generates variables for reliability testing and failure
analysis with non-standard probabilities.
Example 1: Let us use the rejection method to generate a random variable having density function

f(x) = 20x(1 − x)³;   0 < x < 1

Since this random variable (which is beta with parameters 2, 4) is concentrated in the interval (0, 1), let us consider the rejection method with

g(x) = 1;   0 < x < 1
Solution:
To determine the smallest constant c such that f(x)/g(x) ≤ c, we use calculus to determine the maximum value of

f(x)/g(x) = 20x(1 − x)³

Differentiation gives

d/dx [f(x)/g(x)] = 20[(1 − x)³ − 3x(1 − x)²]
Setting this equal to 0 shows that the maximal value is attained when x = 1/4, and thus

f(x)/g(x) ≤ 20 (1/4)(3/4)³ = 135/64 ≡ c

Hence,

f(x)/(c g(x)) = (256/27) x(1 − x)³
Example 2: Suppose we wanted to generate a random variable having the gamma(3/2, 1) density

f(x) = K x^(1/2) e^(−x);   x > 0

where K = 1/Γ(3/2) = 2/√π. Because such a random variable is concentrated on the positive axis and has mean 3/2, it is natural to try the rejection technique with an exponential random variable with the same mean. Hence, let

g(x) = (2/3) e^(−2x/3);   x > 0
Solution:

f(x)/g(x) = (3K/2) x^(1/2) e^(−x/3)

The maximum of x^(1/2) e^(−x/3) occurs where its logarithmic derivative 1/(2x) − 1/3 vanishes, that is, when x = 3/2. Hence,

c = Max f(x)/g(x) = (3K/2) (3/2)^(1/2) e^(−1/2)
  = 3^(3/2) / (2πe)^(1/2),   since K = 2/√π
Since

f(x)/(c g(x)) = (2e/3)^(1/2) x^(1/2) e^(−x/3) = (2ex/3)^(1/2) e^(−x/3)

we see that a gamma(3/2, 1) random variable can be generated as follows:

step 1: Generate a random number U₁ and set Y = −(3/2) log U₁.
step 2: Generate a random number U₂.
step 3: If U₂ ≤ (2eY/3)^(1/2) e^(−Y/3), set X = Y. Otherwise, return to Step 1.
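A direct Python transcription of these three steps (the function name is ours):

```python
import math
import random

def sample_gamma_3_2() -> float:
    """Rejection for gamma(3/2, 1) with an Exponential(mean 3/2) proposal."""
    while True:
        y = -1.5 * math.log(random.random())   # step 1: Y ~ Exp(mean 3/2)
        u = random.random()                    # step 2
        if u <= math.sqrt(2.0 * math.e * y / 3.0) * math.exp(-y / 3.0):
            return y                           # step 3: accept

print(sample_gamma_3_2())
```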
More generally, suppose we want to use the rejection method, based on an exponential density g(x) = μe^(−μx), to generate a gamma(α, λ) random variable with density f(x) = K x^(α−1) e^(−λx). Then

f(x)/g(x) = K x^(α−1) e^(−λx) / (μ e^(−μx)) = (K/μ) x^(α−1) e^((μ−λ)x)

For α < 1,

lim_{x→0} f(x)/g(x) = ∞

thus showing that the rejection technique with an exponential cannot be used in this case. As the gamma density reduces to the exponential when α = 1, let us suppose that α > 1. Now, when μ ≥ λ,

lim_{x→∞} f(x)/g(x) = ∞
and so we can restrict attention to values of μ that are strictly less than λ. With
such a value of μ, the mean number of iterations of the algorithm that will be
required is,
c(μ) = Max_x f(x)/g(x) = Max_x (K/μ) x^(α−1) e^((μ−λ)x)

To obtain the value of x at which the preceding maximum occurs, we differentiate and set equal to 0 to obtain
0 = (α − 1) x^(α−2) e^((μ−λ)x) − (λ − μ) x^(α−1) e^((μ−λ)x)

x = (α − 1)/(λ − μ)

Substituting this value back in gives

c(μ) = (K/μ) ((α − 1)/(λ − μ))^(α−1) e^(1−α)
Hence, the value of μ that minimizes c(μ) is the value that maximizes μ(λ − μ)^(α−1). Differentiation gives

d/dμ {μ(λ − μ)^(α−1)} = (λ − μ)^(α−1) − (α − 1) μ (λ − μ)^(α−2)

Setting the preceding equal to 0 yields that the best value of μ satisfies

λ − μ = (α − 1)μ
or, μ = λ/α
That is, the exponential that minimizes the mean number of iterations needed by
the rejection method to generate a gamma random variable with parameters α and
λ has the same mean as the gamma; namely, α/λ.
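A hedged Python sketch of the resulting algorithm for α > 1 (names are ours; the normalizing constant K cancels, since f(Y)/(c·g(Y)) = (Y/x*)^(α−1) e^((μ−λ)(Y−x*)) with x* = (α−1)/(λ−μ)):

```python
import math
import random

def sample_gamma(alpha: float, lam: float) -> float:
    """Rejection for gamma(alpha, lam), alpha > 1, using an Exponential(mu)
    proposal with the optimal mu = lam / alpha."""
    mu = lam / alpha
    x_star = (alpha - 1.0) / (lam - mu)        # maximizer of f/g
    while True:
        y = -math.log(random.random()) / mu    # Y ~ Exponential(mu)
        u = random.random()
        # f(Y)/(c*g(Y)) = (Y/x*)^(alpha-1) * exp((mu-lam)(Y-x*)) <= 1
        if u <= (y / x_star) ** (alpha - 1.0) * math.exp((mu - lam) * (y - x_star)):
            return y

print(sample_gamma(3.0, 2.0))
```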
To generate a standard normal random variable Z (i.e., one with mean 0 and variance 1), note first that the absolute value of Z has probability density function

f(x) = (2/√(2π)) e^(−x²/2);   0 < x < ∞

We start by generating from the preceding density function by using the rejection method with g being the exponential density function with mean 1, that is,

g(x) = e^(−x);   0 < x < ∞

Solution:

f(x)/g(x) = √(2/π) e^(x − x²/2)
and so the maximum value of f(x)/g(x) occurs at the value of x that maximizes x − x²/2. Calculus shows that this occurs when x = 1, and so we can take

c = Max f(x)/g(x) = f(1)/g(1) = √(2e/π)

Because

f(x)/(c g(x)) = exp{x − x²/2 − 1/2}
             = exp{−(x − 1)²/2}
it follows that we can generate the absolute value of a standard normal random
variable as follows:
step 1: Generate Y₁, an exponential random variable with rate 1.
step 2: Generate Y₂, an exponential random variable with rate 1.
step 3: If Y₂ − (Y₁ − 1)²/2 > 0, set Y = Y₂ − (Y₁ − 1)²/2 and go to step 4. Otherwise, go to Step 1. (Since −log U is exponential with rate 1, step 3 accepts Y₁ exactly when U ≤ exp{−(Y₁ − 1)²/2}.)
step 4: Generate a random number U and set

Z = Y₁    if U ≤ 1/2
Z = −Y₁   if U > 1/2
The random variables Z and Y generated by the foregoing are independent, with Z being normal with mean 0 and variance 1 and Y being exponential with rate 1. (If you want the normal random variable to have mean μ and variance σ², just take μ + σZ.)
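A minimal Python sketch of the four steps (the function name is ours):

```python
import math
import random

def sample_standard_normal() -> float:
    """Rejection for |Z| from an Exponential(1) proposal, then a random sign."""
    while True:
        y1 = -math.log(random.random())        # step 1: Y1 ~ Exp(1)
        y2 = -math.log(random.random())        # step 2: Y2 ~ Exp(1)
        if y2 - (y1 - 1.0) ** 2 / 2.0 > 0:     # step 3: accept |Z| = Y1
            break
    return y1 if random.random() <= 0.5 else -y1   # step 4: random sign

print(sample_standard_normal())
```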
Example: Suppose we want to generate a gamma(2, 1) random variable conditioned to be at least 5, that is, a random variable with density function

f(x) = x e^(−x) / ∫₅^∞ t e^(−t) dt = x e^(−x) / (6e^(−5));   x ≥ 5

where the preceding integral was evaluated by using integration by parts. Because a gamma(2, 1) random variable has expected value 2, we will use the rejection method based on an exponential with mean 2 that is conditioned to be at least 5. That is, we will use

g(x) = (1/2) e^(−x/2) / e^(−5/2);   x ≥ 5
Solution:

f(x)/g(x) = (e^(5/2)/3) x e^(−x/2);   x ≥ 5

Since x e^(−x/2) is decreasing for x ≥ 2, the maximum over x ≥ 5 is attained at x = 5:

c = max_{x≥5} {f(x)/g(x)} = f(5)/g(5) = 5/3
step 1: Generate a random number U₁.
step 2: Set Y = 5 − 2 log(U₁).
step 3: Generate a random number U₂.
step 4: If U₂ ≤ (e^(5/2)/5) Y e^(−Y/2), set X = Y and stop; otherwise return to step 1.
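A minimal Python sketch of these steps (the function name is ours):

```python
import math
import random

def sample_conditioned_gamma() -> float:
    """Gamma(2,1) conditioned on X >= 5, by rejection from the shifted
    exponential Y = 5 - 2 log U (an Exp(mean 2) conditioned to exceed 5)."""
    e52 = math.exp(2.5)                                # e^(5/2)
    while True:
        y = 5.0 - 2.0 * math.log(random.random())      # steps 1-2
        u = random.random()                            # step 3
        if u <= (e52 / 5.0) * y * math.exp(-y / 2.0):  # step 4
            return y

print(sample_conditioned_gamma())
```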
Part-3
Polar Method: The polar method, also known as the Marsaglia polar method, is a specific technique for generating normally distributed random variables.
Let X and Y be independent standard normal random variables and let R and Θ denote the polar coordinates of the vector (X, Y). Then

R² = X² + Y²
tan Θ = Y/X
Since X and Y are independent, their joint density is the product of their individual densities and is thus given by

f(x, y) = (1/√(2π)) e^(−x²/2) · (1/√(2π)) e^(−y²/2)
        = (1/2π) e^(−(x²+y²)/2)
The Box-Muller transformation generates a pair of independent standard normals from two random numbers U₁ and U₂:

X = R cos Θ = √(−2 log U₁) cos(2πU₂)
Y = R sin Θ = √(−2 log U₁) sin(2πU₂)
When the polar method is used for generating independent standard normals, we instead generate random numbers U₁ and U₂, set V₁ = 2U₁ − 1 and V₂ = 2U₂ − 1, and accept the pair only when S = V₁² + V₂² ≤ 1, so that (V₁, V₂) is uniformly distributed in the unit disk. For such a pair,

cos Θ = V₁/R = V₁/(V₁² + V₂²)^(1/2)
sin Θ = V₂/R = V₂/(V₁² + V₂²)^(1/2)

It now follows from the Box-Muller transformation that we can generate independent standard normals by generating a random number U and setting

X = (−2 log U)^(1/2) V₁/(V₁² + V₂²)^(1/2)
Y = (−2 log U)^(1/2) V₂/(V₁² + V₂²)^(1/2)
In fact, since S = V₁² + V₂² is itself uniformly distributed on (0, 1) and independent of Θ, it can be used in place of U, giving

X = √(−2 log S / S) V₁,   Y = √(−2 log S / S) V₂
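A minimal Python sketch of the complete procedure (the function name is ours; note that no trigonometric functions are needed):

```python
import math
import random

def polar_normal_pair() -> tuple[float, float]:
    """Marsaglia polar method: returns two independent standard normals."""
    while True:
        v1 = 2.0 * random.random() - 1.0
        v2 = 2.0 * random.random() - 1.0
        s = v1 * v1 + v2 * v2
        if 0.0 < s <= 1.0:                    # keep points inside the unit circle
            factor = math.sqrt(-2.0 * math.log(s) / s)
            return factor * v1, factor * v2

print(polar_normal_pair())
```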
Part-4
Generating a Poisson Process:
Generating a Poisson process helps to model and analyze systems where random events occur at a steady average rate, such as customer arrivals or incoming phone calls. The basic procedure is:
1. Define the Rate Parameter (λ): This is the average rate at which events are expected
to occur.
2. Generate Event Times:
o Interarrival Times: In a Poisson process, the times between consecutive events
(interarrival times) are exponentially distributed with mean 1/λ.
o You can sample these interarrival times from an exponential distribution and
add them successively to get the actual timestamps of each event.
3. Repeat Until Desired: Continue generating events until you reach a desired end time
or a specified number of events.
Example
Suppose we want to simulate a Poisson process for customer arrivals at a store, where
customers arrive at an average rate of 5 per hour:
1. Set λ=5.
2. Draw the time between each customer arrival from an exponential distribution with
mean 1/λ=1/5.
3. Sum the interarrival times to get cumulative arrival times of each customer.
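A minimal Python sketch of this example (names are ours):

```python
import math
import random

def poisson_arrival_times(lam: float, n: int) -> list[float]:
    """First n event times of a rate-lam Poisson process: cumulative sums
    of Exponential(lam) interarrival times."""
    times, t = [], 0.0
    for _ in range(n):
        t += -math.log(random.random()) / lam   # X_i = -(1/lam) log U_i
        times.append(t)
    return times

# Customer arrivals at rate 5 per hour: first five arrival times (in hours)
print(poisson_arrival_times(5.0, 5))
```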
Generating a Poisson process involves simulating the first n events that occur randomly over time with a constant average rate λ. To do so we make use of the result that the times between successive events for such a process are independent exponential random variables, each with rate λ. Thus, one way to generate the process is to generate these interarrival times. So if we generate n random numbers U₁, U₂, …, Uₙ and set Xᵢ = −(1/λ) log Uᵢ, then Xᵢ can be regarded as the time between the (i − 1)st and the ith event of the Poisson process. Since the actual time of the jth event will equal the sum of the first j interarrival times, it thus follows that the generated values of the first n event times are X₁ + … + Xⱼ, j = 1, 2, …, n.
If we wanted to generate the first T time units of the Poisson process, we can follow the preceding procedure of successively generating the interarrival times, stopping when their sum exceeds T. That is, the following algorithm can be used to generate all the event times occurring in (0, T) of a Poisson process having rate λ; in the algorithm, t refers to time, I is the number of events that have occurred by time t, and S(I) is the most recent event time. The final value of I will represent the number of events that occur by time T, and the values S(1), …, S(I) will be the I event times in increasing order.
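A minimal Python sketch consistent with the stated variables (t plays the role of time, and the returned list plays the role of S(1), …, S(I)):

```python
import math
import random

def poisson_process_on_interval(lam: float, T: float) -> list[float]:
    """All event times in (0, T) of a rate-lam Poisson process."""
    t, S = 0.0, []
    while True:
        t += -math.log(random.random()) / lam   # next interarrival time
        if t > T:                               # stop once time passes T
            return S
        S.append(t)                             # record event time S(I)

print(poisson_process_on_interval(5.0, 1.0))
```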
Another way to simulate the first T time units of a Poisson process with rate λ starts by simulating N(T), the total number of events that occur by time T. Because N(T) is a Poisson random variable with mean λT, this is easily accomplished by one of the approaches for generating a Poisson random variable. If the simulated value of N(T) is n, then n random numbers U₁, …, Uₙ are generated, and {TU₁, …, TUₙ} is taken as the set of event times by time T of the Poisson process. This works because, conditional on N(T) = n, the unordered set of event times is distributed as a set of n independent uniform(0, T) random variables.
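A minimal Python sketch of this second approach (names are ours; the Poisson count is drawn by the standard multiply-uniforms method):

```python
import math
import random

def poisson_variate(mean: float) -> int:
    """N ~ Poisson(mean): count uniforms until their product drops below e^(-mean)."""
    n, prod, limit = 0, random.random(), math.exp(-mean)
    while prod > limit:
        n += 1
        prod *= random.random()
    return n

def poisson_process_by_count(lam: float, T: float) -> list[float]:
    """Simulate (0, T) by drawing N(T) ~ Poisson(lam*T) and then
    placing that many uniform(0, T) points, sorted into event order."""
    n = poisson_variate(lam * T)
    return sorted(T * random.random() for _ in range(n))

print(poisson_process_by_count(5.0, 1.0))
```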
To verify that the preceding method works, let N(t) equal the number of values in the set {TU₁, …, TU_N(T)} that are less than t. We must now argue that N(t), 0 ≤ t ≤ T, is a Poisson process. To show that it has independent and stationary increments, let I₁, …, I_r be r disjoint time intervals; say that an event is of type i if it occurs in interval Iᵢ, i = 1, …, r, and of type r + 1 if it does not lie in any of the r intervals. Because the Uᵢ, i ≥ 1, are independent, it follows that each of the Poisson number of events N(T) is independently classified as being of one of the types 1, …, r + 1, with respective probabilities P₁, …, P_{r+1}, where Pᵢ is the length of the interval Iᵢ divided by T when i ≤ r, and P_{r+1} = 1 − ΣPᵢ. Consequently N₁, …, N_r, the numbers of events in the disjoint intervals, are independent Poisson random variables, with E[Nᵢ] equal to λ multiplied by the length of the interval Iᵢ; this establishes that N(t), 0 ≤ t ≤ T, has stationary as well as independent increments. Because the number of events in any interval of length h is Poisson distributed with mean λh,
we have

lim_{h→0} P{N(h) = 1}/h = lim_{h→0} (λh e^(−λh))/h = λ

and

lim_{h→0} P{N(h) ≥ 2}/h = lim_{h→0} (1 − e^(−λh) − λh e^(−λh))/h = 0
Two-dimensional (spatial) Poisson process:
1. Intensity (λ) is defined as the average number of events per unit area (e.g., events per square meter).
2. Random Locations: The location of each event within the area is uniformly distributed.
Suppose we want to simulate events over a rectangular area of width W and height H, with an
event rate of λ events per unit area.
Step-by-Step Simulation:
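A minimal Python sketch of the usual two-step recipe, draw the count and then scatter the points uniformly (names are ours):

```python
import math
import random

def spatial_poisson(lam: float, W: float, H: float) -> list[tuple[float, float]]:
    """Spatial Poisson process on a W x H rectangle:
    1) N ~ Poisson(lam * W * H); 2) place N points uniformly in the rectangle."""
    mean = lam * W * H
    n, prod, limit = 0, random.random(), math.exp(-mean)
    while prod > limit:                     # draw N ~ Poisson(mean)
        n += 1
        prod *= random.random()
    return [(W * random.random(), H * random.random()) for _ in range(n)]

print(spatial_poisson(0.5, 10.0, 4.0))
```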
Explanation
1. Intensity Setting: The parameter λ represents the density of events per unit area.
2. Uniform Distribution: Each point’s (x,y) coordinates are chosen uniformly within the
area, reflecting the Poisson process's assumption that events are uniformly and
independently distributed.
Part-5
Nonhomogeneous Poisson Process:
Example
Consider the number of incoming phone calls to a customer support center, where the rate of calls changes based on the time of day, so that the rate function λ(t) for the number of calls per minute varies across periods. This varying rate over time indicates a non-homogeneous Poisson process. The expected number of calls in each period can be calculated by integrating λ(t) over the respective intervals.
Let's walk through a mathematical example for simulating a non-homogeneous Poisson process (NHPP). In statistical simulations, we often use a method called thinning: we simulate a homogeneous Poisson process whose constant rate λ_max satisfies λ_max ≥ λ(t) for all t, and we keep (accept) an event occurring at time t with probability λ(t)/λ_max.
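A hedged Python sketch of thinning (the sinusoidal λ(t) below is a stand-in of our own, not the report's rate function):

```python
import math
import random
from typing import Callable

def nhpp_thinning(lam: Callable[[float], float], lam_max: float, T: float) -> list[float]:
    """Thinning: generate candidate events at rate lam_max and keep the one
    at time t with probability lam(t) / lam_max."""
    t, events = 0.0, []
    while True:
        t += -math.log(random.random()) / lam_max   # candidate interarrival
        if t > T:
            return events
        if random.random() <= lam(t) / lam_max:     # accept with prob lam(t)/lam_max
            events.append(t)

# Illustrative intensity (our stand-in): lam(t) = 3 + 2 sin(t), bounded by lam_max = 5
print(nhpp_thinning(lambda t: 3.0 + 2.0 * math.sin(t), 5.0, 10.0))
```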
Example Setup
Suppose we want to simulate an NHPP with a time-varying intensity function λ(t), using the thinning method sketched above.
Part-6
Summary of Random Number Generation for Continuous Variables
Inverse Transform Algorithm:
The Inverse Transform Sampling Algorithm is a method used in simulations to generate random samples from a specific probability distribution. Here's a summary of the process:
1. Generate Uniform Random Variable: Start by generating a uniform random number U from the interval [0, 1].
2. Cumulative Distribution Function (CDF): Determine the cumulative distribution function F of the target distribution. This function represents the probability that a random variable is less than or equal to x.
3. Inverse CDF: Compute the inverse of the CDF, X = F⁻¹(U). This gives you the corresponding value in the target distribution.
4. Sample Generation: The result X is a sample from the desired distribution.
This method is particularly effective when the inverse CDF can be calculated easily. It allows
for efficient random sampling from various distributions, including exponential, uniform, and
others, facilitating simulations in fields such as finance, engineering, and statistics.
Rejection method:
The rejection method, often referred to as Acceptance-Rejection Sampling, is a technique used to generate samples from a target distribution by utilizing a known distribution. Here's a concise summary:
1. Target Distribution: Identify the probability distribution f(x) from which you want to sample.
2. Proposal Distribution: Select a simpler proposal distribution g(x) that is easy to sample from and covers the support of the target distribution.
3. Sample Generation: Generate samples Y from the proposal distribution g(x).
4. Acceptance Criterion: For each sample Y, compute the acceptance probability using the ratio f(Y)/(c·g(Y)), where c is a constant that bounds the ratio f(x)/g(x) for all x.
5. Accept or Reject: Generate a uniform random number U from [0, 1]. Accept the sample if U is less than the acceptance probability; otherwise, reject it and repeat the sampling process.
6. Output Samples: Accepted samples have exactly the target distribution.
This method is useful for complex distributions where direct sampling is difficult, and it
leverages the flexibility of simpler distributions to facilitate effective sampling.
Polar Method:
The Polar Method, also known as the Polar Coordinate Method, is a technique for generating pairs of independent standard normal random variables. Here's a summary of the process:
1. Uniform Sampling: Generate two independent uniform random variables U₁ and U₂ from the interval (0, 1), and set V₁ = 2U₁ − 1, V₂ = 2U₂ − 1.
2. Check Radius: Compute S = V₁² + V₂². If S is greater than or equal to 1, repeat the sampling for V₁ and V₂ (this ensures that the points lie within the unit circle).
3. Transform to Normal: Once S < 1, compute X = V₁√(−2 log S / S) and Y = V₂√(−2 log S / S).
This method is efficient for generating normal variates and is particularly advantageous
because it avoids the use of trigonometric functions, making it computationally efficient.
Poisson process :
In simulation modeling, the Poisson process is utilized to represent and analyze events that
occur randomly and independently over time or space. Here’s a concise summary:
Key Characteristics:
1. Event Rate (λ): Defines the average number of events occurring in a fixed interval (time
or space).
2. Independence: The occurrence of one event does not influence the timing of another.
3. Memorylessness: The time until the next event follows an exponential distribution, meaning that the past does not affect future intervals.
For the non-homogeneous Poisson process (NHPP), the key characteristics are:
1. Variable Rate (λ(t)): The rate of events, denoted λ(t) , is a function of time, meaning it
can increase or decrease throughout the observation period.
2. Independence of Events: Like the homogeneous Poisson process, the events in an NHPP
are independent of each other.
3. Non-Constant Intensity: The probability of event occurrence is proportional to the
instantaneous rate, which can change based on the time or other influencing factors.
References:
1. Ross, S. M., Simulation (Fifth Edition). Epstein Department of Industrial and Systems Engineering, University of Southern California.
2. ChatGPT & Google