Advanced Problem Solving A1

The document presents an assignment on evaluating random number generators through Monte Carlo methods, focusing on three problems: estimation of π, single-dimensional integration, and multi-dimensional integration. It discusses the theoretical background, detailed steps, and error analysis for each task, highlighting the importance of sample size and the impact of random number generator quality on accuracy. The findings confirm that the error decreases as 1/√N, emphasizing the trade-offs between computational cost and accuracy in Monte Carlo simulations.


ADDIS ABABA UNIVERSITY

ADDIS ABABA INSTITUTE OF TECHNOLOGY

Advanced Problem Solving (ArIn-6011)

Assignment 1: Evaluating Random Number Generators through


Monte Carlo Methods

Name: Simreteab Mekbib
ID Number: GSR/4500/17

Submitted to: Dr. Beakal Gizachew

January 19, 2025


Introduction
Monte Carlo methods are a class of numerical techniques that use random sampling to approximate
solutions to mathematical and physical problems. These methods are particularly powerful for solving
problems with high-dimensional integrals, stochastic processes, and optimization tasks that are otherwise
computationally challenging to address using deterministic approaches. The Monte Carlo technique is
widely used in various fields such as physics, finance, engineering, and computer science due to its
simplicity and scalability.

The core idea of Monte Carlo methods is to replace complex calculations with statistical sampling. By
generating random samples from a defined probability distribution and applying them to the problem at
hand, we can obtain approximate solutions with quantifiable error bounds. The accuracy of these
approximations depends on the number of random samples (N) and the variability (standard deviation) of
the function being sampled.

In this assignment, I explore the application of Monte Carlo methods to three distinct problems:

1.​ Estimation of π: A classical problem where random sampling is used to approximate the
value of π.
2.​ Single-Dimensional Integration: Using Monte Carlo sampling to evaluate integrals in one
dimension.
3.​ Multi-Dimensional Integration: Extending Monte Carlo methods to approximate integrals in
higher dimensions.

Furthermore, I conduct a detailed error analysis for these implementations, guided by the theoretical
relationship that the error decreases as ∼ 1/√N. Through experimental verification, I aim to validate this
theoretical behavior and analyze the impact of factors such as the quality of random number generators,
dimensionality of the problem, and computational trade-offs on the results.
Task 1: Estimation of π Using Monte Carlo Methods

Theory Recap

●​ Monte Carlo simulation approximates π by estimating the ratio of the area of a circle to the area
of the enclosing square.
●​ Formula: π ≈ 4 × (number of points inside the quarter circle) / N

Detailed Steps

1.​ Generate N random (x, y) pairs uniformly in [0,1] × [0,1].
2.​ Count the points that satisfy x² + y² ≤ 1.
3.​ Estimate π using the formula above.

Python Code
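The original listing is not reproduced in this copy. A minimal sketch of the steps above, assuming NumPy and a hypothetical helper name `estimate_pi` (not taken from the original code):

```python
import numpy as np

def estimate_pi(n, seed=0):
    """Estimate pi from n uniform points in the unit square (reconstruction)."""
    rng = np.random.default_rng(seed)
    x = rng.random(n)          # step 1: N random x values in [0, 1]
    y = rng.random(n)          #         N random y values in [0, 1]
    inside = np.count_nonzero(x**2 + y**2 <= 1.0)  # step 2: points in the quarter circle
    return 4.0 * inside / n    # step 3: pi ~ 4 * (inside / total)

for n in (10, 1_000, 100_000):
    print(n, estimate_pi(n))
```

The fixed seed is only for reproducibility of the sketch; the assignment's actual runs may use different seeding.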
Output

Analysis

1.​ Initial Observations:


○​ At lower values of N, the estimated value of π fluctuates significantly. This is due to the
high variance in the random sampling process when the number of points is small.
○​ For example, with very few points (N=1 to N=10), the estimation deviates drastically
from the true value of π.
2.​ Convergence Behavior:
○​ As N increases, the estimated value of π starts converging toward the true value
(π≈3.14159).
○​ The red dashed line at y = π represents the true value of π, and the estimated values (blue
curve) gradually approach this line.
3.​ Accuracy with Large N:
○​ Beyond a certain threshold of N (e.g., N > 10⁴), the estimation stabilizes, and the
fluctuations around the true value become negligible.
○​ This demonstrates that the Monte Carlo method becomes more accurate with a larger
number of random points due to the Law of Large Numbers.
4.​ Logarithmic Scale:
○​ The x-axis uses a logarithmic scale, which allows visualization of the estimation behavior
across several orders of magnitude for N. This scale emphasizes the rapid convergence of
π as N increases.
5.​ Implications for Monte Carlo Simulations:
○​ The results confirm that the Monte Carlo method provides an accurate estimate of π when
sufficient random samples are used.
○​ However, the trade-off is computational cost, as larger N requires more processing power
and time.
Task 2: Numerical Integration Using Monte Carlo Methods

Theory Recap

●​ For a function f(x) over [a,b], the Monte Carlo estimate is ∫ₐᵇ f(x) dx ≈ ((b − a)/N) Σᵢ f(xᵢ), where x₁, …, x_N are uniform samples in [a,b].

Detailed Steps
1.​ Define the function f(x) = e^(−x²) over [0,1].
2.​ Generate N random samples in [0,1].
3.​ Compute the integral using the Monte Carlo formula.
4.​ Extend to two dimensions for f(x,y) = e^(−(x² + y²)) over [0,1] × [0,1].

Python Code ( Single-dimensional integral )
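The original listing is not included in this copy. A minimal sketch of the single-dimensional estimator, assuming NumPy and the hypothetical name `mc_integrate_1d`; the true value comes from the closed form via the error function:

```python
import math
import numpy as np

def mc_integrate_1d(n, seed=0):
    """Monte Carlo estimate of the integral of e^(-x^2) over [0, 1] (reconstruction)."""
    rng = np.random.default_rng(seed)
    x = rng.random(n)                     # N uniform samples in [0, 1]
    return float(np.mean(np.exp(-x**2)))  # (b - a) = 1, so the sample mean suffices

true_value = math.sqrt(math.pi) / 2 * math.erf(1.0)  # closed form, ~0.746824
for n in (100, 10_000, 1_000_000):
    print(n, mc_integrate_1d(n))
print("true value:", true_value)
```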

Output
Analysis

Observations:

1.​ Convergence Behavior:


○​ For lower numbers of points (N<100), the estimated integral fluctuates significantly,
reflecting the high variance in Monte Carlo sampling with fewer points.
○​ As N increases (N≥1000), the estimated value stabilizes and converges closer to the true
value. This demonstrates the law of large numbers, where the Monte Carlo estimate
becomes more accurate with larger samples.
2.​ True Value:
○​ The red dashed line represents the true value of the integral. This serves as the benchmark
for the accuracy of the Monte Carlo method.
3.​ Accuracy vs. Number of Points:
○​ For larger N, the estimates approach the true value consistently, showcasing the reliability
of Monte Carlo integration for single-dimensional functions when sufficient samples are
used.
4.​ Logarithmic Scale:
○​ The x-axis uses a logarithmic scale to clearly display results for a wide range of N, from
small sample sizes to very large ones (10⁰ to 10⁸).
5.​ Error Behavior:
○​ The deviation from the true value decreases as N increases, showing that the method
becomes more precise with more samples.

Key Insights:

●​ Random Sampling:
○​ The fluctuations for small N are due to the randomness of sampling. A small number of
samples may not adequately represent the function's behavior over the integration
domain.
●​ Stability:
○​ Once N exceeds 1000, the integral estimate stabilizes, providing a value very close to the
true result.
●​ Efficiency:
○​ While Monte Carlo integration is simple to implement and works well even for complex
functions, achieving high accuracy requires a large number of samples, especially in
higher dimensions.

Recommendations:

●​ For higher accuracy in single-dimensional integrations, use N≥10000 for reliable results.
●​ If computational resources are limited, consider variance reduction techniques (e.g., stratified
sampling or importance sampling) to improve accuracy with fewer samples.

Python Code ( Monte Carlo integration for multidimensional functions )
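The original multidimensional listing is also absent from this copy. A minimal sketch under the same assumptions (NumPy, hypothetical name `mc_integrate_2d`); the benchmark value 0.557746 quoted below is the square of the 1D closed form:

```python
import math
import numpy as np

def mc_integrate_2d(n, seed=0):
    """Monte Carlo estimate of the integral of e^(-(x^2 + y^2)) over [0,1]^2 (reconstruction)."""
    rng = np.random.default_rng(seed)
    x = rng.random(n)
    y = rng.random(n)
    return float(np.mean(np.exp(-(x**2 + y**2))))  # domain area is 1, so the mean suffices

# The integrand factorizes, so the 2D value is the square of the 1D integral (~0.557746).
true_value = (math.sqrt(math.pi) / 2 * math.erf(1.0)) ** 2
for n in (100, 10_000, 1_000_000):
    print(n, mc_integrate_2d(n))
print("true value:", true_value)
```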


Output
Analysis

Observations:

1.​ Convergence Trend:


○​ At lower numbers of points (N<100), the estimated integral exhibits significant variation,
reflecting high uncertainty due to limited sampling.
○​ As N increases (N≥1000), the estimates stabilize and converge closer to the true value of
0.557746, demonstrating the effectiveness of Monte Carlo integration over larger sample
sizes.
2.​ True Value Representation:
○​ The red dashed line indicates the true value of the integral (0.557746). It serves as a
benchmark to compare the accuracy of the estimated integral.
3.​ Error Behavior:
○​ The error is larger for smaller sample sizes, which is evident from the significant
deviation of the estimated values from the true value for N<100.
○​ For N≥1000, the error decreases substantially, and the estimates consistently align closely
with the true value.
4.​ Dimensional Impact:
○​ Compared to single-dimensional integration, convergence is slightly slower in the
multidimensional case. This behavior aligns with expectations, as higher dimensions
require more samples to achieve the same level of accuracy due to the curse of
dimensionality.
5.​ Logarithmic Scale:
○​ The x-axis uses a logarithmic scale to represent the wide range of sample sizes (10⁰ to
10⁸) effectively.

Key Insights:

1.​ Sample Size:


○​ For multidimensional integration, a much larger sample size (N ≥ 10⁴) is required to
achieve accurate estimates compared to single-dimensional integration.
2.​ Variance:
○​ The higher variance observed for small N highlights the challenge of adequately
sampling the domain in multiple dimensions with limited points.
3.​ Accuracy:
○​ The Monte Carlo method converges reliably with large sample sizes, even in higher
dimensions, as shown by the stabilization of estimates for N ≥ 10⁴.

Recommendations:

1.​ Larger Sample Sizes:


○​ Use N ≥ 10⁵ for multidimensional integrals to ensure accurate and reliable results.
2.​ Variance Reduction:
○​ To improve convergence and reduce fluctuations for smaller N, consider employing
techniques such as:
■​ Stratified Sampling
■​ Importance Sampling
■​ Quasi-Monte Carlo Methods (e.g., Sobol sequences)
3.​ Computational Resources:
○​ Ensure sufficient computational power when handling multidimensional integrals, as
larger N is necessary for accuracy.
Task 3: Error Analysis

Verification Steps

1.​ Calculate the standard deviation (σ) of the sampled function values.
2.​ Verify that the error decreases as Error ≈ σ/√N.
3.​ Experimentally validate this relationship.

Python Code (Error analysis for estimating π)
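The original listing is missing here. A minimal sketch of the error estimate from the verification steps (σ of the sampled values divided by √N), with the hypothetical name `pi_error`:

```python
import numpy as np

def pi_error(n, seed=0):
    """Error estimate for pi: sigma of sampled values / sqrt(N) (reconstruction)."""
    rng = np.random.default_rng(seed)
    x, y = rng.random(n), rng.random(n)
    samples = 4.0 * (x**2 + y**2 <= 1.0)  # each sample is 0 or 4
    return samples.std() / np.sqrt(n)     # standard error of the mean

for n in (100, 10_000, 1_000_000):
    print(n, pi_error(n))
```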

Output
Python Code (Error analysis for 1D integration)
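A sketch of the same error estimate applied to the 1D integrand e^(−x²); the name `integration_error_1d` is an assumption, not the original:

```python
import numpy as np

def integration_error_1d(n, seed=0):
    """Error estimate for the 1D integral: sigma of f samples / sqrt(N) (reconstruction)."""
    rng = np.random.default_rng(seed)
    samples = np.exp(-rng.random(n)**2)  # f(x) = e^(-x^2) at uniform x in [0, 1]
    return samples.std() / np.sqrt(n)

for n in (100, 10_000, 1_000_000):
    print(n, integration_error_1d(n))
```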
Output

Python Code (Error analysis for 2D integration)
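A sketch of the error estimate for the 2D integrand e^(−(x² + y²)); the name `integration_error_2d` is an assumption:

```python
import numpy as np

def integration_error_2d(n, seed=0):
    """Error estimate for the 2D integral: sigma of f samples / sqrt(N) (reconstruction)."""
    rng = np.random.default_rng(seed)
    x, y = rng.random(n), rng.random(n)
    samples = np.exp(-(x**2 + y**2))  # f(x, y) at uniform points in [0,1]^2
    return samples.std() / np.sqrt(n)

for n in (100, 10_000, 1_000_000):
    print(n, integration_error_2d(n))
```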


Output

Python Code (Plot all errors on a log-log scale)
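The plotting listing is also absent. A sketch of the comparison plot, assuming Matplotlib and the error definition from the text (σ of sampled values / √N); all names and the output filename are assumptions:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
ns = [10**k for k in range(2, 7)]

def stderr(samples):
    # Empirical error: sigma of sampled values / sqrt(N)
    return samples.std() / np.sqrt(samples.size)

pi_err, err_1d, err_2d = [], [], []
for n in ns:
    x, y = rng.random(n), rng.random(n)
    pi_err.append(stderr(4.0 * (x**2 + y**2 <= 1.0)))
    err_1d.append(stderr(np.exp(-x**2)))
    err_2d.append(stderr(np.exp(-(x**2 + y**2))))

plt.loglog(ns, pi_err, "o-", label="Pi estimation error")
plt.loglog(ns, err_1d, "s-", label="1D integration error")
plt.loglog(ns, err_2d, "^-", label="2D integration error")
plt.loglog(ns, [1.0 / np.sqrt(n) for n in ns], "r--", label="1/sqrt(N)")
plt.xlabel("Number of samples N")
plt.ylabel("Error")
plt.legend()
plt.savefig("mc_errors.png")
```

On a log-log plot, an error scaling as 1/√N appears as a straight line of slope −1/2, which is why this scale is used.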


Output
Key Observations from the Graph

1.​ Error Behavior Across Methods:


○​ The plot shows three error curves for:
■​ Pi Estimation Error: Using random points inside a square to approximate π.
■​ 1D Integration Error: Using Monte Carlo integration for f(x) = e^(−x²) over [0,1].
■​ 2D Integration Error: Using Monte Carlo integration for f(x,y) = e^(−(x² + y²)) over
[0,1] × [0,1].
○​ These curves closely align with the theoretical error bound (1/√N) shown as the dashed
red line, confirming the expected behavior.
2.​ Comparison of Methods:
○​ The Pi Estimation Error curve is slightly higher than the integration errors, likely
because the randomness in generating points within a circle introduces more variance.
○​ Errors for 1D and 2D integrations are similar, but the 2D error converges more slowly
due to higher-dimensional integration being more computationally intensive and prone to
variance.
3.​ Convergence:
○​ All error curves consistently decrease as N increases, confirming the theoretical 1/√N
dependency.
○​ For very large N, errors are extremely small, demonstrating the effectiveness of Monte
Carlo methods with sufficient samples.
Explanation of the Code

1.​ Error Calculation:


○​ For each method (Pi estimation, 1D integration, and 2D integration), the error is
computed as:

Error = (Standard Deviation of Estimates) / √N

This follows directly from the theoretical model.

○​ The estimates for each N are computed 10 times to get a reliable error estimate.
2.​ Plotting:
○​ The results are visualized on a log-log scale, which is suitable for analyzing power-law
relationships like 1/√N.
○​ The theoretical line (1/√N) is plotted alongside the experimental results for comparison.

Discussion
1.​ Quality of Random Number Generators:
○​ High-quality random number generators ensure the uniformity of sampled points,
reducing variance in estimates and ensuring proper convergence.
○​ Poor-quality generators may introduce biases, leading to higher errors or incorrect
convergence.
2.​ Trade-offs:
○​ Computational Cost: Increasing N reduces error but increases computation time.
○​ Accuracy: For applications requiring high precision, larger N is necessary, but
diminishing returns may be observed due to the slow convergence rate (1/√N).
3.​ Dimensionality:
○​ As dimensionality increases, the integration error converges more slowly due to the
"curse of dimensionality." This explains why the 2D integration error is slightly higher
than the 1D error for the same N.
Conclusion

The experimental results align well with theoretical expectations, verifying that the error for Monte Carlo
methods decreases as 1/√N. The quality of random number generators and dimensionality significantly
impact the convergence behavior. The trade-off between computational cost and accuracy must be
carefully considered when applying Monte Carlo methods to real-world problems.
