1. Random Sampling
The foundation of Monte Carlo methods is generating random numbers and observing the fraction of numbers that obey some property or properties.
Use random point generation to estimate the value of pi
This simulation uses the Monte Carlo method to estimate pi. The left diagram shows the square and quarter circle, with green points inside the circle and red points outside. The right chart shows how the estimate converges to pi as more points are added.
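The interactive simulation itself is not reproduced here, but the underlying logic is simple. The following is a minimal Python sketch of the same idea; the function name `estimate_pi` and the sample sizes are illustrative, not part of the original simulation:

```python
import random

def estimate_pi(num_points: int) -> float:
    """Estimate pi by sampling points uniformly in the unit square
    and counting the fraction that lands inside the quarter circle."""
    inside = 0
    for _ in range(num_points):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:  # point falls inside the quarter circle of radius 1
            inside += 1
    return 4.0 * inside / num_points

if __name__ == "__main__":
    for n in (100, 10_000, 1_000_000):
        print(f"n = {n:>9,}: pi ~ {estimate_pi(n):.6f}")
```

Running the sketch with increasing sample sizes mirrors what the widget does as points accumulate: the estimate wanders noticeably at small n and settles near 3.14 as n grows.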
Monte Carlo simulations are computational algorithms that use repeated random sampling to solve problems and obtain numerical results. These methods are particularly useful for optimization, numerical integration, and drawing samples from complex probability distributions.
As the number of randomly generated points increases, the simulation result tends to converge to the true value, in accordance with the law of large numbers.
Monte Carlo methods typically converge with error $O(1/\sqrt{n})$, where $n$ is the number of samples, making them relatively slow but reliable.
The statistical error in Monte Carlo simulations can be estimated from the samples themselves and reduced by increasing the number of iterations.
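The $1/\sqrt{n}$ rate can be checked empirically. The sketch below (again assuming an illustrative `estimate_pi` helper, not the widget's actual code) averages the absolute error over repeated trials at several sample sizes; the error should shrink by roughly $\sqrt{10}$ each time $n$ grows tenfold:

```python
import math
import random

def estimate_pi(num_points: int) -> float:
    inside = sum(
        1
        for _ in range(num_points)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4.0 * inside / num_points

# Average the absolute error over several trials at each sample size;
# it should drop by roughly sqrt(10) each time n grows tenfold.
for n in (1_000, 10_000, 100_000):
    errors = [abs(estimate_pi(n) - math.pi) for _ in range(20)]
    mean_err = sum(errors) / len(errors)
    print(f"n = {n:>7,}: mean |error| ~ {mean_err:.5f}   (1/sqrt(n) = {1 / math.sqrt(n):.5f})")
```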
For our pi estimation example:

Area ratio method: points are sampled uniformly in the unit square, and the fraction that lands inside the quarter circle of radius 1 approximates the ratio of the two areas,

$$\frac{N_{\text{inside}}}{N_{\text{total}}} \approx \frac{\pi r^2 / 4}{r^2} = \frac{\pi}{4}$$

Therefore:

$$\pi \approx 4 \cdot \frac{N_{\text{inside}}}{N_{\text{total}}}$$

The error in this estimate decreases as:

$$\text{error} \propto \frac{1}{\sqrt{n}}$$

where $n$ is the number of points sampled.
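As a sanity check on this rate, note that the count of points inside the quarter circle is a binomial random variable with success probability $p = \pi/4$, so the standard error of the estimate $4\hat{p}$ is $4\sqrt{p(1-p)/n} \approx 1.64/\sqrt{n}$. A small sketch of that calculation (not part of the original simulation):

```python
import math

# The count of points inside the quarter circle is binomial with
# success probability p = pi / 4, so the standard error of the
# estimate 4 * p_hat is 4 * sqrt(p * (1 - p) / n) ~ 1.64 / sqrt(n).
p = math.pi / 4
for n in (1_000, 10_000, 100_000, 1_000_000):
    std_err = 4 * math.sqrt(p * (1 - p) / n)
    print(f"n = {n:>9,}: expected standard error ~ {std_err:.5f}")
```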