Basic probability introduces the foundational ideas of probability theory.
Probability is the measure of the likelihood that an event will occur. It is quantified as a number between 0 and 1, where 0 indicates impossibility and 1 indicates certainty. A simple example is tossing a coin with two sides, heads and tails. We can describe the probability of each outcome in terms of the observed outcomes or the expected results.
For a "fair" coin, the probability of heads equals the probability of tails. However, for an "unfair" or "weighted" coin the two outcomes are not equally likely. Change the "weight" of the coin by dragging and dropping the expected probability and see how this affects the observed outcomes.
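The coin-flipping experiment above can be sketched in a few lines of Python; the function name and parameters here are illustrative, not part of the interactive page:

```python
import random

def flip_coin(p_heads=0.5, n_flips=1000, seed=42):
    """Simulate n_flips tosses of a coin that lands heads with probability p_heads,
    and return the observed frequency of heads."""
    rng = random.Random(seed)
    heads = sum(rng.random() < p_heads for _ in range(n_flips))
    return heads / n_flips

# A fair coin: the observed frequency should be near the expected 0.5.
print(flip_coin(0.5))
# A weighted coin: shifting the expected probability shifts the observed outcomes.
print(flip_coin(0.8))
```

Changing `p_heads` plays the same role as dragging the coin's "weight" in the visualization: the expected probability moves, and the observed frequencies follow it.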
The expected value of an experiment is the probability-weighted average of all possible values. For a discrete random variable \( X \), it is defined mathematically as:
$$E[X] = \sum_{x \in X}xP(x)$$
The law of large numbers states that the average result from a series of trials converges to the expected value as the number of trials grows. Roll the die to see its running average converge to the expected value.
Change the theoretical probability of the die to see how that changes the average and expected value.
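The die-rolling demonstration can be approximated with a short simulation; the function below is a sketch, with names of my own choosing, that tracks the running average of the rolls:

```python
import random

def running_average(probabilities, n_rolls=10000, seed=1):
    """Roll a die whose faces 1..k have the given probabilities,
    and record the running average after each roll."""
    rng = random.Random(seed)
    faces = list(range(1, len(probabilities) + 1))
    total = 0.0
    averages = []
    for i, roll in enumerate(rng.choices(faces, weights=probabilities, k=n_rolls), 1):
        total += roll
        averages.append(total / i)
    return averages

# A fair six-sided die has expected value 3.5; the running average approaches it.
fair_die = [1 / 6] * 6
print(running_average(fair_die)[-1])
```

Passing a non-uniform `probabilities` list mimics changing the theoretical probability of the die: the running average then converges to the new expected value \( \sum_x x P(x) \).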
One of the main goals of statistics is to estimate unknown parameters. An estimator uses measurements and properties of expectation to approximate these parameters. To illustrate this idea, we will estimate the value of \( \pi \) by randomly dropping samples on a square with an inscribed circle. Since the circle covers a fraction \( \pi/4 \) of the square's area, the fraction of samples that land inside the circle approximates \( \pi/4 \). We define the following estimator \( \hat{\pi} \), where \( m \) is the number of samples inside the circle and \( n \) is the total number of samples dropped.
$$\hat{\pi} = 4\dfrac{m}{n}$$
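This Monte Carlo estimator is straightforward to sketch in Python. The version below samples a quarter circle inside the unit square, which by symmetry gives the same ratio \( \pi/4 \) as a full circle inscribed in a full square; the function name is illustrative:

```python
import random

def estimate_pi(n_samples=100_000, seed=0):
    """Drop n_samples uniform points on the unit square and count the m points
    inside the quarter circle of radius 1; return pi_hat = 4 * m / n."""
    rng = random.Random(seed)
    m = sum(1 for _ in range(n_samples)
            if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4 * m / n_samples

print(estimate_pi())  # close to 3.14
```

Increasing `n_samples` tightens the estimate, which is exactly the behavior the dropped-samples visualization shows.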
An estimator's accuracy and precision are quantified by the following properties:
The bias of an estimator is the difference between the estimator's expected value and the true value of the parameter being estimated. It informally measures how accurate an estimator is. For an estimator \( \hat{\theta} \) of a parameter \( \theta \), bias is defined mathematically as:
$$B(\hat{\theta}) = E(\hat{\theta}) - \theta$$
In our example, \( \hat{\pi} \) is unbiased, which means its bias is 0.
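Unbiasedness follows directly from the area ratio; a short derivation:

```latex
% Each sample lands inside the inscribed circle with probability
% p = (area of circle) / (area of square) = \pi r^2 / (2r)^2 = \pi / 4,
% so m follows a Binomial(n, p) distribution and
E[\hat{\pi}] = 4\,E\!\left[\tfrac{m}{n}\right] = 4 \cdot \frac{np}{n} = 4p = \pi,
\qquad
B(\hat{\pi}) = E[\hat{\pi}] - \pi = 0.
```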
Variance is the expectation of the squared deviation of an estimator from its expected value. It informally measures how precise an estimator is. For an estimator \( \hat{\theta} \), this is defined mathematically as:
$$var(\hat{\theta}) = E[(\hat{\theta} - E(\hat{\theta}))^2]$$
In our example, the variance of our estimator \( \hat{\pi} \) shrinks as the number of samples \( n \) grows.
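With \( p = \pi/4 \) the probability that a sample lands inside the circle, the binomial variance gives a closed form for the estimator's variance (a sketch under that assumption):

```latex
var(\hat{\pi}) = 16\,var\!\left(\tfrac{m}{n}\right)
= 16 \cdot \frac{p(1-p)}{n}
= \frac{\pi(4 - \pi)}{n} \approx \frac{2.70}{n}.
```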
Mean squared error (MSE) of an estimator is the sum of the estimator's variance and its squared bias. For an estimator \( \hat{\theta} \), this is defined mathematically as:
$$MSE(\hat{\theta}) = var(\hat{\theta}) + B(\hat{\theta})^2$$
In our example, since \( \hat{\pi} \) is unbiased, the mean squared error of our estimator \( \hat{\pi} \) equals its variance.
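All three properties can be checked empirically by repeating the \( \pi \) experiment many times; the sketch below, with illustrative names, approximates bias, variance, and MSE from repeated simulations:

```python
import math
import random

def pi_hat(n, rng):
    """One realization of the estimator pi_hat = 4 * m / n."""
    m = sum(1 for _ in range(n) if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4 * m / n

def estimator_properties(n=1000, trials=2000, seed=7):
    """Approximate bias, variance, and MSE of pi_hat by repeated simulation."""
    rng = random.Random(seed)
    estimates = [pi_hat(n, rng) for _ in range(trials)]
    mean = sum(estimates) / trials
    bias = mean - math.pi
    var = sum((e - mean) ** 2 for e in estimates) / trials
    mse = sum((e - math.pi) ** 2 for e in estimates) / trials
    return bias, var, mse

bias, var, mse = estimator_properties()
# Since pi_hat is unbiased, bias is near 0 and MSE is essentially the variance.
```

The identity \( MSE = var + B^2 \) holds exactly for these sample quantities, which makes it a convenient sanity check on the simulation.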