Illinois State University Mathematics Department


MAT 312: Probability and Statistics for Middle School Teachers

Dr. Roger Day (day@math.ilstu.edu)



Introduction to Probability

We now turn to the topic of probability, the art and science of determining the likelihood that some event will occur. We first distinguish between two types of probabilities that we will calculate: experimental probability and theoretical probability.

Experimental Probability

Experimental probability describes the determination of numerical probability through experiments, the use of existing data, or the simulation of a real or imagined event. We may express the probability that a seed for a new variety of salad tomato will sprout by conducting experiments on such seeds and reporting the number of sprouts compared to the total number of seeds planted. Based on 986 seeds sprouting out of 1000 sown, for example, we may express the experimental probability as a ratio (the probability is 986/1000 that a seed will sprout), as a decimal fraction (the probability is 0.986 that a seed will sprout), or as a percent (the probability is 98.6% that a seed will sprout).
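
As a quick sketch (using the counts above, 986 sprouts out of 1000 seeds sown), here is how the three representations of the same experimental probability could be computed in Python:

```python
from fractions import Fraction

# Counts from the seed-sprouting example above.
sprouted = 986
planted = 1000

p = Fraction(sprouted, planted)

print(p)                  # ratio form: 493/500 (the reduced form of 986/1000)
print(float(p))           # decimal form: 0.986
print(f"{float(p):.1%}")  # percent form: 98.6%
```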

Closely related to the use of experimental results, we may also determine experimental probabilities by examining existing data. To calculate the probability that Sammy Sosa will hit a home run in his first at bat of any particular game, we can examine data on his past first at-bats. We might learn that in the 2,764 first at-bats he's had, Sammy has hit 24 home runs. We now call on any of the three representations illustrated in the previous paragraph to express the probability in question: 24/2764, 0.00868, or 0.868%.

A simulation provides a way to generate outcomes that can be used to calculate experimental probabilities. Suppose that Crammits Cereal Company offers one of five cartoon surprises in each box of BigByte Cereal, and we want to determine the probability of getting all five surprises in just the first 8 boxes that we purchase. We could actually purchase the cereal and record which type of cartoon surprise is in the box, and do this for one or more sets of eight boxes. The problem, however, is that this could become costly and use more time than we have available. Other situations may actually be dangerous for us to carry out, such as situations involving driving cars at high speeds or working with explosive materials. In these situations, we may choose to simulate the actual event and use the results of the simulation to determine experimental probabilities. We will eventually describe in detail the process of planning and executing a simulation using a variety of models.
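
As a sketch of how such a simulation might look (assuming, for illustration, that each of the five surprises is equally likely in every box, something the situation does not actually guarantee), the following Python snippet estimates the experimental probability of collecting all five surprises in the first 8 boxes:

```python
import random

def collects_all_five(num_surprises=5, num_boxes=8):
    """Simulate buying num_boxes boxes; return True if every surprise was found.

    Assumes each surprise is equally likely in every box.
    """
    found = {random.randrange(num_surprises) for _ in range(num_boxes)}
    return len(found) == num_surprises

trials = 100_000
successes = sum(collects_all_five() for _ in range(trials))
print(f"Estimated probability: {successes / trials:.3f}")
```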

Theoretical Probability

Whereas experimental probability is largely based on what has already happened, through experiments, actual events, or simulations, theoretical probability is based on examining what could happen when an experiment is carried out. We use counting techniques, models, geometrical representations, and other mathematical calculations and techniques to determine all things that could happen in an experiment. We express theoretical probabilities by comparing outcomes that meet specific requirements to all possible outcomes of an experiment.

For example, we can determine the theoretical probability of a 3 appearing when a fair die is rolled by determining all the things that can happen and comparing that to the ways a 3 could appear. For most of us, this is a straightforward determination, for there are six possible outcomes when a fair die is rolled and exactly one of those outcomes results in a 3. Because all six outcomes are equally likely, we compare the one way to get a 3 to the six outcomes that are possible when the experiment is conducted. We then express the probability of getting a 3 as a ratio (1/6), as a decimal fraction (0.167), or as a percent (16.7%).
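
The same counting can be mirrored in a few lines of Python, comparing the outcomes that meet the requirement (a 3) to all possible outcomes of one fair die:

```python
from fractions import Fraction

outcomes = [1, 2, 3, 4, 5, 6]                # the six equally likely outcomes of one fair die
favorable = [x for x in outcomes if x == 3]  # outcomes that show a 3

p = Fraction(len(favorable), len(outcomes))
print(p, float(p), f"{float(p):.1%}")        # 1/6  0.1666...  16.7%
```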

What happens when two dice are rolled and we consider the sum of the two face-up sides? If we are interested in the probability that a 4 occurs, the process described in the previous paragraph generates a probability ratio of 1/11. That is, a sum of 4 is one of the 11 possible outcomes, where the set {2,3,...,12} represents all possible outcomes. This, however, is incorrect.

The distinction is in whether the outcomes we compare are equally likely. For one die, each of the values in the set {1,2,3,4,5,6} is equally likely to occur, so the ratio 1/6 is an appropriate and correct statement of the probability that any single value will occur. However, the values in the set {2,3,...,12} are not equally likely to occur as sums when two dice are rolled. We first must determine all the ways that these sums can occur and then compare that to the number of ways that a specific sum can occur. There are several ways we can do this. Here's one of them.

The first table below shows all the ways the sums could occur, that is, all possible results of the experiment. We can be sure of that because in the table we have paired each possible outcome of one die with each possible outcome of another die (colors are used simply to distinguish between the two dice). The second table summarizes the information from the first table. It shows the 11 different outcomes, the number of ways each outcome can occur, and the probability of each outcome.

Sums Resulting From Two Fair Dice

                            Face-Up Side of Blue Die
                           1     2     3     4     5     6
                     1     2     3     4     5     6     7
                     2     3     4     5     6     7     8
Face-Up Side         3     4     5     6     7     8     9
of Red Die           4     5     6     7     8     9    10
                     5     6     7     8     9    10    11
                     6     7     8     9    10    11    12

Rolling Two Dice: What Can Happen?

Sum of Two        Number of Ways        Probability of
Face-Up Sides     Outcome Can Occur     Each Outcome
      2                   1                 1/36
      3                   2                 2/36
      4                   3                 3/36
      5                   4                 4/36
      6                   5                 5/36
      7                   6                 6/36
      8                   5                 5/36
      9                   4                 4/36
     10                   3                 3/36
     11                   2                 2/36
     12                   1                 1/36

Result: 11 different outcomes         Total: 36 ways

Another way to represent what can happen when we roll two dice and determine the sum of the face-up sides is to create a tree diagram. Here's a tree diagram for the dice-sums situation.

The tree diagram presents the same information as the first table above. The tree-branches representation helps illustrate how one component of an experiment (here, the result showing on the first die) is associated with another component of an experiment (here, the result showing on the second die). If each component of an experiment is equally likely, the outcomes showing at the far right of the diagram are equally likely. This diagram shows 36 outcomes, but only 11 different outcomes when we consider the sum of the two dice. We can use the information in the tree diagram to determine probabilities, just as we did with the table.

We can also use the multiplication property to help us count the number of outcomes for our dice-summing experiment. There are 6 possible outcomes for the first die and 6 possible outcomes for the second, so there are 6*6=36 outcomes when the two tasks are completed together. Again, we must look further to determine whether the 36 outcomes are all different. We know here that they are not.
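
As a check on the tables above and on the multiplication property, here is a short sketch that enumerates all 6*6 = 36 equally likely outcomes and tallies the sums:

```python
from collections import Counter
from fractions import Fraction
from itertools import product

# All 36 equally likely (blue, red) outcomes when two fair dice are rolled.
outcomes = list(product(range(1, 7), repeat=2))
sums = Counter(blue + red for blue, red in outcomes)

print(len(outcomes))  # 36 ways in all
print(len(sums))      # 11 different sums
for total in sorted(sums):
    p = Fraction(sums[total], len(outcomes))
    print(total, sums[total], p)  # e.g. a sum of 4 occurs 3 ways, probability 3/36 = 1/12
```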

Here are four different experiments. For each, determine all outcomes that are possible for the experiment, determine the number of different outcomes that are possible, and then determine whether the different outcomes are equally likely. For each experiment, create at least one model or representation to show what could happen.

  • Experiment 1: Flip a fair coin and record whether the coin lands heads up or tails up.
  • Experiment 2: Roll a fair die and record the result and then flip a coin and record the result.
  • Experiment 3: Three fair coins are flipped simultaneously and the head/tail result is recorded.
  • Experiment 4: Gina is at the free-throw line to attempt two free throws.

Terms, Symbols, and Properties

So that we can be efficient and clear in our discussions and calculations associated with probability, we will identify some terminology and symbolism to help us. We also point out some fundamental properties of probability.

outcomes: the possible results of an experiment

equally likely outcomes: a set of outcomes that each have the same likelihood of occurring.

sample space: the set of all possible outcomes to an experiment.

uniform sample space: a sample space filled with equally likely outcomes.

nonuniform sample space: a sample space that contains two or more outcomes that are not equally likely.

event: a collection of one or more elements from a sample space.

expected value: the long-run average value of the outcome of a probabilistic situation; if an experiment has n outcomes with values a(1), a(2), . . . , a(n), with associated probabilities p(1), p(2), . . . , p(n), then the expected value of the experiment is

a(1)*p(1) + a(2)*p(2) + . . . + a(n)*p(n).

There is further discussion of the use and computation of expected value.
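
As a minimal illustration (using one fair die, an example chosen here for illustration), each face value 1 through 6 occurs with probability 1/6, so:

```python
from fractions import Fraction

# Outcome values and their probabilities for one fair die.
values = [1, 2, 3, 4, 5, 6]
probs = [Fraction(1, 6)] * 6

expected_value = sum(a * p for a, p in zip(values, probs))
print(expected_value)  # 7/2, that is, an average of 3.5 per roll in the long run
```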

random event: an experimental event that has no outside factors or conditions imposed upon it.

P(A): represents the probability of some event A.

probability limits: For any event A, it must be that P(A) is between 0 and 1 inclusive.

probabilities of certain or impossible events: An event B certain to occur has P(B) = 1, and an event C that is impossible has P(C) = 0.

complementary events: two events whose probabilities sum to 1 and that share no common outcomes. If X and Y are complementary events, then P(X) + P(Y) = 1.

mutually exclusive events: two events that share no outcomes. If events C and D are mutually exclusive, then P(C or D) = P(C) + P(D); if two events are not mutually exclusive, then P(C or D) = P(C) + P(D) - P(C and D).

independent events: two events whose outcomes have no influence on each other. If E and F are independent events, then P(E and F) = P(E) * P(F).

conditional probability: the determination of the probability of an event taking into account that some condition may affect the outcomes to be considered. The symbol P(A|B) represents the conditional probability of event A given that event B has occurred. Conditional probability is calculated as P(A|B) = P(A and B)/P(B).
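
As a small sketch using the two-dice sample space from earlier (with an illustrative choice of events: A is "the sum is 4" and B is "the first die shows a 1"), the formula gives P(A|B) = (1/36)/(6/36) = 1/6:

```python
from fractions import Fraction
from itertools import product

space = list(product(range(1, 7), repeat=2))     # the 36 equally likely two-dice outcomes

A = {pair for pair in space if sum(pair) == 4}   # event A: the sum is 4
B = {pair for pair in space if pair[0] == 1}     # event B: the first die shows a 1

p_B = Fraction(len(B), len(space))               # P(B) = 6/36
p_A_and_B = Fraction(len(A & B), len(space))     # P(A and B) = 1/36

print(p_A_and_B / p_B)                           # P(A|B) = 1/6
```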

geometrical probability: the determination of probability based on the use of a 1-, 2-, or 3-dimensional geometric model.
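
One common two-dimensional illustration (an example added here, not taken from the notes above): choose a point at random in a unit square and ask for the probability that it lands inside the inscribed circle. The theoretical geometric probability is the ratio of the areas, pi/4, and a quick simulation agrees:

```python
import math
import random

# Geometric model: a circle of radius 1/2 inscribed in the unit square.
theoretical = math.pi * (0.5 ** 2) / 1.0  # area of circle / area of square = pi/4

trials = 100_000
hits = sum(
    1
    for _ in range(trials)
    if (random.random() - 0.5) ** 2 + (random.random() - 0.5) ** 2 <= 0.25
)

print(theoretical)    # about 0.785
print(hits / trials)  # an experimental estimate, also near 0.785
```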
