1. Probability and Two-Way Tables Intro

For this sub-competency you will be introduced to the basics of probability.

Basic Probability Rules

Probability will play a huge role later in this course when we start investigating the probability of obtaining certain results from a sample. An unusual event is one that has a low probability of occurring. This is not a precise definition, because how low is “low”? Typically, probabilities of 5% or less are considered low. Recall that 5% means 5 per 100, or 5 times out of 100. Therefore, an event E with a 5% chance of occurring means that in repeated trials we would expect to see E happen in only 5 trials out of every 100, so events with probabilities of 5% or lower are considered unusual. However, this cutoff can (and will) vary with the context of the problem.

Probability is essentially the science of chance behavior. Chance behavior is unpredictable in the short run but has a regular and predictable pattern in the long run. This is why we can use probability to draw useful conclusions from random samples and randomized comparative experiments…although we don’t know exactly what we’ll see from any one sample or experiment, if we repeat the process over and over, we gain confidence in the pattern of outcomes we’ll see.

Here are some definitions you want to be familiar with. An experiment is a repeatable process where the results are uncertain. An outcome is one specific possible result from the experiment. The set of all possible outcomes is the sample space.

Example

A basketball player shoots three free throws. What are the possible sequences of hits (H) and misses (M)? The experiment in this case is a basketball player shooting 3 free throws. A possible outcome of this experiment is the sequence HHM (hit, hit, miss). The sample space of this experiment is:

S  =  {HHH, HHM, HMH, HMM, MHH, MHM, MMH, MMM}

Note that there are 8 outcomes in this sample space, as each free throw has 2 possibilities (hit or miss), so 2 ⋅ 2 ⋅ 2 = 2^3 = 8. You can often create a sample space using a graphical approach, as shown:

[Figure: graphical construction of the sample space for three free throws]
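To make the counting concrete, here is a small Python sketch (added for illustration; the variable names are ours) that enumerates the same sample space:

```python
from itertools import product

# Each free throw is either a hit ("H") or a miss ("M").
outcomes_per_shot = ["H", "M"]

# The sample space is every ordered sequence of three shots: 2 * 2 * 2 = 8 outcomes.
sample_space = ["".join(seq) for seq in product(outcomes_per_shot, repeat=3)]

print(sample_space)       # ['HHH', 'HHM', 'HMH', 'HMM', 'MHH', 'MHM', 'MMH', 'MMM']
print(len(sample_space))  # 8
```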

2. Creating a Probability Model

If the proportion of occurrences of an outcome settles down to one value over the long run, that one value is then defined to be the probability of that outcome. Probabilities can be expressed as fractions (5/8), decimals (0.625), or percents (62.5%). There are two main rules that probabilities must satisfy for a given experiment:

  1. The probability of any event must be greater than or equal to 0 and less than or equal to 1. In symbols: 0 ≤ P ≤ 1. For example, it does not make sense to say that there is a “–30%” chance of rain, nor does it make sense to say that there is a “140%” chance of rain.
  2. The sum of the probabilities of all possible outcomes must equal 1. In other words, if we examine all possible outcomes from an experiment, one of them must occur! It does not make sense to say that there are two possible outcomes, one occurring with probability 20% and the other with probability 50%. What happens the other 30% of the time?

If an event is impossible, then its probability must be equal to 0 (i.e. it can never happen). If an event is a certainty, then its probability must be equal to 1 (i.e. it always happens).

A probability model is a mathematical description of long-run regularity consisting of a sample space S and a way of assigning probabilities to events. Probability models must satisfy both of the above rules. There are two main ways to assign probabilities to outcomes from a sample space:

  • The empirical method, in which an experiment is repeated over and over until you have an idea what the probabilities are for each outcome.
  • The classical method, which relies on counting techniques to determine the probability of an event.
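As a rough illustration of the difference, here is a short Python sketch (added here; it assumes a fair coin, so the classical answer is 1/2) that estimates P(heads) empirically by simulation and compares it to the classical count:

```python
import random

random.seed(1)  # for reproducibility of the illustration

# Classical method: one favorable outcome (heads) out of two equally likely outcomes.
classical_p_heads = 1 / 2

# Empirical method: flip a simulated fair coin many times and record the proportion of heads.
flips = 100_000
heads = sum(random.random() < 0.5 for _ in range(flips))
empirical_p_heads = heads / flips

print(classical_p_heads)   # 0.5
print(empirical_p_heads)   # close to 0.5; settles toward 0.5 as the number of flips grows
```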
Example

A basketball player shoots three free throws. We are interested in creating a probability model for the number of free throws that a basketball player makes when shooting three in a row. Recall from above that the sample space for this event is:

S  =  {HHH, HHM, HMH, HMM, MHH, MHM, MMH, MMM}

If we count the number of hits (H) for each possible outcome, we would get:

Number of hits  =  {3, 2, 2, 1, 2, 1, 1, 0}

The probability model for the number of free throws made, assuming this player has an equal chance of making (hitting) or missing the free throw, is:

Hits | Probability (Fraction) | Probability (Decimal) | Probability (Percent)
0    | 1 out of 8 = 1/8       | 0.125                 | 12.5%
1    | 3 out of 8 = 3/8       | 0.375                 | 37.5%
2    | 3 out of 8 = 3/8       | 0.375                 | 37.5%
3    | 1 out of 8 = 1/8       | 0.125                 | 12.5%

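A minimal Python sketch (ours, not from the original notes; it assumes each of the eight sequences is equally likely) that builds this probability model by counting the hits in each outcome:

```python
from itertools import product
from collections import Counter

# Enumerate the eight equally likely sequences of hits (H) and misses (M).
sample_space = ["".join(seq) for seq in product("HM", repeat=3)]

# Count how many sequences produce 0, 1, 2, or 3 hits.
hit_counts = Counter(seq.count("H") for seq in sample_space)

# Assign each count its probability; the probabilities must sum to 1 (Rule 2).
model = {hits: count / len(sample_space) for hits, count in sorted(hit_counts.items())}

print(model)                # {0: 0.125, 1: 0.375, 2: 0.375, 3: 0.125}
print(sum(model.values()))  # 1.0
```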
3. Combining Probabilities

In this section we learn about adding probabilities of events that are disjoint, i.e., events that have no outcomes in common. Two events are disjoint if it is impossible for both to happen at the same time. Another name for disjoint events is mutually exclusive. This section is relatively straightforward, so these notes will be rather short.

In the following discussion, the capital letters E and F represent possible outcomes from an experiment, and P(E) represents the probability of seeing outcome E.

For disjoint events, the outcomes of E or F can be listed as the outcomes of E followed by the outcomes of F. The Addition Rule for the probability of disjoint events is:

P (E or F) = P (E) + P (F)

Thus we can find P (E or F) if we know both P (E) and P (F). This is also true for more than two disjoint events. If E, F, G, … are all disjoint (none of them have any outcomes in common), then:

P (E or F or G or …) = P (E) + P (F) + P (G) + ⋯

The addition rule only applies to events that are disjoint. If two (or more) events are not disjoint, then this rule must be modified because some outcomes may be counted more than once. For the formula P (E or F) = P (E) + P (F), all the outcomes that are in both E and F will be counted twice. Thus, to compute P (E or F), these double-counted outcomes must be subtracted (once), so that each outcome is only counted once.

The General Addition Rule is:

P (E or F) = P (E) + P (F) – P (E and F),

where P (E and F) is the probability of the outcomes that are in both E and F. This rule is true both for disjoint events and for non-disjoint events, for if two events are indeed disjoint, then P (E and F) = 0, and the General Addition Rule simply reduces to the basic addition rule for disjoint events.

Example

When choosing a card at random out of a deck of 52 cards, what is the probability of choosing a queen or a heart? Define:

E = “choosing a queen”
F = “choosing a heart”

E and F are not disjoint because there is one card that is both a queen AND a heart, so we must use the General Addition Rule. We know the following probabilities using the classical (counting, equally-likely outcomes) method:

P (E) = P (queen) = 4/52
P (F) = P (heart) = 13/52
P (E and F) = P (queen of hearts) = 1/52

Therefore,

P (E or F) = P (E) + P (F) – P (E and F) = 4/52 + 13/52 – 1/52 = 16/52 ≈ 0.3077
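The same answer can be checked by brute-force counting over a 52-card deck. The following Python sketch (ours; the card representation is an arbitrary choice) lists the deck and counts the outcomes in E, F, and both:

```python
from itertools import product

ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["hearts", "diamonds", "clubs", "spades"]

# A deck is every (rank, suit) pair: 13 * 4 = 52 equally likely cards.
deck = list(product(ranks, suits))

queens = {card for card in deck if card[0] == "Q"}
hearts = {card for card in deck if card[1] == "hearts"}

p_queen = len(queens) / len(deck)                    # 4/52
p_heart = len(hearts) / len(deck)                    # 13/52
p_both = len(queens & hearts) / len(deck)            # 1/52 (the queen of hearts)
p_queen_or_heart = len(queens | hearts) / len(deck)  # direct count of "queen or heart"

# General Addition Rule: P(E or F) = P(E) + P(F) - P(E and F)
print(p_queen + p_heart - p_both)  # ≈ 0.3077 (16/52)
print(p_queen_or_heart)            # same value
```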
Finally, it is often easier to calculate the probability that something will not happen rather than determining the probability that it will happen. The complement of the event E is the “opposite” of E. We write the complement of event E as E^c. The complement E^c consists of all the outcomes that are not in the event E.

For example, when rolling one die, if the event E = {even number}, then E^c = {odd number}. If the event F = {1, 2}, then F^c = {3, 4, 5, 6}.

It should make sense that the probability of the complement E^c occurring is just 1 minus the probability that event E occurs. In formula form:

P(E^c) = 1 – P(E)

4. Probability of Independent Events

Two events E and F are independent if the occurrence of E in a probability experiment does not affect or alter the probability of event F occurring. In other words, knowing that E occurred does not give any additional information about whether F will or will not occur; knowing that F occurred does not give any additional information about the occurrence of E. Therefore, events E and F are independent if they are totally unrelated. For example, if you are flipping a fair coin (this means the probability of getting heads is 50% and of getting tails is 50%), does knowing that you just flipped tails tell us anything about what will happen the next time we flip the coin? No! The coin has no “memory” to speak of. Even if you flipped 10 heads in a row, the probability of flipping heads on the 11th toss is still 50%.

If the two events are not independent, then they are said to be dependent. If two events are dependent, it does not mean that they completely rely on each other; it just means that they are not independent of each other. In other words, there is some kind of relationship between E and F, even if it is just a very small relationship. For example, you are asked to pull one card from a standard deck of 52 cards. Let E = {red card} and F = {black card}. Suppose you pull a red card from the deck. Does knowing this provide any information about whether F occurred? Yes! If we pulled a red card, then we know we didn’t pull a black card, so therefore F could not have occurred!

Let’s run a different experiment by pulling two cards from a standard deck without replacement. If the first card pulled is a red card, does that change the probability that we will pull a black card for the second card? Most definitely, because now there is one fewer red card in the deck, which actually increases the probability that the second card is black (even though the change in probability is small).

The Multiplication Rule for Independent Events

The Multiplication Rule for independent events states:

P (E and F) = P (E) ⋅ P (F)

Thus we can find P (E and F) if we know P (E) and P (F). This is also true for more than two independent events. So if E, F, G, … are all independent of each other, then:

P (E and F and G and ⋯) = P (E) ⋅ P (F) ⋅ P (G) ⋯

Example

The ELISA is a test to determine whether the HIV antibody is present in a patient’s blood. The test is 99.5% effective, meaning that if the HIV antibody is not present, the test will correctly come back negative 99.5% of the time. The probability of a test coming back positive when the antibody is not present (known as a false positive) is 100% – 99.5% = 0.5% = 0.005. Suppose the ELISA is given to 5 randomly selected people who do not have the HIV antibody.

(a)

What is the probability that the ELISA comes back negative for all five people? First, testing each individual with the ELISA is an independent event, because knowing the results of the test for one person gives us no information about what the result will be for the next person. Therefore:

P (all 5 tests are negative) = (0.995) ⋅ (0.995) ⋅ (0.995) ⋅ (0.995) ⋅ (0.995)
= (0.995)^5
≈ 0.9752

Therefore, there is a 97.52% chance that all 5 individuals will test negative for the HIV antibody when all 5 patients are indeed HIV-negative.

(b)

What is the probability that the ELISA comes back positive for at least one of the five people? First of all, “at least one” means 1 or 2 or 3 or 4 or 5 of the people receive a positive test. Another way to say “all 5 tests are negative” is “none of the 5 tests is positive.” In symbols, if E = {all 5 have a negative ELISA}, then we could just as well say E = {none of the 5 have a positive ELISA}. Therefore, we know the complement of E to be E^c = {at least one of the 5 has a positive ELISA}. Using the fact that P(E^c) = 1 – P(E), we see:

P (at least one of the 5 tests positive) = 1 – P (all 5 have negative tests)
= 1 – (0.995)^5
≈ 1 – 0.9752
≈ 0.0248

There is a 2.48% chance of at least one of the 5 individuals getting a false positive reading. This is an unusual event (since the probability value is very low), as it should be: false positive results can cause an individual undue emotional stress and result in additional (often extremely expensive) testing.
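A quick Python sketch (ours; the 0.995 figure comes from the example above) that reproduces both calculations:

```python
p_negative = 0.995   # probability a single test is correctly negative for an HIV-negative person
n_people = 5

# Part (a): independent tests multiply.
p_all_negative = p_negative ** n_people
print(round(p_all_negative, 4))           # 0.9752

# Part (b): complement rule, "at least one positive" = 1 - "all negative".
p_at_least_one_positive = 1 - p_all_negative
print(round(p_at_least_one_positive, 4))  # 0.0248
```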

5. Conditional Probability

In this section we learn about events that are not independent of one another. When this happens, knowing additional information can actually change the probability of a future event happening. How can this occur? Aren’t probabilities supposed to be fixed?

The easiest example deals with dice. Let’s suppose you close your eyes and roll a die. Without opening your eyes, what is the probability that you rolled the number 5? That’s easy,

P (rolled a 5) = 1/6 ≈ 0.1667

Let’s change up the experiment a bit. You close your eyes and, after rolling your die, a friend in the room tells you that you rolled an odd number. Now, what is the probability that you rolled the number 5? Since there are only three odd numbers on a die, {1,3,5}, you now have a 1 in 3 chance of rolling a 5. In symbols:

P (rolled a 5 | rolled an odd number) = 1/3 ≈ 0.3333

If your friend tells you that an even number showed up, what is the probability that you rolled a 5? It can’t happen, since 5 is an odd number, so the probability is 0.

So what is happening in these cases? Well, you are learning some additional information that leads us to change the probability of an event occurring. In effect, knowing additional information changes the sample space we use to compute the probabilities. Therefore, the probability of our event occurring must change.

The notation P(F | E) means “the probability of F occurring given that (or knowing that) event E already occurred.” For the above dice example, F = {roll a 5} and E = {result is an odd number}, and we found that P(F | E) = 1/3 ≈ 33.33%.
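A small Python simulation sketch (ours; it assumes a fair six-sided die) that estimates P(F | E) by keeping only the rolls where the condition E occurred:

```python
import random

random.seed(2)  # reproducibility of the illustration

rolls = [random.randint(1, 6) for _ in range(100_000)]

# Condition on E = {result is an odd number}: keep only those rolls.
odd_rolls = [r for r in rolls if r % 2 == 1]

# Among the rolls satisfying E, what fraction are F = {roll a 5}?
p_five_given_odd = sum(r == 5 for r in odd_rolls) / len(odd_rolls)

print(p_five_given_odd)  # close to 1/3 ≈ 0.3333
```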

Conditional probabilities are useful when presented with data that come in tables, where different categories of data (say, Male and Female) are broken down into additional sub-categories (say, marital status).

To compute probabilities involving dependent events, we use the Conditional Probability Rule. In symbols:

P (F | E) = P (E and F) / P (E) = N (E and F) / N (E),

where P (E) is the probability of event E occurring and N (E) is the number of ways that event E can occur.

Example

Consider studying the possibilities of gender for a 2-child family. The sample space for all possible outcomes is S = {GG, GB, BG, BB}, where birth order is important…there is a first child and then a second child. Assume that each child is equally likely to be male or female. Each of the items in our sample space can be thought of as the outcome of a chance experiment that selects at random a family with two children. Think about the following questions:

  1. What is the probability of seeing a family with two girls, given that the family has at least one girl?
  2. What is the probability of seeing a family with two girls, given that the older sibling is a girl?

To most people, these questions seem to be the same. However, if we fill in the probabilities you’ll see they are different!

For Question 1, we want to compute:

P (family has two girls | family has at least one girl)

Using the Conditional Probability Rule we see this probability is equal to:

P (family has two girls | family has at least one girl) = P (two girls and at least one girl) / P (at least one girl) = (1/4) / (3/4) = 1/3

For Question 2, we want to compute:

P (family has two girls | family has an older girl) 

Using the Conditional Probability Rule we see this probability is equal to:

P (family has two girls | family has an older girl) = P (two girls and older child is a girl) / P (older child is a girl) = (1/4) / (2/4) = 1/2

Did you notice how the sample space for Question 2 changed? Compared with Question 1, knowing that the older child is a girl eliminates the outcome BG (a family with a boy first), leaving only {GG, GB}, so the probability rises from 1/3 to 1/2.
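Both questions can also be answered by counting directly over the four equally likely outcomes. A brief Python sketch (ours; the first letter of each string denotes the older child):

```python
# Sample space: first letter is the older child, second letter is the younger child.
sample_space = ["GG", "GB", "BG", "BB"]   # each outcome has probability 1/4

two_girls = {fam for fam in sample_space if fam == "GG"}
at_least_one_girl = {fam for fam in sample_space if "G" in fam}
older_is_girl = {fam for fam in sample_space if fam[0] == "G"}

# Conditional Probability Rule: P(F | E) = N(E and F) / N(E)
p_q1 = len(two_girls & at_least_one_girl) / len(at_least_one_girl)
p_q2 = len(two_girls & older_is_girl) / len(older_is_girl)

print(p_q1)  # 1/3 ≈ 0.3333 (given at least one girl)
print(p_q2)  # 0.5           (given the older child is a girl)
```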

Computing Probabilities Using the General Multiplication Rule

Earlier you saw the multiplication rule for independent events, which is:

P (E and F) = P (E) ⋅ P (F)

Is there such a rule if events E and F are dependent? With a slight modification, we get the General Multiplication Rule:

P (E and F) = P (E) ⋅ P (F | E)
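For instance, in the earlier two-card draw without replacement, the chance that both cards are red is P (1st red) ⋅ P (2nd red | 1st red) = (26/52)(25/51) ≈ 0.2451. A short Python sketch (ours) comparing this formula with a simulation:

```python
import random

random.seed(3)  # reproducibility of the illustration

# General Multiplication Rule: P(1st red and 2nd red) = P(1st red) * P(2nd red | 1st red)
p_both_red = (26 / 52) * (25 / 51)
print(round(p_both_red, 4))  # 0.2451

# Simulation check: draw two cards without replacement from a 26-red / 26-black deck.
deck = ["red"] * 26 + ["black"] * 26
trials = 100_000
both_red = 0
for _ in range(trials):
    first, second = random.sample(deck, 2)   # sampling without replacement
    both_red += (first == "red" and second == "red")
print(both_red / trials)  # close to 0.2451
```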