Archives for December 2014

1. Probability and Two-Way Tables Intro

For this sub-competency you will be introduced to the basics of probability.

Basic Probability Rules

Probability will play a huge role later in this course when we start investigating the probability of obtaining certain results from a sample. An unusual event is one that has a low probability of occurring. This is not a precise definition, because how low is “low?” Typically, probabilities of 5% or less are considered low. Recall that 5% means 5 per 100 or 5 times out of 100. Therefore, an event E with a 5% chance of occurring means that in repeated trials we would expect to see E happen in only 5 trials out of every 100.  Thus, events with probabilities of 5% or lower are considered unusual. However, this cutoff point can (and will) vary by the context of the problem.

Probability is basically the science of chance behavior. Chance behavior is unpredictable in the short run but has a regular and predictable pattern in the long run. This is why we will use probability to gain useful results from random samples and randomized comparative experiments…although we don’t know exactly what we’ll see from our sampling or experimentation, if we repeat the process over and over, we gain some confidence in the outcomes we’ll see.

Here are some definitions you want to be familiar with. An experiment is a repeatable process where the results are uncertain. An outcome is one specific possible result from the experiment. The set of all possible outcomes is the sample space.


A basketball player shoots three free throws. What are the possible sequences of hits (H) and misses (M)? The experiment in this case is a basketball player shooting 3 free throws. A possible outcome of this experiment is the sequence HHM (hit, hit, miss). The sample space of this experiment is:

S = {HHH, HHM, HMH, HMM, MHH, MHM, MMH, MMM}

Note that there are 8 outcomes in this sample space, as each free throw has 2 possibilities (hit or miss). So 2 ⋅ 2 ⋅ 2 = 2³ = 8. You can often create a sample space using a graphical approach, such as a tree diagram.
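The counting argument above is easy to verify with a short Python sketch (not part of the original notes) that enumerates every hit/miss sequence:

```python
from itertools import product

# Enumerate every sequence of hits (H) and misses (M) for three free throws.
sample_space = ["".join(seq) for seq in product("HM", repeat=3)]

print(sample_space)       # all 8 outcomes, HHH through MMM
print(len(sample_space))  # 2 * 2 * 2 = 8
```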


1. Intro

In this unit you will learn about measures of position, or location, within a data set. One important measure of position, which will be used extensively later in the course, tells us the position of a data value relative to the standard deviation. Other measures tell us location in terms of groups (or percents) of the data set.

Once you have created a distribution of your data, you can use its shape, center, and spread to tell the story of your underlying data.

The most important idea that you need to take from this unit is that of a probability density curve, the graphical representation of a continuous random variable. When you are looking at a histogram of continuous data, you can almost imagine a smooth curve making the same shape as the histogram’s bars. For example, if you think back to the example from Unit 1 concerning a state’s residents living in poverty, we produced the following histogram:

Percentage of a State's Population in Poverty

 A smooth curve that has (roughly) the same shape as this histogram would be something like:

Smooth Curve

The smooth curve that represents our histogram is called a density curve and it has some cool properties. First, it is always on or above our horizontal axis. Since our vertical axis represents a count or percentage of data falling in a particular class, there can’t be a negative amount of data in a class. The other property is that the total area under an entire density curve is 1 (or 100%). Since a density curve represents our data, ALL of our data…or 100% of it…must be included in the distribution. We’re going to routinely utilize the result that an area (or region) under a density curve represents the probability of obtaining results falling in that area. So remember, AREA = PERCENTAGE OR PROBABILITY. Keep reminding yourself: AREA = PERCENTAGE OR PROBABILITY.

Again, the main purpose of a density function is to be a smooth and continuous representation of our actual data. Because the density function is a “model” of our data, we will use Greek letters such as μ and σ to represent the mean and standard deviation of the density curve. Statistics is full of symbols; it is most important to remember that x̄ and s represent the mean and standard deviation, respectively, of a SAMPLE, while μ and σ represent the mean and standard deviation, respectively, of a POPULATION. The density curve is a stand-in for our population.

To begin, let’s investigate the distributions of two continuous random variables, the uniform distribution and the normal distribution, which will be the focus of our statistical studies from here on out.


2. Creating a Probability Model

If the proportion of occurrences of an outcome settles down to one value over the long run, that one value is then defined to be the probability of that outcome. Probabilities can be expressed as fractions (5/8), decimals (0.625), or percents (62.5%). There are two main rules that probabilities must satisfy for a given experiment:

  1. The probability of any event must be greater than or equal to 0 and less than or equal to 1. In symbols: 0 ≤ P(E) ≤ 1 for any event E. For example, it does not make sense to say that there is a “–30%” chance of rain, nor does it make sense to say that there is a “140%” chance of rain.
  2. The sum of the probabilities of all possible outcomes must equal 1. In other words, if we examine all possible outcomes from an experiment, one of them must occur! It does not make sense to say that there are two possible outcomes, one occurring with probability 20% and the other with probability 50%. What happens the other 30% of the time?

If an event is impossible, then its probability must be equal to 0 (i.e. it can never happen). If an event is a certainty, then its probability must be equal to 1 (i.e. it always happens).

A probability model is a mathematical description of long-run regularity consisting of a sample space S and a way of assigning probabilities to events. Probability models must satisfy both of the above rules. There are two main ways to assign probabilities to outcomes from a sample space:

  • The empirical method, in which an experiment is repeated over and over until you have an idea what the probabilities are for each outcome.
  • The classical method, which relies on counting techniques to determine the probability of an event.

A basketball player shoots three free throws. We are interested in creating a probability model for the number of free throws that a basketball player makes when shooting three in a row. Recall from above that the sample space for this event is:

S = {HHH, HHM, HMH, HMM, MHH, MHM, MMH, MMM}
If we count the number of hits (H) for each possible outcome, we would get:

Hits = {3, 2, 2, 1, 2, 1, 1, 0}

The probability model for the number of free throws made, assuming this player has an equal chance of making (hitting) or missing the free throw, is:

Hits   Probability (Fraction)   Probability (Decimal)   Probability (Percent)
0      1 out of 8 = 1/8         0.125                    12.5%
1      3 out of 8 = 3/8         0.375                    37.5%
2      3 out of 8 = 3/8         0.375                    37.5%
3      1 out of 8 = 1/8         0.125                    12.5%
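The probability model above can be reproduced with a small Python sketch that tallies the hit counts over the equally likely outcomes (assuming, as the text does, a 50/50 shooter):

```python
from collections import Counter
from fractions import Fraction
from itertools import product

# All 8 equally likely hit/miss sequences for three free throws.
sample_space = ["".join(seq) for seq in product("HM", repeat=3)]

# Tally how many outcomes produce 0, 1, 2, or 3 hits.
hit_counts = Counter(seq.count("H") for seq in sample_space)

model = {hits: Fraction(n, len(sample_space)) for hits, n in sorted(hit_counts.items())}
for hits, p in model.items():
    print(f"{hits} hits: {p} = {float(p)} = {float(p):.1%}")
```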



3. Combining Probabilities

In this section we learn about adding probabilities of events that are disjoint, i.e., events that have no outcomes in common. Two events are disjoint if it is impossible for both to happen at the same time. Another name for disjoint events is mutually exclusive. This section is relatively straightforward, so these notes will be rather short.

In the following discussion, the capital letters E and F represent possible outcomes from an experiment, and P(E) represents the probability of seeing outcome E.

For disjoint events, the outcomes of E or F can be listed as the outcomes of E followed by the outcomes of F. The Addition Rule for the probability of disjoint events is:

P(E or F) = P(E) + P(F)

Thus we can find P(E or F) if we know both P(E) and P(F). This is also true for more than two disjoint events. If E, F, G, … are all disjoint (none of them have any outcomes in common), then:

P (E or F or G or …) = P (E) + P (F) + P (G) + ⋯

The Addition Rule only applies to events that are disjoint. If two (or more) events are not disjoint, then this rule must be modified because some outcomes may be counted more than once. In the formula P(E or F) = P(E) + P(F), all the outcomes that are in both E and F would be counted twice. Thus, to compute P(E or F), these double-counted outcomes must be subtracted (once), so that each outcome is only counted once.

The General Addition Rule is:

P (E or F) = P (E) + P (F) – P (E and F),

where P(E and F) is the probability of the event consisting of the outcomes in both E and F. This rule is true both for disjoint events and for non-disjoint events, for if two events are indeed disjoint, then P(E and F) = 0, and the General Addition Rule simply reduces to the basic Addition Rule for disjoint events.


When choosing a card at random out of a deck of 52 cards, what is the probability of choosing a queen or a heart? Define:

E = “choosing a queen”
F = “choosing a heart”

E and F are not disjoint because there is one card that is both a queen AND a heart, so we must use the General Addition Rule. We know the following probabilities using the classical (counting, equally-likely outcomes) method:

P (E) = P (queen) = 4/52
P (F) = P (heart) = 13/52
P (E and F) = P (queen of hearts) = 1/52

Putting these into the General Addition Rule:

P (E or F) = 4/52 + 13/52 – 1/52 = 16/52 = 4/13 ≈ 30.8%
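The card calculation above, done with exact fractions in Python:

```python
from fractions import Fraction

p_queen = Fraction(4, 52)
p_heart = Fraction(13, 52)
p_queen_of_hearts = Fraction(1, 52)  # the one overlapping card

# General Addition Rule: subtract the overlap so it is not counted twice.
p_queen_or_heart = p_queen + p_heart - p_queen_of_hearts
print(p_queen_or_heart)  # 4/13, i.e. 16/52
```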
Finally, it is often easier to calculate the probability that something will not happen than the probability that it will. The complement of the event E is the “opposite” of E. We write the complement of event E as Eᶜ. The complement Eᶜ consists of all the outcomes that are not in the event E.

For example, when rolling one die, if event E = {even number}, then Eᶜ = {odd number}. If event F = {1, 2}, then Fᶜ = {3, 4, 5, 6}.

It should make sense that the probability of the complement Eᶜ occurring is just 1 minus the probability that event E occurs. In formula form:

P(Eᶜ) = 1 – P(E)

4. Probability of Independent Events

Two events E and F are independent if the occurrence of E in a probability experiment does not affect or alter the probability of event F occurring. In other words, knowing that E occurred does not give any additional information about whether F will or will not occur; knowing that F occurred does not give any additional information about the occurrence of E. Therefore, events E and F are independent if they are totally unrelated. For example, if you are flipping a fair coin (this means the probability of getting heads is 50% and getting tails is 50%), does knowing that you just flipped tails tell us anything about what will happen the next time we flip the coin? No! The coin has no “memory” to speak of. Even if you flipped 10 heads in a row, the probability of flipping heads on the 11th toss is still 50%.

If the two events are not independent, then they are said to be dependent. If two events are dependent, it does not mean that they completely rely on each other; it just means that they are not independent of each other. In other words, there is some kind of relationship between E and F, even if it is just a very small relationship. For example, you are asked to pull one card from a standard deck of 52 cards. Let E = {red card} and F = {black card}. Suppose you pull a red card from the deck. Does knowing this provide any information about whether F occurred? Yes! If we pulled a red card, then we know we didn’t pull a black card, so therefore F could not have occurred!

Let’s run a different experiment by pulling two cards from a standard deck without replacement. If the first card pulled is a red card, does that change the probability that we will pull a black card for the second card? Most definitely, because now there is one fewer red card in the deck, which actually increases the probability that the second card is black (even though the change in probability is small).

The Multiplication Rule for Independent Events

The Multiplication Rule for independent events states:

P (E and F) = P (E) ⋅ P (F)

Thus we can find P (E and F) if we know P (E) and P (F). This is also true for more than two independent events. So if E, F, G, … are all independent from each other, then:

P (E and F and G and ⋯) = P (E) ⋅ P (F) ⋅ P (G) ⋯

The ELISA is a test to determine whether the HIV antibody is present in a patient’s blood. The test is 99.5% effective, meaning that it will accurately come back negative 99.5% of the time when the HIV antibody is not present. The probability of a test coming back positive when the antibody is not present (known as a false positive) is 100% – 99.5% = 0.5% = 0.005. Suppose the ELISA is given to 5 randomly selected people who do not have the HIV antibody.


What is the probability that the ELISA comes back negative for all five people? First, testing each individual with the ELISA is an independent event, because knowing the results of the test for one person gives us no information about what the result will be for the next person. Therefore:

P (all 5 tests are negative) = (0.995) ⋅ (0.995) ⋅ (0.995) ⋅ (0.995) ⋅ (0.995)
= (0.995)⁵
≈ 0.9752

Therefore, there is a 97.52% chance that all 5 individuals will test negative for the HIV antibody when all 5 patients are indeed HIV-negative.


What is the probability that the ELISA comes back positive for at least one of the five people? First of all, “at least one” means 1 or 2 or 3 or 4 or 5 of the people receive a positive test. Another way to say “all 5 tests are negative” is “none of the 5 tests is positive.” In symbols, if E = {all 5 have a negative ELISA}, then we could just as well write E = {none of the 5 have a positive ELISA}. Therefore, the complement of E is Eᶜ = {at least one of the 5 has a positive ELISA}. Using the fact that P(Eᶜ) = 1 – P(E), we see:

P (at least one of the 5 tests positive) = 1 – P (all 5 have negative tests)
= 1 – (0.995)⁵
≈ 1 – 0.9752
≈ 0.0248

There is a 2.48% chance of at least one of the 5 individuals getting a false positive reading. This is an unusual event (since the probability is very low), as it should be, because false positive results can cause an individual undue emotional stress and lead to additional (often extremely expensive) testing.
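Both ELISA calculations can be checked with a couple of lines of Python:

```python
p_negative = 0.995  # chance one HIV-negative person correctly tests negative

# Independent tests multiply.
p_all_negative = p_negative ** 5
p_at_least_one_positive = 1 - p_all_negative  # complement rule

print(round(p_all_negative, 4))           # 0.9752
print(round(p_at_least_one_positive, 4))  # 0.0248
```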

5. Conditional Probability

In this section we learn about events that are not independent of one another. When this happens, knowing additional information can actually change the probability of a future event happening. How can this occur? Aren’t probabilities supposed to be fixed?

The easiest example deals with dice. Let’s suppose you close your eyes and roll a die. Without opening your eyes, what is the probability that you rolled the number 5? That’s easy:

P(rolled a 5) = 1/6 ≈ 16.67%

Let’s change up the experiment a bit. You close your eyes and, after rolling your die, a friend in the room tells you that you rolled an odd number. Now, what is the probability that you rolled the number 5? Since there are only three odd numbers on a die, {1, 3, 5}, you now have a 1 in 3 chance of having rolled a 5. In symbols:

P(rolled a 5 | odd number) = 1/3 ≈ 33.33%
If your friend tells you that an even number showed up, what is the probability that you rolled a 5? It can’t happen, since 5 is an odd number, so P(rolled a 5 | even number) = 0.

So what is happening in these cases? Well, you are learning some additional information that leads us to change the probability of an event occurring. In effect, knowing additional information shrinks the sample space we use to compute the probabilities. Therefore, the probability of our event occurring must change.

The notation P(F|E) means “the probability of F occurring given that (or knowing that) event E already occurred.” For the above dice example, F = {roll a 5} and E = {result is an odd number}, and we found that P(F|E) = 1/3 ≈ 33.33%.

Conditional probabilities are useful when presented with data that comes in tables, where different categories of data (say, Male and Female), are broken down into additional sub-categories (say, marriage status).

To compute the probabilities of dependent events, we use the Conditional Probability Rule. In symbols:

P(F|E) = P(E and F) / P(E) = N(E and F) / N(E),

where P(E) is the probability of event E occurring and N(E) is the number of ways that event E can occur.


Consider studying the possibilities of gender for a 2-child family. The sample space for all possible outcomes is S = {GG, GB, BG, BB}, where birth order is important…there is a first child and then a second child. Assume that each child is equally likely to be male or female. Each of the items in our sample space can be thought of as the outcome of a chance experiment that selects at random a family with two children. Think about the following questions:

  1. What is the probability of seeing a family with two girls, given that the family has at least one girl?
  2. What is the probability of seeing a family with two girls, given that the older sibling is a girl?

To most people, these questions seem to be the same. However, if we fill in the probabilities, you’ll see they are different!

For Question 1, we want to compute:

P (family has two girls | family has at least one girl)

Using the Conditional Probability Rule, we see this probability is equal to:

P(GG) / P(at least one girl) = (1/4) / (3/4) = 1/3
For Question 2, we want to compute:

P (family has two girls | the older sibling is a girl)

Using the Conditional Probability Rule, we see this probability is equal to:

P(GG) / P(older sibling is a girl) = (1/4) / (2/4) = 1/2
Did you notice how the sample space for Question 2 changed? Since we knew the older child was a girl, we had to eliminate the outcomes BG and BB, since those families had a boy first.
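Both two-child questions can be answered by brute force over the four equally likely families; this Python sketch applies the Conditional Probability Rule directly:

```python
from fractions import Fraction
from itertools import product

# The four equally likely families; the first letter is the older child.
families = ["".join(kids) for kids in product("GB", repeat=2)]  # GG, GB, BG, BB

def p(event):
    # Classical method: favorable outcomes over total outcomes.
    return Fraction(len(event), len(families))

two_girls = [f for f in families if f == "GG"]
at_least_one_girl = [f for f in families if "G" in f]
older_is_girl = [f for f in families if f[0] == "G"]

# Conditional Probability Rule: P(F | E) = P(E and F) / P(E)
q1 = p([f for f in two_girls if f in at_least_one_girl]) / p(at_least_one_girl)
q2 = p([f for f in two_girls if f in older_is_girl]) / p(older_is_girl)
print(q1, q2)  # 1/3 1/2
```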

Computing Probabilities Using the General Multiplication Rule

Earlier you saw the multiplication rule for independent events, which is:

P (E and F) = P (E) ⋅ P (F)

Is there such a rule if events E and F are dependent? With a slight modification, we get the General Multiplication Rule:

P (E and F) = P (E) ⋅ P (F|E)
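The card experiment from earlier (two cards drawn without replacement) shows the rule in action; a Python sketch with exact fractions:

```python
from fractions import Fraction

# P(first card red AND second card black), drawing without replacement.
p_first_red = Fraction(26, 52)
p_black_given_red = Fraction(26, 51)  # 51 cards remain, still 26 black

# General Multiplication Rule: P(E and F) = P(E) * P(F | E)
p_red_then_black = p_first_red * p_black_given_red
print(p_red_then_black)  # 13/51
```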

2. Uniform Distribution

One simple, basic example of a continuous random variable is one where the random variable X can take any value in a given interval with an equally likely probability. The distribution of such a random variable is the uniform distribution.

Imagine you show up for work one morning and are told there will be a fire alarm drill sometime during the eight-hour day. Fire drills don’t make sense if everyone knows when the drill will take place, so all you know is that sometime during the day, a drill will take place. This means that at every moment there is an equally likely chance that the fire drill will take place. Together with the information that the drill will definitely happen, i.e., there is a 100% = 1 probability that it will occur, we get the following distribution:


Why is the probability fixed at 1/8? Use the facts that (1) there are 8 hours during which the drill can take place, and (2) there is a 100% probability of the drill occurring. Since the uniform distribution is a rectangle, and the area of any rectangle is A = (length) × (width), we get:

1 = 8 × height

and solving for height gives us:

height = 1/8

Now you can determine the probabilities of the drill taking place during any time interval you choose. For example, the probability that the drill will occur during your lunch hour (from 12:00 p.m. to 1:00 p.m.) is simply the area of the region shown in red:


Once again, remember that AREA = PERCENTAGE OR PROBABILITY. The discussion of probability density curves always starts with the uniform distribution because everyone knows how to calculate the area of a rectangle, and it’s easy to see how the concepts of area and probability are linked.
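Since every uniform-distribution probability is just the area of a rectangle, the fire-drill example is easy to code; this sketch assumes the 12:00-to-1:00 lunch hour is hours 4 through 5 of the eight-hour workday:

```python
def uniform_interval_probability(a, b, low=0.0, high=8.0):
    """P(a <= X <= b) for X uniform on [low, high]: a rectangle's area."""
    height = 1.0 / (high - low)                   # total area must equal 1
    width = max(0.0, min(b, high) - max(a, low))  # clip to the support
    return width * height

# Probability the drill happens during the lunch hour (hour 4 to hour 5).
print(uniform_interval_probability(4, 5))  # 0.125
```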

3. The Normal Distribution

For the majority of the remainder of this class, we’ll be focusing on variables that have a (roughly) normal distribution. For example, data sets consisting of physical measurements (heights, weights, lengths of bones, and so on) for adults of the same species and sex often follow a similar pattern: most individuals are clumped around the average or mean of the population, with numbers decreasing the farther values are from the average in either direction.

Normal Distribution Chart

The shape of any normal curve is single-peaked, symmetric, and bell-shaped. A normally distributed random variable, or a variable with a normal probability distribution, is a continuous random variable that has a relative frequency histogram in the shape of a normal curve. This curve is also called the normal density curve. The actual functional notation for creating the normal curve is quite complex:

f(x) = (1 / (σ√(2π))) ⋅ e^(–(x – μ)² / (2σ²))

where μ and σ are the mean and standard deviation of the population of data.

What this formula tells us is that any mean μ and standard deviation σ completely define a unique normal curve. Recall that μ tells us the “center” of the peak while σ describes the overall “fatness” of the data set. A small σ value indicates a tall, skinny data set, while a larger value of σ results in a shorter, more spread out data set. Each normal distribution is indicated by the symbols N(μ,σ) . For example, the normal distribution N(0,1) is called the standard normal distribution, and it has a mean of 0 and a standard deviation of 1.

Properties of a Normal Distribution

  1. A normal distribution is bell-shaped and symmetric about its mean.
  2. A normal distribution is completely defined by its mean, µ, and standard deviation, σ.
  3. The total area under a normal distribution curve equals 1.
  4. The x-axis is a horizontal asymptote for a normal distribution curve.

A graphical representation of the normal distribution curve is shown below:

Sec03. NormalDis

Because there are an infinite number of possibilities for µ and σ, there are an infinite number of normal curves. In order to determine probabilities for each normally distributed random variable, we would have to perform separate probability calculations for each normal distribution.

Sec03.Normal Dis2

One amazing fact about any normal distribution is called the 68-95-99.7 Rule, or more concisely, the empirical rule. This rule states that:

  • Roughly 68% of all data observations fall within one standard deviation on either side of the mean. Thus, there is a 68% chance of a variable having a value within one standard deviation of the mean.
  • Roughly 95% of all data observations fall within two standard deviations on either side of the mean. Thus, there is a 95% chance of a variable having a value within two standard deviations of the mean.
  • Roughly 99.7% of all data observations fall within three standard deviations on either side of the mean. Thus, there is a 99.7% chance of a variable having a value within three standard deviations of the mean.

A graphical representation of the empirical rule is shown in the following figure:


Suppose a variable has mean μ = 17 and standard deviation σ = 3.4. Then, according to the empirical rule:

  • Approximately 68% of individual data values will lie between: 17 – 3.4 = 13.6 and 17 + 3.4 = 20.4. In interval notation we write: (13.6, 20.4).
  • Approximately 95% of individual data values will lie between 17 – 2⋅3.4 = 10.2 and 17 + 2⋅3.4 = 23.8. In interval notation we write: (10.2, 23.8).
  • Approximately 99.7% of individual data values will lie between 17 – 3⋅3.4 = 6.8 and 17 + 3⋅3.4 = 27.2. In interval notation we write: (6.8, 27.2).

The results from the third bullet point illustrate how a data value of, say, 2.1 (which is less than 6.8) or a data value of, say, 33.2 (a value greater than 27.2) would both be very unusual, since almost all data values should lie between 6.8 and 27.2.
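The three empirical-rule intervals for μ = 17 and σ = 3.4 can be generated in a loop:

```python
mu, sigma = 17, 3.4

# Empirical rule: mean +/- k standard deviations covers ~68%, ~95%, ~99.7%.
for k, pct in [(1, 68), (2, 95), (3, 99.7)]:
    low, high = mu - k * sigma, mu + k * sigma
    print(f"~{pct}% of values lie in ({low:.1f}, {high:.1f})")
```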

Back to the Standard Normal Curve

All normal distributions, regardless of their mean and standard deviation, share the Empirical Rule. With some very simple mathematics, we can “transform” any normal distribution into the standard normal distribution. This is called a z-transform.


Sec03. StdNorm3

Using the z-transformation, any data set that is normally distributed can be converted to the standard normal distribution by the conversion:

Z = (X – μ) / σ
where X is the normally distributed random variable, and Z is a random variable following the standard normal distribution.

Notice that when X = μ, Z = (μ – μ)/σ = 0, which explains how the z-transform maps our mean to 0.
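The z-transform is a one-line function; note how the mean maps to 0 and a value one standard deviation above the mean maps to 1:

```python
def z_score(x, mu, sigma):
    """How many standard deviations x lies from the mean."""
    return (x - mu) / sigma

print(z_score(17, 17, 3.4))              # 0.0 -- the mean itself
print(round(z_score(20.4, 17, 3.4), 6))  # 1.0 -- one sigma above the mean
```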

Properties of the Standard Normal Distribution

  1. The standard normal distribution is bell-shaped and symmetric about its mean.
  2. The standard normal distribution is completely defined by its mean, µ = 0, and standard deviation,  σ = 1.
  3. The total area under the standard normal distribution curve equals 1.
  4. The x-axis is a horizontal asymptote for the standard normal distribution curve.

Sec03. StdNorm

1. Intro

We are now getting into the more computational part of statistics. When we describe a distribution, there are three main characteristics to identify: its shape, center, and spread. We already learned about the many shapes distributions can take. The center and spread of a distribution are numerical summaries of a data set. There are many ways to describe both the center and spread of a data set.

2. Measures of Center

In this section we learn three measures of central tendency, i.e., three ways to identify the “center” of a data set. You should already be somewhat familiar with these basic ideas.

The Mode

The mode of a variable is a simple measure of center that describes the most frequent (recurring) observation in a data set. For example, in a data set such as {0,1,2,2,2,3,4,8}, the value 2 would be the mode of the data set since it appears the most times. If you refer back to the exam score data shown previously, none of the data values appears more than once. This means that the exam score data has NO mode. The mode is rarely used in serious statistics studies.

The Arithmetic Mean

The arithmetic mean of a variable is often what people mean by the “average.” To calculate it, simply add up all the values and divide by how many there are. In statistics, the symbol x̄ (pronounced “x-bar”) is used to denote the mean of a sample.

As an example, the data shown in the table are the first exam scores for all 17 students from a calculus class. You should verify that the average is:


The mean is only valid for quantitative data and can be thought of as a balance point for the set of data. For example, if you had only three exam scores, such as 81, 85, and 83, you can see how “borrowing” two exam points from the 85 and applying them to the 81 would make all three scores be 83. That is your average value.

The Median

The median of a variable, typically denoted by the capital letter M, is another measure of the “center.” The median is simply the numerical value of the data value that occupies the physical middle location of your ordered data set.

After just a moment of thinking, it’s clear that the calculation of the median of a variable is slightly different depending on if there are an odd number of data points, or if there are an even number of data points. (Think about the middle value of 3 data points and the middle value of 4 data points.)

To calculate the median of a data set, arrange the data in order and count the number of observations, n. If the value of n is odd, then there will be a data value that is exactly in the middle. That data value is the median M. If n is even, then there will be two values on either side of the exact middle. In this case, the median is defined to be the average of these two data values. In either case, the location of the median can be found through the simple calculation:

location = (n + 1) / 2
Please be careful to note that the median is NOT the value of the fraction (n + 1)/2.
The value of this fraction is simply the location of the median.
Returning to the example of the first exam scores above, we first sort the scores in ascending order, as shown. Since there are 17 scores, an odd number, there will be an exact middle score. If your data set has just a few values, it is easy to find the middle. However, just to be sure, the location of the median comes from the formula:

(17 + 1) / 2 = 9
Thus, the 9th data value gives us our median, M = 76.5.

Now suppose that the student who scored a 46 wasn’t enrolled in the class in the first place. That would mean the population has n = 16 members (just ignore the 46). In that case, the location of the median would be:

(16 + 1) / 2 = 8.5

Obviously there is not a score in the 8.5th location. Thus, we take the two scores from the 8th and 9th positions, 76.5 and 77, and find their average:

(76.5 + 77) / 2 = 76.75
Therefore, the population median would be M = 76.75.
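The location formula translates directly into code; a small Python sketch (illustrated on made-up score lists, not the full 17-score class) that handles both the odd-n and even-n cases:

```python
def median(values):
    """Median via the location formula (n + 1) / 2 on sorted data."""
    data = sorted(values)
    n = len(data)
    location = (n + 1) / 2        # the POSITION of the median, not its value
    if location == int(location):            # odd n: an exact middle value
        return data[int(location) - 1]
    lo = data[int(location) - 1]             # even n: average the two values
    hi = data[int(location)]                 # straddling the middle
    return (lo + hi) / 2

print(median([72, 46, 76.5, 77, 81]))  # 76.5 (odd n: middle of sorted data)
print(median([1, 5, 13, 20]))          # 9.0  (even n: average of 5 and 13)
```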

On the TI-83/84 Calculator:

First we enter the data into the lists. To do this we hit the STAT key and select the first option, 1:Edit. Then we enter the data into the first list and hit ENTER after every value.

Sec02. ti-83 d

Then we hit STAT again, arrow over to the CALC menu, and select 1:1-Var Stats. Hit ENTER, choose your list to be L1, and hit ENTER on Calculate. We notice that the value of the mean is shown, as well as the size of our data set, n = 13.

Sec02. ti-83

If we arrow down further we see some more statistics, including the median. Observe that the calculator does not give us the mode. For more details on using the calculator to do statistics, check out the Technology Guide on D2L.

Sec02. ti-83a

Comparing the Mean and Median Values

Although both the mean and median are measures of the center of a data set, it is rarely the case that the two will be exactly equal. In fact, there are times when the two will be very different. How the mean and median relate to each other tells us a lot about the distribution of the underlying data set.

Basically, it’s important to know that the mean is a measure of center that is highly sensitive to changes in the data values. Think about the process of taking an average…every data value is used in the computation. If just one data value changes, then the average will change. The change in the value of the mean will be much more drastic if one of the extreme data values changes. In essence, the mean is not resistant to changes in extreme values.

On the other hand, the median is a measure of center that is not very sensitive to changes in the data values. Really only one or two data points determine the value of the median. The values of the other data points do not factor at all into the median. They serve only as placeholders, which lead to the middle value. Because of this, the median is resistant to changes in extreme values.


Data                   Mean       Median
{1, 5, 13, 20, 28}     μ = 13.4   M = 13
{1, 5, 13, 20, 280}    μ = 63.8   M = 13
{1, 5, 13, 20}         μ = 9.75   M = 9

Notice how drastically the mean changes in each case, while the median stays either the same or changes just slightly.

Because of the sensitivity of the mean, it gets pulled in the direction of the tails for skewed data sets. Basically, if the distribution is:

  • Symmetric: the mean will usually be close to the median
  • Left (Or Negative) Skew: the mean will usually be smaller than the median
  • Right (or Positive) Skew: the mean will usually be larger than the median

The following picture illustrates the graphical relationship of the mean, median, and mode in the three types of data: