All of the learning assets listed below are contained in this course and provide learners with multiple ways to learn statistical concepts:
1. Introduction to Statistics
We are bombarded with statistics every day. But what do all these numbers mean, and who really uses them? This lesson introduces you to the many uses of statistics in the real world of science and industry. You will meet people who use statistics to better understand the world around them and hear how they do it. Their stories bring the field of statistics into focus.
2. Displaying Data
This lesson begins by explaining statistics and how it is used by scientists. You will learn that statistics is not a meaningless compilation of data; rather, it is a tool that helps us evaluate real events that have great impact on our everyday lives. You will learn about the types of data that researchers collect, how the data is displayed, and finally, how data is initially interpreted. Scientists collect data so that they can analyze it in a meaningful way to understand it and make predictions from that analysis. To do this, some sort of order needs to be imposed on the data collected.
One of the first things scientists do is look at the data to determine how it is distributed. Researchers report statistical data by using various types of graphical displays. We have all seen examples of this on a daily basis. One of these examples is the colored map of the United States, often seen on the evening news or the Internet, summarizing voter counts and surveys. With the seemingly endless public-official election cycle, most people know whether they reside in a “red” or “blue” state.
This lesson will explain and demonstrate how to create these displays and how to interpret the meaning they convey.
3. Describing Distributions
Professionals of all types must make sense out of the huge amounts of data they collect on a daily basis. Researchers first try to impose some sort of order on their data by looking at the average values and the extent to which their numbers vary from day to day or month to month. Business owners face the challenge of making sure they have enough staff to accommodate customers while not having so many that it hurts their bottom line. Using the right descriptions of their data enables management to schedule work forces appropriately. Statisticians show us how the proper descriptions of our data can help us understand its meaning and predict what the best actions will be in the future. We learned the general terms of data in Lesson 2. In this lesson, we will learn the calculations statisticians perform on data. Many of these calculations we already know and are familiar with, such as the average. But there are many more we are unfamiliar with that are critical to understanding the language of statistics. Learning these exact measures and their meaning can help a physician diagnose a patient, or a business stay competitive and prosper.
4. Normal Distributions
As you have seen in earlier lessons, statisticians have visual and numerical ways of describing a set of data. Histograms are one of the most common ways to display numerical data. By looking at a histogram, we can easily see the mean and spread of the data. In this lesson, you will learn how the visual impression of a histogram is translated into curves that closely describe the data. These curves are of the utmost importance in statistics. Researchers not only analyze these curves of the distribution, they also compare the mean and spread of these distributions. Precise descriptions of a data set are of little use if they cannot be compared somehow to other distributions. In this lesson, you will learn how statisticians “standardize” distributions of a certain kind so that they can make accurate comparisons of descriptive statistical data. Certain common distributions form a very familiar pattern in statistics. As you will see, it is this common regular distribution that can be standardized to compare different distributions, which aids researchers in comparing different populations.
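To make the idea of standardizing concrete, here is a minimal Python sketch (not part of the course materials); the mean-100, SD-15 scale is a hypothetical example:

```python
# Standardizing converts a raw score into a z-score: the number of standard
# deviations the value lies above or below the mean of its distribution.

def standardize(x, mean, sd):
    """Return the z-score of x for a distribution with the given mean and sd."""
    return (x - mean) / sd

# Hypothetical scale with mean 100 and standard deviation 15:
z = standardize(130, 100, 15)   # 130 lies 2 standard deviations above the mean
```

Because every standardized distribution has mean 0 and standard deviation 1, z-scores from different scales can be compared directly, which is exactly the comparison the lesson describes.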
5. Scatterplots & Correlation
Questions about the relationship between two variables are asked all the time. Will time spent in a certain rehab program really increase an addict’s chance of not relapsing? Will extra minutes spent training athletes result in better performance or increase fatigue and decrease performance? Is the increased carbon dioxide in the atmosphere really increasing the earth’s temperature? All these questions can be addressed by statistics and analyzed initially through a display called a scatterplot. The scatterplot is a graphical display of two variables of interest on the same subject. The data appears to “scatter” in the display and statisticians look at the clustering of the data to discern any patterns that may appear. Making sense out of these patterns can help us predict what future subjects may experience or explain a relationship, or correlation, between the variables. While these plots can be powerful tools for statisticians to unravel patterns in the data, one must be careful to remember that correlation does not prove causation.
6. Regression
We have all had the experience of watching our friends and family play sports or games. Based on prior performance, we predict in our heads how they will do in the next game, and we are often right. By keeping track of variables like bowling scores or chips in a poker game, one can predict how well someone will do in the next event. Statisticians are much more exact in how they predict what will happen to a given set of variables. We learned this in the previous lesson on scatterplots. There, the explanatory variable (the variable believed to cause an effect) was plotted on the x-axis, and the response variable (the other variable, which represents the outcome) was plotted on the y-axis. Graphing the data in that way allows one to more easily see if any relationship or trend exists between the variables.
One common association that may appear on a scatterplot occurs when the response variable y changes at the same rate as the explanatory variable x. When this happens, the scatterplot of the association reveals data points distributed around what appears to be a straight line, something known as a linear relationship. In this lesson, you will take a closer look at linear relationships, and discover how statisticians use them to make predictions through a powerful statistical method called regression.
7. Two-Way Tables
Researchers are often interested in variables that are categorical in nature. These often involve people who fall into certain groups, such as age, gender, political affiliation, or religion. These groups can also be defined as yes or no categories, especially in medicine. Patients get well or remain ill, survive cancer or don’t, improve on mobility tests or decline. Statisticians display the data in a table in order to investigate relationships that may exist between the groups. To do this, they often transform the data into counts or percents. One must be careful to remember that, although this transformation creates interval or ratio values, like percentages, the underlying data is still categorical. Clear patterns in the data will appear if a relationship between the categories exists. In future lessons, we will take a close look at statistical tests that can be performed on these groups to quantify the significance of any relationship found.
8. Producing Data: Sampling
Polls are something we encounter every day in the media. We are constantly bombarded with numbers of how many people approve of the President’s performance or how many people prefer one brand of soda over another. Politicians and corporations want to know what we are thinking and doing. But they can’t ask all of us all the time. Instead they pick a few people, a sample, that can be easily surveyed. Just as one can taste a small sample of ice cream flavors at an ice cream counter, so can we sample a few people to find out what the public at large is thinking. This lesson describes how statisticians are best able to accurately describe a population by taking a proper sample. Whether it’s examining a small piece of coral reef to determine the state of miles of ocean floor life or polling the right voters to determine how a candidate’s religion or race will affect their popularity, statisticians have to sample properly to get an accurate picture of the larger population. In this lesson you will learn the rules that must be followed to sample properly. You will also learn how a truly random sample can be generated to ensure that a sample really does represent the entire population.
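A simple random sample of the kind described above can be drawn in a few lines of Python; the numbered population of voters is hypothetical:

```python
# A simple random sample (SRS) gives every member of the population the same
# chance of selection, drawn without replacement.
import random

population = list(range(1, 1001))   # 1,000 hypothetical voters, numbered
random.seed(42)                     # seeded only to make the demo repeatable
sample = random.sample(population, 50)   # SRS of size 50
```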
9. Producing Data: Experimentation
Every day, the media bombard us with the results of recent statistical studies. Statements like, “Studies show our products are better than ever…” appear frequently in news releases and advertisements. Many of these studies claim results based on experiments. But what constitutes a proper experiment? How do we know the results are reliable and have real meaning? How does one design an experiment to ensure that the research objective is properly addressed? Resolving these issues is part of the statistician’s role in research. This lesson describes how researchers collect data by designing and conducting experiments. First, you will learn how poorly designed studies can lead to flawed data with very little meaning. You will then see how researchers turn to statisticians to design experiments that account for any factors that could mask or confound any real, meaningful result. Proper experimental design is a powerful tool for researchers to use in seeking a greater understanding of the world around us.
10. More on Experimentation
Experimental studies are powerful tools that researchers use to determine cause and effect relationships. These studies are picked up in the media and disseminated to people all over the world. The results of many of these studies provide information critical to decisions we make. Whether it’s deciding on what car to buy, which candidate to vote for, or which medical treatment regimen to follow, we all rely on studies to give us the information we need to make decisions and to accurately understand our world. This lesson introduces some advanced methods used by statisticians to get the most accurate results from studies involving human subjects. Convincing scientific data comes from well-designed studies. Previous lessons have focused on the basic principles of sound study design, including randomization. This lesson goes a step further by showing how statisticians examine and control the impact of multiple variables on human subjects, including any confounding, or lurking, variables. In this lesson, you will learn the advanced study design techniques that must be followed to conduct a clinical trial that will yield accurate results. These advanced design elements are used every day by researchers to ensure that a study’s findings reflect as accurately as possible how the factors in question affect observed outcomes.
11. Introduction to Inference
Inferences are something we make every day. Every time we draw a conclusion based on evidence, we are making an inference. For example, if we meet someone new, we draw conclusions about that person based on what we see and hear. If we see a long line outside of a local dance club, we might assume that it is an exclusive club and probably full of VIPs. When we hear a couple shouting at each other, we may conclude that they are arguing. When a statistician gathers information, the analysis includes making graphical displays and correlating the data. All of these examples involve the practice of inference. Statistical inference is how researchers draw conclusions about populations based on data from samples. The real purpose of analyzing data is to gain some new understanding, direction, or conclusion about the world around us. This lesson reveals the initial steps that statisticians use to infer meaning from the data they collect. In this lesson, you will learn the basics of how inference leads to conclusions; a process that, at times, is more art than science. You will begin to see that statistical inference is based on certain assumptions, which will be more clearly defined in future lessons. You will revisit concepts related to sampling, and see how their use allows researchers to examine populations. Accurately describing populations is the goal of statistics, and that involves inference.
12. Probability
“What are the chances?” We think this many times a day. Why is it raining when the weatherman said there was only a five percent chance of rain? Why do people play the lottery if the chance of winning is so low? In mathematics, the science of probability describes such chance behavior. As statisticians strive to make sense out of data, a major part of their endeavor is the use of probability in making predictions and explaining results. This lesson introduces several ways statisticians use probability to describe real world events. In previous lessons, you learned how researchers create sound statistical studies. In this lesson, you’ll learn that no matter how well a study is designed, chance can alter its findings. Despite facing overwhelming odds against them, people still win the lottery. Even in casino games of chance, players manage to beat the odds and win big. But how are the odds of such events determined? This lesson seeks to answer that question in a very precise way.
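As one concrete sketch of how such odds are determined, the jackpot probability of a hypothetical 6-of-49 lottery follows from counting equally likely outcomes:

```python
# When all outcomes are equally likely, the probability of an event is the
# number of favorable outcomes divided by the total number of outcomes.
import math

outcomes = math.comb(49, 6)    # number of possible 6-number tickets
p_jackpot = 1 / outcomes       # probability that one ticket wins
```

With nearly 14 million possible tickets, the chance of winning with a single ticket is less than one in ten million, which is why jackpot wins are so rare even though they do happen.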
13. Sampling Distributions & the Central Limit Theorem
In previous lessons, you learned about the proper techniques researchers use when sampling a population, and how they infer meaning out of sample data. In this lesson, you will learn more about what investigators do with the numerical data they collect. You’ll see how statisticians use the average of values observed in samples to accurately determine the average in the population from which the sample was drawn. You will also learn about a theory that is central to all of statistics and which allows researchers to make approximations of population values based on a small sample of data. This lesson builds on what you learned about probability and random sampling to introduce a method that ascertains what will happen if a population is sampled many times. The result of this process, you’ll learn, is a new distribution derived from the samples. Statisticians analyze this new distribution to make precise estimations about the population under study. Once you understand this distribution, it opens the door into understanding the powerful theories behind statistics, including the one (referred to above) that is central to the science of statistics. Previous lessons covered the concepts of descriptive statistics, such as the mean, the mode, and the median. That was just the beginning. This lesson will show you how this central theory can reveal a great deal about a population based on the data from just a few samples. Mastering the concepts of the new distribution and this important theory will help you to understand the power of inference investigated in future lessons.
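The repeated-sampling idea described above can be simulated directly. This Python sketch uses a synthetic population, not data from the course; it shows the sample means clustering around the population mean, as the Central Limit Theorem predicts:

```python
# Build a sampling distribution: draw many samples from a population,
# record each sample's mean, and examine the distribution of those means.
import random
import statistics

random.seed(0)                                           # repeatable demo
population = [random.uniform(0, 10) for _ in range(10_000)]
pop_mean = statistics.mean(population)

sample_means = [
    statistics.mean(random.sample(population, 25))       # one sample of size 25
    for _ in range(2_000)                                # repeated 2,000 times
]
center = statistics.mean(sample_means)    # close to pop_mean
spread = statistics.stdev(sample_means)   # roughly sigma / sqrt(25)
```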
14. Confidence Intervals
Confidence intervals are something we see in the news almost daily, most frequently as polling results. They are the most commonly reported statistical results. Whether it involves who will win the next race for governor or what percentage of the population favors offshore drilling, the polls invariably report a percentage plus or minus some number. That range is the confidence interval, the subject of this lesson. It tells viewers the range of accuracy for that poll; in other words, by how much the results may be off. If the numbers in a poll are close and fall within this range, you know that the race or debate is not over. This lesson gives you the statistical background to fully understand how these numbers are calculated, what the numbers mean, and how they help statisticians to be confident about the results they report. The lesson builds on what you have learned about how to use sampling data to infer population parameters, such as the mean. Mastering these concepts will deepen your understanding of inference and prepare you to learn how to conduct more sophisticated tests in future lessons.
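The “percentage plus or minus some number” reported by polls can be sketched with the usual large-sample formula for a 95% confidence interval; the poll numbers below are invented:

```python
# 95% confidence interval for a proportion:
#   p_hat +/- z* . sqrt(p_hat * (1 - p_hat) / n), with z* = 1.96
import math

p_hat = 0.52     # hypothetical sample proportion favoring a candidate
n = 1000         # hypothetical number of people polled
z_star = 1.96    # critical value for 95% confidence

margin = z_star * math.sqrt(p_hat * (1 - p_hat) / n)
interval = (p_hat - margin, p_hat + margin)
```

Here the margin of error comes out to about 3 percentage points, so the poll would be reported as “52 percent, plus or minus 3 points.”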
15. Tests of Significance
Testing ideas lies at the heart of statistics. Researchers must have unifying criteria to determine whether the results they see are evidence of real effects or differences, or whether they are simply due to chance. You have learned in previous lessons about probability and how to calculate it. But how do you know when chance accounts for sample results? What is the cutoff? To answer that question, statisticians have developed tests to determine whether findings are likely to be real or the result of chance. This lesson explains the reasoning behind such tests. In this lesson, you will continue to examine statistical inference and how statisticians use it to reject or accept claims about a population gleaned from sample data. In doing so, you will build on what you learned about statistical inference in previous lessons with regard to confidence intervals. Also in this lesson, you will encounter a new measurement whose value allows researchers to critically evaluate claims. The measurement offers a way to quantify judgments, and its use provides the basis for additional tests you will learn about in future lessons.
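The “new measurement” the lesson alludes to is presumably the p-value. As a minimal sketch, here is a one-sample z test with a known population standard deviation; all numbers are invented:

```python
# One-sample z test: does the sample mean differ from the claimed population
# mean by more than chance would explain? The p-value quantifies that judgment.
import math

def normal_cdf(z):
    """Standard Normal cumulative probability, via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

mu_0, sigma = 100, 10    # claimed population mean and (known) population sd
x_bar, n = 103, 25       # hypothetical sample mean and sample size

z = (x_bar - mu_0) / (sigma / math.sqrt(n))   # test statistic
p_value = 2 * (1 - normal_cdf(abs(z)))        # two-sided p-value
```

A small p-value (conventionally below 0.05) is taken as evidence against the claimed mean; here the p-value is above 0.1, so chance remains a plausible explanation.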
16. Type I and Type II Errors
Statistics is a powerful tool for discovering how the world works, but we must be cautious when interpreting what the numbers mean when we calculate a confidence interval or perform a test of significance. Results are not always as clear cut as they seem. This lesson explores some of the errors that can occur when interpreting the results of these two statistical procedures. The sampling errors you learned about in early lessons are just one pitfall that awaits the careless statistician. In this lesson, you will encounter two scenarios in which hypotheses are evaluated incorrectly even though the numbers seem to tell a different story. These scenarios reveal two types of errors that can occur when interpreting results using a test of significance. The cautions learned in this lesson will continue to apply in future lessons when more complex tests of significance are introduced.
17. The One-Sample t Test
This lesson pursues a more realistic test of significance than the z test discussed in Lesson 15. In that lesson, you learned to use the z test to determine whether or not a sample population mean is significantly different from the claimed population mean. The z test assumes that we know the population standard deviation σ. In reality, we rarely know σ. In this lesson, you will learn how researchers get around that problem by using a different test with more realistic parameters. Statisticians often focus on comparing the mean of a sample data set, such as a score on some standardized test or a measured value like blood pressure, to the mean of the population. Comparing these numbers can provide evidence, for example, that an intervention performed on the sample group is effective, but the comparison requires a value for the standard deviation of the population. Because the true standard deviation of that population is rarely known, statisticians instead use the standard deviation of the sample to perform a new test based on this substitution. This popular test allows researchers to evaluate single sample means with more confidence. In future lessons, you’ll learn how the same test can be applied to multiple sample groups as well.
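The substitution the lesson describes, using the sample standard deviation in place of σ, yields the one-sample t statistic. A minimal sketch with invented data:

```python
# One-sample t statistic: like the z statistic, but with the sample standard
# deviation s replacing the unknown population sd. The result is compared to
# a t distribution with n - 1 degrees of freedom.
import math
import statistics

data = [5, 7, 9, 11]    # hypothetical sample measurements
mu_0 = 6                # hypothetical claimed population mean

n = len(data)
x_bar = statistics.mean(data)
s = statistics.stdev(data)                  # sample standard deviation
t = (x_bar - mu_0) / (s / math.sqrt(n))     # one-sample t statistic
```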
18. Comparing Two Means
In this lesson, we learn about a second important t procedure, the two-sample t test. This test is one of the most widely used statistical tests. It is used to compare the means of two groups of data. It is a quantitative assessment that deals with real values, such as pulse rates or cholesterol levels, between two groups of patients receiving different treatments. Statisticians often make judgments about the means of two groups. Comparing these numbers to each other can provide evidence that an intervention performed on one group is more effective than a comparable treatment or no treatment on another group. Two-sample tests allow researchers to compare groups directly without having to estimate a population value such as the population standard deviation. This popular test allows statisticians to easily make conclusions about comparative groups with confidence. We will see in future lessons how this test can be expanded to look at the variation seen in more than two sampled groups as well.
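A minimal sketch of the comparison described above, computing the unpooled (Welch) form of the two-sample t statistic from invented measurements:

```python
# Two-sample t statistic (unpooled / Welch form):
#   t = (mean1 - mean2) / sqrt(var1/n1 + var2/n2)
import math
import statistics

treatment = [120, 125, 130, 135]   # hypothetical pulse rates, group 1
control   = [130, 135, 140, 145]   # hypothetical pulse rates, group 2

m1, m2 = statistics.mean(treatment), statistics.mean(control)
v1, v2 = statistics.variance(treatment), statistics.variance(control)
n1, n2 = len(treatment), len(control)

t = (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)   # two-sample t statistic
```

A large (in magnitude) t statistic suggests the difference between the group means is bigger than chance variation alone would produce.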
19. Inference for Proportions
This lesson returns to the topic of proportions, applying what you have learned about confidence intervals, margins of error, and tests of significance to this type of data. As you may recall, instead of examining a quantitative value like the population mean µ, researchers often want to examine a categorical value, such as the proportion of a sample group that meets a certain criterion. For instance, they might want to study what proportion of voters intend to cast their ballots for a certain candidate of interest, or what proportion of students in an incoming college class are of a certain ethnic background. The answers to these questions are often expressed in terms of percentages, since a percentage is a proportion. Categorical data is gathered when statisticians want to draw conclusions about the composition of the population under study. Making inferences about a single sample group proportion allows researchers to then estimate what the population is like. By making certain assumptions about the sample proportion data, statisticians can use their knowledge of the Normal distribution to draw conclusions about the population from which the sample is drawn, just as we drew conclusions about the population mean based on the sample mean in previous lessons. For instance, these procedures can be used to predict the outcome of an election, or perhaps to help medical personnel change unhealthy behaviors. In this lesson, you’ll learn how to work with a one-sample proportion; in the next lesson, you’ll learn how these procedures can be expanded to compare two sample groups and examine how those populations differ in their makeup.
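A minimal sketch of a one-sample test for a proportion, using the Normal approximation the lesson describes; the election numbers are invented:

```python
# One-sample z test for a proportion:
#   z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
import math

p_0 = 0.50      # hypothesized proportion (e.g., an evenly split electorate)
n = 400         # hypothetical sample size
p_hat = 0.55    # hypothetical sample proportion favoring the candidate

se = math.sqrt(p_0 * (1 - p_0) / n)   # standard error under the null
z = (p_hat - p_0) / se                # compared to the standard Normal
```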
20. Comparing Two Proportions
This lesson extends what you learned in Lesson 19 about single-sample proportions to the concept of two-sample proportions. Researchers use two-sample proportions to compare one group to another, which often involves comparing one treatment to another. In Lesson 18, you learned how to use the two-sample method with means from quantitative data. Now you will learn how to use the same method with data expressed in terms of a proportion of defined successes, including how to find confidence intervals and do tests of significance. Again, the idea is to make judgments about the population under study. With two-sample proportions, researchers aim to compare two populations by once again using the assumptions of the Normal distribution. Knowing the proportional makeup of two target populations is yet another important tool statisticians use. In this lesson, you’ll see how this idea works when applied to how successful certain student populations are, or to treatments meant to help prevent dementia among different populations of the elderly.
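The two-proportion comparison can be sketched with the usual pooled z statistic; all counts below are invented:

```python
# Two-sample z test for proportions: pool the successes to estimate a common
# proportion under the null hypothesis, then form the z statistic.
import math

x1, n1 = 60, 100    # hypothetical successes / sample size, group 1
x2, n2 = 40, 100    # hypothetical successes / sample size, group 2

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)     # pooled proportion under the null
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se                 # compared to the standard Normal
```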
21. Choosing Inference Procedures
This lesson is a review of what you have learned so far in the course. You have learned about proper study design, the types of variables under study, and the data studies generate. You have learned about inference and tests of significance, and that there are specific formulas to be used with certain types of data. You have also learned the basic assumptions behind these formulas. But when faced with several study designs, and different ways of collecting and reporting data, how do you know which test to use? This lesson describes a step-by-step method one can use to determine the type of statistical test needed to interpret a data set. While it may seem intimidating at first to choose the right test given the multiple options you have learned, this lesson will make the path clearer. Statisticians also struggle with these decisions. Taking a logical approach to choosing a test based on the design of the study, and the number and type of variables involved, will make it easier to decide what analysis to do.
22. Chi-Square Tests
This lesson goes a step further in what can be done with categorical data. Researchers often look at questions that have more than two categorical outcomes, or they may study more than two groups at a time. This type of analysis was introduced in Lesson 7, which focused on two-way tables. You learned then that categorical data often falls into multiple classes or outcomes. Researchers want to know how those classes or outcomes compare to each other as a whole. In addition, they want to know if the results of their comparison are statistically significant. Making such inferences requires yet another type of statistical test, one that is introduced in this lesson and that has its own unique distribution. As you proceed through the lesson, you will see how this new test enables a successful company to properly track its inventory, and researchers to determine how the makeup of visitors to our national parks affects public policy.
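The test the lesson introduces is the chi-square test for two-way tables. A minimal sketch computing the statistic by hand from an invented 2x2 table:

```python
# Chi-square statistic: compare observed counts with the counts expected if
# the row and column variables were independent:
#   chi^2 = sum over cells of (observed - expected)^2 / expected,
# where expected = (row total * column total) / grand total.

observed = [[30, 20],
            [20, 30]]   # hypothetical counts: rows = groups, cols = outcomes

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

chi_square = sum(
    (observed[i][j] - row_totals[i] * col_totals[j] / grand) ** 2
    / (row_totals[i] * col_totals[j] / grand)
    for i in range(2) for j in range(2)
)
```

The statistic is then compared to a chi-square distribution whose degrees of freedom depend on the table’s dimensions ((rows − 1) × (columns − 1), here 1).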
23. Inference for Regression
This lesson returns to the topic of regression introduced in Lessons 5 and 6. While researchers benefit from knowing the correlation between two variables, they also want to be able to apply the same inferences to a correlation as they do for other quantitative measures, such as the mean. It’s often possible to see the correlation between two quantitative variables in a scatterplot, but, in addition, researchers want to make estimates for the population as a whole just as they do when they estimate population means and proportions. Making inferences about the correlation of variables from a scatterplot requires yet another statistical test, a test that has a distribution you have seen before and that is based on some new assumptions about the correlation. This test gives us the ability to estimate the correlation in the population with a certain amount of confidence, and is yet another important tool statisticians use. In this lesson, you’ll see how this test is applied to the relationship between a pesticide and eggshell thickness to determine if a species is able to survive or not. The answer to that question comes from testing the linear relationship between two important quantitative variables.
24. Multiple Regression: Building the Model
This lesson expands on the last lesson on regression to consider cases in which there is more than one predictor variable. Statisticians often do regressions where more than one factor is known to affect a response variable. While one can often see the effects of multiple variables, in this lesson, you will learn how researchers quantify the exact effects of those variables on a single outcome. Statisticians also want to know whether or not all the variables under study actually have an impact on the response variable. Making these inferences about the correlation of multiple variables from several scatterplots requires a different kind of statistical test. This test has a unique distribution based on assumptions about the relationship between the multiple predictors and the response variable. Understanding how multiple variables can be used to predict the value of a response variable is one more important tool that statisticians use. In this lesson, the new tool will be illustrated through a single real life example in which scientists want to predict the flow of electricity through the power grid. This example will show you how the new test, with its new distribution, can allow researchers to infer how multiple variables can be used to make predictions. Along the way, you will also learn the conditions required for this analysis.
25. Multiple Regression: Refining the Model
This lesson further examines multiple regression by closely examining the predictor variables involved in the regression model. Researchers often want to simplify a multiple regression to focus on just a few variables, or even on a single variable. There are several reasons why they might do this. One is that they may not have the resources to continuously measure all of the explanatory variables. Or they may want to focus on one or two explanatory variables that are the best predictors of the response variable. In this lesson, you will learn how researchers determine which predictor(s) to keep and which ones to discard as they build a more refined model. When statisticians refine multiple regression models, they use the t statistical test to evaluate the individual slope coefficients. This is a test with which you are already quite familiar. In this lesson, you’ll learn how the t test gives researchers a way to identify which predictors to keep and which to remove from the regression model. The lesson features a real life example of how scientists use these refinement techniques to keep the electric power grid going and also to predict how many solar panels customers need to produce their own electricity. Remember that the purpose of multiple regression is to accurately predict the value of a response variable based on the value of one or more explanatory, or predictor, variables. In this lesson, you will also learn how to apply the principles of inference to predicted values.
26. Logistic Regression
This lesson deals with a special type of regression in which the response variable is categorical and the explanatory variable is quantitative. Researchers often want to predict the outcome of a categorical variable, such as the survival of a patient given a variation of some measurable variable, or the pass/fail of a device under changing conditions. Neither linear nor multiple regression will yield an answer in these cases. In this lesson, you will learn how statisticians evaluate the success or failure of a categorical variable given a single predictive explanatory variable. In doing so, you will learn about yet another regression technique.
27. One-Way ANOVA
This lesson introduces another type of statistical test that evaluates the means of numerical data. In previous lessons, we used the two-sample t test to evaluate differences between the means of two groups. But many problems that researchers examine involve more than two groups. For instance, a study on smoking might require researchers to look at smokers, quitters, and lifelong nonsmokers; or a study about psychological health based on marriage status might involve looking at married, single, and divorced individuals. In this lesson, you’ll learn how statisticians examine the differences among the means of three or more groups using a test statistic you have seen before: the F test. This test was used to evaluate the variation seen in multiple regression. Now, you will learn another important application of this test: evaluating the means of multiple groups.
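The F statistic for one-way ANOVA can be computed by hand in a short sketch; the three small groups below are invented:

```python
# One-way ANOVA: compare the variation BETWEEN group means with the variation
# WITHIN groups. Their ratio is the F statistic, compared to an
# F(k - 1, n - k) distribution.
import statistics

groups = [[1, 2, 3], [2, 3, 4], [3, 4, 5]]    # hypothetical measurements
k = len(groups)                               # number of groups
n = sum(len(g) for g in groups)               # total observations
grand_mean = statistics.mean(x for g in groups for x in g)

ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)
ss_within = sum((x - statistics.mean(g)) ** 2 for g in groups for x in g)

ms_between = ss_between / (k - 1)    # between-groups mean square
ms_within = ss_within / (n - k)      # within-groups mean square
F = ms_between / ms_within           # one-way ANOVA F statistic
```

A large F indicates that the group means spread out more than the within-group variation would explain by chance alone.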
28. Contrasts: Comparing Means
This lesson continues the analysis of multiple means begun in the previous lesson on ANOVA. It introduces yet another type of statistical test to the means of numerical data. The one-way ANOVA F test gives us valuable information about the groups under study by telling us whether or not the means of the groups are equal. As powerful as ANOVA is, however, it does not tell us which group is different, by how much, and whether the difference is significant. To discover that information, we need to compare the means of the groups. In this lesson, you will learn how statisticians use computer software to simultaneously examine the differences among the means of three or more groups. As you learned in the ANOVA lesson, we cannot use separate two-sample t tests to compare these multiple groups because that test does not account for the fact that the groups are being evaluated together at the same time. In this lesson, you will learn how statisticians evaluate the significance and magnitude of the unequal means identified by a significant ANOVA test.
29. Two-Way ANOVA
In the previous lessons, you used the ANOVA F test to evaluate the differences in means between multiple groups in cases involving just one explanatory variable. But in many situations, there are at least two explanatory variables that affect the value of a response variable. In these situations, researchers often want to determine which variable is causing a change in the response variable, or whether the two explanatory variables may be working together to produce an effect on it. The analysis is done through the use of a different form of the ANOVA F test, with computer software making the calculations. While it makes for a more complex analysis, this powerful technique allows researchers to investigate the simultaneous impact of two conditions on a response variable.
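For a balanced two-factor layout, the idea can be sketched by partitioning the total variation into two main effects, an interaction, and error (the 2x2 design and all measurements below are invented for illustration):

```python
# Hypothetical yield under two temperatures (factor A) and two
# pressures (factor B), 3 replicates per cell.
data = {
    ("low", "low"):   [10, 12, 11],
    ("low", "high"):  [14, 15, 13],
    ("high", "low"):  [13, 12, 14],
    ("high", "high"): [22, 21, 23],
}
r = 3
cells = {key: sum(v) / r for key, v in data.items()}
grand = sum(sum(v) for v in data.values()) / (4 * r)

a_means = {a: (cells[(a, "low")] + cells[(a, "high")]) / 2
           for a in ("low", "high")}
b_means = {b: (cells[("low", b)] + cells[("high", b)]) / 2
           for b in ("low", "high")}

# Sums of squares for each main effect, the interaction, and error.
ss_a = 2 * r * sum((m - grand) ** 2 for m in a_means.values())
ss_b = 2 * r * sum((m - grand) ** 2 for m in b_means.values())
ss_ab = r * sum((cells[(a, b)] - a_means[a] - b_means[b] + grand) ** 2
                for a in ("low", "high") for b in ("low", "high"))
ss_err = sum((x - cells[key]) ** 2 for key, v in data.items() for x in v)

# With 2 levels per factor each effect has 1 degree of freedom,
# so each F is its sum of squares over the error mean square.
df_err = 4 * (r - 1)
f_a = ss_a / (ss_err / df_err)
f_b = ss_b / (ss_err / df_err)
f_ab = ss_ab / (ss_err / df_err)
```

A large interaction F (f_ab) signals that the two factors are working together rather than acting independently, which is exactly the question two-way ANOVA is designed to answer.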
30. Bootstrap Methods & Permutation Tests
In this lesson, you’ll learn what statisticians do when they need to analyze data that does not meet the strict requirements for the tests covered so far in this course. In our previous lessons, the tests have required Normal distributions and proper sample sizes. But sometimes these conditions simply cannot be met. For instance, some investigations involve collecting data on human subjects with serious illnesses where it would be simply impossible or unethical to keep sampling in order to meet all the criteria for traditional statistical analysis. Other times, the sample sizes are limited due to expense or the availability of subjects to study. Rather than simply discard the data or abandon the study, statisticians have gotten around the problem by using the power of computers to simulate the necessary conditions in a way that enables them to make reliable estimates and draw valid conclusions. While these methods make for a far more complex analysis, they allow researchers to investigate important data sets they could not have otherwise analyzed.
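The core of the bootstrap is simple to sketch: resample the observed data with replacement many times, and use the spread of the resampled statistics to estimate uncertainty without assuming a Normal distribution (the sample values here are hypothetical):

```python
import random

random.seed(42)

# Hypothetical small sample, e.g. recovery times from a study too
# small for Normal-theory methods.
sample = [12.1, 9.8, 14.3, 11.0, 10.5, 13.7, 9.2, 12.8]

# Draw many resamples (same size, with replacement) and record
# each resample's mean.
boot_means = []
for _ in range(10_000):
    resample = [random.choice(sample) for _ in sample]
    boot_means.append(sum(resample) / len(resample))

boot_means.sort()
# 95% percentile-bootstrap confidence interval for the mean:
lo = boot_means[int(0.025 * len(boot_means))]
hi = boot_means[int(0.975 * len(boot_means))]
```

The interval (lo, hi) comes entirely from the observed data and the computer's resampling, which is what lets the method sidestep the usual distributional requirements.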
31. Nonparametric Tests
So far, all of the tests introduced in this course have required that the data have a Normal distribution. However, what about those studies in which the distribution is not Normal? Most of the tests studied thus far are robust with regard to Normality, and can be used as long as the sample size is large enough. However, there are times when a study has a small sample size that does not follow a Normal distribution. What happens then? This lesson introduces the methods statisticians use under such circumstances. In this lesson, you will learn a different method for analyzing the data and testing the results for significance. You will also learn how this new method compares to the tests you have already learned. These new procedures have their limitations, but they, nonetheless, give researchers another way to look at data.
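One widely used nonparametric procedure, the Wilcoxon rank-sum (Mann-Whitney) test, replaces the raw values with their ranks so that no Normality assumption is needed. A sketch with invented data (chosen with no ties, to keep the ranking simple):

```python
# Two small samples whose distributions need not be Normal.
a = [1.1, 2.3, 2.9, 3.4]
b = [3.8, 4.1, 4.6, 5.0, 5.5]

combined = sorted(a + b)
# Rank each value, 1 = smallest (no ties in this example).
ranks = {v: i + 1 for i, v in enumerate(combined)}
w = sum(ranks[v] for v in a)        # rank sum for the first sample

# Under H0 (identical distributions), the rank sum has a known
# mean and standard deviation:
n1, n2 = len(a), len(b)
mean_w = n1 * (n1 + n2 + 1) / 2
sd_w = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
z = (w - mean_w) / sd_w             # Normal approximation to the test
```

Because only the ordering of the observations matters, the test is insensitive to skewness and outliers, which is the trade-off these procedures make for their weaker assumptions.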
32. Statistical Process Control
Throughout this course, you’ve seen many real-world examples of how statistics is applied when researchers want to gain new information. In this lesson, you’ll see how the statistical procedures you’ve learned are applied to business and manufacturing processes. While you will not be learning any new tests of significance in this lesson, it does apply what you have learned about descriptive statistics to events in the workplace. Companies keep careful track of how they expend resources in the day-to-day operations that create goods and provide services. This lesson reveals how statisticians use the data gathered about a company’s operations to make sure its work is done with the least variation possible and to detect any unwanted variation that may affect productivity. This applied use of statistics has kept companies competitive for decades. Descriptive statistics, such as the mean, the range, and the standard deviation, can give a very accurate picture of an entire operation when these values are tracked graphically over time. Embedded in the graphical display of these descriptive statistics are signals that indicate whether the process is running smoothly or whether it’s in need of correction to produce a product with a desired quality or specification.
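An x-bar control chart, one of the standard process-control displays, can be sketched as follows; the measurements are invented, and for simplicity sigma is estimated from the overall standard deviation (real charts usually estimate it from within-subgroup ranges):

```python
# Daily subgroups of a measured dimension; an x-bar chart flags any
# subgroup mean outside center +/- 3 * sigma / sqrt(n).
subgroups = [
    [5.02, 4.98, 5.01], [4.99, 5.00, 5.03], [5.01, 5.02, 4.97],
    [5.00, 4.99, 5.01], [5.20, 5.18, 5.22],   # last day drifts high
]
n = 3
all_points = [x for g in subgroups for x in g]
center = sum(all_points) / len(all_points)

# Overall sample standard deviation (illustrative shortcut).
var = sum((x - center) ** 2 for x in all_points) / (len(all_points) - 1)
sigma = var ** 0.5

# Upper and lower control limits for subgroup means.
ucl = center + 3 * sigma / n ** 0.5
lcl = center - 3 * sigma / n ** 0.5

means = [sum(g) / n for g in subgroups]
out_of_control = [i for i, m in enumerate(means) if m > ucl or m < lcl]
```

Plotting the subgroup means against these limits over time is exactly the graphical tracking the lesson describes: points inside the limits suggest a stable process, while a point outside them signals variation in need of correction.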
Betty Anderson, M.S., Associate Professor, Mathematics, Howard Community College
Diane L. Benner, M.S., Associate Dean, Mathematics, Science, & Allied Health Division, Harrisburg Area Community College
Keith Bower, M.S., Statistician, www.KeithBower.com
Matthew A. Carlton, Ph.D., Associate Professor, Department of Statistics, California Polytechnic State University, San Luis Obispo
Bruce J. Collings, Ph.D., Professor of Statistics, Department of Statistics, Brigham Young University
Patti B. Collings, M.S., Assistant Teaching Professor, Department of Statistics, Brigham Young University
Mary Ellen Davis, M.S., Associate Professor, Mathematics, Computer Science, Engineering, Georgia Perimeter College, Clarkston Campus
Robert L. Gould, Ph.D., Academic Administrator & Undergraduate Vice-Chair, Department of Statistics, University of California, Los Angeles
Karen McGaughey, Ph.D., Assistant Professor, Department of Statistics, California Polytechnic State University, San Luis Obispo
Mary Mortlock, M.S., AP Statistics Teacher, The Harker School
Kathy Mowers, M.A.T., Mathematics Professor & Coordinator, Owensboro Community & Technical College
Linda Myers, Ph.D., Professor, Mathematics, Computer Science, Harrisburg Area Community College
Robert L. Raymond, Ph.D., Professor Emeritus, Computer & Information Sciences Department, University of St. Thomas, Minnesota
Daren Starnes, Master Teacher, The Lawrenceville School
Alex Aldrich, Inside Sales Coordinator, REC Solar
Sandeep Arora, Transmission Engineer, California ISO
Tim Barnett, Ph.D., Climatologist, Scripps Institution of Oceanography, University of California, San Diego
Keith M. Bower, M.S., Statistician
Allan Brandt, Ph.D., Professor of the History of Science, Harvard University
Mike Bullock, M.B.A., Managing Master Black Belt, Six Sigma™, Quest Diagnostics
Lucius D. Bunton III, J.D., Judge, Western District of Texas, U.S. District Court
Jeff Burtt, Field Office Supervisor, Times/Bloomberg Poll
C. Wayne Callaway, M.D., P.C., Endocrinologist
Karen Chaudiere, M.B.A., Six Sigma™ Black Belt, Quest Diagnostics
Paul Chodas, Ph.D., Principal Engineer, Near-Earth Object Program, NASA Jet Propulsion Laboratory
Daniel Clegg, M.D., Professor of Rheumatology, University of Utah
Bruce Codley, Statistician, Risk Assessment, Bell Communications Research
Bruce Jay Collings, Ph.D., Professor of Statistics, Brigham Young University
Tim Crane, Purchasing Manager, REC Solar
Jill Darling, Associate Director, Times/Bloomberg Poll
Gianluca Del Rossi, Ph.D., Professor of Sports Medicine, University of Miami
Dr. W. Edwards Deming, Management Theorist & Statistician
Jim Detmers, Vice President, Operations, California ISO
George Dickison, M.S., Director, Natural Resource Center, National Park Service
Nolan Doesken, M.S., Colorado State Climatologist, Colorado State University
Bonnie J. Dunbar, Ph.D., President & CEO, Seattle Museum of Flight
Dennis Eggett, Ph.D., Professor, Statistics Department, Brigham Young University
Gregg Fishman, Public Information Officer, California ISO
Eric Frank, Ph.D., Dean, V.P. of Academic Affairs, Occidental College
Lawrence Garfinkle, Director of Cancer Prevention, American Cancer Society
Dennis Gaushell, Load Forecasting Analyst, California ISO
Spencer Guthrie, Ph.D., Assistant Professor, Brigham Young University
Dana Hall, Statistician, San Diego State University
Linnea S. Hall, Ph.D., Executive Director, Western Foundation of Vertebrate Zoology
James Halverson, M.D., Internist, Ojai Valley Medical Group
Stanley Heshka, Ph.D., St. Luke’s – Roosevelt Hospital
John Hostetter, Design Engineer, REC Solar
Sylvia Hurtado, Ph.D., Director, Higher Education Research Institute, University of California, Los Angeles
Gary LaFree, Ph.D., Statistician, University of New Mexico
Wing Lam, Co-founder, Wahoo’s Fish Taco
Eric Larson, M.D., Executive Director, Group Health Cooperative
Michael Lind, Area Sales Manager, REC Solar
Jim Loftis, Ph.D., Civil & Environmental Engineering, Colorado State University
Raúl E. López, Ph.D., Research Meteorologist, National Severe Storms Laboratory, NOAA
Carol Mansfield, Ph.D., Senior Economist, RTI International
Amy Miller Bohn, M.D., Family Physician, University of Michigan Health System
Ethan Miller, Director of Implementation, REC Solar
Irwin Miller, Statistician
Cheryl Millet, Supervisor, Specimen Management, Quest Diagnostics
Connie Moore, Supervisor, AT&T
Jennifer Moore, M.S., Graduate Assistant/Researcher, Colorado State University
Philip R. Nader, M.D., Professor of Pediatrics, University of California, San Diego Medical Center
Yukie Nishinaga, Marketing Manager, REC Solar
Greg O’Neill, M.S., Chief, Lakewood Office, U.S. Geological Survey
Jason Oppler, Inside Sales Manager, REC Solar
Richard Overholt, M.D., Clinical Professor of Surgery, Tufts College Medical School
Bruce Peacock, Ph.D., Economist, National Park Service
Matt Perez, Special Agent, FBI
David Pierce, Ph.D., Climatologist, Scripps Institution of Oceanography
Susan Pinkus, Director, Times/Bloomberg Poll
John Pryor, M.A., Director, CIRP, University of California, Los Angeles Higher Education Research Institute
Domenic Reda, Ph.D., Director/Cooperative Studies Program, Department of Veterans’ Affairs
Shane Reese, Ph.D., Associate Professor of Statistics, Brigham Young University
Maile Rogers, M.S., Instructor, Brigham Young University
Forest Rohwer, Ph.D., Professor of Molecular Biology, San Diego State University
Stuart Sandin, Ph.D., Coral Reef Ecologist, Scripps Institution of Oceanography
Frank Scoblete, Author, “Golden Touch Dice Control Revolution”
Jason Shaw, Human Resource Administrator, REC Solar
Zack Shelley, M.S., Program Director, Big Thompson Watershed Forum
Joseph Signorile, Ph.D., Professor of Physiology, University of Miami
Jennifer Smith, Ph.D., Assistant Professor, Scripps Institution of Oceanography
Steven Smriga, Scripps Institution of Oceanography, University of California, San Diego
Michael Tamada, Director of Institutional Research, Occidental College
Carl Thelander, M.S., Chief Executive Officer, Bio Resource Consultants
Dennis Tolley, Ph.D., Professor of Statistics, Brigham Young University
Li Wang, Ph.D., Statistician, R & D Service, Veterans Administration Health Services
Linda Wegley, Graduate Student, San Diego State University
Ernest Wynder, M.D., Past President, American Health Foundation
Donald Yeomans, Ph.D., Manager, Near-Earth Object Program, NASA-JPL