MANE 3332.03
Chapter Seven
Handouts
- Chapter 7 Slides
- Chapter 7 Slides marked
Chapter 7 Overview
- Chapter 7 contains a detailed explanation of point estimates for parameters
- Much of this chapter is of a highly statistical nature and will not be covered in this course
- Key concepts we will discuss are:
  - Statistical inference
  - Statistic
  - Sampling distribution
  - Point estimator
  - Unbiased estimator
  - Minimum variance unbiased estimator (MVUE)
  - Central limit theorem
  - Sampling distributions
Statistical Inference
- Montgomery gives the following description of statistical inference:
  "The field of statistical inference consists of those methods used to make decisions or to draw conclusions about a population. These methods utilize the information contained in a sample from the population in drawing conclusions. This chapter begins our study of the statistical methods used for inference and decision making."
- Statistical inference may be divided into two major areas: parameter estimation and hypothesis testing
Point Estimate
- Montgomery states that "In practice, the engineer will use sample data to compute a number that is in some sense a reasonable value (or guess) of the true mean. This number is called a point estimate."
- Discuss examples
- A formal definition of a point estimate is:
  A point estimate of some population parameter \(\theta\) is a single numerical value \(\hat{\theta}\) of a statistic \(\hat{\Theta}\). The statistic \(\hat{\Theta}\) is called the point estimator.
- Notice the use of the "hat" notation to denote a point estimate
Statistic
- Point estimation requires a sample of random observations, say \(X_1,X_2,\ldots,X_n\)
- Any function of the sampled random variables is called a statistic
- Any such function of the random variables is itself a random variable
- Thus, the sample mean \(\overline{X}\) and the sample variance \(S^2\) are both statistics and random variables
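To see this, the short Python sketch below (my own illustration, not from the textbook) draws two independent samples from the same hypothetical normal population and computes the sample mean and sample variance for each; the values differ from sample to sample because the statistics are themselves random variables. The population settings (mean 10, standard deviation 2), sample size, and seed are arbitrary choices for the demonstration.

```python
import numpy as np

# Hypothetical population: normal with mean 10 and standard deviation 2
# (arbitrary values chosen for illustration).
rng = np.random.default_rng(seed=1)

for i in (1, 2):
    sample = rng.normal(loc=10, scale=2, size=25)   # n = 25 observations
    x_bar = sample.mean()                           # sample mean, a statistic
    s_sq = sample.var(ddof=1)                       # sample variance (n - 1 divisor)
    print(f"Sample {i}: x-bar = {x_bar:.3f}, s^2 = {s_sq:.3f}")
```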
Properties of Point Estimators
- We would like point estimates to be both accurate and precise
- An unbiased estimator addresses the accuracy criterion
- A minimum variance unbiased estimator addresses the precision criterion
Unbiased Estimator
- The point estimator \(\hat{\Theta}\) is an unbiased estimator for the parameter \(\theta\) if \(E\left(\hat{\Theta}\right)=\theta\)
- If the point estimator is not unbiased, then the difference \(E\left(\hat{\Theta}\right)-\theta\) is called the bias of the estimator \(\hat{\Theta}\)
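To make the bias definition concrete, here is a small simulation sketch (my own, under assumed settings of a normal population with \(\sigma^2 = 4\) and \(n = 10\)): it averages two variance estimators over many samples. The \((n-1)\)-divisor estimator \(S^2\) is unbiased for \(\sigma^2\), while the \(n\)-divisor version systematically underestimates it; the observed gap approximates the bias \(E(\hat{\Theta})-\theta\).

```python
import numpy as np

rng = np.random.default_rng(seed=2)
n, sigma_sq, reps = 10, 4.0, 100_000   # assumed settings for illustration

# Each row is one sample of size n from a N(0, sigma^2) population.
x = rng.normal(loc=0.0, scale=np.sqrt(sigma_sq), size=(reps, n))
s2_unbiased = x.var(axis=1, ddof=1)   # divisor n - 1
s2_biased = x.var(axis=1, ddof=0)     # divisor n

print(f"E(S^2), n-1 divisor ~ {s2_unbiased.mean():.3f}  (target sigma^2 = {sigma_sq})")
print(f"E(S^2), n divisor   ~ {s2_biased.mean():.3f}")
print(f"estimated bias of the n-divisor version ~ {s2_biased.mean() - sigma_sq:+.3f}")
# Theory: bias = -sigma^2 / n = -0.4 here, while the n-1 divisor has zero bias.
```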
MVUE
- Montgomery gives the following definition of a minimum variance unbiased estimator (MVUE):
  If we consider all unbiased estimators of \(\theta\), the one with the smallest variance is called the minimum variance unbiased estimator
- An important fact is that the sample mean \(\overline{X}\) is the MVUE for \(\mu\) when the data come from a normal distribution
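A quick way to see the "smallest variance" property in action is the illustrative simulation below (my own sketch, not from Montgomery, with arbitrary parameter values): for normal data both the sample mean and the sample median are unbiased for \(\mu\), but the mean has the smaller sampling variance, consistent with it being the MVUE.

```python
import numpy as np

rng = np.random.default_rng(seed=3)
n, mu, sigma, reps = 25, 10.0, 2.0, 50_000   # assumed settings for illustration

samples = rng.normal(loc=mu, scale=sigma, size=(reps, n))
means = samples.mean(axis=1)
medians = np.median(samples, axis=1)

# Both estimators are centered on mu, but the mean varies less.
print(f"mean:   avg = {means.mean():.3f}, variance = {means.var():.4f}")
print(f"median: avg = {medians.mean():.3f}, variance = {medians.var():.4f}")
# Theory: Var(mean) = sigma^2/n = 0.16; Var(median) ~ (pi/2)*sigma^2/n ~ 0.25
```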
Accuracy vs. Precision

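Since this slide is mostly visual, a small simulation can stand in for it (a sketch of my own with made-up settings): it contrasts the sample mean, which is accurate (unbiased) but has some spread, with a hypothetical shrunken estimator \(0.8\,\overline{X}\), which is more precise (smaller spread) but inaccurate (biased).

```python
import numpy as np

rng = np.random.default_rng(seed=4)
n, mu, sigma, reps = 25, 10.0, 2.0, 50_000   # assumed settings for illustration

samples = rng.normal(loc=mu, scale=sigma, size=(reps, n))
accurate = samples.mean(axis=1)   # unbiased: centered on mu, wider spread
precise = 0.8 * accurate          # hypothetical shrunken mean: tighter, but off-target

for name, est in (("sample mean", accurate), ("0.8 * mean", precise)):
    print(f"{name:12s} bias ~ {est.mean() - mu:+.3f}, std dev ~ {est.std():.3f}")
```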
Sampling Distribution
- The probability distribution of a statistic is called a sampling distribution
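To make this definition concrete, the sketch below (my own illustration, with arbitrary population settings) builds an empirical sampling distribution: it repeatedly draws samples, computes the statistic \(\overline{X}\) for each, and summarizes the distribution of those values.

```python
import numpy as np

rng = np.random.default_rng(seed=5)
n, mu, sigma, reps = 16, 50.0, 8.0, 20_000   # assumed settings for illustration

# Each row is one sample; each row's mean is one draw from the
# sampling distribution of X-bar.
x_bars = rng.normal(loc=mu, scale=sigma, size=(reps, n)).mean(axis=1)

print(f"mean of sampling distribution ~ {x_bars.mean():.3f}  (theory: {mu})")
print(f"std dev (standard error)      ~ {x_bars.std():.3f}  (theory: {sigma/np.sqrt(n):.3f})")
```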
Central Limit Theorem
- The definition of the Central Limit Theorem is:
  If \(X_1,X_2,\ldots,X_n\) is a random sample of size \(n\) taken from a population (either finite or infinite) with mean \(\mu\) and finite variance \(\sigma^2\), and if \(\overline{X}\) is the sample mean, the limiting form of the distribution of \(Z=\frac{\overline{X}-\mu}{\sigma/\sqrt{n}}\) as \(n\rightarrow\infty\) is the standard normal distribution
- This is an important result because, for sufficiently large \(n\), the sampling distribution of \(\overline{X}\) is approximately normal
- This fundamental result will be used extensively in the next four chapters of the textbook.
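As a final check on this result, here is a small simulation (an illustrative sketch of my own, with assumed settings): it draws samples from a clearly non-normal exponential population, standardizes each sample mean with \(Z=\frac{\overline{X}-\mu}{\sigma/\sqrt{n}}\), and verifies that roughly 95% of the \(Z\) values land within \(\pm 1.96\), as the standard normal distribution predicts.

```python
import numpy as np

rng = np.random.default_rng(seed=6)
n, reps = 40, 100_000                # assumed settings for illustration
mu = sigma = 1.0                     # exponential(scale=1): mean = std dev = 1

# Sample means from a skewed, non-normal population.
x_bars = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)
z = (x_bars - mu) / (sigma / np.sqrt(n))

coverage = np.mean(np.abs(z) <= 1.96)
print(f"P(|Z| <= 1.96) ~ {coverage:.3f}  (standard normal: 0.950)")
```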