

Applied Nonparametric Statistics

Introduction 

Statistical methods for the analysis of data are generally classified as either parametric or nonparametric. Parametric procedures are characterized by assumptions, such as normality, whose validity is questionable in many applications. Nonparametric procedures, by contrast, rest on weaker assumptions, yet they can be more robust than their parametric counterparts and, when the parametric assumptions fail, more powerful as well.
If a wrong decision is costly, statistical methods should not be based on assumptions that appear to be invalid. Hence, nonparametric methods are essential tools for statistical analysis. There are numerous nonparametric procedures; the table below pairs some well-known parametric procedures with their nonparametric counterparts.

Parametric Procedure        Nonparametric Procedure
Paired t-test               Wilcoxon signed-ranks test
Two-sample t-test           Wilcoxon-Mann-Whitney rank sum test
Pearson correlation         Spearman's rank correlation
One-factor AOV (F-test)     Kruskal-Wallis test
Two-factor AOV (F-test)     Friedman test
These nonparametric procedures
and many others form the basis for this short course. 
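To give a flavor of how simple many nonparametric procedures are, here is a minimal pure-Python sketch of the sign test, which is the binomial test applied to a hypothesized median; the data and the hypothesized median are invented for illustration.

```python
# Exact sign test for a hypothesized median -- a minimal sketch using only
# the standard library. The sample and hypothesized median are made up.
from math import comb

def sign_test(data, median0):
    """Two-sided exact sign test of H0: population median == median0.

    Counts observations above the hypothesized median (ties with the
    median are dropped) and computes the exact binomial p-value with
    success probability 1/2 under H0.
    """
    signs = [x - median0 for x in data if x != median0]
    n = len(signs)
    n_plus = sum(1 for s in signs if s > 0)
    # Two-sided p-value: double the smaller tail of Binomial(n, 1/2).
    k = min(n_plus, n - n_plus)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return n_plus, min(1.0, 2 * tail)

# Hypothetical measurements; test whether the median could be 10.
sample = [8.3, 9.1, 9.6, 9.8, 10.2, 10.7, 11.5, 12.0]
n_plus, p = sign_test(sample, 10.0)
```

Because the null distribution is exactly Binomial(n, 1/2), no normality assumption is needed, which is the point of the course.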
What You Will Learn 

Course participants will
gain an understanding of the value of checking test assumptions and
applying the appropriate nonparametric procedure when these
assumptions are questionable. In particular, attendees will learn
how to:
- Check for test assumptions
- Apply nonparametric procedures
- Compare parametric and nonparametric test results
- Calculate robustness and power
- Interpret test results
- Communicate results to decision makers

Course Content 

The following topics will
be covered:
- Overview of nonparametric and parametric statistics
- Displaying data as an empirical distribution function (needed for many nonparametric procedures)
- Lilliefors (graphical) test for normality
- Distribution of the sample mean when the population is not normal; this motivates the use of ranks in nonparametric tests and also provides a strong rationale for using control charts to monitor the mean in SPC
- Binomial test: comparing the success rate of a new process against a known value or standard, including sample size requirements for a given level of significance and a desired power
- Confidence interval for the median (optional depending on participant needs)
- Median test
- Wilcoxon signed-ranks test for paired data, including the null distribution for small sample sizes, the effect of ties in the ranks, and the large-sample approximation based on the familiar paired t-test
- Wilcoxon-Mann-Whitney rank sum test for two independent samples, including the null distribution for small sample sizes, the effect of ties in the ranks, and the large-sample approximation based on the familiar two-sample t-test
- Contingency tables (optional depending on time and participant needs)
- Goodness-of-fit test (optional depending on time and participant needs)
- Spearman's rank correlation, including the null distribution for small sample sizes, the effect of ties in the ranks, and the large-sample approximation based on Pearson's product-moment correlation coefficient
- Monotone regression (regression using ranks), demonstrated for simple linear regression but easily extendable to multiple linear regression (optional depending on time and participant needs)
- Kruskal-Wallis test for one-factor experiments, including the null distribution for small sample sizes, the effect of ties in the ranks, and the large-sample approximation based on the familiar F-test from a one-factor AOV
- Friedman test for blocked experiments using process equipment performance, including the null distribution for small sample sizes, the effect of ties in the ranks, and the large-sample approximation based on the familiar F-test from a two-factor AOV
- Analysis of covariance (optional depending on time and participant needs)
- Rank transformations and interaction
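Several of the topics above pair an exact small-sample procedure with a large-sample approximation. As a preview, here is a minimal pure-Python sketch of one of them: Spearman's rank correlation with midranks for ties, followed by the large-sample normal approximation z = r_s * sqrt(n - 1). The data are invented for illustration.

```python
# Spearman's rank correlation with midranks for ties -- a minimal sketch
# using only the standard library. The data are made up for illustration.
from statistics import NormalDist

def midranks(values):
    """Assign ranks 1..n, averaging ranks within groups of tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average of positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """r_s = Pearson's correlation applied to the midranks of x and y."""
    rx, ry = midranks(x), midranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

x = [1.2, 2.5, 2.5, 4.0, 5.1, 6.3]   # note the tie at 2.5
y = [2.0, 3.1, 4.5, 4.5, 7.2, 8.0]   # note the tie at 4.5
rs = spearman(x, y)
# Large-sample approximation: under H0, rs * sqrt(n - 1) is roughly N(0, 1).
z = rs * (len(x) - 1) ** 0.5
p_approx = 2 * (1 - NormalDist().cdf(abs(z)))
```

Computing r_s as Pearson's correlation of the midranks handles ties automatically, which is exactly the tie correction covered in the Spearman topic above.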




