What to Look for in Data – Part 2
What to Look for in Data – Part 1 discusses how to explore data snapshots, population characteristics, and changes. Part 2 looks at how to explore patterns, trends, and anomalies. There are many different types of patterns, trends, and anomalies, but graphs are always the best first place to look.
Patterns and Trends
There are at least ten types of data relationships – direct, feedback, common, mediated, stimulated, suppressed, inverse, threshold, and complex – and of course spurious relationships. They can all produce different patterns and trends, or no recognizable arrangement at all.
There are four patterns to look for:
 Shocks
 Steps
 Shifts
 Cycles
Shocks are seemingly random excursions far from the main body of data. They are outliers, but they often recur, sometimes in a similar way, suggesting a common though sporadic cause. Some shocks may be attributed to an intermittent malfunction in the measurement instrument. Sometimes they occur in pairs, one in the positive direction and another of similar size in the negative direction; with business data, this is often because of missed reporting dates.
Steps are periodic increases or decreases in the body of the data. Steps progress in the same direction because they reflect a progressive change in conditions. If the steps are small enough, they can appear to be, and be analyzed as, a linear trend.
Shifts are increases and/or decreases in the body of the data like steps, but shifts tend to be longer than steps and don’t necessarily progress in the same direction. Shifts reflect occasional changes in conditions. The changes may remain or revert to the previous conditions, making them more difficult to analyze with linear models.
Cycles are increases and decreases in the body of the data that usually appear as a waveform having fairly consistent amplitudes and frequencies. Cycles reflect periodic changes in conditions, often associated with time, such as daily or seasonal cycles. Cycles cannot be analyzed effectively with linear models. Sometimes different cycles add together making them more difficult to recognize and analyze.
Simple trends can be easier to identify because they are more familiar to most data analysts. Again, graphs are the best place to look for trends.
Linear trends are easy to see; the data form a line. Curvilinear trends can be more difficult to recognize because they don’t follow a set path. With some experience and intuition, however, they can be identified. Nonlinear trends look similar to curvilinear trends but they require more complicated nonlinear models to analyze. Curvilinear trends can be analyzed with linear models with the use of transformations.
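As a hedged sketch of that last point, a curvilinear trend (here, exponential growth) can be straightened with a log transformation and then fit with an ordinary linear model. The data below are made up for illustration:

```python
import numpy as np

# Hedged sketch: a curvilinear (exponential) trend straightened by a
# log transform, then fit with an ordinary linear model. The data are
# made up for illustration.
x = np.arange(1, 21, dtype=float)
y = 3.0 * np.exp(0.25 * x)              # noise-free curvilinear trend

slope, intercept = np.polyfit(x, np.log(y), 1)   # linear fit on log scale

print(round(slope, 3))                  # recovers the growth rate, ~0.25
print(round(np.exp(intercept), 3))      # recovers the starting level, ~3.0
```

With noisy data the recovered parameters would only approximate the true ones, but the idea is the same: transform first, then use the familiar linear machinery.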
There are also more complex trends involving different dimensions, including:
 Temporal
 Spatial
 Categorical
 Hidden
 Multivariate
Temporal Trends can be more difficult to identify because time-series data can be combinations of shocks, steps, shifts, cycles, and linear and curvilinear trends. The effects may be seasonal, superimposed on each other within a given time period, or spread over many different time periods. Confounded effects are often impossible to separate, especially if the data record is short or the sampled intervals are irregular or too large.
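One hedged way to start pulling such combinations apart is a moving average whose window spans one full cycle. The monthly series below is simulated (trend plus annual cycle plus noise), not data from any real record:

```python
import numpy as np

# Hedged sketch of separating a trend from a seasonal cycle. The monthly
# series is simulated (trend + annual cycle + noise), not real data.
rng = np.random.default_rng(4)
t = np.arange(48)
series = 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, 48)

# A 12-point moving average spans one full cycle, so it smooths the
# seasonal swing away and leaves an estimate of the trend.
trend = np.convolve(series, np.ones(12) / 12, mode="valid")

# Subtracting the trend (aligned to the window centers) leaves mostly
# the seasonal cycle plus noise.
detrended = series[6:6 + trend.size] - trend

print(trend[-1] > trend[0])       # True: the upward trend survives smoothing
print(detrended.max() > 5)        # True: the cycle's swing is still visible
```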
Spatial Trends present a different twist. Time is one-dimensional, at least as we now know it. Distance can be one-, two-, or three-dimensional. Distance can be in a straight line (“as the crow flies”) or along a path (such as driving distance). Defining the location of a unique point on a two-dimensional surface (i.e., a plane) requires at least two variables. The variables can represent coordinates (northing/easting, latitude/longitude) or distance and direction from a fixed starting point. At least three variables are needed to define a unique point location in a three-dimensional volume, so a variable for depth (or height) must be added to the location coordinates.
Looking for spatial patterns involves interpolation of geographic data using one of several available algorithms, like moving averages, inverse distances, or geostatistics.
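As a minimal sketch of one of those algorithms, here is inverse-distance weighting. The sample points and the power parameter p=2 are illustrative assumptions, not a standard:

```python
import numpy as np

# Minimal inverse-distance-weighting (IDW) sketch. The sample points and
# the power p=2 are illustrative assumptions, not a standard.
def idw(xy_known, z_known, xy_target, p=2.0):
    d = np.linalg.norm(xy_known - xy_target, axis=1)
    if np.any(d == 0):                       # target sits on a sample point
        return z_known[np.argmin(d)]
    w = 1.0 / d**p                           # nearer points get more weight
    return np.sum(w * z_known) / np.sum(w)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([10.0, 20.0, 30.0, 40.0])
print(idw(pts, z, np.array([0.5, 0.5])))     # equidistant, so the mean: 25.0
```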
Categorical Trends are no more difficult to identify than any other trend, except that you have to break out the categories to do it, which can be a lot of work. One thing you might see when analyzing categories is Simpson’s paradox, which occurs when the trends within categories differ from the trend in the overall group. Hidden Trends are trends that appear only in categories and not in the overall group. You may be able to detect linear trends in categories without graphs if you have enough data to calculate correlation coefficients within each category.
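A small made-up demonstration of Simpson’s paradox: both categories trend upward, but because they sit at offset levels, the pooled data trend downward:

```python
import numpy as np

# Hypothetical data: within each category the trend is upward, but the
# categories sit at offset levels, so the pooled trend reverses.
x_a = np.array([0., 1., 2., 3., 4.]);  y_a = x_a + 10   # category A, rising
x_b = np.array([6., 7., 8., 9., 10.]); y_b = x_b - 10   # category B, rising

r_a = np.corrcoef(x_a, y_a)[0, 1]
r_b = np.corrcoef(x_b, y_b)[0, 1]
r_all = np.corrcoef(np.concatenate([x_a, x_b]),
                    np.concatenate([y_a, y_b]))[0, 1]

print(r_a, r_b)     # both +1.0: perfect upward trend within each category
print(r_all < 0)    # True: the pooled trend points the other way
```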
Multivariate Trends add a layer of complexity to most trends, which are bivariate. Still, you look for the same things, patterns and trends, only you have to examine at least one additional dimension. The extra dimension may be an additional axis or some other way of representing data, like icon type, size, or color.
Anomalies
Sometimes the most interesting revelations you can garner from a dataset are the ways that it doesn’t fit expectations. Three things to look for are:
 Censoring
 Heteroskedasticity
 Outliers
Censoring is when a measurement is recorded as <value or >value, indicating that the measurement instrument was unable to quantify the real value. For example, the real value may be outside the range of a meter, counts can’t be approximated because there are too many or too few, or a time can only be estimated as before or after. Censoring is easy to detect in a dataset because censored values should be qualified with < or >.
Heteroskedasticity is when the variability in a variable is not uniform across its range. This is important because homoskedasticity (the opposite of heteroskedasticity) is assumed by probability statements in parametric statistics. Look for differing thicknesses in plotted data.
Influential observations and outliers are the data points that don’t fit the overall trends and patterns. Finding anomalies isn’t that difficult; deciding why they are anomalous and what to do with them are the really tough parts. Here are some examples of the types of outliers to look for.
How and Where to Look
That’s a lot of information to take in and remember, so here’s a summary you can refer to in the future if you ever need it.
And when you’re done, be sure to document your results so others can follow what you did.
Read more about using statistics at the Stats with Cats blog. Join other fans at the Stats with Cats Facebook group and the Stats with Cats Facebook page. Order Stats with Cats: The Domesticated Guide to Statistics, Models, Graphs, and Other Breeds of Data Analysis at amazon.com, barnesandnoble.com, or other online booksellers.
WHAT TO LOOK FOR IN DATA – PART 1
Some activities are instinctive. A baby doesn’t need to be taught how to suckle. Most people can use an escalator, operate an elevator, and open a door instinctively. The same isn’t true of playing a guitar, driving a car, or analyzing data.
When faced with a new dataset, the first thing to consider is the objective you, your boss, or your client have in analyzing the dataset.
Objective
How you analyze data will depend in part on your objective. Consider these four possibilities; three are comparatively easy and one is a relative challenge.
 Conduct a Specific Analysis – Your client only wants you to conduct a specific analysis, perhaps like descriptive statistics or a statistical test between two groups. No problem, just conduct the analysis. There’s no need to go further. That’s easy.
 Answer a Specific Question – Some clients only want one thing: an answer to a specific question. Maybe it’s something like “Is my water safe to drink?” or “Is traffic on my street worse on Wednesdays?” This will require more thought and perhaps some experience, but again, you have a specific direction to go in. That makes it easier.
 Address a General Need – Projects with general goals often involve model building. You’ll have to establish whether they need a single forecast, map or model, or a tool that can be used again in the future. This will require quite a bit of thought and experience but at least you know what you need to do and where you need to end up. Not easy but straightforward.
 Explore the Unknown – Every once in a while, a client will have nothing specific in mind, but will want to know whatever can be determined from the dataset. This is a challenge because there’s no guidance for where to start or where to finish. This blog will help you address this objective.
If your client is not clear about their objective, start at the very end. Ask what decisions will need to be made based on the results of your analysis. Ask what kind of outputs would be appropriate – a report, an infographic, a spreadsheet file, a presentation, or an application. If they have no expectations, it’s time to explore.
Got data?
Scrubbing your data will make you familiar with what you have. That’s why it’s a good idea to know your objective first. There are many things you can do to scrub your data, but the first is to put it into a matrix. Statistical analyses all begin with matrices. The form of the matrix isn’t always the same, but most commonly the matrix has columns that represent variables (e.g., metrics, measurements) and rows that represent observations (e.g., individuals, students, patients, sample units, or dates). Data on the variables for each observation go into the cells. Usually this is done with spreadsheet software.
Data scrubbing can be cursory or exhaustive. Assuming the data are already available in electronic form, you’ll still have to achieve two goals – getting the numbers right and getting the right numbers.
Getting the numbers right requires correcting three types of data errors:
 Alphanumeric substitution, which involves mixing letters and numbers (e.g., 0 and o or O, 1 and l, 5 and S, 6 and b), dropped or added digits, spelling mistakes in text fields that will be sorted or filtered, and random errors.
 Specification errors involve bad data generation, perhaps attributable to recording mistakes, uncalibrated equipment, lab mistakes, or incorrect sample IDs and aliases.
 Inappropriate Data Formats, such as extra columns and rows, inconsistent use of ND, NA, or NR flags, and the inappropriate presence of 0s versus blanks.
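A hedged sketch of what fixing the first and third error types can look like in practice. The raw column is hypothetical, showing an alphanumeric substitution (“1O5” with a letter O) and inconsistent missing-value flags:

```python
import pandas as pd

# Hedged sketch with a hypothetical raw column showing alphanumeric
# substitution ("1O5" with a letter O) and inconsistent missing-value flags.
raw = pd.DataFrame({"result": ["105", "1O5", "ND", "98", "n.d.", ""]})

cleaned = (raw["result"]
           .str.strip()
           .str.upper()
           .str.replace("O", "0", regex=False)             # letter O -> zero
           .replace({"ND": None, "N.D.": None, "": None})) # one missing flag
cleaned = pd.to_numeric(cleaned, errors="coerce")

print(cleaned.tolist())       # [105.0, 105.0, nan, 98.0, nan, nan]
```

Real scrubbing rules depend on the dataset; blanket letter-for-digit swaps like the one above are only safe on columns that should be purely numeric.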
Getting the right numbers requires addressing a variety of data issues:
 Variables and phenomenon. Are the variables sufficient to explore the phenomena in question?
 Variable scales. Review the measurement scales of the variables so you know what analyses might be applicable to the data. Also, look for nominal and ordinal scale variables to consider how you might segment the data.
 Representative sample. Considering the population being explored, does the sample appear to be representative?
 Replicates. If there are replicate or other quality control samples, they should be removed from the analysis appropriately.
 Censored data. If you have censored data (i.e., unquantified data above or below some limit), you can recode the data as some fraction of the limit, but not zero.
 Missing data. If you have missing data, they should be recoded as blanks or use another accepted procedure for treating missing data.
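The censored-data recoding can be sketched as follows. Halving the reporting limit is one common convention, not a rule; the key point is never to recode censored values as zero:

```python
import pandas as pd

# Hedged sketch: recode censored values like "<5" as half the reporting
# limit. Halving is one common convention, not a rule; the key point is
# never to recode them as zero.
vals = pd.Series(["12", "<5", "8", "<0.5", "20"])

def recode(v):
    if v.startswith("<"):
        return float(v[1:]) / 2          # half the limit
    return float(v)

numeric = vals.map(recode)
print(numeric.tolist())                  # [12.0, 2.5, 8.0, 0.25, 20.0]
```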
Data scrubbing can consume a substantial amount of time, even more than the statistical calculations.
What To Look For
If statistics wasn’t your major in college, or you’re straight out of college and new to applied statistics, you might wonder where to start looking at a dataset. Here are five places to consider looking.
 Snapshot
 Population or Sample Characteristics
 Change
 Trends and Patterns
 Anomalies
Start with the entire dataset. Look at the highest levels of grouping variables. Divide and aggregate groupings later, after you have a feel for the global situation. The reason for this is that the number of possible combinations of variables and levels of grouping variables can be large and overwhelming, each combination being an analysis in itself. Like peeling an onion, explore one layer of data at a time until you get to the core.
Snapshot
What do the data look like at one point? Usually it’s at the same point in time, but it could also be at common conditions, like after a specific business activity, or at a certain temperature and pressure. What might you do?
Snapshots aren’t difficult. You just decide where you want a snapshot and record all the variable values at that point. There are no descriptive statistics, graphs, or tests unless you decide to subdivide the data later. The only challenge is deciding whether taking a snapshot makes any sense for exploring the data.
The only thing you look for in a snapshot is something unexpected or unusual that might direct further analysis. It can also be used as a baseline to evaluate change.
Population Characteristics
It’s always a good idea to know everything you can about the populations you are exploring. Here’s what you might do.
This is a no-brainer; calculate descriptive statistics. Here’s a summary of what you might look at. It’s based on the measurement scale of the variable you are assessing.
For grouping (nominal scale) variables, look at the frequencies of the groups. You’ll want to know if there are enough observations in each group to break them out for further analysis. For progression (continuous) scales, look at the median and the mean. If they’re close, the frequency distribution is probably symmetrical. You can confirm this by looking at a histogram or the skewness. If the standard deviation divided by the mean (coefficient of variation) is over 1, the distribution may be lognormal, or at least, asymmetrical. Quartiles and deciles will support this finding. Look at the measures of central tendency and dispersion. If the dispersion is relatively large, statistical testing may be problematical.
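Those checks on a progression-scale variable can be sketched in a few lines. The data here are simulated from a lognormal distribution, not real measurements:

```python
import numpy as np
from scipy import stats

# Hedged sketch using simulated skewed (lognormal) data, not real
# measurements, to show the checks described above.
rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.0, sigma=1.0, size=2000)

mean, median = x.mean(), np.median(x)
cv = x.std(ddof=1) / mean           # coefficient of variation
skew = stats.skew(x)

print(mean > median)                # True: long right tail pulls the mean up
print(cv > 1)                       # True: hints at a lognormal-like shape
print(skew > 1)                     # True: clearly asymmetrical
```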
Graphs are also a good way, and in my mind the best way, to explore population characteristics. Never calculate a statistic without looking at its visual representation in a graph, and there are many types of graphs that will let you do that.
What you look for in a graph depends on what the graph is supposed to show – distribution, mixtures, properties, or relationships. There are other things you might look for but here are a few things to start with.
For distribution graphs (box plots, histograms, dot plots, stem-and-leaf diagrams, Q-Q plots, rose diagrams, and probability plots), look for symmetry. That will separate many theoretical distributions, say a normal distribution (symmetrical) from a lognormal distribution (asymmetrical). This will be useful information if you do any statistical testing later.
For mixture graphs (pie charts, rose diagrams, and ternary plots), look for imbalance. If you have some segments that are very large and others very small, there may be common and unique themes to the mix to explore. Maybe the unique segments can be combined. This will be useful information if you do break out subgroups later.
For properties graphs (bar charts, area charts, line charts, candlestick charts, control charts, means plots, deviation plots, spread plots, matrix plots, maps, block diagrams, and rose diagrams), look for the unexpected. Are the central tendency and dispersion what you might expect? Where are big deviations?
For relationship graphs (icon plots, 2D scatter plots, contour plots, bubble plots, 3D scatter plots, surface plots, and multivariable plots), look for linearity. You might find linear or curvilinear trends, repeating cycles, one-time shifts, continuing steps, periodic shocks, or just random points. This is the prelude to looking for more detailed patterns.
Change
Change usually refers to differences between time periods but, like snapshots, it could also refer to some common conditions. Change can be difficult, or at least complicated, to analyze because you must first calculate the changes you want to explore. When calculating changes, be sure the intervals of the change are consistent. But after that, what might you do?
First, look for very large negative or positive changes. Are the percentages of change consistent for all variables? What might be some reasons for the changes?
Calculate the mean and median changes. If the indicators of central tendency are not near zero, you might have a trend. Verify the possibility by plotting the change data. You might even consider conducting a statistical test to confirm that the change is different from zero.
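A hedged sketch of that test, using simulated period-to-period changes with an upward drift (made-up numbers, not from any real series):

```python
import numpy as np
from scipy import stats

# Hedged sketch: simulated period-to-period changes with an upward drift
# (made-up numbers, not from any real series).
rng = np.random.default_rng(2)
changes = rng.normal(loc=1.0, scale=1.0, size=50)

print(changes.mean(), np.median(changes))   # both well above zero

# One-sample t-test of the null hypothesis "mean change = 0".
t_stat, p = stats.ttest_1samp(changes, 0.0)
print(p < 0.05)                             # True: the drift is detectable
```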
If you do think you have a trend or pattern, there are quite a few things to look for. This is what What to Look for in Data – Part 2 is about.
DARE TO COMPARE – PART 4
Part 3 of Dare to Compare shows how one-population statistical tests are conducted. Part 4 extends these concepts to two-population tests.
To review, this flowchart summarizes the process of statistical testing.
First, you PLAN the comparison by understanding the populations you will take a representative sample of individuals from and measure the phenomenon on. Then you assess the frequency distributions of the measurements to see if they approximate a Normal distribution.
Second, you TEST the measurements by considering the test parameters, the type of test, the hypotheses, the test dimensionality, degrees of freedom, and violations of assumptions.
Third, you review the RESULTS by setting the confidence, determining the effect size and power of the test, and assessing the significance and meaningfulness of the test.
Now imagine this.
You’re a sophomore statistics major at Faber College and you need to sign up for the dreaded STATS 102 class. The class is taught in the Fall and the Spring by two different instructors (Dr. Statisticus and Prof. Modearity), either as three one-hour sessions on Mondays, Wednesdays, and Fridays, or as two hour-and-a-half sessions on Tuesdays and Thursdays. You wonder if it makes a difference which class you take. Having completed STATS 101, you know everything there is to know about statistics, so you get the grades from the classes that were taught last year. Here are the data.
What class should you take to get the highest grade? Dr. Statisticus gave out the highest grades in the Fall; Prof. Modearity gave out higher grades in the Spring. On the other side of the coin, only one person flunked (grade below 75) in Dr. Statisticus’ classes, but six people flunked Prof. Modearity’s classes. Three students flunked in the Fall while four students flunked in the Spring. Two people flunked TuTh classes and five people flunked MWF classes. This is complicated.
Looking at the averages, you think that taking Dr. Statisticus’ Tuesday-Thursday class in the Fall would be your best bet. However, is a two- or three-point difference worth the class conflicts and scheduling hassles you might have? Does it really matter?
Maybe it’s time for some statistical testing? But these would be two-population tests because you have to compare two semesters, two instructors, and two class lengths.
Two-Population t-Tests
In a two-population test, you compare the average of the measurements in the first population to the average of the second population, using the formula:
This is a bit more complicated than the formula for a one-population test because you can have different standard deviations and different numbers of measurements in the two populations.
Here’s what’s happening. The numerator (top part of the formula) is the same in both t-test formulas. The leftmost term in the denominator calculates a weighted average of the variances, called a pooled variance.
If the number of measurements taken of the two populations is the same, the test design is said to be balanced. If the variances of the measurements in the two populations are the same, the leftmost term in the denominator reduces to s^{2}. So, the formula for a balanced two-population t-test with equal variances is:
Much simpler, but not as useful as the more complicated formula. You might be able to control the number of samples from the populations, but you can’t control the variances.
Once you calculate a t value, the rest of the test is similar to a one-population test. You compare the calculated t to a t-value from a table or other reference for the appropriate number of tails, the confidence (1 − α), and the degrees of freedom (for a two-population test, the total number of measurements in the two samples minus 2).
If the calculated t value is larger than the table t value, the test is SIGNIFICANT, meaning that the means are statistically different. If the table t value is larger than the calculated t value, the test is NOT SIGNIFICANT, meaning that the means are statistically the same.
Example
Back to the example. You want to compare the differences between semesters, instructors, and class days. You have no expectations for what the best semester, instructor, or class day would be. To be conservative, you’ll accept a false positive rate (i.e., 1 − confidence, α) of 0.05. Your null hypotheses are:
μ_{Fall Semester} = μ_{Spring Semester}
μ_{Dr. Statisticus} = μ_{Prof. Modearity}
μ_{MWF} = μ_{TuTh}
Now for some calculations, first the semesters.
X_{Fall Semester} = 84.0    N_{Fall Semester} = 33    S^{2}_{Fall Semester} = 49.7 (S = 7.05)
X_{Spring Semester} = 83.5    N_{Spring Semester} = 35    S^{2}_{Spring Semester} = 41.7 (S = 6.46)
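As a sketch, the pooled-variance t statistic can be computed directly from these summary statistics:

```python
import math

# A sketch of the general two-population t statistic using a pooled
# variance, computed from the Fall/Spring summary statistics.
def two_sample_t(m1, s1, n1, m2, s2, n2):
    # Pooled variance: a weighted average of the two sample variances.
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2            # t statistic and degrees of freedom

t, df = two_sample_t(84.0, 7.05, 33, 83.5, 6.46, 35)
print(round(t, 2), df)               # ~0.31 with 66 degrees of freedom
```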
And the tabled value is:
t_{(2-tailed, α = 0.05, 66 degrees of freedom)} = 1.997
You can do these calculations in Excel with the formula:
=T.TEST(array1,array2,tails,type)
Where type=3 is a t-test for two samples with unequal variances. There are also a few online sites for the calculations, such as https://www.evanmiller.org/abtesting/ttest.html, from which this graphic was produced.
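If Excel isn’t handy, the same unequal-variances comparison can be run in Python from the summary statistics alone. As a hedged sketch, scipy’s `ttest_ind_from_stats` with `equal_var=False` is the analog of T.TEST type 3 (Welch’s test):

```python
from scipy import stats

# The semester comparison from summary statistics. equal_var=False gives
# Welch's test, the analog of Excel's T.TEST type 3.
res = stats.ttest_ind_from_stats(mean1=84.0, std1=7.05, nobs1=33,
                                 mean2=83.5, std2=6.46, nobs2=35,
                                 equal_var=False)
print(round(res.statistic, 2))    # ~0.30: far below the critical t
print(res.pvalue > 0.05)          # True: not significant
```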
So there is no statistically significant difference between the Fall semester classes and the Spring semester classes.
Now for the instructors:
X_{Dr. Statisticus} = 85.4    N_{Dr. Statisticus} = 35    S^{2}_{Dr. Statisticus} = 37.5 (S = 6.12)
X_{Prof. Modearity} = 82.0    N_{Prof. Modearity} = 33    S^{2}_{Prof. Modearity} = 48.5 (S = 6.96)
And the tabled value is:
t_{(2-tailed, α = 0.05, 66 degrees of freedom)} = 1.996
So there is a statistically significant difference between instructors. Dr. Statisticus gives higher grades than Prof. Modearity.
Now for the days of the week:
X_{MWF} = 82.4    N_{MWF} = 36    S^{2}_{MWF} = 47.8 (S = 6.91)
X_{TuTh} = 85.2    N_{TuTh} = 32    S^{2}_{TuTh} = 39.4 (S = 6.28)
So there is no statistically significant difference between the one-hour classes on Mondays, Wednesdays, and Fridays and the hour-and-a-half classes on Tuesdays and Thursdays.
Here is a summary of the three tests.
So take Dr. Statisticus’ class whenever it fits in your schedule.
ANOVAs
So what do you do if you have more than two populations or more than one phenomenon or some other weird combinations of data? You use an Analysis of Variance (ANOVA).
ANOVA includes a variety of statistical designs used to analyze differences in group means. It is a generalization of the t-test of a factor (called a main effect or treatment in ANOVA) to more than two groups (called levels in ANOVA). In an ANOVA, the variance in the levels of the factors being compared is partitioned between variation associated with the factors in the design (called model variation) and random variation (called error variation). ANOVA is conceptually similar to multiple two-population t-tests, but produces fewer Type I (false positive) errors. While t-tests use t-values from the t-distribution, ANOVAs use F-tests from the F-distribution. An F-test is the ratio of the model variation to the error variation. When there are only two means to compare, the t-test and the ANOVA F-test are equivalent according to the relationship F = t^{2}.
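That F = t^{2} relationship can be checked numerically. The grade-like data below are simulated for illustration, not the post’s actual class data:

```python
import numpy as np
from scipy import stats

# Checking F = t^2 numerically. The grade-like data are simulated for
# illustration, not the post's actual class data.
rng = np.random.default_rng(3)
a = rng.normal(85, 6, 35)            # hypothetical grades, instructor A
b = rng.normal(82, 7, 33)            # hypothetical grades, instructor B

t, p_t = stats.ttest_ind(a, b)       # two-sample t-test, equal variances
f, p_f = stats.f_oneway(a, b)        # one-way ANOVA on the same two groups

print(abs(f - t**2) < 1e-8)          # True: with two groups, F = t^2
print(abs(p_t - p_f) < 1e-8)         # True: identical p-values
```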
Types of ANOVA
There are many types of ANOVA designs. One-way and multi-way ANOVAs are the most common.
One-Way ANOVAs
One-way ANOVA is used to test for differences among three or more independent levels of one effect. In the example t-test, a one-way ANOVA might involve more than two levels of one of the three factors. For example, a one-way ANOVA would allow testing more than two instructors or more than two semesters.
Multi-Way ANOVAs
Multi-way ANOVAs (sometimes called factorial ANOVAs) are used to test for differences between two or more effects. A two-way ANOVA tests two effects, a three-way ANOVA tests three effects, and so on. Multi-way ANOVAs have the advantage of being able to test the significance of interaction effects. Interaction effects occur when two or more effects combine to affect measurements of the phenomenon. In the example t-test, a three-way ANOVA would allow simultaneous analysis of the semesters, instructors, and days, as well as interactions between them.
Other Types of ANOVA
There are numerous other types of ANOVA designs, some of which are too complex to explain in a sentence or two. Here are a few of the more commonly used designs.
Repeated Measures ANOVAs (also called within-subjects ANOVAs) are used when the same subjects are used for each treatment effect, as in a longitudinal study. In the example, if the scores for the students were recorded every month of the semester, the data could be analyzed with a Repeated Measures ANOVA.
Some ANOVAs use design elements to control extraneous variance. The significance of the design elements is not important to the dependent variable so long as they control variability in the main effects. If the design element is a nominal-scale variable, it is called a blocking effect. If the design element is a continuous-scale variable, it is called a covariate and the model is called an Analysis of Covariance (ANCOVA). In the example, if students’ year in college (freshman, sophomore, junior, or senior, an ordinal-scale measure) were added as an effect to control variance, it would be a blocking factor. If students’ GPA (grade point average, a continuous-scale measure) were added as a covariate, it would be an ANCOVA design.
Random Effects ANOVAs assume that the levels of a main effect are sampled from a population of possible levels so that the results can be extended to other possible levels. The Instructors main effect in the example could be a random effect if other instructors were considered part of the population that included Dr. Statisticus and Prof. Modearity. If only Dr. Statisticus and Prof. Modearity were levels of the effect, it would be called a fixed effect. If a design included both fixed and random effects, it is called a mixed effects design.
Multivariate analysis of variance (MANOVA) is used when there is more than one set of measurements (also called dependent variables or response variables) of the phenomenon.
Now What?
Dare to Compare is a fairly comprehensive summary of statistical comparisons. You may not hear about all of these concepts in Stats 101, and that’s fine. Learn what you need to pass the course. Some topics are taught differently, especially hypothesis development and the normal curve. Follow what your instructor teaches; he or she will assign your grade.
Believe it or not, there’s quite a bit more to learn about all of these topics if you go further in statistics. There are special t-tests for proportions, regression coefficients, and samples that are not independent (called paired-sample t-tests). There are tests based on other distributions besides the Normal and t-distributions, such as the binomial and chi^{2} distributions. There are also quite a few nonparametric tests, based on ranks. And, of course, there are many topics on the mathematics end and on more metaphysical concepts like meaningfulness.
Statistical testing is more complicated than portrayed by some people, but it’s still not as formidable as, say, driving a car. You might learn to drive as a teenager but not discover statistics and statistical testing until college. Both statistical testing and driving are full of intricacies that you have to keep in mind. In testing you consider an issue once, while in driving you must do it continually. When you make a mistake in testing, you can go back and correct it. If you make a mistake in driving, you might get a ticket or cause an accident. After you learn to drive a car, you can go on to learn to drive motorcycles, trucks, buses, and racing vehicles. After you learn simple hypothesis testing, you can go on to learn ANOVA, regression, and many more advanced techniques. So if you think you can learn to drive a car, you can also learn to conduct a statistical test.
DARE TO COMPARE – PART 3
Parts 1 and 2 of Dare to Compare summarized fundamental topics about simple statistical comparisons. Part 3 shows how those concepts play a role in conducting statistical tests. The importance of these concepts is highlighted in the following table.
Test Specification | Why It Is Important
Population | Groups of individuals or items having some fundamental commonalities relative to the phenomenon being tested. Populations must be definable and readily reproducible so that results can be applied to other situations.
Number of populations being compared | The number of populations determines whether a comparison can be a relatively simple one- or two-population test or a more complex ANOVA test.
Phenomena | The characteristic of the population being tested. It is usually measured as a continuous-scale attribute of a representative sample of the population.
Number of phenomena | The number of phenomena determines whether a comparison will be a relatively simple univariate test or a more complex multivariate test.
Representative sample | A relatively small portion of all the possible measurements of the phenomenon on the population, selected in such a way as to be a true depiction of the phenomenon.
Sample size | The number of observations of the phenomenon used to characterize the population. The sample size contributes to the determination of the type of test to be used, the size of the difference that can be detected, the power of the test, and the meaningfulness of the results.
Hypotheses | You start statistical comparisons with a research hypothesis of what you expect to find about the phenomenon in the population. The research hypothesis is about the differences between the categories of the variable representing the population. You then create a null hypothesis that translates the research hypothesis into a mathematical statement that is the opposite of the research hypothesis, usually written in terms of no change or no difference. This is the subject of the test. If you reject the null hypothesis, you adopt the alternative hypothesis.
Distribution | Statistical tests examine whether measurements of a phenomenon could have occurred by chance. The extreme measurements occur in the tails of the frequency distribution. Parametric statistical tests assume that the measurements are Normally distributed. If the tails of the actual distribution differ from the tails of a Normal distribution, the results of the test may be in error.
Directionality | Null hypotheses can be nondirectional or two-sided (i.e., μ = 0), in which both tails of the distribution are assessed. They can also be directional or one-sided (i.e., μ < 0 or μ > 0), in which only one tail of the distribution is assessed.
Assumptions | Statistical tests assume that the measurements of the phenomenon are independent (not correlated) and are representative of the population. They also assume that errors are Normally distributed and the variances of the populations are equal.
Type of test | Statistical tests can be based on a theoretical frequency distribution (parametric) or on some imposed ordering (nonparametric). Parametric tests tend to be more powerful.
Test parameters | Test parameters are the statistics used in the test. For t-tests using the Normal distribution, this involves the mean and the standard deviation. For F-tests in ANOVA, this involves the variance. For nonparametric tests, this usually involves the median and range.
Confidence | Confidence is 1 minus the false-positive error rate (α). It is set by the person doing the test, before testing, as the maximum false-positive error rate they will accept. Usually an error rate of 0.05 (5%) is selected, but sometimes 0.1 (10%) or 0.01 (1%) is used, corresponding to confidences of 95%, 90%, and 99%.
Power | Power is the ability of a test to avoid false-negative errors (1 − β). Power depends on the sample size, the confidence, and the population variance. It is NOT set by the person doing the test but is instead calculated after a nonsignificant test result.
Degrees of freedom | The number of values in the final calculation of a statistic that are free to vary. For a one-population t-test, the degrees of freedom equal the number of samples minus 1.
Effect size | The smallest difference the test could have detected. Effect size is influenced by the variance, the sample size, and the confidence. A detectable effect size that is too large leads to false negatives; one that is too small flags differences too trivial to be meaningful.
Significance | Significance refers to the result of a statistical test in which the null hypothesis is rejected. Significance is expressed as a p-value.
Meaningfulness | Meaningfulness is assessed by comparing the difference detected by the test to the magnitude of difference that would be important in reality.
Normal Distributions
After defining the population, the phenomena, and the test hypotheses, you measure the phenomenon on an appropriate number of individuals in the population. These measurements need to be independent of each other and representative of the population. Then, you need to assess whether it’s safe to assume that the frequency distribution of the measurements is similar to a Normal distribution. If it is, a z-test or a t-test would be in order.
Yes, this is scary looking. It’s the equation for the Normal distribution: f(x) = (1 / (σ√(2π))) e^(−(x − μ)² / (2σ²)). Relax, you will probably never have to use it.
This figure represents a Normal distribution. The area under the curve represents the total probability of measured values occurring, which is equal to 1.0. Values near the center of the distribution, near the mean, have a large probability of occurring while values near the tails (the extremes) of the distribution have a small probability of occurring.
In statistical testing, the Normal distribution is used to estimate the probability that the measurements of the phenomenon will fall within a particular range of values. To estimate the probability that a measurement will occur, you could use the values of the mean and the standard deviation in the formula for the Normal distribution. Actually though, you never have to do that because there are tables for the Normal distribution and the tdistribution. Even easier, the functions are available in many spreadsheet applications, like Microsoft Excel.
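As a sketch of what those table look-ups and spreadsheet functions do, here is the same idea using Python’s scipy library (the function names are scipy’s, not from the post; the numbers are standard illustrative values):

```python
# Sketch: probabilities and critical values from the Normal and t
# distributions, in place of printed tables (assumes scipy is installed).
from scipy import stats

# Probability that a standard Normal value falls at or below 1.96
p = stats.norm.cdf(1.96)
print(round(p, 3))  # 0.975

# Two-tailed critical t-value for 90% confidence (alpha = 0.10), 9 df
t_crit = stats.t.ppf(1 - 0.10 / 2, df=9)
print(round(t_crit, 3))  # 1.833
```

The second value, 1.833, is the same critical value used in the Viking example later in this post.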
Statistical tests focus on the tails of the distribution where the probabilities are the smallest. It doesn’t matter much if the measurements of the phenomenon follow a Normal distribution near the mean so long as they do in the tails. The z-distribution can be used if the sample size is large; some say as few as 30 measurements and others recommend more, perhaps 100 measurements. The t-distribution compensates for small sample sizes by having more area in the tails. It can be used instead of the z-distribution with any number of samples.
The concept behind statistical testing is to determine how likely it is that a difference between two population parameters, like the means (or between a population parameter and a constant), could have occurred by chance. If the difference is large enough to fall in the tails of the distribution, there is only a small probability that it occurred by chance. Differences having a probability of occurrence less than a prespecified value (α) are said to be significant differences. The prespecified value, which is the acceptable false positive error rate, α, may be any small percentage but is usually taken as five in a hundred (0.05), one in a hundred (0.01), or ten in a hundred (0.10).
Here are a few examples of what the process of statistical testing looks like for comparing a population mean to a constant.
One-Population z-Test or t-Test
All z-tests and t-tests involve either one or two populations and only one phenomenon. The population is represented by the nominal-scale, independent variable. The measurement of the phenomenon is the dependent variable, which can be measured using a nominal, ordinal, interval, or ratio scale.
For a one-population test, you would be comparing the average (or other parameter) of the measurements in the population to a constant. You do this using the formula for a one-population t-test (or z-test) to calculate the t-value for the test.
The Normal distribution and the t-distribution are symmetrical, so it doesn’t matter whether the numerator of the equation is positive or negative.
Then compare that value to a table of values for the t-distribution for the appropriate number of tails, the confidence (1 − α), and the degrees of freedom (the number of samples of the population minus 1). If the calculated t-value is larger than the table t-value, the test is SIGNIFICANT, meaning that the mean and the constant are statistically different. If the table t-value is larger than the calculated t-value, the test is NOT SIGNIFICANT, meaning that the mean and the constant are statistically the same.
Example
Imagine you are comparing the average height of male high school freshmen in Minneapolis school district #1. You want to know how their average height compares to the height of 9th to 11th century Vikings (their mascot), for the school newspaper. Vikings of that era were typically about 5’9” or 69 inches (172 cm) tall.
This comparison doesn’t need to be too rigorous. The only possible negative consequence of the test is it being reported by Fox News as a liberal conspiracy, and they do that to everything anyway. You’ll accept a false positive rate (α, i.e., 1 − confidence) of 0.10.
Nondirectional Tests
Say you don’t know many freshmen boys but you don’t think they are as tall as Vikings. You certainly don’t think of them as rampaging Vikings. They’re younger so maybe they’re shorter. Then again, they’ve grown up having better diets and medical care so maybe they’re taller. Therefore, your research hypothesis is that Freshmen are not likely to be the same height as Vikings. The null hypothesis you want to test is:
Height of Freshmen = Height of Vikings
which is a nondirectional test. If you reject the null hypothesis, the alternative hypothesis:
Height of Freshmen ≠ Height of Vikings
is probably true of the Freshmen. Say you then measure the heights of 10 freshmen and you get:
63.2, 63.8, 72.8, 56.9, 75.2, 70.8, 68.0, 64.0, 61.4, 65.2
The measurements average 66 inches with a standard deviation of 5.3 inches. The t-value would be equal to:
t-value = (Freshman height − Viking height) / (standard deviation / √number of samples)
t-value = (66 inches − 69 inches) / (5.3 inches / √10 samples)
t-value = −1.790
Ignore the negative sign; it won’t matter.
In this comparison, the calculated t-value (1.79) is less than the table t-value (t_{(2-tailed, 90% confidence, 9 degrees of freedom)} = 1.833), so the comparison is not significant. The comparison might look something like this:
There is no statistical difference in the average heights of Freshmen and Vikings. Both are around 5’6” to 5’9” tall. That isn’t to say that there weren’t 6’0” Vikings, or Freshmen, but as a group, the Freshmen are about the same height as a band of berserkers. I’m sure that there are high school principals who will agree with this.
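The same two-tailed comparison can be checked in software. Here is a sketch using scipy’s one-sample t-test; note that because it runs on the raw heights, the statistic differs slightly from the hand calculation above (which used the rounded mean and standard deviation), but the conclusion is the same:

```python
from scipy import stats

# The ten freshman heights from the example, in inches
heights = [63.2, 63.8, 72.8, 56.9, 75.2, 70.8, 68.0, 64.0, 61.4, 65.2]

# Two-tailed one-sample t-test against the Viking height of 69 inches
t_stat, p_value = stats.ttest_1samp(heights, popmean=69)

# At alpha = 0.10 the difference is not significant
print(p_value > 0.10)  # True
```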
When you get a nonsignificant test, it’s a good practice to conduct a power analysis to determine what protection you had against false negatives. For a t-test, this involves rearranging the t-test formula to solve for t_{beta}:
t_{beta} = (√n / standard deviation) × difference − t_{alpha}
The t_{alpha} is for the confidence you selected, in this case 90%. Then you look up the t_{beta} value you calculated to find the probability for beta. It’s a cumbersome but not difficult procedure. In this example, the calculated t_{beta} would have been 1.24, so the power would have been 88%. That’s not bad. Anything over 80% is usually considered acceptable.
Most statistical software will do this calculation for you. You can increase power by increasing the sample size or the acceptable Type 1 error rate (decrease the confidence) before conducting the test.
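As a sketch of the procedure (not the software’s built-in routine), the rearranged formula can be evaluated directly; the inputs below are the example’s rounded values, so the result may differ somewhat from the figures quoted in the text:

```python
from math import sqrt
from scipy import stats

# Sample size, standard deviation, difference to detect, Type I error rate
n, s, diff, alpha = 10, 5.3, 3.0, 0.10

t_alpha = stats.t.ppf(1 - alpha / 2, df=n - 1)  # two-tailed critical value
t_beta = (sqrt(n) / s) * diff - t_alpha         # rearranged t-test formula
power = stats.t.cdf(t_beta, df=n - 1)           # probability of avoiding a false negative
```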
So if everything were the same (i.e., mean of students = 66 inches, standard deviation = 5.3 inches) except that you had collected 30 samples instead of 10 samples:
t-value = (69 inches − 66 inches) / (5.3 inches / √30 samples)
t-value = 3.10
t_{(2-tailed, 90% confidence, 29 degrees of freedom)} = 1.699
If you had collected 100 samples:
t-value = (69 inches − 66 inches) / (5.3 inches / √100 samples)
t-value = 5.66
t_{(2-tailed, 90% confidence, 99 degrees of freedom)} = 1.660
These comparisons are both significant, and might look something like this:
More samples give you better resolution.
Directional Tests
Now say, in a different reality, you know that many of those freshmen boys grew up on farms and they’re pretty buff. You even think that they might just be taller than the Vikings of a millennium ago. Therefore, your research hypothesis is that Freshmen are likely to be taller than the warring Vikings. The null hypothesis you want to test is:
Height of Freshmen ≤ Height of Vikings
which is a directional test. If you reject the null hypothesis, the alternative hypothesis:
Height of Freshmen > Height of Vikings
is probably true of the Freshmen. Then you measure the heights of 10 freshmen and get:
72.4, 71.1, 75.4, 69.0, 75.7, 73.3, 76.0, 58.8, 70.4, 78.6
The measurements average about 72 inches with a standard deviation of 5.3 inches. The t-value would be equal to:
t-value = (Freshman height − Viking height) / (standard deviation / √number of samples)
t-value = (72 inches − 69 inches) / (5.3 inches / √10 samples)
t-value = 1.790
In this comparison, the table t-value you would use is for a one-tailed (directional) test at 90% confidence for 10 samples, t_{(1-tailed, α = 0.1, 9 degrees of freedom)} = 1.383. For comparison, the value of t_{(2-tailed, 0.90 confidence, 9 degrees of freedom)}, which was used in the first example, is equal to 1.833, as is t_{(1-tailed, 0.95 confidence, 9 degrees of freedom)}. The reason is that you only have to look in one tail of the t-distribution in a one-tailed test instead of both tails. That means that for the same critical value, a directional test can have a smaller false positive rate.
The table t-value you would use, t_{(1-tailed, α = 0.1, 9 degrees of freedom)}, is equal to 1.383, which is smaller than the calculated t-value, 1.790, so the comparison is significant. The comparison might look something like this:
In this comparison, the Freshmen are on average at least 3 inches taller than their frenzied Viking ancestors. Genetics, better diet, and healthy living win out.
But what if the farm boys averaged only 71 inches:
t-value = (Freshman height − Viking height) / (standard deviation / √number of samples)
t-value = (71 inches − 69 inches) / (5.3 inches / √10 samples)
t-value = 1.193
The table t-value you would use, t_{(1-tailed, α = 0.1, 9 degrees of freedom)}, is equal to 1.383, which is larger than the calculated t-value, 1.193, so the comparison is not significant. The comparison might look something like this:
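These directional comparisons can be reproduced with a one-sided test in software. A sketch with scipy on the raw heights (the `alternative` argument assumes scipy 1.6 or later; the exact statistic differs slightly from the rounded hand calculation):

```python
from scipy import stats

# The ten farm-raised freshman heights from the example, in inches
farm_heights = [72.4, 71.1, 75.4, 69.0, 75.7, 73.3, 76.0, 58.8, 70.4, 78.6]

# Directional test: H0 mean <= 69 inches vs. H1 mean > 69 inches
t_stat, p_value = stats.ttest_1samp(farm_heights, popmean=69,
                                    alternative='greater')

# Significant at alpha = 0.10
print(p_value < 0.10)  # True
```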
And that’s what one-population t-tests look like. Now for some two-population tests in Dare to Compare – Part 4.
You Need Statistics to Make Wine
The American Statistical Association has identified 146 college majors that require statistics to complete a degree.
You probably wouldn’t be surprised that statistics is required for degrees in mathematics, engineering, physics, astronomy, chemistry, meteorology, and even biology and geology. Most business-related degrees also require statistics. Agronomy degrees require statistics as do degrees in dairy science, aquatic sciences, and veterinary sciences. Degrees for medical professions such as nursing, nutrition, physical therapy, occupational health, pharmacy, and speech-language-hearing all require statistics. And, many social science degrees require statistics, including economics, psychology, sociology, anthropology, political science, education, and criminology. What may be surprising though is that statistics is required for some degrees in history, archaeology, geography, culinary science, viticulture (grape horticulture), journalism, graphic communications, library science, and linguistics. Pretty much everybody needs to know statistics.
Posted in Uncategorized
Tagged college degrees, college majors, statistics, stats with cats
1 Comment
Dare to Compare – Part 2
Part 1 of Dare to Compare summarized several fundamental topics about statistical comparisons.
Statistical comparisons, or statistical tests as they are usually called, involve populations, groups of individuals or items having some fundamental commonalities. The members of a population also have one or more characteristics, called phenomena, which are what is compared in the populations. You don’t have to measure the phenomena in every member of the population. You can take a representative sample. Statistical tests can involve one population (comparing a population phenomenon to a constant), two populations (comparing a phenomenon in one population to the same phenomenon in another population), or three or more populations. You can also compare just one phenomenon (called univariate tests) or two or more phenomena (called multivariate tests).
Parametric statistical tests compare frequency distributions, the number of times each value of the measured phenomena appears in the population. Most tests involve the Normal distribution, in which the center of the distribution of values is estimated by the average, also called the mean. The variability of the distribution is estimated by the variance or the standard deviation, the square root of the variance. The mean and standard deviation are called parameters of the Normal distribution because they are in the mathematical formula that defines the form of the distribution. Formulas for statistical tests usually involve some measure of accuracy (involving the mean) divided by some measure of precision (involving the variance). Most statistical tests focus on the extreme ends of the Normal distribution, called the tails. Tests of whether population means are equal are called nondirectional, two-sided, or two-tailed tests because differences in both tails of the Normal distribution are considered. Tests of whether population means are less than or greater than are called directional, one-sided, or one-tailed tests because the difference in only one tail of the Normal distribution is considered.
Statistical tests that don’t rely on the distributions of the phenomenon in the populations are called nonparametric tests. Nonparametric tests often involve converting the data to ranks and analyzing the ranks using the median and the range.
The nice thing about statistical comparisons is that you don’t have to measure the phenomenon in the entire population at the same place or the same time, and you can then make inferences about groups (populations) instead of just individuals or items. What may even be better is that if you follow statistical testing procedures, most people will agree with your findings.
Now for even more …
Process
There are just a few more things you need to know before conducting statistical comparisons.
You start with a research hypothesis, a statement of what you expect to find about the phenomenon in the population. From there, you create a null hypothesis that translates the research hypothesis into a mathematical statement about the opposite of the research hypothesis. Statistical comparisons are sometimes called hypothesis tests. The null hypothesis is usually also written in terms of no change or no difference. For example, if you expect that the average heights of students in two school districts will be different because of some demographic factors (your research hypothesis), then your null hypothesis would be that the means of the two populations are equal.
When you conduct a statistical test, the result does not mean you prove your hypothesis. Rather, you can only reject or fail to reject the null hypothesis. If you reject the null hypothesis, you adopt the alternative hypothesis. This would mean that it is more likely that the null hypothesis is not true in the populations. If you fail to reject the null hypothesis, it is more likely that the null hypothesis is true in the populations.
The results of statistical tests are sometimes in error, but fortunately, you have some control over the rates at which errors occur. There are four possibilities for the results of a statistical test.
 True Positive – The statistical test rejects a null hypothesis that is false in the population.
 True Negative – The statistical test fails to reject a null hypothesis that is true in the population.
 False Positive – The statistical test rejects a null hypothesis that is true in the population. This is called a Type I error and is represented by α. One minus the Type I error rate you will accept for a test is called the Confidence. Typically α is set at 0.05, a 5% Type I error rate, although sometimes 0.10 (more acceptable error) or 0.01 (less acceptable error) are used.
 False Negative – The statistical test fails to reject a null hypothesis that is false. This is called a Type II error and is represented by β. The ability of a particular comparison to avoid a Type II error is represented by 1 − β and is called the Power of the test. Typically, power should be at least 0.8 for a 20% Type II error rate.
When you design a statistical test, you specify the hypotheses, including the number of populations and directionality, the type of test, the confidence, and the number of observations in your representative sample of the population. From the sample, you calculate the mean and standard deviation. You calculate the test statistic and compare it to standard values in a table based on the distribution. If the test statistic is greater than the standard value, you reject the null hypothesis. When you reject the null hypothesis the comparison is said to be significant. If the test statistic is less than the standard value, you fail to reject the null hypothesis and the comparison is said to be nonsignificant. Most statistical software now provides exact probabilities, called p-values (the probability of obtaining a test statistic at least as extreme as the one observed if the null hypothesis were true), so no tables are necessary.
After you conduct the test, there are two pieces of information you need to determine – the sensitivity of the test to detect differences, called the effect size, and the power of the test. The power of the test will depend on the sample size, the confidence, and the effect size. The effect size also provides insight into whether the test results are meaningful. Meaningfulness is important because a test may be able to detect a difference far smaller than what might be of interest, such as a difference in mean student heights of less than a millimeter. Perhaps surprisingly, the most common reason for being able to detect differences that are too small to be meaningful is having too large a sample size. More samples are not always better.
Tests
It seems like there are hundreds of kinds of statistical tests, and in a way there are, but most are just variations on the concept of accuracy relative to precision. In most tests, you calculate a test statistic and compare it to a standard. If the test statistic is greater than the standard, the difference is larger than might have been expected by chance, and is said to be statistically significant. For the most part, statistical software now reports exact probabilities for statistical tests instead of relying on manual comparisons.
Don’t worry too much about remembering formulas for the statistical tests (unless a teacher tells you to). Most testing is done using software with the test formulas already programmed. If you need a test formula, you can always search the Internet.
Tests depend on the scales of the data to be used in the statistical comparison. Usually, the dependent variable (the measurements of the phenomenon) is continuous and the independent variable (the divisions of the populations being tested) is categorical for parametric tests. Sometimes there are also grouping variables used as independent variables, called effects. In advanced designs, continuous-scale variables used as independent variables are called covariates. Some other scales of measurement for the dependent variable, like binary scales and restricted-range scales, require special tests or test modifications.
Here are a few of the most common parametric statistical tests.
z-Tests and t-Tests
The z-test and the t-test have similar forms relating the difference between a population mean and a constant (one-population test) or two population means (two-population test) to some measure of the uncertainty in the population(s). The difference between the tests is that a z-test is for Normally distributed populations where the variance is known, and t-tests are for populations where the variance is unknown and must be estimated from the sample. t-Tests depend on the number of observations made on the sample of the population. The greater the sample size, the closer the t-test is to the z-test. Adjustments of two-population t-tests are made when the sample sizes or variances are different in the two populations. These tests can also be used to compare paired (e.g., before vs. after) data.
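As an illustrative sketch, scipy’s two-sample t-test exposes the unequal-variance adjustment (Welch’s correction) through a flag; the district data below are made up for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
district1 = rng.normal(66, 5.3, size=30)   # hypothetical heights, district 1
district2 = rng.normal(68, 6.1, size=25)   # hypothetical heights, district 2

# equal_var=False applies the adjustment for unequal variances
t_stat, p_value = stats.ttest_ind(district1, district2, equal_var=False)
```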
ANOVA F-Tests
Unlike t-tests that are calculated from means and standard deviations, F-tests are calculated from variances. The formula for the one-way ANOVA F-test is:
 F = explained variance / unexplained variance, or
 F = between-group variability / within-group variability, or
 F = Mean square for treatments / Mean square for error
These are all equivalent. Also, as it turns out, F = t^{2}.
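The F = t^{2} identity is easy to confirm for a two-group comparison; a sketch with scipy (data made up for illustration):

```python
from scipy import stats

group1 = [63.2, 63.8, 72.8, 56.9, 75.2]   # hypothetical measurements
group2 = [72.4, 71.1, 75.4, 69.0, 75.7]

f_stat, p_anova = stats.f_oneway(group1, group2)    # one-way ANOVA F-test
t_stat, p_ttest = stats.ttest_ind(group1, group2)   # pooled-variance t-test

print(abs(f_stat - t_stat**2) < 1e-6)  # True: F equals t squared
print(abs(p_anova - p_ttest) < 1e-6)   # True: identical p-values
```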
χ^{2} Tests
The chi-squared test is used to determine whether there is a significant difference between the expected frequencies and the observed frequencies in mutually exclusive categories of a contingency table. The test statistic is the sum, over all categories, of the squared difference between the observed and expected frequencies divided by the expected frequency.
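A sketch of a goodness-of-fit chi-squared test with scipy (the counts are hypothetical):

```python
from scipy import stats

observed = [18, 22, 20, 40]   # hypothetical observed counts
expected = [25, 25, 25, 25]   # expected counts under the null hypothesis

# Sum over categories of (observed - expected)^2 / expected
chi2, p_value = stats.chisquare(f_obs=observed, f_exp=expected)

print(round(chi2, 2))  # 12.32
```

With 3 degrees of freedom this statistic is well into the tail, so the null hypothesis of equal frequencies would be rejected.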
Nonparametric Tests
Nonparametric tests are also called distribution-free tests because they don’t rely on any assumptions concerning the frequency distribution of the test measurements. Instead, the tests use ranks or other imposed orderings of the data to identify differences. A few of the most common are the sign test, the Mann-Whitney U test, the Wilcoxon signed-rank test, and the Kruskal-Wallis test.
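As one illustration, a Mann-Whitney U test, a common rank-based alternative to the two-sample t-test, can be sketched with scipy (data made up for illustration):

```python
from scipy import stats

sample_a = [63.2, 63.8, 72.8, 56.9, 75.2, 70.8]   # hypothetical measurements
sample_b = [72.4, 71.1, 75.4, 69.0, 75.7, 73.3]

# Rank-based test; makes no Normality assumption about the measurements
u_stat, p_value = stats.mannwhitneyu(sample_a, sample_b,
                                     alternative='two-sided')
```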
Assumptions
You make a few assumptions in conducting statistical tests. First you assume your population is real (i.e., not a phantom population) and that your samples of the population are representative of all the possible measurements. Then, if you plan to do a parametric test, you assume (and hope) that the measurements of the phenomenon are Normally distributed and that the variances are the same in all the populations being compared. The closer these assumptions are to being met, the more valid are the comparisons. The reason for this is that you are using Normal distributions, defined by means and variances, to represent the phenomenon in the populations. If the true distributions of the phenomenon in the populations do not exactly follow the Normal distribution, the comparison will be somewhat in error. Of course, the Normal distribution is a theoretical mathematical distribution, so there is always going to be some deviation between it and real-world data. Likewise with variances in multi-population comparisons. Thus, the question is always how much deviation from the assumptions is tolerable before the test becomes misleading.
Data that do not satisfy the assumptions can often be transformed to satisfy the assumptions. Adding a constant to data or multiplying data by a constant does not affect statistical tests, so transformations have to be more involved, like roots, powers, reciprocals, and logs. Box-Cox transformations are especially useful but are laborious to calculate without supporting software. Ultimately, ranks and nonparametric tests can be used in which there is no assumption about the Normal distribution.
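With supporting software, a Box-Cox transformation is a one-liner; a sketch using scipy on made-up right-skewed data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
skewed = rng.lognormal(mean=0.0, sigma=1.0, size=200)  # right-skewed sample

# boxcox estimates the power (lambda) that best normalizes the data
transformed, lam = stats.boxcox(skewed)

# The transformed data should be much less skewed than the original
print(abs(stats.skew(transformed)) < abs(stats.skew(skewed)))  # True
```

For lognormal data like this, the fitted lambda comes out near 0, which corresponds to a simple log transform.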
Next, we’ll see how it all comes together …
Dare to Compare – Part 1
In school, you probably had to line up by height now and then. That wasn’t too difficult. There weren’t too many individuals being lined up and they were all in the same place at the same time. An individual’s place in line was decided by comparing his or her height to the heights of other individuals. The comparisons were visual; no measurements were made. Everyone made the same decisions about the height comparisons. You didn’t need statistics to solve the problem. So why might you ever need statistics to compare heights?
Populations
Statistics are primarily concerned about groups of individuals or items, especially those having some fundamental commonalities. These groups are called populations. Populations are more difficult to compare than pairs of individuals because you have to define the population and measure the characteristics of the phenomenon that you want to compare. Statistical comparisons, or statistical tests as they are usually called, can involve one population (comparing a population phenomenon to a constant), two populations (comparing a phenomenon in one population to the same phenomenon in another population), or three or more populations. You can also compare just one phenomenon (called univariate tests) or two or more phenomena (called multivariate tests). You can test if the phenomena are equal (called a nondirectional or two-sided test) or less than/greater than (called a directional or one-sided test).
For example, you might want to compare the heights of male high school freshmen in two different school districts. There would be a twopopulation test – male high school freshmen in school district 1 and male high school freshmen in school district 2. The phenomenon you want to compare is the height of the two populations. But, it’s not as easy as just visually comparing the heights of pairs of individuals because they are not located in the same place. You have to measure at least some of the heights of the individuals in the two populations.
Samples
Fortunately, you don’t have to measure every individual in the population so long as you measure a representative sample of the individuals in the populations. You can improve your chances of getting a representative sample by using the three Rs of variance control — Reference, Replication, and Randomization.
How many samples should you have? No, the answer isn’t as many as possible. Some people think the answer is 30 samples, but that’s a myth based on a misunderstood tradition. Like potato chips and middle managers, too many can be as bad as not enough. It’s a matter of resolution.
Distributions
If you were comparing two individuals, you would only be concerned with whether one height is greater than, equal to, or less than the other height. When you’re comparing populations, there’s not just one height but many, and you only know what some of the heights are (hopefully a representative sample of them). That’s where distributions come in.
In statistical testing, a frequency distribution refers to the number of times each value of the measured phenomena appears in the population. A bar chart of these values, with the values on the horizontal axis and their frequencies on the vertical axis, is called a histogram. Histograms often look like a bell, which is why they are called bell curves.
Measured phenomena that have a histogram that looks like a bell curve have many values located at the middle of the distribution and fewer values farther away from the center, called the tails. The center of the distribution of values is estimated by the average. The variability of the values, how far they stretch along the horizontal axis, is estimated by the variance or the standard deviation, the square root of the variance.
A bell curve is usually assumed to represent a Normal distribution. The average and the variance of the values are called parameters of the distribution because they are in the mathematical formula that defines the form of the distribution.
Having a mathematical equation that you can use as a model of the frequency of phenomenon values in the population is advantageous because you can use the distribution model to represent the characteristics of the population.
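For example, fitting the two Normal parameters from a sample and then using the model takes only a couple of lines; a sketch in Python (the heights are hypothetical):

```python
import numpy as np
from scipy import stats

# Hypothetical sample of freshman heights, in inches
heights = np.array([63.2, 63.8, 72.8, 56.9, 75.2,
                    70.8, 68.0, 64.0, 61.4, 65.2])

mean = heights.mean()       # estimates the center of the distribution
sd = heights.std(ddof=1)    # sample standard deviation (n - 1 divisor)

# Use the fitted Normal model to estimate the probability
# of a height at or below 69 inches in the population
p_below_69 = stats.norm.cdf(69, loc=mean, scale=sd)
```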
Statistical Comparisons
Once you have data on the phenomenon from the representative sample of the population, you calculate descriptive statistics for the population. Statistical comparisons consider both the accuracy (i.e., the difference between the measured heights and the true heights in the population of individuals) and the precision (i.e., how consistent or variable the heights are) of the measurements of the population. Formulas for statistical tests usually involve some measure of accuracy divided by some measure of precision.
Statistical tests that compare the distributions of population characteristics are called parametric tests. They are usually based on the Normal distribution and involve using averages as measures of the center of the population distribution and standard deviations as measures of the variability of the distribution. (This is not always the case but is true most of the time.) The average and standard deviation are called test parameters. You can test whether population means are equal (called nondirectional or two-sided tests because differences in both tails of the Normal distribution are considered) or less than/greater than (called directional or one-sided tests because the difference in only one tail of the Normal distribution is considered).
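To illustrate the two-sided versus one-sided distinction, here is a sketch using SciPy's independent-samples t-test on two hypothetical height samples (the `alternative` keyword requires a reasonably recent SciPy):

```python
from scipy import stats

# Hypothetical height samples (cm) from two populations
group_a = [170, 172, 168, 175, 171, 169, 174, 173]
group_b = [165, 167, 163, 170, 166, 164, 169, 168]

# Two-sided (nondirectional) test: are the means different at all?
t_two, p_two = stats.ttest_ind(group_a, group_b)

# One-sided (directional) test: is group_a's mean greater than group_b's?
t_one, p_one = stats.ttest_ind(group_a, group_b, alternative='greater')

print(t_two, p_two, p_one)
```

When the sample difference is in the predicted direction, the one-sided p-value is half the two-sided p-value, because only one tail of the distribution is considered.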
Statistical tests that don’t rely on the distributions of the phenomenon in the populations are called nonparametric tests. Nonparametric tests usually involve converting the data to ranks and analyzing the ranks using the median and the range.
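A common nonparametric counterpart to the t-test is the Mann-Whitney U test, which compares two samples through their ranks. A sketch with the same kind of hypothetical data:

```python
from scipy import stats

# Hypothetical height samples; the Mann-Whitney U test compares them
# via ranks instead of assuming a Normal distribution
group_a = [170, 172, 168, 175, 171, 169, 174, 173]
group_b = [165, 167, 163, 170, 166, 164, 169, 168]

u_stat, p_value = stats.mannwhitneyu(group_a, group_b, alternative='two-sided')
print(u_stat, p_value)
```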
And there is still a lot more to know about statistical comparisons … more to come.
Read more about using statistics at the Stats with Cats blog. Join other fans at the Stats with Cats Facebook group and the Stats with Cats Facebook page. Order Stats with Cats: The Domesticated Guide to Statistics, Models, Graphs, and Other Breeds of Data analysis at amazon.com, barnesandnoble.com, or other online booksellers.
Posted in Uncategorized
Tagged blogs, cats, Normal distribution, population, statistical comparisons, statistical tests
Leave a comment
Catalog of Models
Whether you know it or not, you deal with models every day. Your weather forecast comes from a meteorological model, usually several. Mannequins are used to display how fashions may look on you. Blueprints are drawn models of objects or structures to be built. Maps are models of the earth’s terrain. Examples are everywhere.
Models are representations of things, usually an ideal, a standard, or something desired. They can be true representations, approximate (or at least as good as practicable), or simplified, even cartoonish compared to what they represent. They can be about the same size, bigger, or most typically, smaller, whatever makes them easiest to manipulate. They can represent:
 Physical objects that can be seen and touched
 Processes that can be watched
 Behaviors that can be observed
 Conditions that can be monitored
 Opinions that can be surveyed.
The models themselves do not have to be physical objects. They can be written, drawn, or consist of mathematical equations or computer programming. In fact, using equations and computer code can be much more flexible and less expensive than building a physical model.
Classification of Models
There are many ways that models are classified, so this catalog isn’t unique. The models may be described with different terms or broken out to greater levels of detail. Furthermore, you can also create hybrid models. Examples include mashups of analytical and stochastic components used to analyze phenomena such as climate change and subatomic particle physics. Nevertheless, the catalog should give you some ideas for where you might start to develop your own model.
Physical Models
Your first exposure to a model was probably a physical model like a baby pacifier or a plush animal, and later, a doll or a toy car. Since then, you’ve seen many more – from ant farms to anatomical models in school. You probably even built your own models with Legos, plastic model kits, or even a Halloween costume. They are all representations of something else.
Physical models aren’t used often for advanced applications because they are difficult and expensive to build and calibrate to a realistic experience. Flight simulators, hydrographic models of river systems, and reef aquariums are well-known exceptions.
Conceptual Models
Models can also be expressed in words and pictures. These are used in virtually all fields to convey mental images of some mechanism, process, or other phenomenon that was or will be created. Blueprints, flow diagrams, geologic fence diagrams, and anatomical diagrams are all conceptual models. So are the textual descriptions that go with them. In fact, you should always start with a simple text model before you embark on building a complex physical or mathematical model.
Mathematical and Computer Models
Theoretical Models
Theoretical models are based on scientific laws and mathematical derivations. Both theoretical models and deterministic empirical models provide solutions that presume that there is no uncertainty. These solutions are termed exact (which does not necessarily imply correct). There is a single solution for given inputs.
Analytical Models
Analytical models are mathematical equations derived from scientific laws that produce exact solutions that apply everywhere. For example, F = ma (force equals mass times acceleration) and E = mc² (energy equals mass times the speed of light squared) are analytical models. Most concepts in classical physics can probably be modeled analytically.
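An analytical model is trivial to express in code because it is just the equation itself. A sketch of Newton's second law as a function:

```python
def force(mass_kg, acceleration_ms2):
    """Analytical model: Newton's second law, F = m * a.
    Exact for given inputs -- no uncertainty term."""
    return mass_kg * acceleration_ms2

# A 2 kg object accelerating at 9.8 m/s^2
print(force(2.0, 9.8))  # → 19.6
```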
Numerical Models
Numerical models are mathematical equations that have a time parameter. Numerical models are solved repeatedly, usually on a grid, to obtain solutions over time. This is sometimes called a Dynamic Model (as opposed to a Static Model) because it describes time-varying relationships.
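A minimal sketch of the "solved repeatedly over a time grid" idea, using Newton's law of cooling stepped forward with Euler's method (the constants here are purely illustrative):

```python
# Numerical (dynamic) model: Newton's law of cooling,
# dT/dt = -k * (T - T_ambient), stepped forward on a time grid.
k = 0.1           # cooling constant (1/min), illustrative
t_ambient = 20.0  # ambient temperature (deg C)
dt = 0.5          # time step (min)

temps = [90.0]    # initial temperature
for step in range(100):  # solve repeatedly over the grid
    t = temps[-1]
    temps.append(t + dt * (-k * (t - t_ambient)))

print(temps[-1])  # approaches t_ambient over time
```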
Empirical Models
Empirical models can be deterministic, probabilistic, stochastic, or sometimes, a hybrid of the three. They are developed for specific situations from measured data. Empirical models differ from theoretical models in that the model is not necessarily fixed for all instances of its use. There may be multiple reasonable empirical models that can apply to a given situation.
Deterministic Models
Deterministic empirical models presume that a mathematical relationship exists between two or more measurable phenomena (as do theoretical models) that will allow the phenomena to be modeled without uncertainty (or at least, not much uncertainty, so that it can be ignored) under a given set of conditions. The difference is that the relationship isn’t unique or proven. There are usually assumptions. Biological growth and groundwater flow models are examples of deterministic empirical models.
Probability Models
Probability models are based on a set of events or conditions all occurring at once. In probability, it is called an intersection of events. Probability models are multiplicative because that is how intersection probabilities are combined. The most famous example of a probability model is the Drake equation, a summary of the factors affecting the likelihood that we might detect radio communication from intelligent extraterrestrial life.
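The multiplicative structure can be sketched in a few lines. The factor names and values below are entirely hypothetical, just to show how intersection probabilities combine:

```python
from math import prod

# Sketch of a multiplicative probability model, in the spirit of the
# Drake equation; the factor values are purely hypothetical.
factors = {
    "p_event_a": 0.5,
    "p_event_b": 0.2,
    "p_event_c": 0.1,
}

# Intersection of independent events: the probabilities multiply
p_all = prod(factors.values())
print(p_all)
```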
Stochastic Models
Stochastic empirical models presume that changes in a phenomenon have a random component. The random component allows stochastic empirical models to provide solutions that incorporate uncertainty into the analysis. Stochastic models include lottery picks, weather, and many problems in the behavioral, economic, and business disciplines that are analyzed with statistical models.
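The random component is easy to see in a sketch. Here a hypothetical phenomenon is modeled as a random walk, and a Monte Carlo loop runs the model many times to characterize the uncertainty in the outcome:

```python
import random

random.seed(42)  # reproducible illustration

def random_walk(steps, start=0.0):
    """Stochastic model sketch: each change has a random component."""
    value = start
    path = [value]
    for _ in range(steps):
        value += random.gauss(0, 1)  # random shock at each step
        path.append(value)
    return path

# Monte Carlo: run the model many times to see the spread of outcomes
finals = [random_walk(50)[-1] for _ in range(1000)]
mean_final = sum(finals) / len(finals)
print(mean_final)  # near 0 on average, but individual runs vary widely
```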
Comparison Models
In statistical comparison models, the dependent variable is a grouping-scale variable (one measured on a nominal scale). The independent variables can be grouping, continuous, or both. Simple hypothesis tests include:
 χ² (chi-square) tests that analyze cell frequencies on one or more grouping variables, and
 t-tests and z-tests that analyze independent variable means in two or fewer groups of a grouping variable.
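A chi-square test of cell frequencies can be sketched with SciPy. The 2×2 table below is hypothetical, e.g. treatment (rows) versus outcome (columns):

```python
from scipy.stats import chi2_contingency

# Hypothetical table of cell frequencies for two grouping variables
table = [[30, 10],
         [20, 40]]

chi2, p, dof, expected = chi2_contingency(table)
print(chi2, p, dof)
```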
Analysis of Variance (ANOVA) models compare independent variable means for two or more groups of a dependent grouping variable. Analysis of Covariance (ANCOVA) models compare independent variable means for two or more groups of a dependent grouping variable while controlling for one or more continuous variables. Multivariate ANOVA and ANCOVA compare two or more dependent variables using multiple independent variables. There are many more types of ANOVA model designs.
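The simplest of these designs, a one-way ANOVA, can be sketched with SciPy on hypothetical measurements for three groups:

```python
from scipy.stats import f_oneway

# Hypothetical continuous measurements for three groups
# of a grouping variable
group_1 = [23, 25, 21, 24, 26]
group_2 = [30, 32, 29, 31, 33]
group_3 = [22, 24, 23, 25, 21]

# One-way ANOVA: are the group means equal?
f_stat, p_value = f_oneway(group_1, group_2, group_3)
print(f_stat, p_value)
```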
Classification Models
Classification and identification models also analyze groups.
Clustering models identify groups of similar cases based on continuous-scale variables. There need be no prior knowledge or expectation about the nature of the groups. There are several types of cluster analysis, including hierarchical clustering, K-Means clustering, two-step clustering, and block clustering. Often, the clusters or segments are used as inputs to subsequent analyses. Clustering models are also known as segmentation models.
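A bare-bones K-Means sketch in NumPy shows the idea: cases are grouped by similarity with no prior labels. The two well-separated clusters of 2-D points are simulated for illustration:

```python
import numpy as np

# Hypothetical 2-D data with two well-separated groups
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.5, (20, 2)),   # cluster near (0, 0)
                  rng.normal(5, 0.5, (20, 2))])  # cluster near (5, 5)

centroids = data[[0, 20]].copy()  # initial centroids, one from each region
for _ in range(10):
    # assign each case to its nearest centroid
    dists = np.linalg.norm(data[:, None] - centroids[None], axis=2)
    labels = dists.argmin(axis=1)
    # move each centroid to the mean of its assigned cases
    centroids = np.array([data[labels == k].mean(axis=0) for k in range(2)])

print(centroids)
```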
Clustering models do not have a nominal-scale dependent variable, but most classification models do. Discriminant analysis models have a nominal-scale dependent variable and one or more continuous-scale independent variables. They are usually used to explain why the groups are different, based on the independent variables, so they often follow a cluster analysis. Logistic regression is analogous to linear regression but is based on a nonlinear model and a binary or ordinal dependent variable instead of a continuous-scale variable. Often, models for calculating probabilities use a binary (0 or 1) dependent variable with logistic regression.
There are many analyses that produce decision trees, which look a bit like organization charts. C&RT (Classification and Regression Trees) splits a categorical dependent variable into its groups based on continuous or categorical-scale independent variables. All splits are binary. CHAID (Chi-square Automatic Interaction Detector) generates decision trees that can have more than two branches at a split. A Random Forest consists of a collection of simple tree predictors.
Explanation Models
Explanation models aim to explain associations within or between sets of variables. With explanation models, you select enough variables to address all the theoretical aspects of the phenomenon, even to the point of having some redundancy. As you build the model, you discover which variables are extraneous and can be eliminated.
Factor Analysis (FA) and Principal Components Analysis (PCA) are used to explore associations in a set of variables where there is no distinction between dependent and independent variables. Both types of analysis:
 Create new metrics, called factors or components, which explain almost the same amount of variation as the original variables.
 Create fewer factors/components than the original variables so further analysis is simplified.
 Require that the new factors/components be interpreted in terms of the original variables, but they often make more conceptual sense so subsequent analyses are more intuitive.
 Produce factors/components that are statistically independent (uncorrelated) so they can be used in regression models to determine how important each is in explaining a dependent variable.
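A PCA sketch via NumPy's singular value decomposition, using simulated correlated data, shows two of these properties directly: the components are uncorrelated, and the first component carries most of the variance:

```python
import numpy as np

# Hypothetical data: column 2 is strongly related to column 1,
# column 3 is independent noise
rng = np.random.default_rng(1)
x = rng.normal(size=200)
data = np.column_stack([x,
                        2 * x + rng.normal(size=200) * 0.5,
                        rng.normal(size=200)])

centered = data - data.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
components = centered @ vt.T      # scores on the new components
explained = s**2 / (s**2).sum()   # share of variance per component

print(explained)  # first component captures most of the variance
```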
Canonical Correlation Analysis (CCA) is like PCA only there are two sets of variables. Pairs of components, one from each group, are created that explain independent aspects of the dataset.
Regression analysis is also used to build explanation models. In particular, regression using principal components as independent variables is popular because the components are uncorrelated and not subject to multicollinearity.
Prediction Models
Some models are created to predict new values of a dependent variable or forecast future values of a time-dependent variable. To be useful, a prediction model must use prediction variables that cost less to generate than the prediction is worth. So the predictor variables and their scales must be relatively inexpensive and easy to create or obtain. In prediction models, accuracy tends to come easily while precision is elusive. Prediction models usually keep only the variables that work best in making a prediction, and they may not necessarily make a lot of conceptual sense.
Regression is the most commonly used technique for creating prediction models. Transformations are used frequently. If a model includes one or more lagged values of the dependent variable among its predictors, it is called an autoregressive model.
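A tiny autoregressive sketch: regress a series on its own lag-1 values, then use the fit to forecast the next point. The series values are hypothetical:

```python
import numpy as np

# Hypothetical time series with a steady upward trend
series = np.array([10.0, 10.8, 11.5, 12.1, 12.9,
                   13.4, 14.2, 14.9, 15.5, 16.3])

y = series[1:]   # current values
x = series[:-1]  # lag-1 values as the predictor

# Fit y = slope * x + intercept by least squares
slope, intercept = np.polyfit(x, y, 1)

next_value = slope * series[-1] + intercept
print(next_value)  # forecast of the next point in the series
```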
Neural networks are a predictive modeling technique inspired by the way biological nervous systems process information. The technique involves interconnected nodes or layers that apply predictor variables in different ways, linear and nonlinear, to all or some of the dependent variable values. Unlike most modeling techniques, a neural network’s internal logic can’t be articulated, so neural networks are not useful for explanation purposes.
Picking the Right Model
There are many ways to model a phenomenon. Experience helps you to judge which model might be most appropriate for the situation. If you need some guidance, follow these steps.
 Step 1 – Start at the top of the Catalog of Models figure. Decide whether you want to create a physical, mathematical, or conceptual model. Whichever you choose, start by creating a brief conceptual model so you have a mental picture of what your ultimate goal is and can plan for how to get there.
If your goal is a physical or full-blown conceptual model, do the research you’ll need to identify appropriate materials and formats. But this blog is about mathematical models, so let’s start there.
 Step 2 – If you want to select a type of mathematical model, start on the second line of the Catalog of Models figure and decide whether your phenomenon fits best with a theoretical or an empirical approach.
If there are scientific or mathematical laws that apply to your phenomenon, you’ll probably want to start with some type of theoretical model. If there is a component of time, particularly changes over time periods, you’ll probably want to try developing a numerical model. Otherwise, if a single solution is appropriate, try an analytical model.
 Step 3 – If your phenomenon is more likely to require data collection and analysis to model, you’ll need an empirical model. An empirical model can be probabilistic, deterministic, or stochastic. Probability models are great tools for thought experiments. There are no wrong answers, only incomplete ones. Deterministic models are more of a challenge. There needs to be some foundation of science (natural, physical, environmental, behavioral, or other discipline), engineering, business rules, or other guidelines for what should go into the model. More often than not, deterministic models are overly complicated because there is no way to distinguish between components that are major factors versus those that are relatively inconsequential to the overall results. Both probability and deterministic models are often developed through panels of experts using some form of Delphi process.
 Step 4 – If you need to develop a stochastic (statistical) model, go here to pick the right tool for the job.
 Step 5 – Consider adding hybrid elements. Don’t feel constrained to only one type of component in building your model. For instance, maybe your statistical model would benefit from having deterministic, probability, or other types of terms in it. Calibrate your deterministic model using regression or another statistical method. Be creative.
Read more about using statistics at the Stats with Cats blog. Join other fans at the Stats with Cats Facebook group and the Stats with Cats Facebook page. Order Stats with Cats: The Domesticated Guide to Statistics, Models, Graphs, and Other Breeds of Data analysis at amazon.com, barnesandnoble.com, or other online booksellers.
Posted in Uncategorized
Tagged analytical, blogs, cats, data mining, deterministic, empirical, models, numeric, probabilistic, relationships, statistical, statistics, stats with cats, stochastic, theoretical
2 Comments