PSYCH 301: Statistics

SPSS tutorial

SPSS is a powerful statistical analysis software package used by many social and behavioral researchers.  It can be a bit quirky, but once you know the basics of how SPSS is set up, you can do a wide variety of statistical analyses.  We're going to focus on t-tests, ANOVA, and correlation in this workshop.

One Sample t-test
  1. First, download the spss.zip file below.  It contains the data sets that we'll be using in this tutorial.  Unzip the file someplace you'll remember, like your desktop.
  2. Launch SPSS by going to Start>All Programs>Class Specific>Statistics>SPSS.
  3. Now, open the SPSS data file ttest1data.sav by going to File>Open>Data and navigating to where you unzipped the spss.zip file.

  4. These data are from a single sample and we want to know if the mean of the sample differs from 0.  Thus, we want to do a one-sample t-test.  To do this, select Analyze>Compare Means>One-sample t-test.  This will pull up a dialogue box that looks like this:
  5. Click on the name of our variable (tTestData) and then click on the blue arrow to move it to the Test Variables area.  The test value defaults to 0.  This is our hypothesized mean of the null distribution.  In our case, we'll leave it as 0.  Click OK to run the test.
  6. An SPSS output window will now open.  It will have a lot of information, including some descriptive statistics and the results of our t-test.  What was the mean of our sample?  The standard deviation?  Did the mean differ significantly from 0?  If so, what was the p-value?  SPSS defaults to a 2-tailed p-value.  How would you calculate the 1-tailed p-value?
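If you'd like to double-check SPSS's numbers, here is a minimal sketch of the same one-sample t-test in Python.  This is not part of the SPSS workflow; it assumes the pyreadstat package is installed (so pandas can read .sav files) and that the variable is named tTestData, as in the dialogue above.

    # One-sample t-test against a test value of 0, mirroring the SPSS analysis.
    import pandas as pd
    from scipy import stats

    data = pd.read_spss("ttest1data.sav")   # reading .sav files requires pyreadstat
    x = data["tTestData"]

    t, p_two_tailed = stats.ttest_1samp(x, popmean=0)
    print(f"mean = {x.mean():.3f}, sd = {x.std(ddof=1):.3f}")
    print(f"t = {t:.3f}, two-tailed p = {p_two_tailed:.4f}")

    # SPSS reports the two-tailed p-value; for a directional hypothesis,
    # halve it (provided t falls in the predicted direction).
    print(f"one-tailed p = {p_two_tailed / 2:.4f}")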
Paired t-test
  1. Now, let's do a paired t-test.  Open the pairedttest.sav data set.  Remember that SPSS expects every line to be a different case (or individual).  So, in this data set, we have two values (Time1 and Time2) for each individual.  We can therefore pair the two samples in a meaningful way, and a paired t-test would be appropriate if we want to know if the means of the samples differ.
  2. To run a paired t-test, go to Analyze>Compare Means>Paired Samples t-test.  A dialogue box like this will appear:
  3. You need to specify which variables you want to compare.  Do this by clicking on the first variable (Time1) and then clicking on the blue arrow, then select the second variable (Time2) and click on the blue arrow again.  When you have both variable fields populated, you can click OK to run the analysis.
  4. Again, the SPSS output window should come to the front.  If you did not close it after your first analysis, then the results of your new analysis will appear below the first.  If you did close the window, then a new one will appear.  Again, we have summary statistics, as well as some new inferential statistics.  By default, SPSS calculates the correlation between our two variables.  It also gives us a table with the results of our paired t-test.  What were the means and standard deviations of the two groups?  What was the t-statistic?  Was it significant?  What was the exact p-value?
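As a cross-check outside SPSS, a rough Python equivalent of the paired t-test looks like this (again assuming pyreadstat is installed and that the variables are named Time1 and Time2):

    # Paired-samples t-test, plus the correlation that SPSS reports by default.
    import pandas as pd
    from scipy import stats

    data = pd.read_spss("pairedttest.sav")
    time1, time2 = data["Time1"], data["Time2"]

    r, _ = stats.pearsonr(time1, time2)    # correlation between the two measurements
    t, p = stats.ttest_rel(time1, time2)   # paired t-test on the Time1 vs. Time2 scores
    print(f"r = {r:.3f}")
    print(f"t = {t:.3f}, two-tailed p = {p:.4f}")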
Independent Samples t-test
  1. Next, let's do an independent samples t-test.  Open the indsampttest.sav dataset.  Now, notice that the data are arranged a little differently than before.  Again, each line (row) is a subject (or case), but now we have two variables for each: a quantitative variable (score) and a nominal variable (group). 
  2. By default, SPSS assumes that variables are quantitative.  We need to tell it that our second variable is not quantitative, but nominal.  Go to the variable view by clicking "Variable View" at the bottom of the SPSS data window.  In this view, we can specify some things about our variables.  Scroll all the way to the right of the window and click on where it says "Scale" in the Measure column for the Group variable.  This will bring up a drop-down menu with options for calling the variable scale (or quantitative), ordinal, or nominal.  Select nominal for the Group variable and go back to the Data View.
  3. Now we're ready to do the independent samples t-test.  Do this by going to Analyze>Compare Means>Independent-Samples T Test.  This will pull up a dialogue box similar to what you've seen before.  Select Score from the list of variables to be the test variable.  Select Group to be the Grouping Variable.  Notice the question marks?  Click on Define Groups to bring up another dialogue box like this:
    Type "1" in for Group 1 and "2" in for Group 2.  Click Continue to dismiss the Define Groups dialogue box, and then click OK to run the analysis. 
  4. In the Output window, you'll now have a new pair of tables for the summary statistics and the results of the independent samples test.  SPSS runs the analysis both assuming equal variance and assuming unequal variance between our samples.  The results of the two analyses come out virtually identical in this case, which is a good indication that the two samples had equal variance (the first line of the table).  Did we have a significant difference between the means of the two groups?  Did SPSS use the correct degrees of freedom?
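For comparison, here is a sketch of the same independent-samples analysis in Python.  It assumes pyreadstat is installed, that the columns are named score and group as in the tutorial, and that group is coded numerically as 1 and 2 (adjust the names and codes to match the file).

    # Independent-samples t-test, run both ways as SPSS does,
    # plus Levene's test of the equal-variance assumption.
    import pandas as pd
    from scipy import stats

    data = pd.read_spss("indsampttest.sav")
    g1 = data.loc[data["group"] == 1, "score"]
    g2 = data.loc[data["group"] == 2, "score"]

    print(stats.levene(g1, g2))                      # equal-variance check
    print(stats.ttest_ind(g1, g2, equal_var=True))   # equal variances assumed
    print(stats.ttest_ind(g1, g2, equal_var=False))  # equal variances not assumed (Welch)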
One-way ANOVA
  1. Next, open the 1wayANOVA.sav data set.  Here we want to know if the means of the three groups differ from one another.  Again, the data are organized into a score for every individual as well as a variable that indicates the person's group (1, 2, or 3).  Again, click on the Variable View to make sure that SPSS knows that the Group variable is nominal.
  2. Run the analysis by going to Analyze>Compare Means>One-Way ANOVA.  The following dialogue box will appear:
    Score is our dependent variable and group is our factor.  Click on the Post Hoc button to select your favorite post hoc comparison.  For this exercise, choose a couple, e.g., Bonferroni, Scheffé, and LSD.  Click Continue, and then OK to run the analysis.
  3. The output from this analysis will include an ANOVA table as well as a table with the post hoc comparisons.  Did we have a significant F value?  What does that mean?  If you got a significant F, what groups differed from one another?  If you did not get a significant F, why are post hoc comparisons inappropriate?
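The same omnibus test can be sketched in Python (assuming pyreadstat is installed; the column names score and group are assumed to match the data set):

    # One-way ANOVA: omnibus F test across the three groups.
    import pandas as pd
    from scipy import stats

    data = pd.read_spss("1wayANOVA.sav")
    groups = [scores for _, scores in data.groupby("group")["score"]]

    f, p = stats.f_oneway(*groups)
    print(f"F = {f:.3f}, p = {p:.4f}")
    # Only if F is significant should you follow up with pairwise comparisons
    # (e.g., statsmodels.stats.multicomp.pairwise_tukeyhsd).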
Two-way ANOVA
  1. On to two-way ANOVAs.  Open the data set called 2wayANOVA.sav.  Now, notice that we still only have one dependent variable, but now we have two grouping variables.  Again, make sure that the grouping variables are nominal.
  2. To run the analysis, select Analyze>General Linear Model>Univariate.  A dialogue box like the following will appear:
    Score is our dependent variable and GroupA and GroupB are fixed factors. (A fixed factor is one where the levels have fixed, meaningful values, such as male and female.  A random factor is one whose levels are a sample from a larger population of possible levels.)  Click OK to run the analysis.
  3. The output for the 2-way ANOVA looks like this:
    The lines that we want to pay attention to are: GroupA (for the main effect of Group A), GroupB (for the main effect of Group B), the GroupA*GroupB interaction, Error (which is the same as Within-groups variability), and the Corrected Total.
  4. What are our results?  Are there any main effects?  Which?  Is the interaction significant?  What do these results mean?
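A rough Python counterpart to this analysis uses statsmodels (assumed to be installed, along with pyreadstat; the column names Score, GroupA, and GroupB are taken from the dialogue above):

    # Two-way ANOVA: both main effects and the GroupA x GroupB interaction.
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    data = pd.read_spss("2wayANOVA.sav")

    # C(...) treats the grouping variables as nominal factors; Sum (effect) coding
    # is used so the Type III sums of squares correspond to SPSS's GLM default.
    model = ols("Score ~ C(GroupA, Sum) * C(GroupB, Sum)", data=data).fit()
    print(sm.stats.anova_lm(model, typ=3))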
Scatterplots and Correlation
  1. Finally, open the correlation.sav data set.  For this data set, we have the (made-up) height (in inches) and weight (in pounds) of 20 men. 
  2. First, let's create a scatter plot of the data.  Do this by going to Graphs>Chart Builder.  A dialogue box will first pop up to remind you to specify the type of your variables (i.e., nominal, scale, ordinal).  Click OK to dismiss this box.  The chart builder dialogue box looks something like this:

    Select Scatter/Dot from the list of chart types on the lower left of the box.  Drag the first option (simple scatter) to the chart preview window.  Drag the height variable to one axis and the weight variable to the other axis.  Which axis is which does not matter.  Click OK to generate a scatterplot.
  3. The scatter plot will be generated in the output window.  What can you say about the relationship between the two variables?  Would you predict a correlation between the two?  If so, in what direction?
  4. Let's now go back to the data window and calculate the correlation.  Do this by going to Analyze>Correlate>Bivariate.  This will open a dialogue box like this:

    Select Height and Weight as the variables to analyze, and click OK.
  5. The output for the correlation is called a correlation matrix.  It gives you the correlation coefficient for each variable with every other variable (so you can calculate this for more than 2 variables at a time).  Is there a significant correlation between height and weight?  If so, how strong is it?  What direction does it go in?
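If you want to reproduce the plot and the correlation outside SPSS, a minimal Python sketch (assuming pyreadstat and matplotlib are installed and that the columns are named Height and Weight) is:

    # Scatter plot of height vs. weight, then the Pearson correlation.
    import pandas as pd
    import matplotlib.pyplot as plt
    from scipy import stats

    data = pd.read_spss("correlation.sav")

    plt.scatter(data["Height"], data["Weight"])
    plt.xlabel("Height (inches)")
    plt.ylabel("Weight (pounds)")
    plt.show()

    r, p = stats.pearsonr(data["Height"], data["Weight"])
    print(f"r = {r:.3f}, p = {p:.4f}")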
Attachment: spss.zip (4k), posted by Brock Kirwan, May 27, 2010.