Dependent Sample Assessment Plots (DSAP) are a way of visualizing data in the context of a dependent two-sample analysis. One way (of at least four^{1}) to think about this is in terms of pre-intervention and post-intervention response scores when studying the effects of an intervention.
Suppose you’re an educator and you administer an assessment to students at the beginning of a unit asking about their level of confidence in, or understanding of, a topic. You then teach a lesson that spans some period of time. At the end, you collect responses to the same questions again. You now have a dependent sample: two responses tied to the same individual, for each of several individuals.
           Pre  Post
Adam        22    45
Beth        33    30
Cindy       35    53
David       32    55
Elisabeth   27    40
For such a small sample, you can quickly eyeball the raw data and see that there seems to have been an upward shift in scores, but is it significant (in the statistical sense)? Could you so easily eyeball the results for a class of 20, 30, or 100 students? Probably not.
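The question of statistical significance is answered by a paired t-test. As a sketch of the arithmetic behind it (in Python here for illustration; the rest of this post does the real work in R with granova), using the five students above:

```python
from math import sqrt
from statistics import mean, stdev

pre = [22, 33, 35, 32, 27]
post = [45, 30, 53, 55, 40]

# Paired analysis: work with each student's difference, post minus pre.
diffs = [b - a for a, b in zip(pre, post)]  # [23, -3, 18, 23, 13]

d_bar = mean(diffs)                       # mean difference: 14.8
se = stdev(diffs) / sqrt(len(diffs))      # standard error of the mean difference
t = d_bar / se                            # paired t statistic, about 3.07

# The 95% two-sided critical value of t with n - 1 = 4 degrees of
# freedom is 2.776, so t above that indicates a significant shift.
print(round(d_bar, 1), round(t, 2))
```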
Data visualization is an attempt to reveal patterns in data by converting it from raw numbers to graphic images where we can more easily discern clusters (small groups of students who exhibit similar score patterns), outliers (unusually high or low scores), and effects of treatments (did the instruction result in learning?).
An assessment plot is simply a specialized scatter plot showing the pairs of values as (x, y) coordinates. When we enhance the scatter plot a little, we can gain quick insight into patterns in our data. Consider the following Dependent Sample Assessment Plots:
The plot has several features worth mentioning:
 The x-axis and y-axis use the same range (they’re on the same scale), so the plot is square.
 I’ve plotted post-assessment scores along the x-axis so that the mean difference will be positive when post-scores increase and negative when they decrease.
 The solid black line running from the lower-left to the upper-right represents x and y values that are equal: (10, 10), (20, 20), and so on; this is called the identity line. If there were no change between the pre- and post-assessment, we would expect the points to fall along this 45-degree line.
 Any point below this line represents a positive change (the score increased from the pre- to the post-assessment).
 Any point above this line represents a negative change (the score decreased from the pre- to the post-assessment).
 The horizontal, thinly-dashed red line represents the pre-assessment mean; here, about 29.
 The vertical, thinly-dashed red line represents the post-assessment mean; here, about 44.
 The thick, dashed red line running diagonally marks the mean of the differences between pre- and post-assessment scores (the difference mean); here, 14.8. That is, post-assessment scores were 14.8 points higher than pre-assessment scores, on average.
 The green bar indicates the 95% confidence interval: the range of values for the population mean difference that are plausible in light of these data.
 If the green bar overlaps the identity line, the observed difference is not statistically significant.
 Conversely, if the green bar does not overlap the identity line, the observed difference is statistically significant. (It’s up to the analyst to decide whether it’s of practical significance!)
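The endpoints of the green bar are just the usual t-based confidence interval for the mean difference. A quick Python sketch of that computation for the five-student example (granova computes the equivalent for you in R):

```python
from math import sqrt
from statistics import mean, stdev

pre = [22, 33, 35, 32, 27]
post = [45, 30, 53, 55, 40]
diffs = [b - a for a, b in zip(pre, post)]

d_bar = mean(diffs)                       # 14.8
se = stdev(diffs) / sqrt(len(diffs))
t_crit = 2.776  # t with n - 1 = 4 degrees of freedom, 95% two-sided

lo, hi = d_bar - t_crit * se, d_bar + t_crit * se
# The interval runs from about 1.4 to about 28.2. It excludes 0, so the
# green bar would not overlap the identity line: the improvement is
# statistically significant.
print(round(lo, 1), round(hi, 1))
```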
This simple visualization offers us much information quickly and scales well to samples of moderate class sizes. (See the 40-student example, below.)
Free software is available to help us generate these graphs with just a little effort on our part. R, a statistical programming environment, is available for download for Windows, Macintosh, or Linux operating systems and offers a wealth of data management, analysis, and visualization tools. Here I’ll focus on only one such tool: granova.
Granova is an abbreviation of Graphical Analysis of Variance; it is a package (also free) for use in R, written by Bob Pruzek and Jim Helmreich. In fact, the plot above was generated using granova.
In order to install and use R and granova to produce this plot, you would
 Download R
 Install R per your operating system’s usual process
 Mac users will also need to download and install Tcl/Tk
 Launch R
 Type the following commands within R

install.packages("granova", dep=TRUE)

library(granova)

x <- cbind(post=c(45, 30, 53, 55, 40), pre=c(22, 33, 35, 32, 27))

granova.ds(x)
In the future, to run your own analysis using your own pre- and post-assessment data, you would simply
 Launch R
 Type the following commands within R

library(granova)

x <- cbind(post=c(45, 30, 53, 55, 40), pre=c(22, 33, 35, 32, 27))

granova.ds(x)
replacing the numbers on the line beginning with
x <-
with your own pre- and post-assessment scores. It's important that the two lists of numbers are in matched order. That is, 22 and 45 are scores for the first student, 33 and 30 are scores for the second student, and so on. Also, notice that I've entered post-assessment scores first, so that the post-scores will appear on the x-axis.
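Because matched order matters, one defensive habit is to keep each student's scores together and split them into two lists only at the last moment. A small Python sketch of the idea (the student names are just placeholders; with granova you would build the post and pre vectors for cbind the same careful way in R):

```python
# Keep each student's (pre, post) pair together so the order can't drift.
scores = {
    "Adam":      (22, 45),
    "Beth":      (33, 30),
    "Cindy":     (35, 53),
    "David":     (32, 55),
    "Elisabeth": (27, 40),
}

# Split into two matched lists only at the end, post first to mirror
# the cbind(post=..., pre=...) call above.
post = [p for _, p in scores.values()]
pre = [q for q, _ in scores.values()]

assert len(pre) == len(post)  # one pair per student
```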
In closing, consider the data below: 40 pairs of scores, given as raw numbers and plotted with granova. Can you eyeball any trends from the raw data? What about from the plot?
Pre  Post  
21.72  50.68  
33.26  36.39  
20.41  51.55  
26.06  36.26  
33.62  32.18  
27.16  20.13  
30.38  27.90  
28.84  59.17  
33.00  34.18  
36.36  41.76  
30.36  48.92  
33.16  35.80  
31.42  43.73  
39.90  40.53  
29.90  52.50  
35.42  34.67  
29.14  38.33  
22.31  29.02  
27.96  41.19  
24.42  57.77  
23.65  28.68  
26.94  26.31  
35.09  23.09  
38.29  53.75  
39.09  50.23  
30.21  23.43  
24.78  35.14  
34.26  54.71  
31.64  20.91  
31.41  27.45  
23.84  48.05  
36.11  25.58  
37.45  59.60  
29.38  56.05  
39.72  51.28  
29.82  36.91  
31.82  21.71  
24.82  23.96  
37.80  49.52  
38.45  56.68 
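For these 40 pairs, the same arithmetic sketched earlier (again in Python for illustration; granova.ds reports the equivalent values) shows why the plot is easier to read than the raw columns:

```python
from math import sqrt
from statistics import mean, stdev

pre = [21.72, 33.26, 20.41, 26.06, 33.62, 27.16, 30.38, 28.84, 33.00, 36.36,
       30.36, 33.16, 31.42, 39.90, 29.90, 35.42, 29.14, 22.31, 27.96, 24.42,
       23.65, 26.94, 35.09, 38.29, 39.09, 30.21, 24.78, 34.26, 31.64, 31.41,
       23.84, 36.11, 37.45, 29.38, 39.72, 29.82, 31.82, 24.82, 37.80, 38.45]
post = [50.68, 36.39, 51.55, 36.26, 32.18, 20.13, 27.90, 59.17, 34.18, 41.76,
        48.92, 35.80, 43.73, 40.53, 52.50, 34.67, 38.33, 29.02, 41.19, 57.77,
        28.68, 26.31, 23.09, 53.75, 50.23, 23.43, 35.14, 54.71, 20.91, 27.45,
        48.05, 25.58, 59.60, 56.05, 51.28, 36.91, 21.71, 23.96, 49.52, 56.68]

diffs = [b - a for a, b in zip(pre, post)]
d_bar = mean(diffs)                       # mean difference, roughly 8.7
se = stdev(diffs) / sqrt(len(diffs))
t_crit = 2.023  # t with 39 degrees of freedom, 95% two-sided

lo, hi = d_bar - t_crit * se, d_bar + t_crit * se
# The interval sits well above 0, so the upward shift is statistically
# significant even though it is hard to see in the raw columns.
print(round(d_bar, 2), round(lo, 1), round(hi, 1))
```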
1. See Pruzek and Helmreich's paper, "Enhancing Dependent Sample Analyses Using Graphics," Journal of Statistics Education, Volume 17, Number 1 (2009). ↩