DISCStyles Assessment™
January 2007

Examination of Reliability and Validity


Executive Summary

We provide this document as a tool for end-users of the DISCStyles Assessment™, to allow comparisons between it and other four-dimensional models in the marketplace. Many models use the four-dimensional design, and most are based on the initial work of Carl Jung. There are many web-based assessment designs and a declining number of robust paper-based instruments. Yet the need for paper-based instruments remains strong, and they offer several advantages over web-based offerings. At meetings and conferences, it is logistically difficult to get all participants to log onto a website to complete their responses, and there are always some 'walk-ins' who want to participate but have not completed the online assessments. It is also helpful to have some basic information about the entire design, as well as some 'intelligence' about style patterns different from one's own. The DISCStyles Assessment™ helps to answer those needs.

This document serves as a base from which to compare other four-dimensional models, and from which users of the DISCStyles Assessment™ can compare their own statistical work. (Users are encouraged to send copies of any statistical analyses they conduct to the HRD Press office. We will assemble a library of statistical reports and, where permission is granted, make them available to other users.) The analysis covers more than 5,000 respondents and a DISC instrument originally developed in 1980, with revisions in 1984, 1989, and 2001. The data in this report were obtained after 2001, making this one of the few paper-based DISC instruments built exclusively on 21st-century data.

All DISC instruments, and most similar instruments, are ipsative in design. That is, they are self-report inventories that measure qualities (traits) as individuals perceive those traits within themselves, and they ask the respondent to choose one trait to the exclusion of the others. The result is not an absolute set of scores that would easily fit in a normative field, but rather a relative set of scores that applies to an individual's self-perception. The success of all self-report instruments depends on the candor, honesty, and insight of the respondent. We provide the essential types of statistical analysis herein, and we caution the reader against over-analyzing ipsative data: some companies produce many pages of tables that apply normative statistical rules to ipsative data. DISC instruments do not measure quantities like cholesterol levels or blood pressure, but rather qualities that individuals report about themselves. One person might perceive him- or herself as possessing "high D" qualities, whereas peers might perceive otherwise.

Scale Characteristics

The DISCStyles Assessment™ is a forced-choice instrument like many others in the marketplace. Variations among such instruments come largely from the specific words chosen. At HRD Press, we have conducted robust factor analysis on the words selected and have attempted to control for typical instrument variability, such as the social desirability of certain word choices. This assists in building an instrument with a higher degree of construct reliability.

Descriptive Statistics

Descriptive statistics describe a set of data in comparable terms. Most essential are the mean and the standard deviation, which show the audience the arithmetic average (central tendency) and the spread of scores. N for the sample below is 2,157 cases.

Scale            Mean   Std. Dev.
Work Style D     4.43   2.72
Work Style I     4.57   2.10
Work Style S     6.35   2.82
Work Style C     4.92   2.11
Basic Style D    8.01   3.27
Basic Style I    3.57   2.44
Basic Style S    4.46   2.07
Basic Style C    5.79   2.36
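As an illustration of the two statistics reported above, the following sketch computes a mean and sample standard deviation; the scores are hypothetical and are not drawn from the DISCStyles respondent data.

```python
import statistics

# Hypothetical raw scale scores -- for illustration only,
# not actual DISCStyles respondent data.
scores = [4, 7, 2, 6, 5, 3, 8, 4]

mean = statistics.mean(scores)   # arithmetic average (central tendency)
sd = statistics.stdev(scores)    # sample standard deviation (spread)
print(round(mean, 3), round(sd, 2))  # prints: 4.875 2.03
```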

Reliability

Reliability is a measure of the internal consistency of an instrument or assessment. Reliability computed on an identical set of data varies because of the wide array of specific formulas that can be applied to that data. Reliability is a common statistic, and one that should always be considered when evaluating instruments of any type. Realize also that reliability is rather easy to obtain if care is taken in the design of an instrument; additionally, the reliability of any instrument can be increased simply by increasing the number of items. Several measures of reliability are explained briefly below.

Test-Retest Reliability

This is a measure of the consistency of scores within the same population when re-tested with the identical instrument. With academic tests, a re-test may yield a higher score because the respondent learned additional content from the first administration. When this statistic is applied, it is important to know the time interval between the two administrations of the instrument. With a self-report inventory such as DISCStyles™, the statistic is less corrupted by subsequent administrations, since there is no content to learn. That said, remember that all self-report instruments are subject to the respondent's level of honesty, integrity, and insight.

Alternate-Form Reliability

When the subject takes two similar (but not identical) forms of the instrument, this is alternate-form reliability. As with test-retest methods, the time delay between the first and second administrations should be reported along with the statistics, and similar considerations apply.

Split-Half Reliability

This method involves only one administration of an instrument. It then 'splits' the instrument in half, using the odd-numbered items as one set of scores and the even-numbered items as the second set, and correlates the two sets of variables. The advantage of this method is that subjects take only one instrument, which removes the passage of time as an extraneous variable.
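The odd/even split can be sketched as follows. The helper function is the standard Pearson correlation formula, and the response matrix is hypothetical; neither is part of the actual DISCStyles scoring key.

```python
# Split-half reliability: correlate odd-numbered item totals with
# even-numbered item totals from a single administration.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# One row per respondent, one column per item (hypothetical 6-item scale).
responses = [
    [4, 3, 5, 4, 2, 3],
    [2, 2, 3, 2, 1, 2],
    [5, 4, 5, 5, 4, 4],
    [3, 3, 4, 3, 2, 3],
]
odd_totals = [sum(row[0::2]) for row in responses]    # items 1, 3, 5
even_totals = [sum(row[1::2]) for row in responses]   # items 2, 4, 6
r_half = pearson_r(odd_totals, even_totals)
```

In practice the split-half coefficient is often then stepped up with the Spearman-Brown formula to estimate full-length reliability.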

Kuder-Richardson Reliability

Like the split-half technique, this method uses only one administration of the instrument. It measures the consistency of all items on an instrument by calculating the mean of all split-half coefficients based on a variety of splits of the instrument.
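For dichotomous (0/1) items, this is usually computed directly with the Kuder-Richardson formula 20 (KR-20) rather than by enumerating splits. The sketch below uses the sample-variance (n-1) convention and a hypothetical response matrix; some texts use the population variance instead.

```python
def kr20(items):
    """KR-20 for dichotomous (0/1) items; rows are respondents."""
    k = len(items[0])
    n = len(items)
    # Proportion answering 1 on each item.
    p = [sum(row[j] for row in items) / n for j in range(k)]
    totals = [sum(row) for row in items]
    m = sum(totals) / n
    var_total = sum((t - m) ** 2 for t in totals) / (n - 1)
    return k / (k - 1) * (1 - sum(pj * (1 - pj) for pj in p) / var_total)

# Hypothetical 5 respondents x 4 dichotomous items.
items = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]
kr = kr20(items)
```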

Spearman-Brown Reliability

This method employs a formula that projects the reliability of an instrument with double the number of items; as a result, it usually yields higher reliability coefficients. Critics therefore say that it artificially raises the reliability of an instrument.
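The prophecy formula itself is small: given the reliability r of the half-length test, the projected reliability of a test doubled in length is 2r / (1 + r). A minimal sketch:

```python
# Spearman-Brown prophecy formula for a test doubled in length.
def spearman_brown(r):
    return 2 * r / (1 + r)

# A half-test reliability of .60 projects to .75 at full length.
print(spearman_brown(0.60))
```

This is why the coefficient is typically higher than the raw split-half correlation it is applied to.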

Correlations

The purpose of a correlation is to display the level of correspondence, or co-relationship, between two variables. An item or trait correlated against itself yields a perfect correlation of 1.0, the highest value on the scale. A completely opposite relationship yields a coefficient of -1.0, a perfect inverse or negative correlation. Scores with no co-relationship at all show a coefficient at or near zero. All correlations thus fall on a spectrum from +1.0 through zero to -1.0: the closer a coefficient is to zero, the weaker the correlation, and the closer it moves toward +1.0 or -1.0 in either direction, the stronger the correlation becomes. The reader should note that there is no agreed-upon table in the world of statistics that 'grades' a correlation as weak or strong in absolute, definitive terms. As a result, specific commentary by researchers may vary with regard to what they consider 'strong' or 'weak' correlations.
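The endpoints of that spectrum are easy to demonstrate with Pearson's r on toy data. The helper below is the standard textbook formula, not code from the DISCStyles scoring system.

```python
# Pearson's r: covariance of x and y divided by the product of
# their standard deviations.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

x = [1, 2, 3, 4, 5]
r_self = pearson_r(x, x)                 # ~ +1.0, perfect positive
r_inverse = pearson_r(x, [-v for v in x])  # ~ -1.0, perfect inverse
```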

Cronbach's Alpha

This technique is regarded as one of the most robust measures of reliability, and presents the highest 'bar' against which to compare. The reader should note that Cronbach's alpha is the method selected by HRD Press authors and researchers for this DISCStyles™ instrument because of its high standards. The reader is encouraged to compare the reliability coefficients presented herein with those of other vendors, and also to ask those vendors which reliability formulas they used to compute their coefficients.
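For users who wish to replicate this statistic on their own data, the coefficient can be sketched as follows. This uses the sample-variance (n-1) convention, and the response matrix is hypothetical; it is not the HRD Press computation itself.

```python
def cronbach_alpha(items):
    """Cronbach's alpha; rows are respondents, columns are items."""
    k = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[j] for row in items]) for j in range(k)]
    total_var = var([sum(row) for row in items])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical 4 respondents x 3 items on a Likert-type scale.
items = [
    [2, 3, 3],
    [4, 4, 5],
    [1, 2, 2],
    [3, 3, 4],
]
alpha = cronbach_alpha(items)
```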


Cronbach's Alpha Reliability Table

Scale            Cronbach's Alpha
Work Style D     .78
Work Style I     .76
Work Style S     .74
Work Style C     .70
Basic Style D    .79
Basic Style I    .76
Basic Style S    .61
Basic Style C    .72

Spearman's Rank Order Correlations

N=2,157

Scale           WD              WI              WS              WC              BD              BI              BS             BC
Work Style D    1.000
Work Style I    -.035 (.454)    1.000
Work Style S    -.653** (.000)  -.351** (.000)  1.000
Work Style C    -.168** (.001)  -.342** (.000)  -.078 (.135)    1.000
Basic Style D   -.638** (.000)  -.139** (.007)  .557** (.000)   -.137** (.009)  1.000
Basic Style I   .177* (.016)    -.451** (.000)  .109* (.035)    -.139** (.005)  -.169** (.000)  1.000
Basic Style S   .463** (.000)   .213** (.000)   -.531** (.000)  -.139** (.005)  -.661** (.000)  -.245** (.000)  1.000
Basic Style C   .425** (.000)   .385** (.000)   -.415** (.000)  -.255** (.000)  -.560** (.000)  -.401** (.000)  .371** (.000)  1.000

Cells show the correlation coefficient with the two-tailed significance in parentheses.

* Statistically significant at .05 alpha level (two-tailed).
** Statistically significant at .01 alpha level (two-tailed).

Pearson's Correlation

N=2,157

Scale           WD              WI              WS              WC              BD              BI              BS             BC
Work Style D    1.000
Work Style I    -.095* (.054)   1.000
Work Style S    -.636** (.000)  -.354** (.000)  1.000
Work Style C    -.201** (.000)  -.349** (.000)  -.099* (.047)   1.000
Basic Style D   -.613** (.000)  -.117** (.021)  .540** (.000)   -.151** (.003)  1.000
Basic Style I   .119* (.019)    -.442** (.000)  .108* (.031)    .274** (.000)   -.179** (.000)  1.000
Basic Style S   .453** (.000)   .229** (.000)   -.523** (.000)  -.143** (.003)  -.659** (.000)  -.272** (.000)  1.000
Basic Style C   .419** (.000)   .371** (.000)   -.439** (.000)  -.299** (.000)  -.579** (.000)  -.411** (.000)  .394** (.000)  1.000

Cells show the correlation coefficient with the two-tailed significance in parentheses.

* Statistically significant at .05 alpha level (two-tailed).
** Statistically significant at .01 alpha level (two-tailed).

The above information was determined from a random sample of 3,010 individuals across a wide spectrum of US businesses for the years 2001-2002. Other specific statistical investigations have also been conducted. These include:

  • ANOVA
  • Two-way mixed effect model (for internal consistency measures)
  • Hotelling’s T-Squared
  • AMOS Structural equation factor modeling
  • Factor Analysis
    • Extraction: Principal Component Analysis
    • Varimax Rotation method with Kaiser Normalization
  • Correlations:
    • Cronbach’s Alpha (table above)
    • Spearman’s rho
    • Item-to-item correlation matrix
    • Intraclass correlation coefficients

"DISCmastery is a trademark of Dr. Tony Alessandra. Used with permission."
"The Platinum Rule® is a registered trademark of Dr. Tony Alessandra.
Used with permission."

