Starting on August 2, Hogan will be making a small update to the Validity statement that appears on some of our reports. The new statement will say, “This report shows a regular assessment pattern,” or “This report shows an irregular assessment pattern.” Here’s why we’re making this change.
The defining feature of any Hogan report is its accuracy. Of course, the accuracy of any Hogan report depends on the quality of the assessment data used to generate it. If a test taker is illiterate, takes the assessment in the wrong language, is careless, or is otherwise inattentive while taking the assessment, the resulting report will be inaccurate. The Validity scale of the HPI was originally designed to detect such responding.
The logic behind the Validity scale is simple. It consists of 14 items that have very high (i.e., 90% or higher) endorsement rates in the general working population. Because most people endorse these items in the same way, it is unusual for attentive test takers to get a low score. While test takers who are inattentive will score low on the Validity scale, a low score does not always mean the test taker was inattentive.
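For readers who like to think in concrete terms, the flagging logic can be sketched roughly as follows. This is only an illustration: the 14-item count comes from the description above, but the item content and the cut score shown here are hypothetical placeholders, not Hogan's actual scoring.

    # Illustrative sketch only. The Validity scale's actual items and cut score are
    # proprietary; NUM_ITEMS comes from the description above, but the threshold is
    # a hypothetical placeholder.

    NUM_ITEMS = 14                 # items with ~90%+ endorsement in the working population
    HYPOTHETICAL_CUT_SCORE = 11    # assumed number of expected responses needed to "pass"

    def validity_statement(expected_responses: int) -> str:
        """Return the report statement based on how many of the 14 items were
        answered in the commonly endorsed direction."""
        if expected_responses >= HYPOTHETICAL_CUT_SCORE:
            return "This report shows a regular assessment pattern."
        return "This report shows an irregular assessment pattern."

    print(validity_statement(14))  # regular pattern
    print(validity_statement(6))   # irregular pattern: inattentive OR attentive but unusual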
In the early days of the Hogan business, our assessments were often administered to industrial workers, students, and incarcerated individuals. In these populations, we sometimes encountered inattentive responders, and the Validity scale would correctly flag these reports (although it would occasionally flag attentive but unusual responders as well).
As our business has grown, our client base has shifted. Most test takers today recognize the value of completing a Hogan assessment, and we now find that nearly all of them are attentive. This change in the base rate of attentiveness means flagged Validity scores rarely reflect inattentiveness and more commonly reflect an irregular response pattern. For example, in the past, most people who disagreed with the statement “I do the best I possibly can at my job” were simply inattentive to the question. Today, however, most people who disagree with that statement do so intentionally and for specific reasons.
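To see why the base rate matters, consider a simple back-of-the-envelope calculation. The numbers below are purely hypothetical and are chosen only to show the direction of the effect; they do not reflect Hogan's actual data.

    # Purely hypothetical numbers to illustrate the base-rate effect; they do not
    # reflect Hogan's actual data.

    def p_inattentive_given_flag(p_inattentive, p_flag_if_inattentive, p_flag_if_attentive):
        """Bayes' rule: probability that a flagged report came from an inattentive test taker."""
        p_flag = (p_inattentive * p_flag_if_inattentive
                  + (1 - p_inattentive) * p_flag_if_attentive)
        return p_inattentive * p_flag_if_inattentive / p_flag

    # Earlier era: suppose 20% of test takers were inattentive.
    print(p_inattentive_given_flag(0.20, 0.90, 0.02))  # ~0.92: a flag usually meant inattention

    # Today: suppose only 1% are inattentive.
    print(p_inattentive_given_flag(0.01, 0.90, 0.02))  # ~0.31: most flags now come from attentive
                                                       # test takers with unusual response patterns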
In light of this trend, we are clarifying our guidance for interpreting the Validity scale. Although a high proportion of those failing the Validity scale in the past were inattentive, that is simply not the case today. When an attentive test taker fails the Validity scale, the report is perfectly accurate and interpretable. Therefore, we recommend the Validity scale be viewed as a “caution” sign rather than the “stop” sign some have treated it as in the past.
How the assessment is being used is also important to consider when reviewing the Validity scale. For selection cases, we recommend the report be interpreted “as is,” with no opportunity to reassess. We recommend this for fairness: giving some candidates a second opportunity to complete the assessment is not fair to the rest of the candidate pool.
For development cases, we recommend that the person providing the debrief determine whether the test taker was inattentive or simply responding in an unusual manner. Of course, this determination must be made delicately, without suggesting the participant responded in a way that was wrong. We are all different and navigate our social worlds in different ways. It is OK to tell the test taker that the response pattern was somewhat rare and that, as a result, you want to confirm the test taker was attentive and purposeful when taking the assessment. Once it is confirmed the test taker was attentive, the report is considered valid and interpretable. However, if you learn that the assessment was taken under circumstances that would render the report inaccurate (e.g., in the wrong language or with careless responding), the test taker should reassess before any interpretation is provided.
For more information about the Validity scale, please review our Validity Scale FAQ document. For further questions, please reach out to your PBC contact.