Using Tests in the Employment Process

What is a test?

While we commonly think of a test as either a series of questions to be answered and scored or a set of actions to be performed and scored, from the EEO perspective the definition of a test is much broader. Any part of a selection procedure in which candidates are considered and some are chosen while others are not is subject to the Uniform Guidelines on Employee Selection Procedures (UGESP) (41 CFR 60-3) and is, for all intents and purposes, a test. This includes interviews and pre-employment screenings. For example, if a receptionist is instructed to make sure that candidates can speak understandable English, and she uses her judgment of how easily the candidates communicate with her to decide which ones will move forward in the selection process, this is a test, even if the receptionist did not think of herself as administering one.



What is the significance of being a test?

It is important to know what constitutes a test because tests that screen out applicants on the basis of race, color, religion, gender, or national origin (bases covered by Executive Order 11246) must be validated as required by UGESP. UGESP does not apply to Section 503 of the Rehabilitation Act, which prohibits discrimination on the basis of disability, or to Section 4212 of the Vietnam Era Veterans' Readjustment Assistance Act (VEVRAA), which prohibits discrimination on the basis of protected veteran status. In Executive Order 11246 cases, compliance with UGESP is the only way to show that a test is job related and consistent with business necessity. It is not necessary to show that the test was crafted or administered with the intent to discriminate; it is sufficient to show that it disproportionately screens out the affected class at statistically significant levels and that it does not satisfy the validation requirements of UGESP.
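For a sense of what "statistically significant levels" means in practice, adverse impact is typically assessed by comparing selection (or pass) rates across groups. The sketch below is a minimal, hypothetical Python illustration that applies the UGESP four-fifths guideline alongside a simple two-proportion z-test; the numbers and the function name are invented for illustration, and OFCCP's own analyses may use different statistical methods.

    # A minimal sketch (hypothetical numbers) of two common checks for adverse
    # impact in a selection step: the UGESP four-fifths guideline and a
    # two-proportion z-test for statistical significance.
    from statistics import NormalDist

    def adverse_impact_check(passed_a, total_a, passed_b, total_b):
        """Group A = affected class, Group B = comparison group."""
        rate_a = passed_a / total_a          # selection rate for the affected class
        rate_b = passed_b / total_b          # selection rate for the comparison group

        # Four-fifths guideline: an affected-class rate below 80% of the
        # comparison rate is generally regarded as evidence of adverse impact.
        impact_ratio = rate_a / rate_b

        # Two-proportion z-test: is the difference in rates statistically significant?
        pooled = (passed_a + passed_b) / (total_a + total_b)
        se = (pooled * (1 - pooled) * (1 / total_a + 1 / total_b)) ** 0.5
        z = (rate_a - rate_b) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-tailed p-value

        return impact_ratio, z, p_value

    # Hypothetical example: 30 of 100 affected-class candidates pass, 60 of 100 others.
    ratio, z, p = adverse_impact_check(30, 100, 60, 100)
    print(f"impact ratio={ratio:.2f}, z={z:.2f}, p={p:.4f}")

In this invented example the impact ratio is 0.50 (well below four-fifths) and the difference is statistically significant, so the test would have to be defended with validation evidence.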



Why does OFCCP reject some validation studies?

All test validation evidence is sent to the National Office to be examined by OFCCP's testing experts. The validation supporting the test has to comply with the requirements of UGESP. Following are a number of flaws that have resulted in the rejection of test validation studies.

  • Age of the study -- A validation study that is too old is not likely to satisfy OFCCP. A study that is, for example, 20 years old is not likely to be considered sufficient. It will not have taken into account changes in how the job is performed over that span of years, and it will not have considered less discriminatory alternatives that have become available in the years since the validation.

  • No scoring standard -- A validation study that fails to establish a scoring standard, or a test that is administered without the scoring standard or contrary to it, will not likely pass muster under UGESP. The purpose of the scoring standard is to help the company determine whether the test taker is proficient in the subject being tested, or at least shows an enhanced likelihood of performing successfully in the job for which the test is given. If scores are subject to whim or impromptu adjustment, the test will not be a reliable indicator of whether the test taker possesses the attributes, knowledge, or skills that the test was created to detect.

  • No job analysis -- A validation study needs to be related to the position for which the test is being used. OFCCP does not tend to accept "off-the-shelf" tests and validation studies because they were not created to determine the likelihood of success in the particular job under review; generalized validations are frowned upon. A validation study is much more likely to pass muster if the test and its validation were created by examining the particular job for which the test is to be used. Keep in mind that OFCCP only reviews a test's validation if the test is disproportionately screening out a particular group, and validation is the only way to show that the test is job related and consistent with business necessity, that is, meaningfully related to successful performance on the job. The closer the test designers look at what the particular job requires, the more likely the test measures those attributes and not other, extraneous traits.

  • The test is used inconsistently with the terms of the validation study -- For example, a validation study may be prepared for a three-part test. If the employer gives only one or two parts of the test and bases its selection on the outcome, the validation study will not satisfy the UGESP standard, since it validated the test as given with all three parts, not just one or two.

  • The test is used for a job other than the one for which it was validated -- Sometimes a company will validate a test for a specific job and then use the test for a different job, thinking that since it was validated, it must be okay to use. This is problematic because it combines two flaws: there is no analysis of the particular job for which the test is actually used, and the test is administered for a purpose for which it was not validated.

  • Testing for more than the job requires -- Sometimes a contractor will administer a test for an entry-level job that tests for the ability to perform a potential promotion position. Whether this kind of test is valid depends on the relationship between the entry-level job and the potential promotion position. These kinds of tests have not been considered properly validated when the turnover rate of the entry-level position was such that very few entry-level hires were ever faced with performing the promotion position. The lower the likelihood that the entry-level person will be required to perform the promotion-level job, the lower the likelihood that such a screening device will be found acceptable by the agency. As a general rule of thumb, a test should measure aptitude for the position being sought, not some subsequent position.

  • No search for less discriminatory alternatives -- Part of the validation requirement is that the company look for alternatives to the test that meet the legitimate needs of the contractor but do not disproportionately screen out the affected class, or at least screen it out at a lesser rate. The contractor should be able to describe what efforts it made to determine that there were no less discriminatory alternatives to the test it seeks to administer. This may involve determining whether there are other tests that have less of an adverse impact on the affected class, whether experience requirements could substitute for the testing requirement, or other alternatives.


What happens if you have no validation study?

A validation study is the only way to show that a test with adverse impact is job related and consistent with business necessity, and thus not discriminatory. Usually, if a company has made up a test and not had it validated, it will be up the proverbial creek without a paddle if that test has adverse impact. Companies do make up tests. Sometimes this is deliberate; in other words, they knowingly engage in creating a test. In one case, an ambulance company created a test to determine whether candidates could safely load patients into the ambulance. It essentially had its skilled ambulance workers devise a performance test and did not have it validated.

Other times, because contractors are not clear on what a test is, they inadvertently create a test that they do not validate because they do not recognize it as a test. For example, a committee may be formed to encourage talent mobility within an organization, and the committee members may come up with criteria for selecting qualified applicants for the program. They score the applicants according to the criteria and select some for upward mobility opportunities. This committee has created a test, but because the members do not perceive themselves as having created a test, they do not perceive the need for validation, and they proceed to use the selection procedure without validation. In either case, whether the testing is intentional or unintentional, the absence of a validation study will pose a serious hurdle if the test disproportionately screens out an affected class.



If a test has not been validated before it is administered, it is possible to validate it after the fact. However, this is much more difficult than validating the test before it is administered. If the test is validated and then administered, the test administrators and users know the standards they must follow to ensure that the test matches the UGESP requirements. One of the challenges of post facto validation is that the validation study has to validate the test as it was administered at the time of the observed selection disparity, and that situation may be difficult to recreate by the time the test comes under review. Companies usually do not deliberately put themselves in this situation, but it has come up. If you have parallel positions elsewhere for which a validation study was done, and if the testing standard was consistent with the administration of the test at the facility under review, it may be possible to demonstrate that the test is valid; however, differences in the circumstances and standards of administration, or in the job duties, may make this defense difficult, if not impossible, to establish. It is highly unlikely that a test created by company personnel who are not professional test developers will be successfully defended by a post facto validation study.



If you are administering a test that has not had a validation study and you have not been scheduled for a review, I recommend that you have your test validated. Both paper-and-pencil tests (question-and-answer tests) and physical demonstration tests (performance tests) require validation. You can contact local universities for recommendations of experts who are qualified to validate your test. Make sure that the experts you engage are willing to observe the job, evaluate the scoring standards, and otherwise satisfy the UGESP requirements. Do not wait for a compliance review to take this step. While it would be handy, OFCCP will not give you a seal of approval on your test or validation outside of the compliance review process. Some research into validation studies that have been rejected by OFCCP may be helpful when choosing an expert; it may give you some idea of what to avoid in validation studies.



What if you are no longer using the test under review?

OFCCP will investigate a test that had adverse impact during the review period even if the company is no longer using the test by the time that the review is scheduled. The fact that you are no longer using the test tends to undermine the argument that the test was job related and consistent with business necessity. Also, if you have substituted a new test that does not have adverse impact, it may show that there was a less discriminatory alternative that you could have used instead of the test that caused the adverse impact. If you adopted the new test because you were looking for a less discriminatory alternative, be sure to document this fact. It is best to update your testing procedures by moving from one valid test to an even better, also valid, test.



How are you notified that a test is problematic?

If you are monitoring your testing process, your internal audit procedures should alert you to problems in the pass rates of the test. If they do not, your first clue might be OFCCP's request for validation studies. OFCCP should only be requesting validation evidence if the test disproportionately screens out an affected class of people. If there is no adverse or disparate impact, the test does not need to be defended.
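As a rough illustration of what such internal monitoring might look like, the sketch below computes pass rates by group for each test in a hypothetical applicant-flow log and flags any test whose impact ratio falls below the four-fifths guideline. The column names, data, and threshold shorthand are illustrative assumptions, not an OFCCP-prescribed procedure; a real audit would also consider statistical significance and adequate sample sizes.

    # A hedged sketch of an internal audit pass over applicant-flow records,
    # assuming a hypothetical log with "test", "group", and "passed" columns.
    import pandas as pd

    records = pd.DataFrame({
        "test":   ["lift_test"] * 6 + ["typing_test"] * 6,
        "group":  ["A", "A", "A", "B", "B", "B"] * 2,
        "passed": [1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0],
    })

    for test, data in records.groupby("test"):
        rates = data.groupby("group")["passed"].mean()   # pass rate per group
        ratio = rates.min() / rates.max()                # impact ratio: lowest vs. highest rate
        flag = "review" if ratio < 0.8 else "ok"         # four-fifths guideline
        print(f"{test}: pass rates={rates.to_dict()}, impact ratio={ratio:.2f} -> {flag}")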



Officially, you may be notified of a testing violation either through a Predetermination Notice (PDN) or a Notice of Violation (NOV). The Regional and District Offices receive a written evaluation of the validation evidence from the National Office; this document is usually not passed along to the contractor. The contractor customarily receives a summary explanation of the problems that have been identified with the validation evidence submitted. It is very common for the contractor to then ask for a meeting so that its testing expert can talk with the OFCCP testing expert, and these meetings can be arranged. At the meeting, the contractor can ask in more detail about the rationale for OFCCP's conclusions. Sometimes the contractor finds that the OFCCP result can be reevaluated if more information is provided; if so, the additional information is submitted and OFCCP looks at the validation question again. The Regional and District Offices defer to the OFCCP testing experts on the question of test validation, so requesting this meeting does more good for the contractor than harm. Whatever conclusion the expert finally reaches on the validation of the test will be adopted by the field.



If you are administering a discriminatory test at one facility, make sure that facilities not under review are not using it. Once you are on notice that a test is potentially problematic, continuing to use it for the same jobs elsewhere in your organization will be viewed as more egregious should that test again produce adverse impact.



What happens if the validation study satisfies UGESP?

If the test is supported by a validation study that satisfies UGESP, you can continue to use it despite the adverse or disparate impact on the affected class. However, you must continue to use it in the way that was validated, not for some other purpose.



Summary



Tests can be a useful and impartial way to determine who is the best candidate for a given position, but only if they accurately measure the attributes, skills, and knowledge that are predictive of success in the job. The purpose of validation studies is to ensure that this is the case. Here are some reminders and tips to help you use tests successfully.

  • Train all staff participating in employment selection processes (of any kind) on what a test is and how a test must be treated.

  • Validate all tests, including exploring less discriminatory alternatives. (UGESP includes provisions for validating scored and unscored tests.)

  • Monitor the pass rates of tests to determine if there is disparate or adverse impact.

  • Make sure that your test vendors are familiar with the requirements of OFCCP and UGESP.

  • Be skeptical of "off-the-shelf" tests and claims of general validation made without any firsthand knowledge or observation of the position in question.

  • Regularly update your test validation consistent with the recommendations of industry experts.

  • Take advantage of the opportunity to discuss potential violations with the OFCCP experts.

  • Make sure that a test found discriminatory at one facility is not being administered elsewhere in your organization.


Carefully following the UGESP requirements will ensure that your tests serve the legitimate business purposes they were meant to serve without leading you into a discrimination violation you never anticipated.