Equating

From Wikipedia, the free encyclopedia

Test equating traditionally refers to the statistical process of determining comparable scores on different forms of an exam.[1] It can be accomplished using either classical test theory or item response theory.

In item response theory, equating[2] is the process of placing scores from two or more parallel test forms onto a common score scale. The result is that scores from two different test forms can be compared directly, or treated as though they came from the same test form. When the tests are not parallel, the general process is called linking. Equating aligns the units and origins of two scales on which the abilities of students have been estimated from results on different tests; the process is analogous to equating degrees Fahrenheit with degrees Celsius by converting measurements from one scale to the other. The determination of comparable scores is a by-product of equating the scales obtained from test results.
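The temperature analogy can be made concrete: converting between scales is a linear transformation that matches the unit (slope) and the origin (intercept), the same kind of transformation used to place two test scales on a common metric. A minimal sketch:

```python
# Converting between temperature scales is a linear change of unit and
# origin -- analogous to aligning the units and origins of two test scales.

def celsius_to_fahrenheit(c):
    """Re-express a Celsius measurement on the Fahrenheit scale."""
    return 9.0 / 5.0 * c + 32.0  # unit ratio 9/5, origin shift 32

print(celsius_to_fahrenheit(100.0))  # water boils: 212.0
```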

Purpose


Suppose that Dick and Jane both take a test to become licensed in a certain profession. Because the high stakes (you get to practice the profession if you pass the test) may create a temptation to cheat, the organization that oversees the test creates two forms. If we know that Dick scored 60% on form A and Jane scored 70% on form B, do we know for sure which one has a better grasp of the material? What if form A is composed of very difficult items, while form B is relatively easy? Equating analyses are performed to address this very issue, so that scores are as fair as possible.

Equating in item response theory

Figure 1: Test characteristic curves showing the relationship between total score and person location for two different tests in relation to a common scale. In this example a total of 37 on Assessment 1 equates to a total of 34.9 on Assessment 2, as shown by the vertical line.

In item response theory, person "locations" (measures of some quality being assessed by a test) are estimated on an interval scale; i.e., locations are estimated in relation to a unit and origin. It is common in educational assessment to employ tests in order to assess different groups of students with the intention of establishing a common scale by equating the origins, and when appropriate also the units, of the scales obtained from response data from the different tests. The process is referred to as equating or test equating.

In item response theory, two different kinds of equating are horizontal and vertical equating.[3] Vertical equating refers to the process of equating tests administered to groups of students with different abilities, such as students in different grades (years of schooling).[4] Horizontal equating refers to the equating of tests administered to groups with similar abilities; for example, two tests administered to students in the same grade in two consecutive calendar years. Different tests are used to avoid practice effects.

In terms of item response theory, equating is just a special case of the more general process of scaling, applicable when more than one test is used. In practice, though, scaling is often implemented separately for different tests and then the scales subsequently equated.

A distinction is often made between two methods of equating: common-person and common-item equating. Common-person equating involves administering two tests to a common group of persons; the mean and standard deviation of the group's scale locations on the two tests are equated using a linear transformation. Common-item equating involves embedding a set of common items, referred to as the anchor test, in two different tests; the mean item location of the common items is equated.
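The common-person linear transformation can be sketched as follows. This is an illustrative single-group example with hypothetical person locations (in logits); the function name and data are ours, not part of any standard library.

```python
import statistics

def common_person_equate(locs_test1, locs_test2):
    """Given person locations for the SAME group on two tests, return
    (slope, intercept) mapping Test-2 locations onto the Test-1 scale
    by matching the group's mean and standard deviation."""
    m1, s1 = statistics.mean(locs_test1), statistics.stdev(locs_test1)
    m2, s2 = statistics.mean(locs_test2), statistics.stdev(locs_test2)
    slope = s1 / s2                 # equate the units
    intercept = m1 - slope * m2     # equate the origins
    return slope, intercept

# Hypothetical locations for one group of five persons on two tests
test1 = [-1.2, -0.4, 0.0, 0.5, 1.1]
test2 = [-0.8, 0.0, 0.4, 0.9, 1.5]
a, b = common_person_equate(test1, test2)
equated = [a * x + b for x in test2]  # Test-2 locations on the Test-1 scale
```

After the transformation, the group's mean and standard deviation on the equated Test-2 scale match those on Test 1 by construction.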

Classical approaches to equating


In classical test theory, mean equating simply adjusts the distribution of scores so that the mean of one form is comparable to the mean of the other. While mean equating is attractive because of its simplicity, it lacks flexibility: in particular, it cannot account for the possibility that the standard deviations of the two forms differ.[1]
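Mean equating amounts to shifting one form's scores by the difference in form means. A minimal sketch with hypothetical raw scores (the function name and data are ours):

```python
import statistics

def mean_equate(scores_a, scores_b):
    """Shift Form B scores by the difference in form means so that the
    adjusted Form B mean matches the Form A mean."""
    shift = statistics.mean(scores_a) - statistics.mean(scores_b)
    return [s + shift for s in scores_b]

form_a = [55, 60, 65, 70, 75]   # hypothetical raw scores on Form A
form_b = [50, 58, 62, 66, 74]   # hypothetical raw scores on Form B
adjusted_b = mean_equate(form_a, form_b)  # means now agree; spreads may not
```

Note that only the means are matched: if Form B's scores are more spread out than Form A's, mean equating leaves that difference untouched, which is exactly the inflexibility described above.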

Linear equating adjusts so that the two forms have a comparable mean and standard deviation. There are several types of linear equating that differ in the assumptions and mathematics used to estimate parameters. The Tucker and Levine Observed Score methods estimate the relationship between observed scores on the two forms, while the Levine True Score method estimates the relationship between true scores on the two forms.[1]
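A simple single-group version of linear equating matches both mean and standard deviation by z-score equivalence. This sketch omits the extra assumptions the Tucker and Levine methods add for nonequivalent-group designs; the function name and data are ours.

```python
import statistics

def linear_equate(score_b, scores_a, scores_b):
    """Map a Form B score to the Form A scale by matching both the
    mean and the standard deviation of the two forms."""
    m_a, s_a = statistics.mean(scores_a), statistics.stdev(scores_a)
    m_b, s_b = statistics.mean(scores_b), statistics.stdev(scores_b)
    z = (score_b - m_b) / s_b   # standardize relative to Form B
    return m_a + z * s_a        # re-express on the Form A scale

form_a = [55, 60, 65, 70, 75]   # hypothetical raw scores on Form A
form_b = [50, 58, 62, 66, 74]   # hypothetical raw scores on Form B
print(linear_equate(62, form_a, form_b))  # Form B mean maps to Form A mean: 65.0
```

Because slope as well as intercept is adjusted, a score one standard deviation above the Form B mean maps to a score one standard deviation above the Form A mean, which mean equating alone cannot guarantee.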

Equipercentile equating determines the equating relationship as one where a score could have an equivalent percentile on either form. This relationship can be nonlinear.
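The equipercentile idea can be sketched with empirical percentile ranks. Operational methods smooth the score distributions and interpolate; this rough version, with hypothetical data and function names of our own, only finds the Form A score whose percentile rank first reaches that of the given Form B score.

```python
def percentile_rank(score, scores):
    """Fraction of examinees scoring at or below `score` (unsmoothed)."""
    return sum(1 for s in scores if s <= score) / len(scores)

def equipercentile_equate(score_b, scores_a, scores_b):
    """Map a Form B score to the Form A score with (at least) the same
    percentile rank."""
    p = percentile_rank(score_b, scores_b)
    for a in sorted(set(scores_a)):
        if percentile_rank(a, scores_a) >= p:
            return a
    return max(scores_a)

scores_a = [50, 60, 70, 80, 90]  # hypothetical Form A scores
scores_b = [40, 55, 65, 75, 85]  # hypothetical Form B scores
print(equipercentile_equate(65, scores_a, scores_b))  # 65 on B -> 70 on A
```

Unlike the mean and linear methods, the resulting score correspondence need not be a straight line, which is what lets equipercentile equating handle forms whose score distributions differ in shape.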

Unlike with item response theory, equating based on classical test theory is somewhat distinct from scaling. Equating is a raw-to-raw transformation in that it estimates, for each raw score on the base Form A, an equivalent raw score on Form B. Any scaling transformation is then applied on top of, or together with, the equating.


References

  1. ^ a b c Kolen, M.J., & Brennan, R.L. (1995). Test Equating. New York: Springer.
  2. ^ National Council on Measurement in Education http://www.ncme.org/ncme/NCME/Resource_Center/Glossary/NCME/Resource_Center/Glossary1.aspx?hkey=4bb87415-44dc-4088-9ed9-e8515326a061#anchorE Archived 2017-07-22 at the Wayback Machine
  3. ^ Baker, F. (1983). Comparison of ability metrics obtained under two latent trait theory procedures. Applied Psychological Measurement, 7, 97-110.
  4. ^ Baker, F. (1984). Ability metric transformations involved in vertical equating under item response theory. Applied Psychological Measurement, 8(3), 261-271.