Validity

Available under Creative Commons-ShareAlike 4.0 International License. Download for free at http://cnx.org/contents/ce6c5eb6-84d3-4265-9554-84059b75221e@2.1

Validity is the evaluation of the “adequacy and appropriateness of the interpretations and uses of assessment results” for a given group of individuals (Linn & Miller, 2005, p. 68). For example, is it appropriate to conclude that the results of a mathematics test on fractions given to recent immigrants accurately represent their understanding of fractions? Is it appropriate for the teacher to conclude, based on her observations, that a kindergarten student, Jasmine, has Attention Deficit Disorder because she does not follow the teacher's oral instructions? Obviously, in each situation other interpretations are possible: the immigrant students may have poor English skills rather than poor mathematics skills, and Jasmine may be hearing impaired.

It is important to understand that validity refers to the interpretations and uses made of the results of an assessment procedure, not to the assessment procedure itself. For example, making judgments about the results of the same test on fractions may be valid if the students all understand English well. A teacher concluding from her observations that the kindergarten student has Attention Deficit Disorder (ADD) may be appropriate if the student has been screened for hearing and other disorders (although the classification of a disorder like ADD cannot be made by one teacher). Validity involves making an overall judgment of the degree to which the interpretations and uses of the assessment results are justified. Validity is a matter of degree (e.g. high, moderate, or low validity) rather than all-or-none (e.g. totally valid vs. totally invalid) (Linn & Miller, 2005).

Three sources of evidence are considered when assessing validity: content, construct, and criterion-related (predictive). Content validity evidence is associated with the question: how well does the assessment include the content or tasks it is supposed to? For example, suppose your educational psychology instructor devises a mid-term test and tells you it covers chapters one to seven of the textbook. Obviously, all the items in the test should be based on the content of educational psychology, not your methods or cultural foundations classes. Also, the items in the test should cover content from all seven chapters, not just chapters three to seven, unless the instructor tells you that these chapters have priority.

Teachers have to be clear about their purposes and priorities for instruction before they can begin to gather evidence related to content validity. Content validation determines the degree to which assessment tasks are relevant to, and representative of, the tasks judged by the teacher (or test developer) to represent their goals and objectives (Linn & Miller, 2005). It is important for teachers to think about content validation when devising assessment tasks, and one way to help do this is to devise a Table of Specifications. An example, based on Pennsylvania's state standards for grade 3 geography, is shown in Table 11.1. The left-hand column lists the instructional content for a 20-item test the teacher has decided to construct, organized under two kinds of instructional objectives: identifies, and uses or locates. The second and third columns give the number of items for each content area and each instructional objective. Notice that the teacher has decided that six items should be devoted to the sub-area of geographic representations, more than to any other sub-area. Devising a table of specifications helps teachers determine whether some content areas or concepts are over-sampled (i.e. there are too many items) and some are under-sampled (i.e. there are too few items).

Table 11.1 Example of Table of Specifications: grade 3 basic geography literacy
("Identifies" and "Uses or locates" are the two instructional objectives.)

Content                                                 Identifies   Uses or locates   Total number   Per cent
                                                                                       of items       of items
Identify geography tools and their uses
  Geographic representations: e.g. maps, globes,            3               3                6            30%
    diagrams and photographs
  Spatial information: sketch & thematic maps               1               1                2            10%
  Mental maps                                               1               1                2            10%
Identify and locate places and regions
  Physical features (e.g. lakes, continents)                1               2                3            15%
  Human features (countries, states, cities)                3               2                5            25%
  Regions with unifying geographic characteristics          1               1                2            10%
    (e.g. river basins)
Number of items                                            10              10               20
Percentage of items                                        50%             50%                           100%
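To make the arithmetic behind a table of specifications concrete, here is a minimal sketch in Python (not part of the original text). The item counts come from Table 11.1, while the target weights and the 5% tolerance are hypothetical choices a teacher might make; the sketch tallies items per content area and flags over- or under-sampling.

# A minimal sketch of checking a table of specifications for over- or
# under-sampling. Rows reproduce Table 11.1; the target shares are
# hypothetical weights set from the teacher's instructional priorities.

# (content area, "identifies" items, "uses or locates" items, target share)
SPEC = [
    ("Geographic representations", 3, 3, 0.30),
    ("Spatial information",        1, 1, 0.10),
    ("Mental maps",                1, 1, 0.10),
    ("Physical features",          1, 2, 0.15),
    ("Human features",             3, 2, 0.25),
    ("Unifying regions",           1, 1, 0.10),
]

total_items = sum(ident + uses for _, ident, uses, _ in SPEC)

for area, ident, uses, target in SPEC:
    actual = (ident + uses) / total_items
    if abs(actual - target) > 0.05:   # tolerance is an arbitrary choice
        status = "over-sampled" if actual > target else "under-sampled"
    else:
        status = "ok"
    print(f"{area:28s} {ident + uses:2d} items  {actual:5.0%} vs {target:4.0%}  {status}")

print(f"Total items: {total_items}")

Run against Table 11.1, every content area matches its target, so each row prints "ok"; shifting items between rows would immediately surface an over- or under-sampled area.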

Construct validity evidence is more complex than content validity evidence. Often we are interested in making broader judgments about students' performances than about specific skills such as doing fractions. The focus may be on constructs such as mathematical reasoning or reading comprehension. A construct is a characteristic of a person that we assume exists in order to help explain behavior. For example, we use the construct of test anxiety to explain why some individuals, when taking a test, have difficulty concentrating, have physiological reactions such as sweating, and perform poorly on tests but not on class assignments. Similarly, mathematical reasoning and reading comprehension are constructs, as we use them to help explain performance on an assessment. Construct validation is the process of determining the extent to which performance on an assessment can be interpreted in terms of the intended constructs and is not influenced by factors irrelevant to the construct. For example, judgments about recent immigrants' performance on a mathematical reasoning test administered in English will have low construct validity if the results are influenced by English language skills that are irrelevant to mathematical problem solving. Similarly, the construct validity of end-of-semester examinations is likely to be poor for those students who are highly anxious when taking major tests but not during regular class periods or when doing assignments. Teachers can help increase construct validity by trying to reduce factors that influence performance but are irrelevant to the construct being assessed. Such factors include anxiety, English language skills, and reading speed (Linn & Miller, 2005).

A third form of validity evidence is called criterion-related validity. Selective colleges in the USA use the ACT or SAT, among other criteria, to choose who will be admitted, because these standardized tests help predict freshman grades, i.e. they have high criterion-related validity. Some K-12 schools give students math or reading tests in the fall semester in order to predict which students are likely to do well on the annual state tests administered in the spring semester and which students are unlikely to pass and will need additional assistance. If the tests administered in the fall do not predict students' performances accurately, then the additional assistance may be given to the wrong students, illustrating the importance of criterion-related validity.
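One common way to quantify criterion-related validity is to correlate predictor scores with criterion scores. The sketch below is a minimal illustration, not from the original text: the scores are invented, and the Pearson correlation is a standard statistic, not a method tied to any particular test.

from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores: fall screening test (predictor) and spring
# state test (criterion) for the same ten students.
fall   = [42, 55, 61, 48, 70, 35, 66, 58, 44, 52]
spring = [48, 60, 72, 50, 78, 40, 70, 65, 47, 55]

r = pearson(fall, spring)
print(f"Criterion-related (predictive) validity coefficient: r = {r:.2f}")

A coefficient near 1 would suggest the fall screening test identifies the right students for additional assistance; a coefficient near 0 would suggest the assistance may be going to the wrong students.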