5 July, 2016
- Other term: growth measures
Value-added measures are used to estimate or quantify the effect of individual teachers on student learning over the course of a given school year.
To produce the estimates, value-added measures typically use sophisticated statistical algorithms and standardized-test results, combined with other information about students, to determine a “value-added score” for a teacher. School administrators may then use the score, usually in combination with classroom observations and other information about a teacher, to make decisions about tenure, compensation, or employment.
- Example: To obtain a value-added score for a sixth-grade teacher, the fourth- and fifth-grade test scores of every student in the class might be collected. A mathematical formula would factor in the test-score data alongside a variety of other information about the students, such as whether the students are on special-education plans or whether their parents dropped out of school, completed high school, or earned a college degree. The formula would generate projected sixth-grade test scores for each student, and then the sixth-grade scores would be compared to the predicted scores after students take the test. The teacher’s value-added score would be based on the average difference between the actual scores earned by students and the predicted scores. (Here’s another way of phrasing it: value-added measures consider the test-score trajectory of the students in a given teacher’s class, at the time they arrived in the class, while also controlling for non-teacher factors, to determine whether the teacher caused the trajectory to increase, decrease, or stay the same.)
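The calculation described above can be sketched in a few lines of code. This is an illustrative sketch only, not an actual value-added model: real systems fit statistical models over district-wide data with many student covariates, whereas here the "predicted" score is a deliberately simple stand-in (prior-year score plus the average district gain). The function name and inputs are hypothetical.

```python
# Simplified sketch of a value-added score: the average difference between
# each student's actual score and a predicted score. The prediction here
# (prior score + average district gain) is a hypothetical stand-in for the
# full statistical model, which would also control for non-teacher factors.

def value_added_score(students, avg_district_gain):
    """Return the average (actual - predicted) score for one teacher's class.

    `students` is a list of (prior_year_score, actual_score) pairs.
    """
    residuals = []
    for prior, actual in students:
        predicted = prior + avg_district_gain  # stand-in for the real model
        residuals.append(actual - predicted)
    return sum(residuals) / len(residuals)

# Hypothetical classroom where district students gained 5 points on average:
# residuals are +3, -2, and +2, so the teacher's score is +1.0.
classroom = [(70, 78), (85, 88), (60, 67)]
print(value_added_score(classroom, avg_district_gain=5))  # prints 1.0
```

A positive score suggests the class outperformed its predicted trajectory; a negative score suggests the opposite. The debate summarized in the points below turns on whether such a number can be trusted for high-stakes decisions.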
- Value-added measures offer an impartial and consistent way to measure teacher effectiveness on a large scale. The need to improve school performance and instructional outcomes requires that the best teachers be identified and matched with the neediest students.
- Value-added measures, though not perfect, represent an improvement over existing systems. Current approaches to teacher evaluation have failed to distinguish effective from ineffective teachers. Many job-performance evaluations are highly subjective or flawed, which is why most teachers receive positive evaluations, even in cases where students are clearly underperforming.
- Value-added scores provide an unbiased measure for making difficult decisions. Struggling students cannot afford to spend years of schooling receiving poor instruction from ineffective teachers—they will only fall further and further behind. The least-effective teachers must be given a reasonable chance to improve, and, if they do not, be removed and replaced with better teachers.
- Value-added measures are a fairer way to assess teachers, and the influence they have had on their students, than considering student test scores or achievement levels in isolation from other influencing factors.
- The best teachers deserve to be recognized by objective measures so they can be acknowledged and rewarded.
- Concerns about value-added measures are overstated, since they have been shown in some studies to be more accurate than other accepted and widely used teacher-evaluation methods, such as principal evaluations.
- Value-added measures are ethically questionable, unverified, and not yet ready for real-world application. Evidence suggests that there is a significant risk that the measures will misidentify effective teachers as ineffective and ineffective teachers as effective. The potential consequences do not justify the risk.
- Research shows that out-of-school factors could account for up to eighty percent of the variation in student test scores, so it's highly doubtful that value-added algorithms can accurately and reliably isolate the effect that an individual teacher has on student learning. The contributing factors are simply too complex to be reduced to a single mathematical formula. Consequently, teachers remain liable to be blamed for factors that are beyond their control.
- The student-performance and testing data used in value-added calculations may be flawed or inaccurate, even if the value-added algorithm is considered sound. Since data can easily become corrupted by numerous factors, it’s ethically questionable to compensate or fire teachers on the basis of numbers that could be inaccurate or misleading.
- Basing teacher evaluations on test data is another high-stakes use of test results, and the method will likely contribute to or worsen the same problems associated with high-stakes testing, including cheating and teaching students only the narrow range of material evaluated on tests.
- Value-added scores for individual teachers can vary widely from year to year, rating them as excellent one year and ineffective the next, even when the teaching strategies they use remain consistent—which suggests that value-added measures can be imprecise or misrepresentative, with potentially significant consequences for teachers.
- Student-growth measures should not be used to rate teachers because they do not attempt to control for other influences on student accomplishment.