Date of Award

Spring 5-18-2019

Degree Type

Dissertation

Degree Name

Executive Ed.D. in Education Leadership Management and Policy

Department

Education Leadership, Management and Policy

Advisor

Gerard Babo, Ed.D.

Committee Member

Elaine Walker, Ph.D.

Committee Member

Matthew Bolton, Ed.D.

Keywords

TEACH NJ, ACHIEVE NJ, PARCC, MSGP

Abstract

When New Jersey passed the TEACHNJ legislation in August 2012, it brought about the ACHIEVE NJ educator evaluation and support system. New Jersey would shift from a two-tier evaluation system (satisfactory and unsatisfactory) based solely on administrator observations to a four-tier system (ineffective, partially effective, effective, highly effective) based not only on administrator observations but also on locally set Student Growth Objectives and, in the areas of ELA and Math in tested grades, Median Student Growth Percentiles (MSGPs). A Median Student Growth Percentile represents the amount of student "growth" on the PARCC ELA and Math assessments that the State attributes to a teacher's influence and therefore factors into that teacher's evaluation. For the 2016-2017 school year, teachers in tested grades with the proper number of qualifying students received MSGP scores as 30% of their final evaluations. Starting in the 2013-2014 school year, districts were mandated to adopt one of the State's approved teacher practice rubrics in order to assess teachers' pedagogical performance.
This case study asked whether the evaluated teacher practice, skill, and ability scores had a moderating effect on the known predictive relationship between prior academic performance and current academic performance. Do the scores administrators give to teachers, and therefore their evaluated skill, affect how a student progresses academically? This case study used secondary data from one school district to test the assumption that a teacher's evaluated skill, practice, or ability influences students' progress on PARCC. Evaluation rubric data was gathered separately from Student Growth Objective scores and MSGP scores. Moderation regression was used to test whether teachers' evaluation practice scores had any effect on the predictive relationship between students' 2015 PARCC Math and ELA scores and their 2016 PARCC Math and ELA scores.
The sample was separated into subgroups according to demographic and programmatic affiliation. Thirty moderation regressions were run, revealing six areas in which teacher evaluation scores were statistically significant, positive moderators of the relationship between students' 2015 and 2016 PARCC scores. The results of this study revealed that, in the areas of significant moderation, the impact was minimal, and the positive betas showed that teacher evaluation scores enhanced rather than mitigated the predictive impact of students' prior performance. This research can inform this particular district's evaluation practices and can lead to further study of the teacher practice component and how it is applied. This research can also be viewed in the context of the difficulty of isolating a specific, measurable impact a teacher may have on standardized test scores.
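To illustrate the moderation-regression approach described above, the sketch below fits an ordinary least squares model with an interaction term between a prior-year score and a teacher evaluation score; the interaction coefficient is the moderation effect. All data here are synthetic and illustrative, and the variable names are hypothetical, not the district's actual records or the study's model specification.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic, illustrative data: prior-year PARCC-like scale scores
# and a standardized teacher evaluation score.
parcc_2015 = rng.normal(750.0, 30.0, n)
eval_score = rng.normal(0.0, 1.0, n)

# Simulate current-year scores with a built-in positive interaction
# (moderation) effect of 0.1 for demonstration purposes.
parcc_2016 = (50.0 + 0.9 * parcc_2015 + 5.0 * eval_score
              + 0.1 * parcc_2015 * eval_score
              + rng.normal(0.0, 10.0, n))

# Center the predictors before forming the interaction term, as is
# conventional in moderation analysis, to reduce collinearity.
x1 = parcc_2015 - parcc_2015.mean()
x2 = eval_score - eval_score.mean()

# Design matrix: intercept, prior score, evaluation score, interaction.
X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, parcc_2016, rcond=None)

# beta[3] estimates the moderation effect: a positive value means the
# evaluation score strengthens the prior-to-current predictive slope,
# mirroring the "positive betas" interpretation in the abstract.
print(beta[3])
```

In practice a statistics package would also report standard errors and p-values for `beta[3]`, which is how statistical significance of the moderation effect would be judged.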
