Miller & Carney: Video Annotation to Examine Supervisor Interpretation of State-Mandated Teacher Assessments

Miller, M., & Carney, J. (2009). Lost in translation: Using video annotation software to examine how a clinical supervisor interprets and applies a state-mandated teacher assessment instrument. The Teacher Educator, 44(4), 217–231.

Summary: This qualitative case study examines how a university supervisor uses a state-mandated evaluation tool. Unlike many other recent studies, the video annotation software served here as a research tool rather than as the object of study. The supervisor was a retired veteran teacher and administrator, a profile typical of the population of university supervisors. She was given two training sessions focused on how to use the video annotation tool and the rubric as an evaluation instrument. Data for this study were ethnographic observations of the supervisor’s behavior and collected documents. The study found that this clinical supervisor lacked a sophisticated understanding of the complexity of the rubric as an evaluation instrument. This, coupled with her infrequent visits and lack of professional development, led her to require the preservice teachers to create demonstration lessons that would include all of the criteria in the rubric. The researchers argue that supervisors’ professional development needs attention.

Key Words: university supervisor, supervision in teacher education, evaluation, supervision as evaluation, pedagogical skills, pedagogy of supervision, supervisory practices

 

Conceptual Framework: None listed

 

Research Questions:

How does a clinical supervisor use a mandated performance assessment instrument to evaluate teacher candidates?

On what basis does she make her evaluative judgments?

How does she support her assertions?

 

Methods:

Qualitative case study

Ethnographic observation and document collection

 

Participants:

One university supervisor

 

Findings:

  • “…the clinical supervisor found it difficult to interpret rubric criteria, often made tenuous claims about candidates’ performance, and tended to require students to design lessons that were artificial demonstrations of mandated competences” (p. 217).
  • The supervisor struggled because she lacked professional development.
  • “She used the instrument exclusively for formal, summative evaluation of candidates’ performance” (p. 224).
  • “In this way, candidates were encouraged to develop ‘exhibition lessons’ that would demonstrate a wide variety of PPA-targeted competencies not noted in routine observation” (p. 224).
  • “Given her limited time and contact with candidates, Felicia reported finding it difficult to assess candidates’ performance toward the rigorous performance expectations articulated on the state’s PPA. These limitations caused her to stretch the evidence when she was unable to directly observe targeted competences” (p. 225).
  • The supervisor used the standards in the rubric like a checklist.
  • The supervisor made judgments about performance based on very limited experience and examples of the candidate’s performance.
  • “Rather than evaluating learning activities based on research and principles of effective practice, she focuses on easily observed teacher practices such as questioning, delivery, and pacing” (p. 228).
  • Supervisors need adequate professional development and support.

 

Additional Key Passages

  • PPA stands for Performance-based Pedagogy Assessment

 

Resources for my other studies:

  • Slick (1998) clinical supervisors in teacher education are marginalized
  • Paris & Bespass (2001) supervisors focus on observing teaching practice rather than student learning
  • Marshall (2005) supervisors lack subject matter content knowledge
  • Slick (1997) supervisors are unsure about their roles and responsibilities

 

Also: (This study could be a reference for my pedagogical skills paper for the following reasons…)

  • “The video annotation and interview transcripts provided insights into the clinical supervisor’s noticing behavior, the reasoning she used to support her assertions about particular teaching/learning events, and the manner in which the state’s PPA instrument guided her assessment” (p. 223).
  • “Focusing on only one supervisor-participant in a case study enabled us to closely examine her verbalized assessment of student teachers, the ‘pointing’ that occurred as she annotated the candidates teaching videos, and her use of the state-mandated assessment rubrics” (p. 224).

 

This study also could be a reference for my supervision as evaluation paper.