Strengthening Teacher Preparation: Using Data to Support Teacher Candidates and Improve Program Offerings

In my first two posts, I described how we at Jackson State University (JSU) transformed our clinical practice and our course curriculum to strengthen our teacher candidates' experiences so they are ready on day one to successfully teach all students, especially Black and Latinx students and students experiencing poverty. In this post, I'll share how the process of collecting, analyzing and using data is a foundation for those efforts and for everything we do to strengthen our teacher preparation program.

I'll start by focusing on clinical practice. As my first post describes, before we overhauled candidates' clinical practice experience, we had a retired educator observe candidates a few times a year and then manually report those observation data to the Director of Teacher Quality, who kept the data centrally. So we collected data on how candidates were progressing, but we didn't analyze what those data were telling us or collect additional data that could give us multiple perspectives on candidate progress. And we didn't do anything differently as faculty members and program administrators once we had the data. What we didn't fully see then, but do now, is that we were sitting on information that could help us provide differentiated support to each candidate given where they stood on their trajectory toward becoming excellent beginning teachers. We also didn't see that, viewed across a whole cohort, candidate observations could tell us as a program what we were doing well and where we needed to focus more energy to support candidate learning overall.

That all changed once we began to think about data as a critical resource we needed to strengthen our programming. Now, we have a candidate-centered model with data at the core. Here is what it now looks like at JSU to collect, analyze and use data to support clinical practice:

  • We collect data from multiple sources on candidate progress and supports. We expanded the information we gather on candidates as they progress through the set of clinical experiences in our program: in addition to two formal observations for each candidate, we now collect:
    • Artifacts and feedback from POP cycles (pre-conference, observation and post-conference). In their student teaching, JSU candidates are required to engage in two POP cycles each semester. In each cycle, candidates design and implement interventions in the classroom with support from their mentor teacher. A university supervisor meets with the candidates before implementation to review and give feedback on draft lesson plans, observes the candidates in the classroom with their students and then gives feedback afterwards, all anchored in the Mississippi Teacher Intern Assessment Instrument (TIAI), Mississippi's instructional rubric for new teachers.
    • Self-assessment data. We require candidates to assess themselves against the TIAI every time they implement an intervention in a POP cycle.
    • K12 student perception survey data. We also gather feedback from the K12 students our candidates teach in student teaching.
    • K12 student achievement data. Although not always available, we work with our district partners to gather K12 student achievement data as part of a formal data-sharing agreement. 

We also collect survey data directly from teacher candidates and mentor teachers who support candidates in their student teaching experiences.

  • We analyze the data to generate insights about strengths and areas to improve. We produce summary statistics, correlate candidate progress data with K12 student achievement and perception data, and triangulate candidate self-assessments with university supervisor assessments (see the sketch after this list). We use a range of analytic tools to gain a nuanced understanding of where candidates stand from a variety of perspectives, including their own, their students' and their faculty supervisors'. And we also seek to understand the mentor teacher experience.
  • We use the data to guide clinical coaching, strengthen candidate supports and shift programming. Collecting and analyzing data isn't the end of the process. The power is in using the insights from the data to do something differently. We do that now in two ways:
    • Coach and give feedback to candidates. A critical change we made when we shifted to the clinical residency model was to share candidate data directly with a university faculty member rather than a central coordinator. We now provide faculty with formal observation data as well as POP cycle data. Those faculty members then use those data, alongside their own informal classroom observations, to shape weekly individual candidate coaching sessions. They also use these data to provide ongoing 1:1 feedback as needed.
    • Shift program offerings and candidate supports. We work with our district partners in regular governance meetings to review aggregate candidate data and then make joint decisions to strengthen the supports we provide to candidates based on what we are collectively seeing. District leaders, faculty members, program leadership and the university supervisor all meet twice per semester and look at how candidates are progressing against the TIAI, both individually and collectively. As one example, a key shift we made based on these data reviews in governance meetings was to focus more heavily on helping our candidates understand K12 student standards before their student teaching experience. We saw in mentor teacher and candidate survey feedback that, as candidates worked with mentor teachers in student teaching, they felt as if the mentor teachers were teaching the standards to them even as they taught the students: the candidates didn't feel that they understood the standards well enough coming into the experience and suggested that they get that training earlier in their program. Faculty reviewed those results and agreed: with so much to cover in program offerings, they hadn't been giving K12 standards the time they deserved to prepare candidates for student teaching. In response, we developed and implemented professional development for all of our teacher educators to unpack the K12 student standards and help them support their candidates to do so in turn.
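For readers curious what the analysis step can look like mechanically, here is a minimal, hypothetical sketch in Python (pandas) of the kind of triangulation described above. The column names, scales, sample values and gap threshold are invented for illustration; this is not JSU's actual data model or tooling.

```python
# Hypothetical sketch: triangulating candidate self-assessments with supervisor
# ratings and relating both to K12 outcomes. All values below are illustrative.
import pandas as pd

candidates = pd.DataFrame({
    "candidate_id":        ["C01", "C02", "C03", "C04"],
    "self_tiai":           [3.4, 2.8, 3.9, 3.1],      # candidate self-assessment (TIAI scale)
    "supervisor_tiai":     [3.0, 2.9, 3.6, 2.7],      # university supervisor rating
    "student_perception":  [4.1, 3.5, 4.4, 3.2],      # K12 student survey average
    "student_achievement": [0.12, 0.03, 0.25, -0.05], # district-shared growth measure
})

# Summary statistics for the cohort.
print(candidates.describe())

# Correlate candidate progress data with K12 achievement and perception data.
print(candidates[["supervisor_tiai", "student_perception", "student_achievement"]].corr())

# Triangulate: flag candidates whose self-assessment diverges from the supervisor's rating.
candidates["self_vs_supervisor_gap"] = candidates["self_tiai"] - candidates["supervisor_tiai"]
print(candidates[candidates["self_vs_supervisor_gap"].abs() > 0.3])
```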

We used a similar process to collect, analyze and use data to support our curriculum overhaul. Partnering with US PREP, we worked in design-based research teams (DBRTs), in which JSU faculty collaborated with a diverse group of faculty researchers from other programs in the US PREP coalition. These teams gathered pre-, in-design and post-course assessment data across five core courses over two semesters. We gathered those data from candidates on their experience with the courses, piloted a redesign based on those data and then implemented the revised courses in the following academic year. This process unearthed key insights. For example, the DBRT process led us to determine that our math methods course did not meet the needs of elementary education majors because it placed a disproportionate emphasis on secondary math content. Similarly, the DBRTs revealed that the required reading course for both elementary and secondary education did not adequately serve secondary education majors, because they had not been required to complete the foundational prerequisite courses before taking it. These insights led us to make changes in required courses and their sequencing across programs.

It is not always true that more data are better. Sometimes you can be swimming in a sea of data but not have the tools to make sense of them in a systematic way. Using a US PREP toolkit and with help from researchers at the University of Washington, we implemented self-studies to understand how our faculty were using data for program improvement. This process involved gathering quantitative and qualitative survey data from all stakeholders, including teacher candidates, faculty, administrative team members and district personnel such as mentor teachers, school leaders and key office personnel. Pulling together data from all of these sources, we made two fundamental program changes:

  • Overhauled scheduling for the junior and senior year block 
  • Reorganized course matriculation within each program year to align with field-based experiences

All of this work required us as faculty to think differently about data. We had to shift our mindsets. In years past, we collected data when a compliance report was due. So we complied, and that was that. In our transformed model, we have come to view data as an essential tool supporting ongoing reflection and continuous program improvement with our district partners. That shift in mindset has made all the difference.

As we worked together to make sense of the data, it became clear that we also had to create new structures to support data review and response. One example: departmental meetings now include a standing data-review agenda item, with reporting to a newly established Teacher Preparation Task Force. The task force is charged with continuously improving the program and includes educator preparation faculty, department chairs, essential program staff and administrative leaders.

In short, data have completely changed how we work. Even with the new challenges we are addressing amid the ongoing COVID-19 pandemic – juggling new priorities for virtual learning and managing new obligations in our personal lives – we have maintained our commitment to review data together. That speaks volumes about the importance of data in helping us and our partners sustain the improvements we have made for years to come. We look forward to the continued journey!

View the full blog series here.

Meet the experts who authored this post

Nadine Gilbert
Coordinator, Jackson State University
