Classroom Observation Tool: Capturing changes in teacher practice and learner engagement

By Catherine O’Shea

In advance of rolling out our new EduQuality blended learning model in 2021, which includes new teacher mentor training modules, we knew we needed the right tool to measure the changes in teaching practices we expected to see over time. We wanted to document how teachers were engaging learners in the classroom, and then measure changes in instruction as teachers adopted more best practices.

Therefore, the Monitoring and Evaluation team developed and launched a Classroom Observation Tool. This was designed specifically for Opportunity International EduFinance and was customized based on several internationally recognized observation tools.

The tool aims to gather valid and reliable data on in-class teaching practices, student engagement and the classroom environment at our partner schools, which are affordable non-state schools enrolled in EduQuality, our three-year holistic school development program.

Because we wanted to prioritize data quality and objectivity, we trained our M&E Specialists to carry out observations, rather than relying on teachers’ self-reporting or observations by school leaders.

The tool went through multiple iterations during the development stage: our Education Specialists, who work in the local contexts, reviewed it and gave feedback alongside a diverse group of stakeholders. This was followed by pilot-testing, with our M&E Specialists scoring video-recorded classroom lessons. After these stages of testing, the tool reached 80% inter-rater reliability (meaning each rater's scores matched those of the 'gold standard' scorer observing the same lesson on at least 80% of items) and was approved for use across our program.
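To make that reliability check concrete, here is a minimal sketch in Python of how a rater's agreement with a gold-standard scorer could be computed. The indicator names, scores and the pass/retrain wording are our own illustrative assumptions, not the actual tool's items.

```python
# Minimal sketch of the inter-rater reliability check described above:
# a rater passes if their scores match the "gold standard" scorer's on
# at least 80% of items for the same lesson. All indicator names and
# scores are hypothetical examples, not the actual tool's items.

GOLD_STANDARD = {
    "lesson_objective_stated": 1,   # binary indicator: 0 or 1
    "checks_for_understanding": 1,
    "questioning_depth": 3,         # quality indicator: 1-4 scale
    "explanation_accuracy": 4,
}

def agreement_rate(rater_scores: dict, gold: dict) -> float:
    """Share of items on which the rater matches the gold-standard score."""
    matches = sum(rater_scores[item] == gold[item] for item in gold)
    return matches / len(gold)

rater = {
    "lesson_objective_stated": 1,
    "checks_for_understanding": 0,
    "questioning_depth": 3,
    "explanation_accuracy": 4,
}

rate = agreement_rate(rater, GOLD_STANDARD)  # 3 of 4 items -> 75%
print(f"Agreement: {rate:.0%} -> {'approved' if rate >= 0.80 else 'retrain'}")
```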

To learn more, we interviewed Dr. Anna Ermakova, EduFinance Monitoring and Evaluation Advisor, about the design and testing process.

What was the approach for developing the Classroom Observation Tool?

Firstly, we considered the realities of the contexts we work in, as well as the specifics of the EduQuality program. We also looked at international best practices, other validated tools designed for similar purposes, and the empirical evidence on effective teaching practices.

I looked at what we want to accomplish through the Teacher Mentor Professional Development (TMPD) sessions in EduQuality, as well as the changes in teaching practice we report to donors, so that we are measuring the same indicators we want to improve.

We followed the approach taken by the TEACH tool, developed by the World Bank, and adapted it to the indicators we selected as aligned to TMPD and to the Pathways to Excellence program.

Could you tell us why this Classroom Observation Tool needed to be a customized version?

The primary reason is that existing tools seemed to pigeonhole us with the types of indicators they measure, which may not be the areas where we want our schools or teachers to grow. Or if they are, they might be operationalized somewhat differently from the way we were interested in measuring them.

We also wanted to use it as a monitoring tool for program management purposes, and we had questions about our schools that some existing tools didn't capture. We decided to get the best of both worlds by creating a bespoke tool that would answer our questions and be easy to train staff to use, while adopting the best-practice methods that make the existing tools valid and reliable.

What does this tool capture?

There are four sections to the classroom observation tool.

  • We capture materials and environment, evaluating what is available and what is used in our classrooms - anything from technology to learner workbooks to furniture.
  • A key section is ‘learner engagement,’ which captures the percentage of classroom time dedicated to learning tasks, as well as the percentage of learners that are doing something related to learning versus being off-task.
  • We also have a section on planning and record-keeping where we look at the teacher’s lesson plans and marking records, and their quality.
  • And finally, we compare the observed teaching practice with international best practices. There are timed indicators, such as how much time teachers spend lecturing versus discussing with their students, and quality indicators, such as the depth of the teacher’s questions or tasks, or the accuracy and depth of the explanations the teacher provides.
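To make the four sections concrete, here is one way a single observation record might be structured. This is a sketch under our own assumptions; the field names are illustrative and do not come from the actual tool's schema.

```python
# A sketch of how one classroom observation might be recorded,
# mirroring the four sections described above. All field names are
# illustrative assumptions, not the tool's actual schema.
from dataclasses import dataclass, field

@dataclass
class ClassroomObservation:
    # 1. Materials and environment: what is available and what is used
    materials_available: list[str] = field(default_factory=list)
    materials_used: list[str] = field(default_factory=list)

    # 2. Learner engagement: shares of time and of learners on-task
    pct_time_on_learning_tasks: float = 0.0
    pct_learners_on_task: float = 0.0

    # 3. Planning and record-keeping: presence and quality (e.g. 1-4)
    lesson_plan_quality: int = 0
    marking_records_quality: int = 0

    # 4. Teaching practice vs. best practice
    minutes_lecturing: int = 0       # timed indicator
    minutes_discussion: int = 0      # timed indicator
    questioning_depth: int = 0       # quality indicator, 1-4
    explanation_accuracy: int = 0    # quality indicator, 1-4
```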

How can the tool measure changes in teacher practice over time?

We measure the quality of lesson planning and of teaching practices over time by measuring the same indicators longitudinally. For example, if only 30% of observed teachers have a learner-centered lesson objective at the beginning of the year, and 50% do when we measure again at the end of the year, that is a direct way to measure growth.
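Using the hypothetical 30%-to-50% example above, here is a minimal sketch of how such a baseline-to-endline comparison could be tested for statistical significance with a two-proportion z-test. The sample sizes are invented, and this particular test is our assumption rather than the team's stated method.

```python
# Sketch: compare the share of observed teachers meeting an indicator
# (e.g. a learner-centered lesson objective) at baseline vs. endline.
# Numbers reflect the hypothetical 30% -> 50% example; n is invented.
from math import erf, sqrt

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # p-value from the normal approximation, via the error function
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p2 - p1, z, p_value

# 30 of 100 teachers at baseline, 50 of 100 at endline (hypothetical)
change, z, p = two_proportion_ztest(30, 100, 50, 100)
print(f"Change: {change:+.0%}, z = {z:.2f}, p = {p:.3f}")
```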

The Learner Engagement section also captures growth, but more indirectly. If we find that learners are paying more attention and fewer learners are disengaged during tasks, we might infer that this reflects more effective teaching practices. However, to confirm that link, we would need to test statistically whether the changes we hope to see in teaching practices correlate with the changes we hope to see in learner engagement.
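A minimal sketch of that kind of check, correlating per-school changes in a teaching-practice score with changes in learner engagement; all values are invented for illustration, and the use of Pearson's r is our assumption.

```python
# Sketch: Pearson correlation between per-school changes in a
# teaching-practice score and changes in learner time-on-task.
# All values below are invented for illustration.
from statistics import correlation  # Python 3.10+

practice_change = [0.10, 0.25, -0.05, 0.30, 0.15, 0.05]   # score deltas
engagement_change = [0.08, 0.20, 0.00, 0.22, 0.10, 0.02]  # on-task deltas

r = correlation(practice_change, engagement_change)
print(f"Pearson r = {r:.2f}")  # a positive r would support the link
```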

What has been your approach to working with different EduFinance team members on this tool?

Well, something to emphasize is that the development of this tool was a lengthy and iterative process that engaged many team members. The first step was internal validation; we then shared the tool with Education Specialists, as well as the designers of the teacher mentor professional development content, and discussed alignment and how well our tool captures the training content. Next, we trained a pilot group of M&E Specialists from local contexts. Some areas of the draft tool looked good on paper but were very difficult to implement while watching a lesson. Based on their feedback, we changed several indicators; for example, some that initially used a one-to-four scale ended up being binary.
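As a concrete illustration of that last change, collapsing a one-to-four scale indicator into a binary one might look like this; the cut-off of 3 is our assumption, not the team's documented rule.

```python
# Sketch: collapsing a 1-4 scale indicator into a binary one, as some
# draft indicators were after pilot feedback. The cut-off is assumed.
def to_binary(scale_score: int, threshold: int = 3) -> int:
    """Return 1 if the 1-4 scale score meets the threshold, else 0."""
    return 1 if scale_score >= threshold else 0

print([to_binary(s) for s in [1, 2, 3, 4]])  # [0, 0, 1, 1]
```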

We also made decisions about the indicators themselves by asking: how often do different people scoring the same video of the same lesson agree? When we saw indicators where agreement between observers was very low, we knew the problem was probably not with the observers but with the tool itself, so those indicators were either improved or eliminated. Most of the work on this tool went into the coding manual and protocol that explain how to use it, because that is where each score of each indicator is defined. A lot of work went into making those explanations as exact as possible, based on different team members’ input.
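A minimal sketch of that per-indicator diagnosis: computing, for each indicator, how often observers agree with the gold-standard score across practice videos and flagging those whose agreement is too low to keep as-is. The indicator names, scores and the 60% cut-off are all assumptions for illustration.

```python
# Sketch: flag indicators where observer agreement with the gold
# standard is low across practice videos, suggesting the indicator
# (not the observers) needs rework. Data and cut-off are assumed.
from collections import defaultdict

# (indicator, observer_score, gold_score) from several scored videos
observations = [
    ("lesson_objective_stated", 1, 1), ("lesson_objective_stated", 1, 1),
    ("lesson_objective_stated", 0, 1),
    ("questioning_depth", 2, 3), ("questioning_depth", 4, 3),
    ("questioning_depth", 3, 3),
]

hits = defaultdict(int)
totals = defaultdict(int)
for indicator, observer, gold in observations:
    totals[indicator] += 1
    hits[indicator] += observer == gold

for indicator in totals:
    rate = hits[indicator] / totals[indicator]
    flag = "revise or drop" if rate < 0.60 else "keep"
    print(f"{indicator}: {rate:.0%} agreement -> {flag}")
```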

How did you know when the M&E specialists were ready to use the classroom observation tool?

Our aim is for this tool to be reliable when it is used. Even if the tool itself makes sense, if observers’ interpretations of it are not consistent, we must change it, because the point is that it can be used well and reliably.

We needed to make sure that every M&E Specialist agreed with the “gold standard” score on at least 80% of the items. We did a lot of practice video scoring until each specialist reached that threshold; where they did not, we trained and retrained until the scoring was reliable.

What were your personal reflections on the overall tool development process?

I have designed observation tools before, but this was probably the most thorough process I’ve been engaged in, and I was honored to be part of it from beginning to end, from design to implementation. It was challenging and interesting to simplify this tool, because these types of tools can often be quite academic, and we were faced with the task of designing one that would be both valid and reliable in an applied setting with limited resources.

We were simplifying complex pedagogical practices and making them measurable and accessible. I also really appreciated the teamwork on this and the fact that people from every level of the organization were involved. We really tried to hear everybody's voices and incorporate them. I would like to extend my personal gratitude to everyone who has been part of this process, especially the M&E specialists and associates who have labored tirelessly to become experts on this tool and make sure reliable and meaningful data is collected at our partner schools day after day. 

Read the Classroom Observation Tool: Development Review here.
