Case Study 3: Data collection to build staff capacity and strengthen program implementation

The organization – an East Coast nonprofit that builds the capacity of high-need, low-performing K-12 schools to address children’s behavioral, social-emotional and mental health needs – had a strong idea and the bones of a good model, but needed help to: i) develop the model more fully and ii) understand both its implementation and its outcomes.

Because the program was still in the development phase, even as it was being implemented in multiple schools, a key task was to understand leadership’s vision of and goals for it, then compare that vision with staff implementation of the program and school participation in it. In other words, it was important to determine where theory and practice aligned, where they differed, and how to integrate the two in order to develop a well-defined, comprehensive implementation framework that could be deployed consistently across schools.

We selected a specific intervention to focus the work on: a student support team that identified, evaluated and case-managed students in need of behavioral, mental health and/or social-emotional learning (SEL) supports.

A needs assessment process was conducted.* Once results were obtained and the core elements of the program finalized, the challenge was determining how to document the program for implementation purposes (i.e., not in manual/initial training form, but in a way that could be used regularly in the field to track practice). To do this, a rubric was developed that encapsulated the core elements of the program across four dimensions – student identification; student evaluation; intervention development; case management – and at four levels of implementation expertise.

The rubric accomplished several goals. It:  

1.    Summarized the key elements of the program that should be in place – including innovations in the field that were seen as best practices – building on the organization's training manual. 

2.    Served as a data collection tool – specifically, a tool for tracking implementation.

3.    Served as a training tool. Data collected via the rubric (by both evaluation and program staff) could be sliced and diced in various ways to identify key implementation issues, including staff who needed support as well as those who could provide it (see the sketch after this list).

4.    Provided a developmental perspective for both program and school staff, helping them see team development as a process requiring time and capacity building; it also piqued interest in tackling discrete issues of practice.  
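To make the "slice and dice" point in item 3 concrete, here is a minimal sketch of how rubric data might be tabulated and analyzed. Everything in it – the staff names, the 1–4 scoring, the support and coaching thresholds – is a hypothetical illustration, not the organization's actual rubric or tooling.

```python
import pandas as pd

# Hypothetical rubric scores: one row per staff member per dimension,
# rated at one of four implementation levels (1 = emerging, 4 = exemplary).
scores = pd.DataFrame([
    {"staff": "A", "dimension": "student identification",  "level": 3},
    {"staff": "A", "dimension": "student evaluation",       "level": 2},
    {"staff": "A", "dimension": "intervention development", "level": 1},
    {"staff": "A", "dimension": "case management",          "level": 2},
    {"staff": "B", "dimension": "student identification",   "level": 4},
    {"staff": "B", "dimension": "student evaluation",       "level": 4},
    {"staff": "B", "dimension": "intervention development", "level": 3},
    {"staff": "B", "dimension": "case management",          "level": 4},
])

# Average level per staff member: low averages flag staff who may need support,
# high averages flag staff who could coach peers (thresholds are illustrative).
by_staff = scores.groupby("staff")["level"].mean()
needs_support = by_staff[by_staff < 2.5].index.tolist()
could_coach = by_staff[by_staff >= 3.5].index.tolist()

# Average level per dimension: shows which parts of the program are weakest overall.
by_dimension = scores.groupby("dimension")["level"].mean().sort_values()

print("May need support:", needs_support)
print("Could provide support:", could_coach)
print(by_dimension)
```

The same table could just as easily be cut by school or by time period to track movement across the four levels.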

This implementation-focused approach succeeded because the issues surfaced via data collection were practical and tangible to staff. The data wasn’t focused on student achievement and other distal accountability goals, but on the quality with which the work was being implemented and its outcomes for the school-based teams and the wider school community.

Data gathered helped to answer many key implementation questions, including:

-       Are team members knowledgeable about outside services for students and able to link students to them?

-       How are families engaged and supported?

-       What metrics are used to track student progress and what checks are put in place to ensure action is taken to re-evaluate/adapt interventions when needed?

Asking staff to collect usable, useful metrics not only shed light on issues that needed to be addressed, but also helped build a data culture in an organization that had, to date, been resistant to using data.

Longer-term results of this implementation work were that:    

-      Staff implemented the program more consistently and effectively

-      The organization – as work became more consistent across schools and practice data was collected – was better prepared for an outcomes evaluation.

 

* In terms of data collection, it was important to dig deep and go beyond the basics (reviewing existing materials; interviewing program leadership and staff; conducting environmental scans and secondary research; attending internal program development and implementation meetings). A large chunk of data collection, therefore, consisted of spending time in schools: speaking with school leadership, staff, students and families about their participation in the nonprofit’s program, and observing the program in action.