What Florence Nightingale can teach nonprofits about telling better stories (through pictures)

What's the foundation of memorable data visualizations and infographics? Good data.  

[Picture by Florence Nightingale (1820–1910). Public domain, via Wikimedia Commons]

Several years ago, a board member at a nonprofit where I worked gave the evaluation department a gift: Edward Tufte’s The Visual Display of Quantitative Information – a not-so-subtle hint that he was underwhelmed by the reports we were generating.

This gesture made me a little defensive. It also made me laugh. Because the board member wasn’t asking us why we were getting particular results, what they meant for the program, or how the organization was responding to the data. He wasn’t even asking whether the data he was seeing was of good quality. He was simply asking us to present the data differently, so it was more exciting to look at.

In the years since this happened, nonprofits – along with everyone else – have embraced visual representation of data, a shift made possible in large part by the free and low-cost tools through which data collection, analysis and reporting can be carried out.

It’s great to see so much interest in data and presenting it. The focus, though, seems to have gone – like that of the board member – straight from collecting data to selling it. Nonprofits, including small ones with little budget for research and evaluation, are increasingly encouraged to tell their stories via data visualizations and infographics.

But unlike the people behind the graphics you see on Tableau, nonprofits are not analyzing robust, clean, large-n datasets that require R or another statistical package to produce images. We’re analyzing home-grown surveys and self-administered assessments. The data collected with the most consistency (though I'm not saying anything about its quality) are output metrics like attendance, people served, and meetings held.

What does this mean for nonprofit data displayed via infographics and data visualizations? That it’s not focused on implementation and outcomes; there's usually not enough of it to generate a visualization. Instead, visualizations are used to provide context (using someone else’s data); tell a story; and tabulate outputs.

I completely understand and am sympathetic to nonprofits and social impact organizations’ need to make their work easy to understand and good-looking. I’ve got infographics and video on my own website. Organizations need to raise awareness, interest and funds. In our content-saturated age, eye-catching pictures that also quickly and easily convey information are a good way to hold attention for more than a couple of seconds. A strong marketing story can be the difference between a well-funded and a poorly funded cause.

What bothers me is to see effort put into visualizations and infographics that at first glance seem to be telling you something interesting and in-depth – but at second glance are doing no such thing. As I’ve argued, many nonprofits can talk in great depth about i) a problem and ii) their outputs, the activities they engage in to address the problem. But tying the two together – charting how their activities specifically address a problem and what this looks like on the ground – is often a black, unopened box. Greater interest in and use of data visualizations and infographics haven’t addressed this issue.

What you usually end up with, instead, are:

·      Bells and whistles around what’s straightforward, contextual data

·      Infographics or visualizations that tell a data story, but not a clear one once you scratch the surface

·      Output numbers but not a good explanation of what they mean for implementation or outcomes

What can nonprofits do to improve their use of data visualization and infographics? Florence Nightingale provides some answers.

[If you have the time, go and read this article in Atlas Obscura about Florence Nightingale and her pioneering use of data visualization. Then come back! The quotes that follow are pulled from this piece.]

Nightingale “in addition to caretaking and advocating” was also a “dedicated statistician, constantly gathering information and thinking up new ways to compare and present it....Her months in the war hospitals of Crimea provided her with plenty of opportunities to gather information.” When she returned home with her data, the timing was propitious. Britain was “gripped by its own numerical fervor…journalists and politicians were comparing sets of numbers in order to demonstrate particular correlations.” When asked by a high government official to “’communicate her opinions’ about hospital treatments in Crimea,” she was ready. She synthesized the wealth of information she had collected – in the form of stories and observations as well as tabulations (of how many were sick at a given time, when they died, what they died of) – into a magisterial 850-page report.

No. 1712: Nightingale's Graph, from Engines of Our Ingenuity

Notes on Matters Affecting Health, Efficiency, and Hospital Administration in the British Army included beautifully designed charts, tables and graphs. She used these graphical representations of her data to bring home her main points: i) many, many more soldiers died of preventable diseases than of combat wounds; ii) when a sanitary commission was sent to clean up the Scutari hospital, there were drastic reductions in mortality. This report, and the advocacy she engaged in around its findings, were key reasons sanitation standards in the British Army and the general population eventually improved.

Bar charts comparing mortality rates between soldiers and civilians of different demographics. FLORENCE NIGHTINGALE/PUBLIC DOMAIN
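
If you’re curious how a chart in Nightingale’s style is constructed, here’s a minimal sketch in Python, assuming matplotlib and numpy. The monthly counts are invented for illustration – they are not Nightingale’s actual figures – but the area encoding (each wedge’s area, not its radius, is proportional to the count) is the device that made her “coxcomb” diagram so persuasive.

```python
# A minimal sketch of a Nightingale-style polar area ("coxcomb") chart.
# The monthly counts below are invented for illustration; they are NOT
# Nightingale's actual Crimea figures.
import numpy as np
import matplotlib.pyplot as plt

months = ["Apr", "May", "Jun", "Jul", "Aug", "Sep",
          "Oct", "Nov", "Dec", "Jan", "Feb", "Mar"]
disease = [105, 150, 230, 310, 380, 420, 360, 290, 240, 190, 140, 90]  # hypothetical
combat = [20, 25, 40, 55, 60, 70, 65, 50, 45, 35, 30, 20]              # hypothetical

n = len(months)
theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)  # one wedge per month
width = 2 * np.pi / n

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
# In a true coxcomb the AREA of each wedge encodes the count, so we plot
# radius = sqrt(count): wedge area is proportional to radius squared.
ax.bar(theta, np.sqrt(disease), width=width, color="steelblue",
       alpha=0.6, label="Preventable disease")
ax.bar(theta, np.sqrt(combat), width=width, color="firebrick",
       alpha=0.8, label="Combat wounds")
ax.set_xticks(theta)
ax.set_xticklabels(months)
ax.set_yticklabels([])  # raw radial ticks would mislead; area carries the message
ax.set_title("Deaths by cause, by month (illustrative data)")
ax.legend(loc="lower right")
plt.show()
```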

There are many lessons to be learned here, including the importance of persistence, patience, daring, and conviction. There’s also a data lesson: how important it is to understand your own work – and the very particular context in which you’re doing the work – when telling your story and getting people invested in it.

Nightingale was involved in both program implementation and data collection; she knew they reinforced each other. She had a particular intervention she’d implemented, seen success with and wanted to scale. Because she had thoroughly documented her work, because she didn’t use what was readily available (the poor data collected by the Office of Army Medical Statistics) or settle for a handful of anecdotes, she had the material with which to make a highly specific case. Her visualizations are the product of a brilliant mind, yes. But also the product of a methodical and deliberate one, who tracked her own work and used it to advocate for her cause.

Context-rich, on-the-ground, programmatic data is the foundation of compelling visualizations. It’s also the basis of visualizations donors and other stakeholders can tie directly to your organization. If you’re one of the innumerable nonprofits producing visualizations about the national or even international context of a problem, you run a couple of risks. First, having someone understand the problem – but then decide someone else’s approach is the way to go. Second, having someone decide the problem is too big to handle – for them and for you.

Every nonprofit has something relevant and data-driven to say about its own program and what results it's generating in the field. Instead of focusing so much energy on context, broad storytelling and outputs – none of which open the black box of implementation to outcomes - go out there and look at what you’re doing. What makes it great? Why is it important? What do you want people to take away from your findings? What do you want them to do in response to your data? Highly specific data will produce not just pretty pictures for your website, but - as Florence Nightingale so ably demonstrated - images you can tie to action.

I know, I know. You're not being funded to do this work. But you're not a novice;  you are collecting some data. If you've read this far, you know data visualizations and infographics can be put to better use.    

Here are some ways - taking into account existing resources - to get the work started.

·      Instead of putting staff time and energy into yet another generic customer satisfaction survey, go out into the field and watch your users interacting with your program. What do you see? What’s standing out as a strength you can publicize? What's great about this approach is that you can start with a handful of service recipients.  

·      Examine a segment of your logic model (if you don’t have a logic model, make a hypothesis about a short-term, observable result you expect to see from your work). Then go out into the field and see what the data says about it. You can collect data via observations, interviews or even a single-question survey (see the short analysis sketch after this list).

·      Think about how your board can advance your data cause. Do you have a board member who has expertise in data collection? If not, can you recruit one? Board members can serve different purposes. Not all of them have to be champion fundraisers; they can also bring in-kind resources in a field you need to invest in – like data collection and analysis.

·      Talk to a funder. No, not all your funders are interested in helping you produce stronger, more program-specific data. But there’s a good chance one is. You might have a particularly good relationship with one; they might have a special focus on knowledge management and best practices; they might be looking to build capacity or to support a new project of yours.
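
For the single-question survey suggested above, even a small dataset can be summarized honestly. Here’s a minimal Python sketch; the survey question, the file name (survey_responses.csv) and the column name (used_skill) are hypothetical placeholders, and the confidence interval is a rough normal approximation.

```python
# A minimal sketch of analyzing a single-question (yes/no) survey.
# The file name and column name are hypothetical placeholders.
import csv
import math

with open("survey_responses.csv", newline="") as f:
    responses = [row["used_skill"].strip().lower() for row in csv.DictReader(f)]

n = len(responses)
if n == 0:
    raise SystemExit("No responses found.")

yes = sum(1 for r in responses if r == "yes")
p = yes / n

# A rough 95% confidence interval (normal approximation) keeps you honest
# about how much a small sample can actually support.
margin = 1.96 * math.sqrt(p * (1 - p) / n)
low, high = max(0.0, p - margin), min(1.0, p + margin)
print(f"{yes}/{n} respondents ({p:.0%}) said yes; 95% CI roughly {low:.0%} to {high:.0%}")
```

The point is less the statistics than the discipline: a handful of observations, honestly bounded, is a more credible basis for a visualization than a glossy infographic built on someone else’s data.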

Nonprofit staff, I'd love to hear your thoughts on data visualizations, infographics and other ways you tell the world about your work. How do you use data to create a narrative and make an argument? What kind of support do you need to do this work more effectively and efficiently?

As always, I can be reached at shefali@tricycleusa.com      

 

 

Underwhelmed by your consultants? Maybe it's you. And them.

Are your consulting engagements accountability exercises – or actionable and empowering?

April and May have been full of proposal writing which, no matter how many times I’ve done it, is a time-consuming process. On the plus side, it’s great to puzzle over an RFP and figure out how I might bring the Tricycle approach to a project while keeping the client’s needs front and center. On the minus side, proposal writing frequently goes nowhere.

To limit the wasted effort, I apply a litmus test. For every proposal I put out, there are a dozen RFPs I don’t respond to. What differentiates the two? The proposals I write are for organizations that understand the ideal consultant is an expert who brings to the table a unique, skilled perspective on how the project should be conducted. These organizations loosely outline what they need, but then ask for suggestions on how to go about the work. For example: “How might we conduct an experiment in pricing our services?” “How might we create an evidence base for our work, considering the following about our organization?” “How can we build staff capacity to implement our program with more quality and fidelity, given our resources and work to date?”

The RFPs I don’t respond to have everything scoped out – not just what the deliverables will be, but every element that each deliverable will contain. In this situation, the consultant is providing a highly delimited service. It’s hard to tell how s/he can add value when the RFP is asking for – essentially – evidence the consultant has done this very same work for another organization; the client wants someone to replicate previous work for them.

This (over-)scoping is unfortunate for both consultants and the organizations that hire them. It’s based on the flawed assumption that work done for Organization A can be easily and appropriately applied to Organizations B, C and D. Additionally, by saying “we need someone who will do exactly this for us,” organizations close themselves off to fresh ideas, perspectives and solutions. They’re asking for something packaged – and thus shouldn’t be surprised when it tastes stale.

Consultants, in order to make a living, don’t always push the client to think about the best approach to take given a variety of factors (budget; staff capacity; how well developed the program/product is; what data exists on it and the clientele; what the pressing organizational needs are; what other initiatives are currently under way; and what similar work has been done). Instead, they produce what’s asked for.

Sure, consultants should push back. But more importantly, organizations’ leadership should be asking themselves: why am I hiring a consultant? Is it to check an accountability box (we’ve done a strategic plan; we’ve produced an evaluation of our work)? Or is it to actually address and resolve a problem the organization is grappling with (we need to rethink our approach because numerous competitors have emerged and our growth has slowed; we need to understand the connection between implementation and outcomes so we can produce more consistent, high-quality work across sites)?

When I hired an architect to help my NYC apartment better accommodate my growing family, I told them what I liked about my home (lots of light, clearly delimited public and private spaces), and what I didn’t like (too many walls and doors, inefficient use of space). Then I asked them what else they needed to know - what was their process and how could I help them with it? What I didn’t say was: here is where I want you to create a home office; here is how you must create more kitchen space; here is where you will knock down walls and put the washer/dryer. Now please tell me exactly how you’ll do this, what it will cost and what’s your timeline. And prove you’ve done this very same job before.

Not surprisingly, what they envisioned and executed was far superior to anything I could have scoped on my own – though not without some stress and teeth-gnashing. They had experience and expertise renovating dozens of residential settings. My experience and expertise consisted of living in my house for 8 years. Just because I’d spent more time in my house than the architects didn’t mean I knew exactly what the key problems were and the best way to solve them. Knowing this, I gave them the freedom to propose several solutions, picked one and  worked with them around the specifics – and hence got great results. 

Organizations, in short, need to trust the consultants they’re hiring to unearth key problems and propose solutions. By giving consultants more freedom – including what the deliverables will be and who they’ll work with internally to produce them - you’ll get opportunities to hear some really interesting ideas that aren’t the usual – “talk to the leadership team, get the board on board, look at secondary data, talk to a couple of clients, report findings and suggest next steps.”

If doing that kind of consultation worked, I wouldn’t be so hard on it. But the opprobrium consultants fall under - the feeling that hiring them is a waste of time, but something organizations must do to resolve their problems - demonstrates that something new is needed from both sides.

What might this new type of engagement look like? I recently worked with a client, an education organization, to help them understand how they should follow up on staff capacity-building efforts. How did staff perform post-training? What could leadership do to ensure the training was built on, rather than serving as a “one and done” exercise? Based on this scope, I conducted program focused observations, did structured interviews, looked at organizational data and made recommendations. Then I worked with the organization to incorporate some of these recommendations into planning for the next fiscal year, making sure they were properly resourced (e.g., money, time, staff). Here, data was being used for action.

At the end of the engagement, the client told me this work was essential for his organization, but hard to prioritize; he was grateful I had made it the focus of the work.

That gratitude was telling, because this wasn’t the project scope as envisioned by the client. The organization wanted me to administer a staff satisfaction survey focused on whether staff who received the training now felt better equipped to deliver the program. I didn’t think this was appropriate. They were already measuring satisfaction through a separate data collection process; they needed to get beyond feelings about performance and start examining actual performance. Why did the client want to do a survey? Because it seemed like a tidy way to show they had followed up on training. They were checking a box; they didn’t “want to overthink it.”

The client, fortunately, was open to discussion and feedback about next steps, and we talked several times about scope and how to develop a project that would produce actionable, timely and budget-conscious results. I’ve not always had this experience with clients. Sometimes, when I suggest alternate approaches and point out what – based on the data I’ve gathered – the organization should be focusing on, the client is simply not willing to reconsider their approach. As always, the relationship developed with leadership is key.

Organizations, I’d love to hear how and why you’ve engaged with consultants in the past, what worked and didn’t. Consultants, I’d love to hear how you handle issues of scoping and deliverables. As always, you can reach me at Shefali@tricycleusa.com         

 

What Would Google Do?

 

Why should nonprofits engage in user experience research? Because they can’t assume that once they build a program, clients will start knocking on the door.

Over the last two decades – thanks to my husband’s working for tech companies - I’ve had a chance to see all kinds of jobs (like DevOps, shorthand for collaboration between software development and IT operations) emerge in the tech industry and then filter more widely into the private sector.

Now it seems nonprofits are getting in on the act. A recent article in Fast Company touting the nonprofit jobs of the future focused on roles – Culture Officer, Data Scientist, UX Researcher – that are commonplace and widely recruited for in the tech industry.

The piece is talking more about what should be than about what currently is (you might be saying: “Nonprofit data scientist? My organization has a hard time collecting attendance!”). But that doesn’t mean these jobs are unnecessary. In particular, I’m very interested these days in bringing UX-style research to nonprofits and social impact organizations.

UX research as we know it today began in earnest in the tech world, as companies began creating not variations on familiar products, but new products that asked consumers to interact with them in unfamiliar ways (think Apple). UX researchers examine how someone interacts with a service or product – from first encountering it to integrating it into their day. The goal is to make the user experience not just worry-free, but engaging and seamless.

Tech companies, like any for-profit company, have generally straightforward outcomes centered on growth: total sales, net profit, total daily users, click-through rates, etc. While these are obviously the numbers that indicate effectiveness, tech companies know their outcomes are in large part a result of everything they’re doing internally – a result of how well their product is designed and implemented. They know that if they rely only on outcomes they won’t have enough information to be responsive to customers – to change course, scale and, of course, innovate.

That’s why user experience research has been so important for tech companies. As they’re creating products and services for which there may be few models for comparison, companies need to understand how existing users are reacting. This helps them make informed decisions around not just retaining these customers, but acquiring new ones. How can AirBnB, which asks people to engage in the unfamiliar and potentially uncomfortable act of letting strangers stay in their homes – often while they’re still living there – make that service work across countries and cultures? How does Etsy make it as tempting as possible to browse (and buy from) a website that’s offering a bewildering multitude of niche products?

User experience research – done through ethnographic and other social science mixed methods – provides targeted data and insight to inform product development, to create goods and services that customers are excited to engage with. User experience is not focused on understanding what users say they want, what they profess to like. The focus instead is on understanding what users do when interacting with a product – and on making their experience so satisfactory that they become regular customers and recommend the service to others.

When I first discovered UX research, its simultaneous newness and familiarity made it feel like finding a long lost cousin. As applied to nonprofits it’s a subset of program implementation research, focused in a very substantive way on understanding and being responsive to constituents. Rather than viewing people receiving services as an undifferentiated block of “need” that needs to be “met”, nonprofits engaged in UX work gather feedback from constituents and prototype and test out solutions with them. They take into consideration factors like culture, geography, demographics, the existing marketplace - and how they all come together to shape behavior. They think carefully about not just why the program exists and what they hope to achieve, but how they will provide services to their target users.

Now, you might be thinking to yourself – NO WAY! UX research is about consumerism, fueling internet addictions, and making it impossible for anyone but rich techies to live in San Francisco.  It has nothing to do with my nonprofit work, which is about helping people on the thinnest of margins.

But – just like tech companies – nonprofits need to make their interventions and programs not just effective, but also attractive to potential clients; they need to make sure they’re designed and disseminated in a way that retains existing customers and brings in new ones. Nonprofits – just like for profits - are selling their services. They’re competing in a marketplace of organizations who are going after the same clients (customers) and sources of philanthropy (revenue). A thoughtfully designed program or intervention, a service that adapts to its clients’ behavior, is more likely to maintain and grow users than one that is built without a strong understanding of how current and potential clients interact with it – and why they may not be willing to use the service.   

Nonprofits may be addressing a clear need – but they cannot assume that simply building a program will bring in clients. The international development field is rife with examples of programs and services that were designed with the outcome in mind, but gave little thought to the end user until it was too late.

Partly because of these and other well documented failures, and partly because they have relatively large budgets compared to many US based nonprofits, international development organizations have started thinking more about users, about how and why products and services work - and don’t. As the research conversation moves from “only outcomes matter” to “hey, understanding implementation is also key,”  I’m interested to see whether more US nonprofits start to explore UX research, and whether philanthropy partners with them to support the work.

If you are currently thinking about how to collect data that goes beyond basic output-to-outcome metrics, try emulating some of the most successful tech companies and conduct some UX work of your own, remembering to:

·       Adopt a “test and learn” approach.

·       Start small.

·       Look for people with the right skills to help you.

 

 

Back to School: Thoughts and Declarations, Part 2

As promised, here’s the second of two posts in which I share the hopes and goals of a few members of my so very talented education network.

What am I thinking about as the school year gets under way? First, I’m grateful the high holy days are in early October, so my kids will be in school for 3.5 uninterrupted weeks. That almost never happens!

More seriously, I hope for what my contributors talk about. That children – not just mine but everyone’s – get a chance to be part of a community of learners where they learn to respect each other and appreciate their differences. That they have access to arts and other enrichment activities; that they have multiple opportunities to be creative – to stretch their bodies and minds and be challenged in areas beyond Math and ELA. That they’re free from bullying and their schools have strong SEL programs and other student supports. That their teachers are open-minded and caring, that they treat each day as a fresh start and each student as having promise and boundless potential.

These are, of course, hopes, goals and declarations for now. We’re, most definitely, not there yet. But every year gives us a chance to get a little closer; I’ll once again be doing my part to bring these wishes closer to reality for all. 


Schools and students inspire and motivate me every day. I’m ecstatic to continue to work towards ensuring educational equity.
— Chris Lopez. Second grade teacher. Tulsa Public Schools.

As a fifth grade literacy and social science teacher of African-American students on Chicago’s south side, culturally relevant learning and themes of social justice are integral to my reading and writing instruction. For my students to authentically engage with multiple high quality texts and digital sources - and meaningfully write and respectfully debate opinion pieces about sensitive subject matter such as racism and violence in the city - it’s critical that we devote the first few weeks together to establishing a community of learners. This means we together create a safe environment where taking risks, embracing confusion, learning from mistakes, and pushing conceptual boundaries can take place. And I believe this process all begins with the teacher and their willingness to make their pedagogy transparent.  By opening up and sharing their methods and practices with their students, teachers also model the type of vulnerability and courage necessary for intellectual growth.

Shawn Reddy. 5th grade literacy and social science teacher. Chicago Public Schools.


For my 6th grade year I have two goals. Try not to get bullied this year. And be able to improve on my worst subject, ELA. If I could accomplish these goals my 6th grade could be better. This school has some cruel people but I’ll make it.
— Ava A. 6th grade. Brooklyn, New York.

For me, the beginning of the school year is about laying a foundation. It’s about helping my students understand that singing is a craft, with many facets, that takes time to learn. You won’t master it in a semester. I use these early days to help students think through why they’re taking lessons. I always ask kids - especially older kids, high schoolers - why they sing. Why do you want to learn to sing? Why not just sing in your room? I’m trying to get them to think about how - through singing, through your body - you can express deep feelings you can’t get at any other way. I don’t tell them this; they need to figure it out for themselves. It takes a while, at least a few lessons.

I also spend the beginning of the year assessing learning styles. Every student is different, and to teach them I need to figure out what works for them emotionally. Some kids you can be tough with; others you have to be gentle with or they’ll collapse in a heap.

All my students are working towards a solo performance - in front of a grand piano and a big audience. It’s just you up there, no microphone, having to command everyone’s attention with your presence and voice. It’s a good life lesson and preparation for the “real world”, even if you don’t pursue singing as a career.

Lara Nie. Opera Singer, Mezzo-Soprano, Voice Teacher. New York, New York.


This year I’m working for the Oakland Promise - a citywide initiative in Oakland that aims to triple the number of college graduates from Oakland’s schools…I’ve never worked on a local policy initiative like this before so I’m hoping for myself what I hope for all students this year: many opportunities for personal growth and building new skills.
— Michelle McNamara. LEE Public Policy Fellow for East Bay College Fund. Oakland, California.

Ahhh, the beginning of the school year: time to buy new shoes, get supplies… and gear up for the endless barrage of fundraisers. At the public school where my older son is starting 2nd grade, there’s the fall festival, the winter festival, the spring festival, not to mention the bake sales, the plant sales, the cookie sales, the auction, the parents’ bar night, and of course, the big ask: the annual fund drive, in which our PTA asks families to pony up hundreds of dollars per student to help fund school enrichment programs.

Ask most parents at my son’s school what they like about it, and you’ll hear them tick off a list of extra-classroom enrichments paid for via these fundraisers: music and arts classes, chess classes, partnerships with cultural organizations, extra recess help. In other words, many things that distinguish the school aren’t really part of the public school system – they’re add-ons the parents have purchased. This opaque, private fundraising system creates and deepens the chasm between the so-called “good” schools and the so-called struggling schools in New York City. It’s a vicious cycle, closely entwined with the real estate cycle.

I’ve seen a lot of good, thought-provoking press about the lack of racial integration that characterizes New York City public schools. But I’d like to hear more about how private money helps fuel these inequalities in an already segregated school system – and hear more about what we can, and should, do about it. Put limits on outside money? Redistribute it? Help struggling schools fundraise? I don’t have answers yet. Maybe another year of bake sales will give me some.

Carlyn Kolker. Freelance writer. Brooklyn, New York.


Our school program is based on a partnership model. We have a seven-year learning sequence that integrates dance with rigorous journal writing and the PreK-5 curricula. This year, as always, I aspire to work with visionary principals of Title I schools. They’re critical to the program’s long-term success with diverse students, teachers, families and communities.

Dance arts don't generally seem to thrive in NYC elementary public education without a lot of creative problem solving. Our biggest challenge is the year-to-year fundraising we need to do to support our programs. We recently partnered with Johns Hopkins University School of Education to evaluate our program, which includes a comparison with four schools who will participate in our program post-study. I hope this evaluation finds a quantitative impact on students' overall learning and achievement, as another study did ten years ago. Such measures seem critical to sustaining our vision of helping to improve NYC's public education by providing children with access to interdisciplinary dance arts learning connected with rigorous reflection.

Mark DeGarmo, PhD. Founder, Executive & Artistic Director, Mark DeGarmo Dance. New York, NY.

Back To School: Thoughts and Declarations, Part I

It’s been more than 12 years since I completed my PhD, thus ending – or so I believed at the time – my connection to the September-June academic calendar. “I’m a working woman now,” I thought. “Finally, my life will be governed by the January-December calendar!” But shortly after I wrapped up my dissertation I moved into PreK-12 education. Then I had two kids, both of whom are now in school. So that September-June rhythm continues to rule my life, as it does for so many of us who are no longer in school ourselves. For me, for many years to come, Labor Day weekend will have more resonance and bittersweetness – will bring more zeal to start afresh and make resolutions – than New Year’s Eve and Day.

To celebrate the new year, which is just getting under way in New York City, I’ve asked some of the many wonderful, talented students and educators (teachers, principals, and parents) I’m privileged to know to share with me what they’re thinking about as they go back to school.  I received such incredibly thoughtful responses, even as everyone was ramping up, that I’ve had to split this blog into two posts. So there’s more goodness to come. 


I’ve taught kindergarten for 22 years. What I love to witness is the change that takes place as children begin to understand they are part of a larger community - larger than just themselves and their family. Being a facilitator in building a caring community of learners is what’s kept me going, year after year, for so many years. Now, more than ever, I feel a great responsibility to create a compassionate environment where caring and respecting one another is as vital as learning to read and write. I feel it’s my responsibility to teach children to care for one another and to care for our planet. I want to impart to my students something I once read from the Dalai Lama, “Our prime purpose in this life is to help others. And if you can’t help them, at least don’t hurt them.” If my students learn only this in kindergarten, I will feel that I have succeeded.

Victoria Misrock Stein.  Kindergarten Head Teacher, Berkeley Carroll School. Brooklyn, New York.


Before the students arrive for day one, I meet with my staff as a group and ask each of them "What is a word to describe your feelings about the upcoming year?"  In this impulse I seek to gain perspective. But like so many things I encounter as a school leader, this word is only a snapshot in time - to be heard and catalogued, but then filed away for future reference. When I did this exercise with my teachers this year I finished by saying to them - "This is how you’re feeling now, but it would be unfair to assume this will be your perspective for the entire school year. As we look to the first day of school let us remember that how our students are feeling one day does not define their entire outlook." It is our role as educators to treat each day anew with our students. Each day is a new challenge and opportunity. Each day we have the chance to make it the best one for these children we serve. 

Kelly McGuire. Principal, Lower Manhattan Community Middle School. New York, New York.


How do I feel about school starting? I feel:

Annoyed: Because the first time my little brother gets on the bus with me I might have to sit next to him.
Confident: That I’m going to learn a lot of things
Sad: Because I can’t be with my boppy (my blanket) all day anymore
Excited: That I get to meet new people in my class.

Saanika Slotwiner. Second grader. Brooklyn, New York.


Academically I hope my daughter’s school emphasizes reading and writing more. It’s her weak subject. I’m hoping her teacher will motivate her to do better with creative lessons and homework. I will also be doing my part to help her along. I would also love if her school would include music and a language (Spanish!).

Friends are important to 6th graders. My daughter is shy and I’m hoping a few of her friends from lower campus will be attending upper campus. We’ve been talking about peer pressure and bullying and coping techniques so that she can survive it and won’t find herself on the wrong path. I have zero tolerance for bullying and her school has policies in place which are super important to me. 

Lastly, daily after school activities are apparently starting this year and are welcomed. My daughter didn’t have them last year. I think having after school options will help my daughter grow socially, be independent, improve her self-confidence and learn how to work in a group setting.

Karen Azikiwe. Nanny. Brooklyn, New York. 


The start of a new school year always begins with so much hope and promise for me as an educator.  For this year, my hope is to bring out the innate curiosity from within my fifth graders.  My school is fortunate enough to have a Chromebook for each upper-grade student; my goal is to teach and guide students on how to use this tool adeptly so that they can maneuver their own natural curiosity through the vast Internet.  Helping them become digital sleuths will hopefully not only engage their curiosity but help it flourish as well.

Patricia Rendon Cardenas. 5th-grade teacher, Dr. Martin Luther King, Jr. Elementary. Santa Ana, California. 


These three years of high school have been a roller coaster, from staying up all night working on papers to meeting new people of all nationalities. Senior year is the year you're supposed to enjoy the most and also learn from. To ensure a motivated and determined mindset for the rest of the school year I’ve been taking classes at Queens College. Getting accepted into college and starting another 4 years of my life in a different academic world has always been an accomplishment I’ve wanted to achieve. Now it's right around the corner. I hope all the colleges and scholarship programs I apply to are amazed by my grades and academic work. I pray this school year will be filled with amazing and spectacular events for everyone!

Diana Jadunandan. Senior, Townsend Harris High School. Queens, New York.

 

On Pinterest Fails and Program Implementation

For a replicable cake pop – or program – ask How?
Photo courtesy of @pinterestfail

 

This SSIR piece from the leadership at Educate!, a nonprofit based in the US and Uganda, gives some very practical, operational advice for organizations ready to scale an existing model. One part of the article really stood out – probably because of my love of cake. “Imagine that instead of a social impact organization, you are a baker with a groundbreaking new recipe and one week to bake six wedding cakes, each with six differently flavored tiers. You have a strategic decision to make: bake six bottom tiers the first day and add a tier to each cake each day, or bake a full cake each day.”

In their work, Educate!'s leadership learned that "building a full-size unit immediately enables you to work out the kinks up-front. Once you clear that hurdle, you have a model that you know you can implement at scale. All that is left is to replicate it." In other words, it's preferable to bake a full cake each day, so you've got time to tweak as needed. It doesn't make sense to figure out the day before the wedding if the recipe is a good one. 

Even for organizations not thinking about scale, but focused on the earlier stages of developing and understanding their current work, recipe metaphors are useful. Because whether you’re new or established, whether you understand your impact or are not quite there yet, what’s key to success is understanding the how – aka understanding implementation.

Think of developing a program model as perfecting a cake recipe. A big part of the project is specifying inputs/ingredients as well as the step-by-step work needed to achieve successful outcomes - whether they’re constituents whose needs are addressed or delicious cake. The goal of model/recipe development is to create detailed instructions that produce strong results not just once – but over and over; and that can be done not only by the program/recipe developer, but by anyone with knowledge of the field/baking.        

But if you’ve seen a Pinterest fail – or unsuccessfully tried to replicate mom’s cooking – you know there’s more to a good recipe than the what; the how is crucial. How should items be measured (cup or scale, sifted or not); how can ingredients and steps be replaced, omitted or embellished; how do changes in oven temperature affect cooking time; what brand and how fresh must the ingredients be; how much time and skill does the recipe really require? Without this kind of information it’s hard to replicate that promising recipe once, let alone time and time again. [For a fantastic example of someone who understands the importance of how, read Gabrielle Hamilton’s Prune.]

So it goes with program development. The what of the program model gets the work started, but the how of day-to-day field operations shapes what the program is and the results achieved. Without examining the how of implementation – including adaptations and quality as well as fidelity –  and its relationship to the what of the program model, you’ll never develop something that can be consistently and successfully implemented – let alone scaled. 

In case all this food talk hasn’t made you sufficiently hungry, I’ll share the story of the Vienna Beef Company (you can read more about it here or listen to Act 14 of this episode of This American Life). When the company moved to a brand new, state-of-the-art factory in 1972, they noticed a disappointing change in the flavor of their smoked sausages, one they couldn’t explain. The ingredients were the same; they were following the same recipe as before – but something was off. It wasn’t until workers began to fondly reminisce about a retired coworker that they got to the bottom of the flavor-change conundrum. When the factory moved, Irving – the coworker – retired. His job had been to transport the unsmoked sausages from refrigerators to the smoking room. Because the old factory had grown haphazardly, his path wasn’t logically laid out. The trip took Irving 30 minutes and led him through many of the warmer rooms in the factory – allowing the cold sausages to warm up before they entered the smoker. Those 30 minutes – how the sausages got to the smoker – had been a vital part of the recipe all along, and the company didn’t even know.

How to improve grant reporting? Dig into the black box.

Let's end the pro-forma accountability exercises and create something more meaningful 

 

This Center for Effective Philanthropy blog on how to make reporting requirements more useful for both grantmakers and grantees is a year old. But, as with all good commentary on the nonprofit sector, it's still completely relevant. 

If you’ve worked for a nonprofit you’ve been there: dragging yourself over the finish line to complete a funding report, trying to turn a spreadsheet with incomplete, inconclusive data into a cohesive narrative of your multiple successes. If you’re a grantmaker you no doubt have a stack of reports that you don’t have time to fully digest; the ones you do get to are as clear as mud. Blog author Jessica Bearman highlights the somewhat horrifying statistics that only “about half” of funding reports are used for internal decision-making, and only about one-quarter are shared with the field.

How can we make reporting a useful exercise instead of a check-the-box requirement that doesn’t benefit anyone? Bearman provides some guidelines. They include grantmakers being very clear about what they want to learn; figuring out the best format in which to gain this knowledge; and determining ahead of time how they will follow up with grantees.

Bearman is agnostic regarding what should be reported. I am not. Many grantmakers currently require annual reports that focus on program effectiveness and expense accounting. These approaches are not very useful. A new approach is needed.

Human service (e.g., education and healthcare) nonprofits all rely on the interactions between their staff and constituents to achieve results. A given organization has probably manualized its model; possibly staff are trained to deliver this model with quality and consistency. Even if these are in place, implementation involves interactions between dozens, hundreds – sometimes thousands – of people, all making daily, individual choices. It is these people – staff, volunteers and the constituents who are the focus of services – who will be the driving force behind any outcomes and impact produced by a particular program or intervention. Yet implementation is not usually well understood by either the grantmaker or the organization doing the work. What’s done to achieve results exists in the proverbial black box. Current reporting requirements – largely focused on accountability – leave this black box unopened and programmatic work underspecified.

If nonprofits and their multiple stakeholders are to discover and replicate successes, make a compelling argument for why and how their work has positive outcomes and impact, understand adaptations and obstacles - even define exactly what their program or intervention is - the black box of implementation needs to be opened and explored. Organizations with successful, evidence-based programs, such as Pathways to Housing and Nurse-Family Partnership, focus as much on careful implementation as evaluation of outcomes and impact.

Grantmakers can help more organizations open the black box through reports that facilitate grantees’ learning about their work. Questions that prompt organizations to dig into implementation will help both grantees and grantmakers. By deepening understanding of how work takes place in practice – rather than as outlined in a logic model - programs and interventions can be strengthened. With greater understanding of implementation, the connection between a particular program and outcomes, as well as the cost of running the program and what’s needed to build sufficient organizational capacity, can also become less opaque.  

Nonprofits don’t need to start data collection efforts from scratch. Surveys have shown the majority of NPOs collect data around staff performance, customer satisfaction, and/or program activities. Quality data collection and data-informed improvement efforts are, however, much less prevalent. By asking targeted questions - and supporting organizations to answer them - grantmakers can transform data usage and grant reporting from compliance exercises into opportunities for learning and growth.  

Here are some questions I ask when working with organizations around implementation and opening the black box.

  • What do you know about your constituents – who they are, what other services they receive, how they participate in your program, how they experience it? 
  • How do you and your staff understand how work takes place “in the field”? How do you know if/how goals are being met? With what quality? How do you use this information to improve your work?
  • What variations have you noticed in how your model operates within and across sites? Why do they occur?
  • Where is your implementation strongest? Where are the greatest challenges?
  • What do you not know about your work?
  • What methods do you use to gather information from program staff and constituents? Who gathers, analyzes and reports this data and in what form? What systems do you have in place to look at the data you collect? Who participates in them? How do you act on what you learn?

What’s your experience been in this field? Have you attempted to open the black box? Nonprofit staff, are you comfortable talking to grantmakers about implementation? Have you gotten support to do this work? Grantmakers, have you looked with your grantees at their implementation? How do you build trust with grantees to explore their work more closely? We’re eager to get a conversation started!

 

 

"Can you give us a little help with this?" : Balancing school-CBO needs

A back to school post, wrapping up my series on CBO-school partnerships. I'm putting this up much later than I planned to. Work deadlines are hard on blogging. As are vacations...

A while ago I posted about some of the issues the NYC Community Schools initiative will face as the project gets off the ground, noting there are three school-based issues that, particularly in the early stages, community school stakeholders need to keep in mind as they’re planning.

1.    Community Based Organizations (CBOs) are trying to address complicated issues, on an individual child as well as group basis. They’re doing this while working part-time in an institution they don’t manage, that has its own set of priorities - which don’t always match those of its partner(s);
2.    Implementation and management of school-CBO partnerships rest very heavily on one individual – a Resource Coordinator – who is expected to have a very wide and deep set of skills and experience.
3.    Schools want their CBOs to tackle a range of school needs, needs that are not always part of the original partnership agreements.

This is the last of three posts in which I explore how to address these issues and develop productive school-CBO relations from the beginning of the partnership. I’ll be focusing on issue three in this post. Here’s the discussion on working with schools as an “external provider”.  And here’s the one for Resource Coordinators.  

[*Disclaimers and clarifications can be found at the end of this post.*]  

Issue 3: Inevitably, there will be tension between schools and CBOs in terms of additional work the CBO can address.

Recommendation: Concentrate on getting your model off the ground, but provide targeted support in areas important to the school and complementary to your work.

An organization and school may sign an MOU that clearly states the focus will be on attendance; but, as the organization spends time in the school, staff will ask about student behavior, classroom culture and family engagement – issues related to attendance, but not directly addressed by the organization’s intervention. The questions will range from the more basic: “Can you tell me how I’m doing transitioning kids from classroom to lunch room?” to the more complicated: “Can you sit down with me when I talk to the family about their child’s performance?”

What’s a CBO to do? There are opposing factors at play. CBOs come into schools to help students and teachers. But they also have goals to meet and accountability metrics of their own (i.e., did they accomplish what they told their funders they would accomplish). In the long term, the best way for them to help the school is to get their model smoothly running. In the short term, CBOs, especially ones putting in place more complicated interventions, ask a lot from schools without providing immediate, tangible results (i.e., they’re taking more than they give). So it would behoove them to provide support during this ramp-up. Finally, while CBOs may have staff expertise in a number of areas, it might not be distributed evenly. Managers might have wide knowledge of school issues, while the on-the-ground staff might only be able to implement specific interventions. This is especially the case when field staff are recent college graduates (e.g., Americorps) or volunteers.

Keeping all this in mind, there are a few ways CBOs can approach requests from school staff to help them with issues beyond the immediate mandate:

o   Decline to provide feedback or assistance in these areas, because CBOs have been brought in to help with attendance and need to focus on that;

o   Get very involved in one or more of these areas, as they’re the school’s top priorities and CBOs should support the school;

o   Think about ways that behavior, culture and engagement can impact attendance, and provide feedback and/or other support that will help reinforce the ongoing attendance intervention.

In my experience, the third option is the best one. The issues schools grapple with are highly interdependent – attendance impacts classroom culture and student behavior and vice versa. A well thought out approach to classroom culture or behavior can bolster an organization’s attendance intervention, or at the very least build significant good will with the school while the organization deploys and adjusts its intervention. Approach three is also the most proactive, giving the organization room to figure out the best way to provide support (according to its capacity), rather than simply reacting to school requests for assistance.

Ideally, a CBO’s feedback and support should empower full-time school staff to do the work (i.e., the CBO shouldn’t take on additional work it will then be primarily responsible for). It might consist of: providing support to (school, non-profit) staff engaged in complementary work; sitting in on classes and giving feedback; helping staff evaluate the results of their work; helping them do research (e.g., into a program or source of funds); and supporting leadership.

This middle ground is all part of understanding partners’ priorities and trying to align with them. Saying “yes, let me take a look at this and get back to you with some data and thoughts” is taking concerns seriously without derailing the organization’s core work. In a flexible model like the one community schools have, this should be feasible to accomplish.

And one final recommendation…

Set expectations and be honest in your communications with the school

Every school wants an outside provider – particularly one focused on mental health, behavior, and/or social-emotional learning – to “fix” the children who command a disproportionate share of staff attention. But this goal is very, very difficult to achieve in a sustained way. Yes, there are cases where a new pair of glasses or mental health referral or reading intervention dramatically changes a child’s behavior and academic performance in school. However, most of the time there are complex, long-standing reasons that children act out and perform poorly in school. Gaining the family’s trust and participation, determining what course of action is best for the child and family, and maintaining this course over time can be extremely challenging. For every gain made, there will be setbacks (also known as one step forward, two steps back). School staff can then become frustrated when week after week, month after month, children make little or very slow progress.

CBOs, in their excitement to start work in a school (or to get into a school), must be careful not to overpromise what they can deliver. It can derail an entire organization’s work if it declares or implies its intervention can improve the behavior and academic performance of individual children the school has struggled with for years.

On the surface, these are not complicated recommendations. But in practice they’re pretty tough to achieve. They require discipline and focus – from CBOs, schools and those supporting them – to carry out and maintain over time. In today’s highly charged, resource-pinched, ever-shifting educational environment, this can be a lot to ask for. But I firmly believe they’re essential to follow if a CBO-school partnership is to achieve success.

 

* Disclaimers and clarifications *

·         I’m using the term CBO to encompass non-profits as well as community-based organizations deploying a particular program or intervention in a school.

·         I’m talking about programs that provide student supports (e.g., counseling, mental and general health, enrichment, attendance and behavior, family outreach) more than a specific academic intervention

·         This is a less evidence based post than usual. I’m working from my many years of experience and have numerous examples, but am not going into all of them  for the sake of brevity. If anyone is curious about what I’m basing the recommendations on, email me and we can get into the details.

·         Chalkbeat has covered the gap between DOE rhetoric around what must happen in schools and the rollout that has actually happened so far.

Squeezing 3 jobs in 1 role: Community School Resource Coordinators

Back in February, I posted about some of the issues the NYC Community Schools initiative will face as the project gets off the ground, noting there are three school-based issues that, particularly in the early stages, community school stakeholders need to keep in mind as they’re planning.

1.      Community Based Organizations (CBOs) are trying to address complicated issues, on an individual child as well as group basis. They’re doing this while working part-time in an institution they don’t manage, that has its own set of priorities - which don’t always match those of its partner(s);

2.      Implementation and management of school-CBO partnerships rests very heavily on one individual – a Resource Coordinator – who is expected to have a very wide and deep set of skills and experience.

3.      Schools want their CBOs to tackle a variety of needs, ones that are not always part of the original partnership agreements.

This is the second of three posts in which I explore how to address these issues and develop productive school-CBO relations from the beginning of the partnership. I’ll be focusing on issue 2 – Resource Coordinators – in this post. Here’s the discussion of issue 1.

[*Disclaimers and clarifications can be found at the end of this post.*]

Issue 2: Implementation and management of school-CBO partnerships rests very heavily on one individual – a Resource Coordinator – who is expected to have a very wide and deep set of skills and experience.

Recommendation: Scale back the job description significantly to focus on the on-site work; give particular tasks (e.g., professional development and data collection) to those who specialize in these areas.

[NOTE: Each CBO entering into a community school partnership is given grant funds to hire a full-time Resource Coordinator who is based in the partner school. Each school has its own Resource Coordinator (i.e., if a CBO works in three schools it hires three Resource Coordinators.)]

Here, here and here are typical job descriptions for Resource Coordinators; all were posted by CBOs in the last few months in response to their partnering with schools under the NYC Community Schools Initiative. As I read them I marveled at the set of skills and experience asked for. The Resource Coordinator needs to be able to: conduct needs assessments; build relationships with school staff, parents, children and CBOs; develop and implement strategies to address attendance and drop-out; identify research/evidence-based interventions and practices; recruit students to participate in a three-tiered intervention program; coordinate services; develop and implement family engagement strategies; build a variety of school-based processes and systems; deliver professional development to CBO and school staff; develop and maintain summer services when school isn’t in session. Oh, and keep records and report out data, including outcomes.

All for someone with “at least five years of experience” and an annual salary between $35,000 and $55,000.

These job descriptions contain the work of at least three full-time staff; there are entire organizations dedicated to doing one of the activities listed here. If someone goes into a school with these parameters, s/he’ll most likely try to do a little of everything, with the result that nothing will take hold.   

Scaling back the job description so it prioritizes on-site work is a clear first step to take. Resource Coordinators need to focus on: i) building relationships with families, children and school staff; ii) identifying who is at-risk and what services they need; iii) linking them to available services that address these needs; iv) tracking their own work. And even in these areas s/he’ll need support, for example to build the processes and systems that will sustain the work over time.

The Resource Coordinator is so important because s/he’s full-time and based in the school – and thus is best equipped to serve as the conduit between the school community and the CBOs delivering services to them. Developing interventions and strategies, building relationships with outside providers and government agencies, providing professional development, and collecting data beyond the scope of tracking immediate work should be given to those skilled and experienced in those areas.

CBOs should be very careful about the data burden they put on the Resource Coordinator. The NYC Community Schools Initiative is working under a very tight timeline but within a very loose framework (I wouldn’t call it a model). There’s going to be pressure on participating CBOs to collect data to explain both what they’re doing and how it’s delivering positive outcomes. CBOs, perpetually strapped for resources, will in turn put pressure on the Resource Coordinator – the full-time, on-site staff member - to collect this data.

Yes, people exist who are great at collecting and analyzing data as well as building relations and linking children to appropriate services. But there aren’t many of them. Because each of these activities requires specialized skills and training, people generally specialize in one or the other. CBOs, therefore, should have someone on-site who can support the data collection, leaving the Resource Coordinator to focus on services. This person, additionally, could help the Resource Coordinator with activities that straddle the data/service provision divide, such as needs assessments, locating evidence-based interventions, and developing methods to identify at-risk students.
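To make that last item concrete: in practice, at-risk identification often starts with simple early-warning indicators – attendance, course performance, behavior – checked against thresholds. Below is a minimal sketch of what such a flag might look like; the thresholds, field names and records are hypothetical illustrations, not part of any Community Schools specification.

```python
# Minimal sketch of an early-warning flag for identifying at-risk
# students. All thresholds and field names are hypothetical.

from dataclasses import dataclass

@dataclass
class StudentRecord:
    student_id: str
    attendance_rate: float   # share of days attended, 0.0-1.0
    courses_failing: int     # number of courses currently being failed
    behavior_incidents: int  # incidents logged this semester

def is_at_risk(s: StudentRecord) -> bool:
    """Flag a student if any single indicator crosses its threshold."""
    return (
        s.attendance_rate < 0.90    # approaching chronic absence
        or s.courses_failing >= 1
        or s.behavior_incidents >= 3
    )

students = [
    StudentRecord("S001", 0.95, 0, 1),
    StudentRecord("S002", 0.82, 2, 4),
]
print([s.student_id for s in students if is_at_risk(s)])  # ['S002']
```

The point isn’t the specific cutoffs – those should come out of the needs assessment – but that a data-support person can turn “identify who is at-risk” into something explicit, documented and consistently applied.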

 

* Disclaimers and clarifications *

·         I’m using the term CBO to encompass non-profits as well as community-based organizations deploying a particular program or intervention in a school.

·         I’m talking about programs that provide student supports (e.g., counseling, mental and general health, enrichment, attendance and behavior, family outreach) more than a specific academic intervention.

·         This is a less evidence-based post than usual. I’m working from my many years of experience and have numerous examples, but am not going into all of them for the sake of brevity. If anyone is curious about what I’m basing the recommendations on, email me and we can get into the details.

It's Not Complicated: Building Positive CBO-School Relationships

A few weeks ago I posted about some of the issues the NYC Community Schools initiative will face as the project gets off the ground, noting there are three school-based issues that, particularly in the early stages, community school stakeholders need to keep in mind as they’re planning.

1.      Community Based Organizations (CBOs) are trying to address complicated issues, on an individual child as well as group basis. They’re doing this while working part-time in an institution they don’t manage, that has its own set of priorities - which don’t always match those of its partner(s);

2.      Implementation and management of school-CBO partnerships rests very heavily on one individual – a Resource Coordinator – who is expected to have a very wide and deep set of skills and experience.

3.      Schools want their CBOs to tackle a range of school needs, ones that are not always part of the original partnership agreements.

This is the first of three posts in which I explore how to address these issues and develop productive school-CBO relations from the beginning of the partnership. [*Disclaimers and clarifications can be found at the end of this post.*]  

Issue 1: Trying to ameliorate the impact on children and families of long-standing, complicated socioeconomic problems while working in schools as, essentially, consultants.  

Recommendation 1: Before starting the work, set clear goals and benchmarks for the year, based on a strong understanding of school needs.

This sounds extremely basic. But you might be surprised how many organizations don’t do this. Many CBOs make rushed entries into their schools – the rationale is to get in while the funding exists and school leadership is amenable. CBOs are also under various pressures (from funders, competing CBOs, schools and the district) to get a great deal done in a very short time frame, which forces them to go on-site as quickly as possible. 

In these circumstances CBOs have a rough mandate to address particular needs, but not much inside information on the school – including whether what they’re focusing on is in fact the school’s key needs; what’s been tried in the past to address these needs; and what else is going on in the building. CBOs then spend much of the first year determining what’s going on, expending valuable resources and capital just to get the lay of the land.

Rushed entry and lack of data also mean CBOs haven’t had an opportunity to flesh out what they will do, when, and how in the context of a particular school. Hopefully they have a model – but how it will play out in the unique conditions of a school, what modifications will need to be made, and whether these modifications are reasonable or will compromise the model is often not discussed beforehand, but rather in the moment, under pressure.

Rapidly deploying a program means that goal-setting and planning are given short shrift. CBOs don’t have time to answer the following questions: What are your goals for the first year? How will you know you’ve achieved these goals at 3 months, 6 months, 12 months? What is your schedule and how does it integrate with the school calendar and the school’s other work? Yet without answering these questions there is no roadmap for the work – only a plan to “do stuff.”

What CBOs must prioritize is planning with the school – even if it means they get a later start. When it comes to getting CBO-school partnerships off the ground there’s a great sense of urgency – “this is a now-or-never moment,” the thinking goes. Yet the problems schools and CBOs are tackling are long-standing ones that will not disappear in a few weeks or months. There is always time to get organized; without careful planning CBOs jeopardize their work from the very beginning.

Recommendation 2: Determine whose trust and buy-in needs to be earned, then earn it.

CBOs come into the building as outsiders to the school community who nevertheless provide support with sensitive, personal issues that play out in classrooms and students’ homes. Yet CBOs can function like management consultants, getting the perspective of and working primarily with leadership and support staff instead of interacting regularly with classroom teachers, students and families.  

It’s not surprising CBOs prefer to interact with administration and support staff. Principals tend to take a longer-term view; they’re more forgiving of the time it takes to get work off the ground. Support staff might come from similar work/education backgrounds and generally interact with students one-on-one, outside the pressures of the classroom. Classroom teachers, on the other hand, are responsible for two dozen students simultaneously, with middle and high school teachers having to repeat this feat with different students several times a day. They're “on the front lines” and thus view CBOs’ work from a different perspective than administration and support staff. They’ve also seen many, many organizations come and go and are skeptical of the latest promise that their long-standing problems are about to be solved. Some teachers are openly or covertly hostile, others are polite but apathetic. Their trust is not easily gained, and their disapproval can come very fast.  It’s a similar story with families and students; in New York City, students in high-poverty communities are likely to already be receiving services of varying quality and helpfulness. The prospect of another one is not automatically greeted with excitement.  

But this wariness is why it’s so important to interact with the wider school community. CBO staff often sit in their designated space and wait for people to come to them. It can be nerve-wracking to go into classrooms and other spaces you don’t control and subject yourself to the scrutiny and judgment of teachers, students and families. But my experience is that if you go into classes with an open mind and a true desire to learn more about the school, you’ll be accepted in many spaces. That’s when you can start to make connections and advance the work, because you’re not just imposing your view of what should be done on the school, but getting the perspective of the community on how the work can be done.

A natural place to begin this relationship-building is in the early, planning stages, as CBOs gather information on how to roll out their work.  

Issues 2 and 3 (resource coordinators, and how to focus work in the first year or two of the partnership) as well as a final recommendation are coming up in the next posts, which I’ll put up...soon!

 

* Disclaimers and clarifications *

·         I’m using the term CBO to encompass non-profits as well as community-based organizations deploying a particular program or intervention in a school;

·         I’m talking about programs that provide student supports (e.g., counseling, mental and general health, enrichment, attendance and behavior, family outreach) more than a specific academic intervention;

·         This is a less evidence-based post than usual. I’m working from my many years of experience and have numerous examples, but am not going into all of them for the sake of brevity. If anyone is curious about what I’m basing the recommendations on, email me and we can get into the details.

The Case for Evidence Based Implementation

How and why to collect data to better understand your work and its impact

Evidence based, evidence based, evidence based. I keep reading that non-profits and the institutions that fund them have entered a new era of developing programs and making decisions based on data and proof of “what works”. In theory, great! But in reality, these encomiums to evidence are pushing a rather simplistic notion of how organizations and their funders should ask questions and collect data to better understand their work and its impact. 

Here’s the latest. In late December the New York Times ran an op-ed lauding the Obama administration's embrace of evidence-based social programs – and bemoaning possible Republican efforts to move away from this focus. The author, Ron Haskins, argues that after years of the federal government giving money to social programs that had no long-term impact on recipients, Barack Obama, building on work done by George W. Bush, ushered in an era of increasingly rigorous and accountable federal spending. Money is now funneled to programs that have been properly evaluated by a reputable outside organization and shown to work, allowing them to expand while fewer dollars are spent on programs that don’t work.

I’ve spent years building evidence-based cultures and practices within nonprofits, which makes me skeptical of Haskins’ argument and conclusions – particularly his sweeping assertion that evidence of success is all we need to make decisions about how to fund social programs.*

It is important that funders make a more concerted effort to identify and fund programs that are committed to honestly evaluating their work. Primarily focusing on the end result, though, ignores the crucial implementation process. It’s one thing to compile a list of “what works” based on positive outcome data in a limited number of sites, and another to take these interventions and implement them in a diverse range of settings with consistent, positive results. Then there’s taking these interventions to scale, which is yet another order of magnitude of difficulty.

We need more nuanced definitions and less blunt applications of terms and processes like “evidence-based” and “what works.” Ones that take into consideration that interventions are deployed in complex settings with diverse populations. Ones that acknowledge that a single intervention doesn’t work in isolation, but is embedded in systems where a lot is happening, including other interventions. And finally, ones that articulate that in-between developing a program and writing up its outcomes is a hugely variable and important process – implementation – where how the program is deployed shapes results. Unless we understand how a program is implemented, we can't talk in any real way about its outcomes and impact.

In what I’m calling an “evidence-based implementation” approach, data would be used to clearly demonstrate fidelity and quality of program implementation – enabling quality improvement cycles and setting up future evaluations. Funders would encourage and help resource such work, which would increase learning, accountability and return on investment.  

 

What should "evidence-based" look like?

Margery Turner at the Urban Institute and Tony Bryk and Lisbeth Schorr in the Huffington Post have recently (2013 and 2015, respectively) made very convincing arguments for why it’s important to develop a more nuanced understanding of terms like “evidence-based” and “what works”, as well as how data might be collected to create such understanding.

Turner notes:

the conversation about “evidence-based policy” focuses too narrowly on a single question and a single step in the policymaking process: [the argument is that] if an initiative or program hasn’t been proven effective, it’s not “evidence based” and shouldn’t be implemented.
[But] in reality, policy development occurs in multiple stages and extends over time. New policies emerge in response to problems and needs, possible approaches are advanced and debated, new policies are adopted and implemented, and established policies are critiqued and refined. Evidence can add value at every stage, but the questions decision-makers need to answer differ from one stage to the next.
These questions go beyond “does the intervention work”. A far-from-exhaustive list includes: 

* (How) does this intervention meet the needs of the target population? Other populations?

* What is the cost of this intervention?

* Under what conditions does the work succeed? Under what conditions is it a struggle to implement? 

* What is needed to implement the program properly? How much does it cost? 

* Why this approach? What else has been tried in this space? What were the outcomes of the work?  

Turner goes on to suggest that to remedy this situation:

Instead of relying on a single tool [RCT, randomized controlled trials], policymakers and practitioners should draw from a “portfolio” of tools to effectively advance evidence-based policy. Using the wrong tool may produce misleading information or fail to answer the questions that are most relevant when a decision is being made. Applying the right tool to the policy question at hand can inform public debate, help decision-makers allocate scarce resources more effectively, and improve outcomes for people and communities.
These tools might include a randomized controlled trial, but could also include micro-simulation models, administrative data, or qualitative methods like focus groups, interviews and observations. Qualitative information – in-person observations, one-on-one or group interviews – can be used to break down a complex problem and pinpoint the core issue, develop ways to address it, and better understand implementation.

Funders, evaluators and organizations, in other words, shouldn’t focus solely on demonstrating that a program “works”, but should explore a variety of questions along the program continuum, from inception to implementation to outputs to outcomes. Doing so will help stakeholders better understand – and hence make better decisions regarding – program development, implementation and outcomes.

Lisbeth Schorr and Tony Bryk (in response to Haskins' editorial in the Times) also call for moving:

beyond our current preoccupation with evidence from "what works" in the small units that can be experimentally assessed. Achieving quality outcomes reliably, at scale, requires that we supplement carefully controlled, after-the-fact program evaluations with continuous real-time learning to improve the quality and effectiveness of both systems and program.

Why?

Because there is enormous variability in the impact of social interventions across different populations, different organizational contexts, and different community settings. We must learn not only whether an intervention can work (which is what randomized control trials tell us), but how, why, and for whom -- and also how we can do better. We must draw on a half-century of work on quality improvement to complement what experimental evidence can tell us. And, importantly, the learning must be done not alone by dispassionate experts, but must involve the people actually doing the work, as well as those whose lives the interventions are trying to enrich.

A strong summative evaluation, Bryk and Schorr argue, is just one piece of figuring out how to create positive social change. If you truly want to figure out how to make evidence-based interventions work at scale, you cannot deploy widely based solely on the results of RCTs. Instead, you must look at the intervention in context: with the people who are participating in the work and including the wider systems in which the intervention is embedded. 

Acknowledging complexity addresses the fact that a successful intervention may not be appropriate in every setting; and that interventions interact not only with the systems in which they are being deployed, but also with other work taking place there. This is analogous to the understanding that has emerged in medicine that prescribing a drug needs to take into account not only what else the patient is taking, but also who the patient is and the context in which they got sick and require care. And that the end result of care (whether it’s “successful” or not) can depend heavily on what has gone on for years before a course of treatment was prescribed.

 

The reality of collecting nuanced evidence

How much do organizations and their funders engage in and support data collection that goes beyond “does it work?” to explore questions of implementation, systems and context? In academic settings (i.e., for interventions developed within universities), such work does sometimes take place, though there are still many calls for the “black box of implementation” to be opened and explored (see, for example, the work of Kimberly Hoagwood and Marc Atkins). 

For your average nonprofit or community based organization (CBO) implementing a school-based intervention, however, in-depth data collection focused on implementation is rare. They - and the organizations that fund them - continue to think about evaluation in the reductive “what works” way rather than what Turner and Bryk/Schorr advocate for.

A key reason for this focus on outcomes and impact over implementation is that organizations currently have very limited data collection resources and capacity. Collecting nuanced evidence (i.e., evidence demonstrating not only that a program produces positive outcomes, but under what conditions and through what mechanisms) is difficult to prioritize given resource constraints. Limited dollars are going to be put towards acquiring evidence of success, not evidence around how the work is being done.

Education organizations, as a result, report to funders about attendance, grades, test scores, teacher retention, student engagement, suspensions, and other variables without drawing a clear through-line between their work and what they claim as outcomes. This can give reporting a fragmentary quality, where what the program does and how it leads to positive outcomes is elusive. For both organizations and funders this is frustrating. All this work goes into collecting, analyzing and writing up data, yet in the end what’s reported doesn’t give anyone a clear sense of the work done and its impact.

 

Evidence Based Implementation as a Supplement to “What Works”

Through an "evidence based implementation" approach, in contrast, funders would encourage programs – and provide them with the resources – to look rigorously at their program implementation, systematically document their interventions, and think about whether and how the work could be carried out at larger scale. Data collection would focus on the fidelity and quality of implementation; what adaptations have been made and why; and who is receiving services, what they want, and what they’re getting out of the work.
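As a rough illustration, here’s what a session-level implementation record might look like under this approach – a sketch only; the fields, components and fidelity calculation are hypothetical, not a prescribed format.

```python
# Sketch of session-level implementation data: what was planned,
# what was delivered, and what was adapted. All fields hypothetical.

from dataclasses import dataclass, field

@dataclass
class SessionRecord:
    school: str
    date: str                                        # e.g. "2015-03-10"
    planned_components: list = field(default_factory=list)
    delivered_components: list = field(default_factory=list)
    adaptations: list = field(default_factory=list)  # what changed and why
    participants: int = 0

    def fidelity(self) -> float:
        """Share of planned components actually delivered."""
        if not self.planned_components:
            return 0.0
        delivered = set(self.delivered_components)
        hits = sum(c in delivered for c in self.planned_components)
        return hits / len(self.planned_components)

rec = SessionRecord(
    school="PS 123",
    date="2015-03-10",
    planned_components=["check-in", "skills lesson", "family call"],
    delivered_components=["check-in", "skills lesson"],
    adaptations=["family call moved to Friday (parent availability)"],
    participants=8,
)
print(f"Fidelity: {rec.fidelity():.0%}")  # Fidelity: 67%
```

Aggregated across sessions and sites, records like these are what make quality improvement cycles possible: you can see not just whether the model was delivered, but where and why it drifted.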

Which doesn't mean, of course, that outcomes and impact would be ignored; rather, they’d be part of a continuum of evidence starting with program development, continuing through implementation and outputs, and then ending with outcomes/impact.  Implementation data would shed light on the work being done; and outcomes could then be tied more tightly to a program’s work.

Most organizations cannot evaluate a program well on their own, and for a decent look at outcomes they use an outside evaluator. But outside evaluators don’t have much of a connection to the program or community, and coming in without a good record of what’s been done makes it less likely they’ll be able to draw strong conclusions about the work and its potential effectiveness. Every program claims an impact on the same set of variables; being able to explain how you might be impacting those variables, and what the intermediate outcomes might be, is what can set you apart from other programs. More importantly, being able to explain your program gives you a good sense of fidelity – how to train, build capacity, scale, adapt, and identify your target audience. In other words, how to do your work with quality.

 

Evidence Based Implementation and Community Schools

So how might this look in the field? Take NYC's recent school reform initiative focusing on community schools. Given the multiple service providers, large number of schools participating, complicated intervention model and diversity of school and community settings, to have any hope of understanding the impact of the work it will be critical to carefully track implementation. It would be a mistake to jump straight to an outcome/impact focused evaluation. Funders should want to know a great deal about what the model looks like, variations in implementation, where it’s working/not working, what’s working/not working, etc. Looking primarily at outcomes isn’t going to tell you much about what the role of the model was in producing these outcomes.  

A recent Children’s Aid Society (CAS) report on how NYC might scale up its community schools work contains evaluation recommendations from the Center for Innovation Through Data Intelligence (CIDI). On the outcomes end CIDI suggests a nested design that would enable evaluators to look at outcomes at the student, school and community levels. Prior to looking at impact, however, they recommend conducting an implementation evaluation with a focus on i) the early phases of implementation and ii) fidelity to the model. The goals of this work would be to take “corrective action when necessary” and “be able to define the prototype of a successful community school”. [CIDI notes that because an implementation evaluation is “resource intensive”, the work might only be conducted in a sampling of community schools.]
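For readers wondering what a “nested design” implies analytically: outcomes measured on students who are grouped within schools are typically analyzed with multilevel (mixed-effects) models, which separate student-level effects from school-level variation. Here’s a minimal sketch in Python using statsmodels – toy data and made-up column names, purely to illustrate the structure, not CIDI’s actual design.

```python
# Sketch of a nested (multilevel) outcome analysis: students within
# schools. The dataset and column names are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

# One row per student: an outcome, an exposure measure, and the
# school the student belongs to.
df = pd.DataFrame({
    "attendance_rate":   [0.91, 0.85, 0.93, 0.88, 0.96, 0.80, 0.89, 0.92],
    "services_received": [3, 1, 4, 2, 5, 0, 2, 3],
    "school_id":         ["A", "A", "B", "B", "C", "C", "D", "D"],
})

# A random intercept per school absorbs school-level variation, so the
# services coefficient isn't conflated with differences between schools.
model = smf.mixedlm(
    "attendance_rate ~ services_received",
    data=df,
    groups=df["school_id"],
)
print(model.fit().summary())
```

Adding a community level would mean another grouping layer (or community-level covariates); the design question CIDI raises is exactly which effects live at which level.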

With the generous funding that’s going to the Community Schools Initiative, I’m hopeful a thorough, thoughtful process of developing and collecting metrics and looking carefully at implementation will take place. [For an example of what this might look like, here’s an evaluation proposal I made to a community schools lead partner in 2013.] 

As with impact, it’s important to look at implementation from a multi-level perspective: students, schools, CBOs/non-profits, and families/communities. Collecting this data will require methodological diversity and creativity. As an example and to wrap up, I want to flesh out an idea given to me by Mary McKay that I think is fantastic and have long advocated for.

Years ago she suggested the best way for evaluators (both internal and external) to collect complex, school-based intervention data would be through a dedicated research assistant, one per 1-2 schools. This individual would be responsible for gathering the quotidian implementation data that is impossible to get unless you’re a regular presence in the school – but that if you’re actually doing the work you don’t have the capacity to focus on. This data is absolutely essential to understanding the quality and fidelity of implementation, as well as what kind of modifications and adaptations to the intervention are happening in the field. 

This research assistant would receive training and then follow set protocols to gather the nuanced data that CBOs will never obtain if they go the tried and true school data collection route – relying on school staff to collect the data for them. When you do that you end up with the incomprehensible, incomplete notes of busy social workers; support staff occasionally filling out basic checklists during meetings; constant negotiations with schools about accessing data; and still no idea of how well the work is going in the school. Not to mention the biggest issue: school staff weary of yet another demand placed on them by the organizations that came into their buildings with promises of lightening their burdens. Need I state the obvious – school staff don’t want more paperwork! 
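To show how low-tech “set protocols” can be: here’s a sketch of a structured observation log kept as a plain CSV, the kind of thing a research assistant could fill out after each visit. Every field is a hypothetical example of the quotidian data described above, not an actual instrument.

```python
# Sketch of a structured observation log for a school-based research
# assistant. Fields are hypothetical examples of day-to-day
# implementation data.

import csv
from pathlib import Path

FIELDS = ["date", "school", "setting", "activity_observed",
          "students_present", "deviations_from_protocol", "notes"]

entry = {
    "date": "2015-04-21",
    "school": "PS 123",
    "setting": "6th grade classroom",
    "activity_observed": "small-group counseling pull-out",
    "students_present": 5,
    "deviations_from_protocol": "session cut to 20 min (fire drill)",
    "notes": "Teacher asked for a posted schedule of pull-outs.",
}

log = Path("observation_log.csv")
write_header = not log.exists() or log.stat().st_size == 0

with log.open("a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    if write_header:  # header only once, for a new file
        writer.writeheader()
    writer.writerow(entry)
```

Because the protocol is fixed and the fields are few, entries stay consistent across schools and observers – which is exactly what school staff’s ad hoc notes can’t provide.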

I’d strongly recommend this approach to the Community Schools Initiative. This is a line item that can be written into grants. This might be a great way to utilize AmeriCorps/VISTA staff and give service learning opportunities to recent college graduates. 

 

[* Haskins’ examples of programs that work include two education interventions: Success for All (SFA) and Reading Partners (RP), reading interventions at two very different stages of development and evaluation. These programs in fact demonstrate some of the flaws behind the “what works” approach, but I’ll get to that in another post. This post is focused on the need to create more nuanced, evidence based approaches.]

Getting NYC Community Schools Off the Ground

In the second half of 2014, New York City Mayor Bill de Blasio not only made community schools a significant piece of his education strategy, but also the centerpiece of his efforts to improve failing schools. In June 2014 de Blasio announced the city would spend $52 million in state funds - specifically, the Attendance Improvement and Dropout Prevention (AIDP) grant administered by the United Way of New York City - to convert more than 40 schools into community schools with services for children and their families. In early November he upped the ante with the announcement the city would, as part of its School Renewal Plan, spend $150 million to make an additional 94 struggling schools into Community Schools. Each school would be “matched” with a community-based organization (CBO) and the work organized through a full-time Resource Coordinator, based in the school but hired and supervised by the partner CBO. While the first round of schools applied to become community schools, the second round has been ordered by the DOE to participate in the initiative. By school year 2016-17 these 94 struggling schools “must demonstrate significant academic achievement” or face the consequences, including possible changes in leadership or school reorganization.

For those not familiar with the term, community schools are partnership-focused schools organized around the principle that by providing children and their families with increased access to academic and extra-academic services (through the conduit of the school, where children spend a good chunk of the day), barriers to learning are reduced and student outcomes improve. For almost a century the community schools model has been one way New York City has tried to address the wide range of issues children and families living in the poorest neighborhoods grapple with. But never on such a scale as Mayor de Blasio is attempting. This school year 128 community schools will be launched.

de Blasio has positioned his community schools strategy as part of the DOE’s “commit(ment) to working collaboratively with parents, families, educators and communities”, and hence a strong departure from previous mayor Michael Bloomberg’s approach to improving struggling schools. Bloomberg granted schools greater autonomy in exchange for greater accountability, and focused on the creation of charters and small schools - dividing struggling, large schools into small ones or even creating different “academies” in a single school. School closure was a consequence for failure to meet accountability requirements (Bloomberg opened 656 schools and closed 157 during his tenure, including some of the small schools he opened). In contrast, de Blasio’s approach, and that of his Schools Chancellor Carmen Fariña, has to date focused on empowering schools through additional supports, provided either directly from central (e.g., pre-K, renewed authority for superintendents, extended learning time) or indirectly (e.g., community schools, after-school).

de Blasio's community schools strategy goes to the heart of what staff working in high-poverty schools have consistently argued students must have to succeed, and believe Bloomberg’s policies ignored: services that address student needs outside school, needs that have a huge impact on their ability to learn. These include health, mental health, family incarceration or death, homelessness, involvement with the juvenile justice system and/or children’s services, immigration status, and food insecurity.

It’s common knowledge in education circles, informed by an extensive body of research, that poverty has an impact on childhood development, with ramifications for academic performance and other life outcomes. Staff in high-poverty schools push back against the notion they can and should be responsible for addressing all the socioeconomic concerns students bring to the table, even as the bar for the academic competency students are expected to demonstrate keeps rising. So yes, administrators and teachers often need and want support services; they ask for social workers, behavioral interventions, after-school programs, health/mental health services, academic remediation, family outreach, and family programs.

But just as school staff push back against being primarily responsible for meeting students’ non-academic needs, community school partners feel they should not be held primarily responsible for meeting students’ academic needs, particularly in such a high profile initiative as de Blasio’s. Soon after the Mayor announced his Renewal Schools plan, advocates voiced concerns about two of its elements: i) compelling struggling schools to participate; and ii) making rapid academic gains an accountability requirement, particularly when the instructional support component of the plan is not yet in place; while the Renewal Schools summary mentions the possibility of instructional support for schools, for example via Master Teachers, no details are provided. Though I’m not familiar with all the Renewal Schools, I have worked in several of them. And I’d venture a very high percentage have received community-based services in the past. Why this reform effort will succeed, when these schools have received instructional and extra-instructional support for years, is a valid question – and one that makes community school proponents quite anxious.

These larger, outcome-focused issues shouldn’t, however, obscure the fact that just getting CBO-school partnerships off the ground is a significant challenge. For the last couple of years I’ve been working with schools to improve how they organize resources, including relationships with CBOs and non-profits. Before that I spent six years doing program development and evaluation for a non-profit that provided student support and instructional services to high-poverty schools in New York City, Washington, D.C. and New Jersey. I’ve had ample opportunity to learn how partnerships with schools are developed and sustained; see the work, on-site and off-site, that goes into providing school-based services; and understand what tends to go right and what tends to go wrong - on both the school side and the CBO/non-profit side - in these partnerships.

Based on these experiences, there are three school-based factors that, particularly in the early stages, community school stakeholders must be aware of.

·     CBOs are trying to address complicated issues (on an individual child as well as group basis) while working in an institution they don’t manage, that is in session approximately half the year, and has its own set of priorities, which don’t always match those of its partner(s);

·    Implementation and management of school-CBO partnerships rests heavily on one individual – a Resource Coordinator – who is expected to have an unreasonably wide and deep set of skills and experience.

·    Schools want their CBOs to tackle a range of needs, ones that are not always part of the original partnership agreements.

These are the issues that, if ignored, start partnerships off on the wrong foot, sometimes never to recover. I’m going to spend my next couple of posts describing these issues in more depth and discussing how they can be successfully addressed.