Evaluating for Sustainability: Eight Steps to Success

According to the American Heritage Dictionary, "to sustain" means to keep in existence, to supply with necessities, to support from below, to encourage, and to maintain competently. Neither individuals nor nonprofit organizations can do these things without regular, critical reflection; in short, without evaluation. In fact, the better and more institutionalized the evaluation process, the more useful evaluation can be as a tool for achieving sustainability.

Historically, nonprofits have monitored their activities and reported back to funders using traditional measures of success, such as the number of clients served or the number and type of activities undertaken. Increasingly, however, sustainability is linked with doing "what works" — and discontinuing programs that don't achieve their goals. In a world in which needs seem to grow faster than the funding available to address them, funders and grantees alike are increasingly likely to direct resources to activities that produce results and away from those that do not.

At first glance, developing a comprehensive system of evaluation can appear to be an overwhelming task. But it doesn't have to be. As I'll explain in the remainder of this article, there are a number of steps you can take to get and stay on track.

Step 1: Develop an evaluation "blueprint"

...Too often, nonprofits fail to incorporate evaluation into their program blueprints....

Developing a sustainable program, project, or organization is like creating a building. The process starts with a blueprint. In the case of a nonprofit program or project, that blueprint should include an evaluation component. At a minimum, this means clearly identifying the target population, specifying program objectives in measurable terms, identifying key indicators of success, outlining data collection and analysis activities, and developing a timeline to monitor the success of the program on an ongoing basis. Too often, however, nonprofits fail to incorporate evaluation into their program blueprints. Instead, they design and implement evaluation activities after a program is up and running, making it difficult, if not impossible, for evaluators to gather the information they need to accurately measure the success of the program. To avoid that fate, consider bringing in an evaluator before the program or project you plan to evaluate is launched.

Some organizations may feel, with good reason, that they have sufficient in-house experience to plan an effective evaluation without the benefit of outside help. In fact, one way to strengthen the link between evaluation and organizational sustainability is to create an in-house evaluation team or unit. Such a unit should operate at the upper levels of the organization, have the authority to present controversial findings internally, be insulated from inappropriate influences, and be able to affect decision making.

Step 2: Lay a solid foundation by anticipating data-collection needs

A building is only as durable as the foundation on which it rests. Similarly, evaluation findings are only as good as the data upon which they are based. Part of designing a program for evaluation is thinking ahead about the data needed to monitor performance. To do that well, organizations must start by asking: What do we want to know? This is also an opportune time for your organization to reflect on the concept of sustainability. Do you want to know more about the cost-effectiveness of your program(s)? Staffing structures? Participant outcomes? Stakeholder support? Reflecting on such questions in the context of sustainability will help you to identify the types of information you need to collect.

Anticipating future data needs can be a challenge. Often, research questions of interest don't reveal themselves until a program is under way. As such, it is important that your data-collection strategies be flexible enough to accommodate changes to the program without jeopardizing the quality of the data being collected.

Step 3: Design a comprehensive but focused data-collection system

Once your organization is armed with a clear sense of its information needs, your next step is to design a data-collection system that accommodates two types of data: information on processes and information on outcomes. Process data are used to assess the implementation of a program, while outcome data provide information about the effectiveness of a program in achieving its goals. Both types of data are important for determining whether a program or project is a success. For example, outcome data may reveal that single mothers participating in an employment program do no better at gaining and retaining employment than a control group of single mothers who are not in the program. On the surface, this would seem to suggest the program is failing to achieve its goal. However, the process data might reveal that participants did not receive all of the intended services, or that staff-to-client ratios were lower than intended — in short, that the implementation of the program was flawed. Such shortcomings would need to be addressed before an evaluation could conclude whether the program was successful or not.
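
To make the distinction concrete, here is a minimal sketch in Python using the employment-program scenario above. The numbers, variable names, and the 80 percent service threshold are invented for illustration, not drawn from any real evaluation; the point is the habit of reading process data before drawing conclusions from outcome data.

    # Illustrative only: invented outcome data (1 = employed at follow-up)
    participants = [1, 0, 1, 0, 0, 1, 0, 0, 1, 0]   # program group
    control      = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]   # comparison group

    outcome_gap = sum(participants) / len(participants) - sum(control) / len(control)
    print(f"Employment-rate difference (program minus control): {outcome_gap:+.2f}")

    # Invented process data: share of intended services each participant received
    services_received = [0.4, 0.5, 0.9, 0.3, 0.6, 1.0, 0.2, 0.5, 0.8, 0.4]
    avg_dosage = sum(services_received) / len(services_received)
    print(f"Average share of intended services delivered: {avg_dosage:.0%}")

    # If delivery fell well short of the design, weak outcomes may reflect a
    # flawed implementation rather than a flawed program model.
    if avg_dosage < 0.8:  # hypothetical threshold
        print("Implementation gap detected -- interpret outcome results with caution.")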

Data can be collected in a number of ways, including through surveys, focus groups, in-depth interviews, and administrative record-keeping. A word of warning about surveys, however: They are as popular as ever, and the Internet is making them easy to create and distribute. But knowing what, how, and whom to ask is critical in determining the usefulness of the data you'll collect. Some common mistakes made when designing and fielding surveys include:

  1. Asking for information you don't know how to use
  2. Asking for information that is readily available elsewhere
  3. Using open-ended questions without a methodology for evaluating responses
  4. Including questions that unduly encourage a favorable response
  5. Creating a survey that is unnecessarily long or complicated
  6. Distributing the survey to the wrong respondents (e.g., mailing a survey to executive directors that should have been sent to IT coordinators)
  7. Distributing a survey to those you know (e.g., school superintendents on your mailing list), instead of those you need (e.g., all superintendents in the target area)
  8. Failing to provide sufficient incentives for responding
  9. Failing to anticipate how the survey data will be recorded for analysis

If important decisions will be made on the basis of survey data, take care to consult an expert when designing and fielding your survey.
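
One way to avoid the last mistake on the list above is to decide, before the survey goes out, exactly how each answer will be recorded for analysis. The small Python sketch below shows one way to do that; the questions, answer categories, and numeric codes are invented for illustration only.

    # Illustrative only: a pre-agreed codebook mapping raw answers to numeric codes
    CODEBOOK = {
        "q1_satisfaction": {"very dissatisfied": 1, "dissatisfied": 2,
                            "satisfied": 3, "very satisfied": 4},
        "q2_used_service": {"no": 0, "yes": 1},
    }

    def code_response(question: str, answer: str) -> int:
        """Translate a raw answer into the numeric code agreed on in advance."""
        return CODEBOOK[question][answer.strip().lower()]

    # One returned survey, recorded as analysis-ready numbers
    raw = {"q1_satisfaction": "Satisfied", "q2_used_service": "Yes"}
    coded = {q: code_response(q, a) for q, a in raw.items()}
    print(coded)  # {'q1_satisfaction': 3, 'q2_used_service': 1}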

One other thing to keep in mind: A common mistake when designing a data-collection system is to use a "sawed-off shotgun approach," collecting information on everything from everybody all the time. Don't make that mistake. Collect only what you need, and use what you collect.

Step 4: Pre-test your data-collection system

...The importance of collecting high-quality data cannot be overstated....

If a contractor were building your house, you'd want him to use reliable, high-quality materials. So it is with evaluation data. The importance of collecting high-quality data cannot be overstated. High-quality data, in this context, are data that are both valid and reliable. Valid data measure what they are supposed to measure. Reliable data measure the same thing, in the same way, every time.

Developing a data-collection system that will produce valid and reliable data takes experience. A poor data-collection instrument, poorly defined procedures, inadequate staff training, and careless data entry can all affect the reliability of your data. It's important, therefore, to pre-test your data-collection instruments, adjust and refine them as needed, train staff in their use, and then carefully monitor their implementation. Testing also applies to software. Before beginning any data-collection effort, make sure that any software you intend to use works as advertised, that your data-entry procedures are clear and straightforward, and that files can be shared easily by all parties involved in the evaluation.
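
As a concrete illustration of what a pre-test can catch, here is a small Python sketch of a test-retest check: the same respondents complete a pilot instrument twice, and the two sets of scores should line up closely if the instrument is reliable. The scores are made up, and the 0.7 cutoff is a common rule of thumb rather than a fixed standard.

    from statistics import mean

    first_round  = [12, 18, 25, 9, 30, 22, 15, 27]   # pilot scores, week 1
    second_round = [13, 17, 24, 10, 29, 23, 14, 28]  # same respondents, week 2

    def pearson(x, y):
        """Correlation between two administrations of the same instrument."""
        mx, my = mean(x), mean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    r = pearson(first_round, second_round)
    print(f"Test-retest correlation: {r:.2f}")
    if r < 0.7:  # common rule of thumb, not a hard standard
        print("Scores are unstable -- revise the instrument before full rollout.")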

Step 5: Collect baseline data

Sustainability has little meaning if it occurs in a vacuum. For this reason, organizations need to establish a starting point, or baseline, to determine whether a program or project is having an impact. In many cases, baseline data should be collected not only from program participants but also from a control group. Using a control group (i.e., individuals who are not participating in the program) as part of your evaluation allows you to establish what would have happened to the target population had the program not been implemented.

Although it's easiest to think about baseline data collection in terms of health and social welfare programs — for instance, measuring depression prior to a mental health intervention, or measuring school attendance prior to an educational intervention — baseline data can be collected for all sorts of initiatives, including organizational ones. Imagine, for example, that a homeless shelter views staff morale as a key component of organizational sustainability. In order to increase morale, the shelter modifies its flex-time policies to allow staff to deviate from a standard work schedule once a week. Administrative records will be used to collect process data (e.g., how many staff take advantage of flex-time arrangements, in which departments, how often, etc.), and a staff survey will be used to measure morale (the outcome) before and after the policy is implemented.
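
To put numbers on the flex-time example, here is a minimal Python sketch with invented survey scores and administrative counts; a real evaluation would of course use the shelter's own data, scale, and measurement schedule.

    from statistics import mean

    # Invented morale scores (1-5 scale) from the staff survey
    morale_baseline = [2.8, 3.1, 2.5, 3.0, 2.7, 3.2, 2.9]   # before the policy change
    morale_followup = [3.4, 3.3, 3.1, 3.6, 3.0, 3.5, 3.2]   # after the policy change

    change = mean(morale_followup) - mean(morale_baseline)
    print(f"Average morale change from baseline: {change:+.2f} points")

    # Invented process data from administrative records: monthly flex-time uptake
    flex_time_users_by_month = {"Jan": 4, "Feb": 9, "Mar": 12}
    print("Flex-time uptake by month:", flex_time_users_by_month)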

If you've already launched your program but haven't collected baseline data, don't worry. Depending on the program's objectives and the indicators that will be used to measure those objectives, you may be able to gather baseline information from other sources. Organizations that routinely generate high-quality administrative or secondary data (e.g., school grades, attendance records, employment or earnings histories, community crime rates, etc.) often are able to use this information to assess performance prior to the launch of a new program. But again, to avoid any hint of bias in the final results, try to gather your baseline data prior to program implementation.

Step 6: Implement your program...and stick to the plan

Whew! It sure has taken a long time to get here. Once you're ready to implement your program, there are a few things you'll want to keep in mind in order to facilitate an effective evaluation. First, stick to the plan. Don't deviate from the program's design unless it is absolutely necessary. If changes to the program are made, be sure to document what they were, when they were made, and why. It's essential that you incorporate this information into the final assessment of the program.

...Remember: Good evaluation requires discipline....

Second, don't let data collection and documentation fall by the wayside. There's always a great deal of enthusiasm for evaluation early in the process. Then, as days, months, or even years pass, it becomes easier to miss a few deadlines, put off paperwork, and put the process on the back burner. Remember: Good evaluation requires discipline, and maintaining that discipline will most assuredly enhance the quality and usefulness of your results.

Step 7: Hire strategically and stay involved

As I mentioned earlier, designing and conducting an evaluation can be done in-house if you already have expertise on staff. Alternatively, organizations can partner with outside consultants or research institutions to evaluate their programs. The technique known as "empowerment evaluation" employs a trained evaluator as a coach or facilitator whose primary objective is to build the capacity of the organization to self-evaluate.

When hiring an evaluator, whether an individual or an organization, approach the task like any other hiring process. Be clear about your expectations, recruit strategically, and select an evaluator who has both the skills and disposition to work effectively with your organization. Experienced evaluators possess a range of interdisciplinary skills in areas such as quantitative and qualitative research, economics, management, public policy, sociology, education, and communications. They should be able to produce a complete evaluation design consisting of: 1) the key research questions; 2) the methodological strategies for getting answers to those questions; 3) a comprehensive data-collection plan; 4) a plan for analyzing the data; and 5) a complete description of the final evaluation products.

While it is possible to "outsource" an evaluation, I would discourage most organizations from doing so. A great deal of what you learn from an evaluation comes from participating in its design and implementation. Organizations that completely outsource their evaluation needs may find that the end products are not useful, do not accurately reflect the organization's culture, do not encourage ownership of the results, and do not empower stakeholders to take responsibility for organizational performance — in the short term or over time. These shortcomings can also occur if you fail to provide sufficient funds, time, facilities, and support for evaluation efforts. This is all the more reason to commit to and stay involved with the process.

Step 8: Use the results again and again

Evaluation is not a "one-shot deal." Once baseline data have been collected, comparisons can be made at multiple points in time. In between these measurements, organizations have the opportunity to tweak program implementation, fine-tune data-collection strategies, and adjust funding levels. Repeated measurement is particularly useful for social welfare programs that have difficulty anticipating when desired outcomes will come to pass. In such cases, organizations should take pains to identify shorter-term milestones that can signal progress toward long-term goals.

...The iterative process of assessment and adjustment is what links evaluation to sustainability....

This iterative process of assessment and adjustment is what links evaluation to sustainability. Improving program performance, enhancing stakeholder accountability for that performance, and encouraging organizational learning are not things that just happen. They require planning and "good" evaluation. A good evaluation is not the same thing as an evaluation that paints a rosy picture. Rather, a good evaluation highlights program or organizational strengths and weaknesses, summarizes key findings, and makes actionable recommendations for the future. "Actionable" in this context means changes that can be achieved with the support of key stakeholders.

Key stakeholders — board members, program staff, clients, community members, funders, and/or staff from other organizations — should be aware of your evaluation efforts, be included in evaluation activities, and be informed of evaluation findings. Summarizing and communicating those findings effectively is also an important part of the process. Not all audiences will need the same information, and organizations should be prepared to present their evaluation findings in different formats (e.g., formal reports, PowerPoint presentations, community meetings, small working groups) depending on the audience.

Conclusion

The definition of sustainability I opened with omitted a central measure of success for nonprofits: improving lives through good works. In fact, most nonprofits exist to achieve this goal and sustain a positive impact over time. They deserve to succeed. You deserve to succeed! It's my hope that the strategies I've presented here will help you to integrate evaluation into the daily life of your organization and will bring you many more than eight steps closer to success and sustainability. Good luck!