
An Informal Evaluation of 2017

It’s hard to believe that 2017 is already over! REC was involved in so many different kinds of research and evaluation projects this past year. From evaluating early childhood education programs to assessing services for older adults, REC was able to apply our quality research and evaluation skills to help the nonprofit community. As founder and principal consultant of REC, I wanted to share some thoughts about serving as an external evaluator and researcher for various nonprofits this past year.

• Nonprofits are making evaluation a priority – In all our projects, our clients came to us knowing that they needed either to build their organization’s capacity for evaluation or to strengthen existing efforts. What “evaluation” meant to nonprofit organizations sometimes varied. However, our clients saw evaluation as a way to tell their story and a tool to serve the larger community. Whether it was presenting findings to their boards, having the right data to solicit new funds and grants, or improving and developing existing programs, nonprofits are certainly making evaluation a priority.

• Less data is more – Data is how nonprofit organizations tell their story. Whether data is collected via surveys or focus groups, this information is vital. As REC worked with different clients, we discovered that some organizations believed that more data was better. However, collecting more information resulted in unnecessary stress for staff, greater confusion, and led to less support for all data collection efforts. Worst of all, much of the data that was collected was not being used to make an impact. Therefore, REC recommends that nonprofits collect less data, but make sure that the data they do collect meets the following criteria:

• The data is valuable to the organization
• The organization has the capacity to collect the data
• The data is meaningful and actionable

By meeting these criteria, nonprofits can save valuable time, money, and resources.

• Consistency is key – Evaluation is an iterative process that is typically guided by a good plan of action. Programs should be implemented as intended before outcomes and impact are assessed. The process of evaluation itself is methodical and systematic. While there is room for creativity in developing and evaluating programs, consistency is key to determining the efficacy and effectiveness of anything that is evaluated. This means that the same evaluation tools (e.g., surveys, interview protocols) should be used; that those collecting the data are given instructions, training, and/or guidance; that data is entered and collected in the same manner; and that different locations and sites receive similar, if not identical, versions of whatever is being evaluated. When consistency is maintained, organizations will be better able to evaluate the outcomes and impacts of their programs.

• Evaluation is a driver of change – Something I learned this year is that working with REC became a catalyst for change for many of our clients. When nonprofits receive reliable and valid results, they are equipped with the information they need to improve their programs, services, and activities. When organizations are “ready” to work with an external evaluator or researcher, they have made a commitment to do better. Evaluation becomes a driver of change and improvement. Our clients showed great courage and strength in being open to change, sharing their story, and letting REC provide actionable recommendations to help improve their organizations.

At the end of the day, we all want to make a positive impact in this world. It is a privilege and an honor to have worked with so many amazing organizations in 2017, and we look forward to continuing our mission of providing organizations with results they can trust into 2018!

Feel free to contact me with any research and evaluation needs at annette@researchevaluationconsulting.com.