Chapter 8: Writing an Effective Evaluation Plan
An evaluation plan is a significant component of any grant application. It outlines how you will assess the success of your grant-funded project. An effective evaluation plan not only meets the requirements of funders but also provides valuable insights for your organization, helping you improve your project and future initiatives.
While funders typically require an evaluation plan, there are important benefits for your organization:
- Identifies what worked and what didn’t.
- Provides data to improve your project and future work.
- Demonstrates accountability and transparency to stakeholders.
Writing an evaluation plan for a grant requires a deep dive into your project to clarify and solidify the intended outputs and outcomes. Outputs are direct results or products that are generally expressed numerically, such as the number of clients served by a program. Outcomes, on the other hand, are longer-term changes, such as shifts in attitude, behavior, knowledge, or skills.
An evaluation should clearly address several key questions: How will you determine if your organization has achieved its planned objectives? How will you recognize if there has been an outcome or result from the project? What insights does the organization hope to gain from the project? How will you share what you’ve learned so that others can replicate a successful project or take an alternative path if the project does not meet its goals?
External vs. Internal Evaluations
An internal evaluation is conducted by the applicant organization itself, whereas an external evaluation is performed by a third-party evaluator who is not part of the applicant organization. Requests for proposals (RFPs) generally specify whether an internal or third-party evaluation is required. If the evaluation format is not specified, the choice between internal and external evaluation is left to the organization. So long as the processes and data are transparent and straightforward, internal evaluations are both common and acceptable. For example, you probably don’t need an external evaluator to determine the effectiveness of a project teaching new Americans how to file their taxes. This can be evaluated internally through pre- and post-tests of tax filing skills, participant satisfaction surveys, and self-reports on how many participants were able to successfully file that year’s taxes.
Third-party evaluators are experts you should bring in when one is required, when the evaluation is especially burdensome or complex, or when the funder is interested in questions outside your organization's scope. Outside evaluators can be expensive. If a full external evaluation is not in your budget, it is often possible to hire an evaluator to help you design the evaluation and then have your organization implement it. Evaluators can also be hired to assist with just the analysis phase.
Types of Evaluation
There are several types of evaluations. The most common are formative, process, outcome, and impact. The type of evaluation you choose depends on the information you are hoping to gather.
Formative Evaluation
Formative evaluation is an ongoing process that takes place during the implementation of a project or program. Its primary goal is to monitor the progress of the project and provide continuous feedback that can be used to improve the project as it unfolds. This type of evaluation helps ensure that the project stays on track, meets its objectives, and addresses any issues that arise along the way.
Key Characteristics of Formative Evaluation
- Timing: Conducted during the project's development and implementation phases.
- Purpose: To improve the project's design and performance by identifying strengths and weaknesses.
- Focus: On the process of implementation rather than the final outcomes.
- Feedback: Provides immediate information that can be used to make adjustments and improvements.
Components of Formative Evaluation
- Monitoring: Regularly tracking the progress of the project activities. This involves collecting data on various aspects of the project, such as participant engagement, resource utilization, and adherence to the project plan.
- Assessment: Evaluating the effectiveness of the project activities in achieving the desired objectives. This involves analyzing the collected data to identify areas where the project is succeeding and where it may need adjustments.
- Feedback: Providing timely and actionable feedback to project staff and stakeholders. This feedback helps inform decision-making and enables the project team to make necessary adjustments to improve the project's effectiveness.
Benefits of Formative Evaluation
- Improvement: Helps refine and improve the project while it is still in progress, leading to better overall outcomes.
- Adaptability: Allows the project team to respond to challenges and changes in the environment, ensuring that the project remains relevant and effective.
- Stakeholder Engagement: Engages stakeholders in the evaluation process, fostering a sense of ownership and commitment to the project's success.
- Resource Efficiency: Ensures that resources are being used effectively and efficiently by identifying and addressing issues early on.
Examples of Formative Evaluation Methods
- Surveys and Questionnaires: Collecting feedback from participants and stakeholders about their experiences and satisfaction with the project.
- Focus Groups: Conducting group discussions with participants to gain deeper insights into their perceptions and experiences.
- Interviews: Interviewing key stakeholders and project staff to gather detailed information about the project's implementation and effectiveness.
- Observation: Observing project activities and interactions to assess their effectiveness and identify areas for improvement.
- Data Analysis: Analyzing quantitative data, such as participation rates and resource utilization, to monitor progress and identify trends.
Example Scenario
An after-school tutoring program has been launched to help middle school students struggling with math and reading. The program aims to enhance their academic skills through personalized tutoring sessions held twice a week. The program will run for six months and is expected to serve 50 students from various schools in the district. A formative evaluation for this program might involve:
- A program coordinator observes 10% of the tutoring sessions each month, noting adherence to lesson plans and student engagement.
- Weekly student satisfaction surveys include questions like, "How clear was today's lesson?" and "How engaged did you feel during the session?"
- Monthly interviews with tutors ask about their experiences, any difficulties they encounter, and suggestions for improving the sessions.
- Bi-weekly focus groups with students discuss their overall experience, including any difficulties they are facing and suggestions for making the sessions more effective.
- Mid-program grade comparisons involve collecting students' math and reading grades from their schools and comparing them to baseline grades recorded at the program's start.
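To make this kind of formative monitoring concrete, the sketch below shows one way the weekly satisfaction survey responses described above could be tallied. It assumes the responses are exported to a CSV file with session_id, clarity, and engagement columns rated 1 to 5; the file name, column names, and the 3.5 flagging threshold are illustrative assumptions, not requirements of the program.

```python
# Hypothetical sketch: tallying weekly student satisfaction surveys for a
# formative evaluation. File name, column names, and the 3.5 threshold are
# illustrative assumptions.
import csv
from collections import defaultdict
from statistics import mean

def summarize_weekly_surveys(path):
    """Average clarity and engagement ratings per tutoring session and flag low scores."""
    ratings = defaultdict(list)  # session_id -> list of (clarity, engagement) ratings
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            ratings[row["session_id"]].append(
                (int(row["clarity"]), int(row["engagement"]))
            )

    for session_id, responses in sorted(ratings.items()):
        clarity = mean(c for c, _ in responses)
        engagement = mean(e for _, e in responses)
        flag = "  <-- review with tutors" if clarity < 3.5 or engagement < 3.5 else ""
        print(f"{session_id}: clarity={clarity:.1f}, engagement={engagement:.1f}{flag}")

summarize_weekly_surveys("weekly_survey_responses.csv")
```

A coordinator could run a summary like this after each week's surveys and bring any flagged sessions to the monthly tutor interviews, which is exactly the kind of mid-course correction formative evaluation is meant to support.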
Process Evaluation
Process evaluation is a type of evaluation that focuses on the implementation of a project or program. It examines the activities, resources, and procedures used to deliver the program, assessing how well the program is being executed according to the plan. The goal of process evaluation is to understand how the program works, identify any problems in its delivery, and ensure that it is being implemented as intended.
Key Characteristics of Process Evaluation
- Timing: Conducted during the implementation phase of a project.
- Purpose: To assess the fidelity, quality, and efficiency of the program's implementation.
- Focus: On the processes and activities involved in delivering the program rather than on the final outcomes.
Components of Process Evaluation
- Implementation Fidelity: Evaluates whether the program is being delivered as planned. This involves checking if the activities and interventions are being conducted as specified in the program design.
- Quality of Delivery: Assesses the quality of the program's implementation. This can include evaluating the skills and effectiveness of the staff delivering the program, the appropriateness of the materials used, and the overall participant experience.
- Participant Engagement: Measures how well participants are engaging with the program. This includes tracking attendance, participation levels, and participant feedback.
- Context: Examines the external factors that may influence the program's implementation. This can include environmental, organizational, and community factors that impact the program's success.
- Resources: Assesses the adequacy and use of resources allocated to the program. This includes funding, personnel, materials, and time.
Benefits of Process Evaluation
- Improvement: Identifies areas for improvement in the program's delivery, allowing for adjustments to be made in real-time.
- Accountability: Ensures that the program is being implemented as planned and that resources are being used effectively.
- Transparency: Provides stakeholders with detailed information about how the program is being executed, fostering trust and support.
- Insight: Offers valuable insights into the operational aspects of the program, which can inform future program planning and development.
Examples of Process Evaluation Methods
- Checklists and Logs: Keeping detailed records of program activities, including dates, times, locations, and participants.
- Observations: Observing program activities to assess how they are being delivered and how participants are engaging.
- Interviews and Focus Groups: Conducting interviews and focus groups with program staff, participants, and other stakeholders to gather qualitative data on their experiences and perceptions.
- Surveys: Administering surveys to participants and staff to collect feedback on various aspects of the program's implementation.
- Document Review: Reviewing program documents, such as training materials, manuals, and reports, to ensure they align with the program's design and goals.
Example Scenario
Imagine a health education program designed to promote healthy eating habits among school children. A process evaluation for this program might involve:
- Checking if the educational sessions are being conducted according to the schedule.
- Assessing the quality of the materials used in the sessions.
- Observing the sessions to see how engaged the children are and how effectively the educators are delivering the content.
- Collecting feedback from the children, their parents, and the educators through surveys and interviews.
- Reviewing records to ensure all planned activities are being completed and resources are being used as intended.
Outcome Evaluation
Outcome evaluation is a type of evaluation that focuses on the results or impacts of a program or project. It assesses the extent to which the program's objectives have been achieved and the overall effectiveness of the program in bringing about desired changes. This type of evaluation is usually conducted after the program has been implemented and seeks to determine whether the program has made a significant difference in the target population or community.
Key Characteristics of Outcome Evaluation
- Timing: Conducted at the end of the program or after sufficient time has passed to observe the effects of the program.
- Purpose: To determine the effectiveness of the program in achieving its goals and objectives.
- Focus: On the changes or benefits resulting from the program, such as changes in behavior, knowledge, attitudes, skills, or conditions.
- Measurement: Uses specific indicators and benchmarks to assess the extent of change.
Components of Outcome Evaluation
- Objectives and Goals: Clearly defined goals and objectives of the program that specify the desired outcomes.
- Indicators: Specific, measurable indicators that can be used to assess whether the desired outcomes have been achieved. These indicators should be directly linked to the program's objectives.
- Data Collection: Gathering data through various methods such as surveys, interviews, tests, observations, and administrative records to measure the indicators.
- Analysis: Analyzing the data to determine the extent to which the program's objectives have been met and to identify any unintended outcomes.
- Reporting: Summarizing the findings in a report that highlights the effectiveness of the program, lessons learned, and recommendations for future programs.
Benefits of Outcome Evaluation
- Effectiveness Assessment: Provides evidence of the program's success or failure in achieving its goals.
- Accountability: Demonstrates to funders, stakeholders, and the community that the program has made a positive impact.
- Improvement: Identifies strengths and weaknesses in the program, offering insights for future improvements.
- Replication: Offers valuable information that can be used to replicate successful programs in other settings.
Examples of Outcome Evaluation Methods
- Surveys and Questionnaires: Collecting quantitative data on participants' knowledge, attitudes, behaviors, and skills before and after the program.
- Pre- and Post-Tests: Measuring changes in specific knowledge or skills by comparing test results before and after the program.
- Interviews and Focus Groups: Gathering qualitative data from participants, staff, and stakeholders about their experiences and perceptions of the program's impact.
- Observations: Observing changes in behavior or conditions directly related to the program.
- Administrative Data: Analyzing existing records or databases to track changes over time, such as attendance rates, health outcomes, or academic performance.
Example Scenario
Program: Community Health Initiative to Reduce Obesity
Objective: Decrease the obesity rates among adults in the community by promoting healthier eating habits and increasing physical activity.
Data Collection
- Conduct pre- and post-program surveys to assess changes in dietary habits and physical activity levels.
- Measure participants' BMI at the start and end of the program.
- Hold focus groups with participants to gather qualitative data on their experiences and the perceived impact of the program.
Analysis
- Compare BMI Data: Analyze changes in participants' BMI from the beginning to the end of the program.
- Survey Results: Compare pre- and post-program survey results to measure changes in physical activity and dietary habits.
- Qualitative Analysis: Analyze focus group transcripts to identify common themes and insights about the program's impact.
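As a rough illustration of this analysis, the sketch below compares each participant's BMI at the start and end of the program. The paired values are placeholders, and a real outcome evaluation would also test whether the observed change is statistically meaningful.

```python
# Hypothetical sketch: comparing pre- and post-program measurements for an
# outcome evaluation. The paired BMI values are illustrative placeholders.
from statistics import mean

# One (pre, post) BMI pair per participant who completed the program.
paired_bmi = [(31.2, 29.8), (28.4, 28.5), (33.0, 31.1), (29.9, 29.0)]

changes = [post - pre for pre, post in paired_bmi]
improved = sum(1 for change in changes if change < 0)

print(f"Average BMI change: {mean(changes):+.2f}")
print(f"Participants with reduced BMI: {improved} of {len(paired_bmi)} "
      f"({improved / len(paired_bmi):.0%})")
```

The same pre/post structure works for the survey items on dietary habits and physical activity; only the measure being compared changes.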
Impact Evaluation
Impact evaluation is a type of evaluation that assesses the broader, long-term effects of a program or project. It goes beyond immediate outcomes to examine the overall changes and benefits that can be attributed to the program, determining whether it has made a significant difference in the target population or community. This type of evaluation helps understand the extent to which the program has achieved its ultimate goals and contributed to desired societal changes.
Key Characteristics of Impact Evaluation
- Timing: Conducted after the program has been implemented and sufficient time has passed to observe its long-term effects.
- Purpose: To assess the overall effectiveness and sustainability of the program's impact on the target population or community.
- Focus: On the long-term changes and benefits resulting from the program, such as improvements in health, education, economic conditions, or social well-being.
- Causality: Seeks to establish a cause-and-effect relationship between the program and the observed changes.
Components of Impact Evaluation
- Long-Term Goals: Clearly defined long-term goals of the program that specify the desired impact.
- Indicators: Specific, measurable indicators that can be used to assess the long-term impact of the program. These indicators should be directly linked to the program's goals.
- Counterfactual Analysis: Comparing the outcomes of the program participants with a control or comparison group that did not participate in the program. This helps establish causality by showing what would have happened in the absence of the program.
- Data Collection: Gathering data through various methods such as longitudinal studies, surveys, interviews, and observations to measure the indicators.
- Analysis: Analyzing the data to determine the extent of the program's impact and to identify any unintended or negative effects.
- Reporting: Summarizing the findings in a report that highlights the program's impact, lessons learned, and recommendations for future programs.
Benefits of Impact Evaluation
- Effectiveness Assessment: Provides robust evidence of the program's long-term success or failure in achieving its goals.
- Accountability: Demonstrates to funders, stakeholders, and the community the significant impact of the program.
- Improvement: Identifies areas for improvement and informs future program design and implementation.
- Replication: Offers valuable information that can be used to replicate successful programs in other settings.
Examples of Impact Evaluation Methods
- Randomized Controlled Trials (RCTs): Randomly assigning participants to either the program group or a control group to compare outcomes.
- Quasi-Experimental Designs: Using non-randomized methods to create a comparison group, such as matching or propensity score analysis.
- Longitudinal Studies: Following participants over an extended period to observe long-term changes and impacts.
- Surveys and Questionnaires: Collecting data on participants' behaviors, attitudes, and conditions before and after the program.
- Interviews and Focus Groups: Gathering qualitative data from participants, staff, and stakeholders about their experiences and the perceived long-term impact of the program.
- Administrative Data: Analyzing existing records or databases to track changes over time, such as health outcomes, employment rates, or academic performance.
Example Scenario
Program: Rural Education Improvement Initiative
Objective: Improve educational outcomes and long-term socio-economic conditions in rural communities.
Long-Term Goals and Indicators
- Goal: Increase high school graduation rates and improve employment opportunities for graduates.
- Indicators: High school graduation rates, college enrollment rates, employment rates, and income levels of program participants.
Data Collection
- Track students from the start of the program through high school graduation and into their early careers.
- Conduct annual surveys to collect data on educational progress, college enrollment, and employment status.
- Conduct in-depth interviews with participants, educators, and community leaders to gather qualitative insights.
Analysis
- Comparison of Outcomes: Analyze differences in graduation rates, college enrollment, and employment between program participants and the control group.
- Long-Term Impact: Assess the program's long-term impact on participants' socio-economic conditions.
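As a simple illustration of the comparison of outcomes, the sketch below contrasts graduation rates for program participants and the comparison group. The counts are placeholders, and a real impact evaluation would also test whether the difference is statistically significant and adjust for pre-existing differences between the two groups.

```python
# Hypothetical sketch: comparing a long-term indicator (high school graduation)
# between program participants and a comparison group. Counts are placeholders.
def graduation_rate(graduated, total):
    return graduated / total

program_rate = graduation_rate(graduated=88, total=100)
comparison_rate = graduation_rate(graduated=74, total=100)
difference = program_rate - comparison_rate

print(f"Program group graduation rate:    {program_rate:.0%}")
print(f"Comparison group graduation rate: {comparison_rate:.0%}")
print(f"Difference (program minus comparison): {difference:+.0%}")
```

The same comparison can be repeated for college enrollment, employment, and income to build up the full picture of long-term impact.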
Developing an Evaluation Plan
Creating an effective evaluation plan is essential to ensure your project can be accurately assessed for success. Follow these steps to develop a comprehensive evaluation plan:
Clarifying Program Objectives and Goals
Begin by clearly defining the objectives and goals of the initiative. Identify what it aims to accomplish and the strategies planned to achieve these goals. This clarity helps pinpoint where to focus evaluation efforts. Consider creating a table (such as Table 8.1) that outlines each program component and its elements to organize your thoughts.
Table 8.1: Organizational Tool for Goals and Objectives
Goal or Objective | How It Will Be Measured |
Increase voter registration in census block area by 15% | Numbers from state, survey |
Have 6 cars available all day on election day to give rides to the polls | Yes or no, satisfaction survey? Numbers? |
Increase turnout on election day among POC by 10% | Numbers from state, survey, exit polls |
Media blast with voting rights info | Survey, tracking numbers |
Developing Evaluation Questions
Evaluation questions typically fall into four categories:
- Planning and Implementation
  - Who participates in the program?
  - Is there diversity among participants?
  - Why do participants join or leave the program?
  - Are the services meeting local needs?
  - Are there capacity issues at the organization?
- Attainment of Objectives
  - How many people participate?
  - How much time are participants spending?
  - What is the net change since implementation?
  - What is the dollar equivalent of the program benefits?
- Participant Impact
  - How has behavior changed as a result of participation?
  - Were there any problems as a consequence of participation?
  - Are participants satisfied with their experience?
  - Have there been any particular success stories?
- Community Impact
  - What were the results of the program?
  - Do the benefits of the program outweigh the costs?
  - Were there any negative externalities associated with the program?
  - Have any community-level indicators changed?
Developing Evaluation Methods
Once you have your evaluation questions, decide on the best methods to answer them. Common evaluation methods include:
- Monitoring Systems: These systems include process measures (work done, or actions taken), outcome measures (the results), and observational systems (real-time tracking).
- Surveys: Collect feedback on goals, processes, and outcomes through surveys conducted before, during, and after the initiative.
- Interviews: Gather insights from leaders and participants about the program’s quality and impact.
- Community-Level Indicators: Use aggregate, community-wide data to assess program outcomes beyond individual participants.
Setting Up a Timeline for Evaluation Activities
Start evaluating right away. Early and continuous evaluation allows for a more accurate and comprehensive understanding of the project's progress and impact.
Starting the evaluation at the beginning of the project ensures that baseline data is collected, which serves as a reference point for measuring changes and outcomes. This initial phase involves defining key evaluation questions and identifying the types of measures needed to answer them. Data collection methods and experimental designs should also be outlined to ensure the validity and reliability of the data gathered.
Throughout the project, it is essential to provide periodic feedback and reports. Regular updates help keep the project on track and allow for timely adjustments based on interim findings. These updates should be shared at meetings with the steering committee or coalition to keep all stakeholders informed and engaged. The frequency of these updates will depend on the needs of the project and the preferences of funding partners, but they should occur often enough to catch any emerging issues and to capitalize on opportunities for improvement.
The evaluation timeline should also include specific milestones for data collection and analysis. For example, if the project spans a year, quarterly data collection and analysis might be appropriate. This approach allows for the identification of trends and patterns over time, providing a clearer picture of the project's trajectory.
As the project nears its conclusion, it is important to plan for the final evaluation activities. This phase involves gathering and analyzing all remaining data, synthesizing findings, and preparing the final evaluation report. The timing of these activities should align with the project's end date, allowing for a timely and thorough assessment while details are still fresh in the minds of those involved.
Logic Models
Similar to their usefulness in LOIs, logic models can be good tools for the evaluation process. They provide a visual representation of the relationship between a program's resources, activities, outputs, outcomes, and impacts. By outlining these elements, logic models help clarify the program's underlying assumptions and theory of change. Below is a quick review of the model's components, followed by how and when logic models are typically used in evaluation.
Inputs: Information about the raw materials, including personnel effort, that will go into the program or initiative. Inputs can also include constraints, which act as a sort of anti-resource.
Activities: What interventions will you carry out with those resources in order to effect the change you want to see?
Outputs: Hard evidence that the activities you planned actually happened. What events occurred? How many people attended? How many volunteers were trained? Who presented?
Outcomes: Your bigger results in the short term. What changes occurred as a result of the activities you carried out?
Impact: This is your long-term result. How has the community changed because of your program?
Figure 8.1. Sample Logic Model
In the early stages of evaluation, logic models are instrumental in developing specific evaluation questions. These questions are designed to systematically assess various components of the program as outlined in the logic model. By breaking down the program into its fundamental parts—inputs, activities, outputs, short-term outcomes, long-term outcomes, and overall impact—evaluators can create targeted questions that address each aspect comprehensively.
Inputs
Evaluation questions regarding inputs might include:
- Are the resources (e.g., funding, personnel, materials) being utilized as planned?
- Are there sufficient resources to support the program activities?
Activities
Questions about activities might focus on:
- How effectively are the planned activities being implemented?
- Are the activities reaching the intended participants?
Outputs
For outputs, evaluators might ask:
- How many tutoring sessions were conducted?
- What was the level of participation in each session?
Short-Term Outcomes
Questions here could include:
- Are students showing improved understanding of math and reading concepts shortly after the tutoring sessions?
- What changes in homework completion rates are observed?
Long-Term Outcomes
Evaluators might explore:
- Have students' overall academic performances improved over the course of the program?
- Are there sustained improvements in students' self-confidence regarding their academic abilities?
Impact
Finally, for impact, questions could focus on:
- What is the overall effect of the program on the educational attainment in the community?
- How has the program contributed to reducing educational disparities?
By aligning these evaluation questions with the logic model, evaluators can ensure a comprehensive and systematic assessment of the program’s implementation and outcomes.
During the implementation phase, logic models serve as a blueprint for guiding data collection and analysis. They help evaluators determine the specific types of data needed at each stage of the program and align data collection methods with the program’s activities and intended outcomes.
Inputs and Activities Data Collection
- Administrative Records: Collect data on the allocation and utilization of resources, such as funding reports, staff time logs, and material inventories.
- Activity Logs: Track the implementation of program activities, including session schedules, attendance records, and participation rates.
Outputs Data Collection
- Quantitative Measures: Gather data on the number of tutoring sessions held, the number of students attending, and the frequency of sessions.
- Qualitative Measures: Collect feedback from participants about the quality and relevance of the sessions.
Short-Term and Long-Term Outcomes Data Collection
- Surveys and Tests: Use pre- and post-assessment surveys to measure changes in students' academic skills and knowledge.
- Interviews: Conduct interviews with students, parents, and tutors to gather qualitative insights into the program’s effectiveness.
- Observations: Observe tutoring sessions to assess student engagement and tutor performance.
Impact Data Collection
- Longitudinal Studies: Track students’ academic performance over time to measure sustained impacts.
- Community Surveys: Gather data from the broader community to assess changes in educational attainment and disparities.
By using a variety of data collection methods, evaluators can obtain a comprehensive view of the program's implementation and outcomes, ensuring that the data collected is both relevant and robust.
Logic models are crucial in formative evaluation, which involves ongoing monitoring of the program to make necessary adjustments. By regularly comparing actual activities and outputs against the logic model, evaluators can identify any deviations from the original plan. This comparison helps in understanding the implications of these deviations and in making informed, data-driven decisions to enhance the program’s effectiveness.
For example, if fewer tutoring sessions are being held than planned, evaluators can investigate the reasons—such as tutor availability or student attendance issues—and recommend corrective actions. This might include recruiting additional tutors or adjusting the session times to better fit students' schedules.
Regular formative evaluation allows program managers to:
- Address issues in real-time, ensuring the program remains on track.
- Improve resource allocation based on observed needs.
- Enhance participant engagement by adjusting activities to better meet their needs.
In summative evaluation, logic models help assess the overall success of the program by linking inputs and activities to final outcomes and impacts. Evaluators use the logic model to frame their analysis, determining whether the program achieved its intended results and how these results were produced. This structured approach ensures that the evaluation report clearly communicates the program’s effectiveness and provides insights into how it achieved its goals.
A summative evaluation might involve:
- Outcome Analysis: Comparing pre- and post-program data to measure changes in student academic performance and other targeted outcomes.
- Impact Assessment: Evaluating the broader effects of the program on the community, such as changes in educational attainment and reduction in disparities.
- Cost-Benefit Analysis: Assessing whether the benefits of the program justify the costs incurred.
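As a minimal illustration of the cost-benefit step, the sketch below compares total program cost with an estimate of monetized benefits. Both figures are placeholders; in practice, monetizing benefits requires defensible assumptions about the dollar value of each outcome, which should be documented in the evaluation report.

```python
# Hypothetical sketch: a simple cost-benefit comparison for a summative
# evaluation. Dollar figures are illustrative placeholders.
program_cost = 120_000        # total grant-funded spending on the program
estimated_benefits = 185_000  # monetized value assigned to observed outcomes

net_benefit = estimated_benefits - program_cost
benefit_cost_ratio = estimated_benefits / program_cost

print(f"Net benefit: ${net_benefit:,.0f}")
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")
```

A ratio above 1.0 indicates that the estimated benefits exceed the costs.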
The final evaluation report should:
- Summarize the findings of the evaluation, highlighting key successes and areas for improvement.
- Provide evidence of the program's impact, using data and testimonials.
- Offer recommendations for future programs, based on the insights gained from the evaluation.
By using logic models throughout the evaluation process, evaluators can ensure a thorough and systematic assessment that not only measures the program’s effectiveness but also provides valuable insights for future initiatives.
Sample Grant Evaluation Plan Template
Evaluation Plan Outline
<<Insert Program Name>>
Evaluation Plan
Prepared by:
<<Insert Names>>
<<Insert Affiliations>>
<<Insert Date>>
Introduction
Provide an overview of the goals and objectives of the evaluation.
Evaluation Purpose
- What do you hope to achieve with this evaluation?
- How will the findings from this evaluation be used?
Stakeholders
- Who are the stakeholders for this evaluation?
- What is the nature of the roles of the stakeholders?
- How will the stakeholders be involved in the implementation of findings from the evaluation?
Stakeholder | Nature of Interest | Role in Evaluation and Implementation |
Description of What is Being Evaluated
Provide an overview of what you are evaluating. Make sure to be clear about how far along you are in your project.
Context
- What are the contextual factors for this evaluation?
- What environmental elements could be influencing your inputs, outputs, and outcomes?
Need
- What is the need for what you are evaluating?
Clientele
- Who are you serving with this project?
Inputs and Resources
- What resources have been available to support the project?
- What resources have not been available?
Activities
- What specific activities have been undertaken to achieve the outcomes?
- What activities are still upcoming to achieve the outcomes?
- What activities were planned but did not take place, and why?
Outputs
- What outputs have been produced as a result of the activities performed?
- What outputs were planned but did not come to fruition, and why?
Outcomes
- What are the program’s intended outcomes (intended outcomes are short-term, intermediate, or long-term)?
- What do you ultimately want to change as a result of your activities (long-term outcomes)?
- What occurs between your activities and the point at which you see these ultimate outcomes (short-term and intermediate outcomes)?
Inputs | Activities (Complete or Ongoing) | Activities (Upcoming) | Outputs | Outcomes (Short-Term) | Outcomes (Long-Term) |
Logic Model
- Include a logic model.
Evaluation Design
Provide a brief summary of how you will design your evaluation.
Evaluation Questions
- What specific questions do you hope to answer with this evaluation?
Evaluation Design
- What is the design for this evaluation?
- Why is this the best design for answering your questions?
Data Collection
Provide a brief summary of the methods you will use to gather data.
Data Collection Methods
- Will new data be collected or will secondary data be used?
- What methods will be used to acquire data?
- Will a sample be used? If so, how will the sample be selected?
- How will the quality of existing data be determined?
- How will the data be protected?
Data Relevance
How does the data relate to the evaluation questions proposed?
Evaluation Question | Collection Method(s) | Data Source(s) |
Data Analysis and Interpretation
In this section, summarize the indicators that you will use to judge success. Explain how you will analyze your evaluation findings and interpret and justify your conclusions.
Performance Standards and Indicators
- To what standards will you compare your evaluation findings to determine success?
- What are some measurable or observable elements that can indicate the performance of what is being evaluated?
Evaluation Question | Indicator | Standard for Success |
Analysis
- What method will you use to analyze your data?
- Who will do the analysis and what are their qualifications for doing so?
Interpretation
- Who will be involved in drawing conclusions, and why?
- Who will be involved in interpreting conclusions, and why?
- Who will be involved in justifying conclusions, and why?
Evaluation Management
Give a brief overview of how the evaluation will be managed and implemented.
Evaluation Team
- Who will be involved? Please include the resume/CV of the lead evaluator.
Individual | Role | Responsibilities |
Data Management
- Who will be responsible for managing data?
Timeline
- When will planning and administrative tasks occur?
- Will there be pilot testing? If so, when?
- When will formal data collection and analysis tasks occur?
- When will information dissemination tasks occur?
Budget
- What is the budget for the evaluation?
- Where is the money coming from?
References
Bond, S. L., Boyd, S. E., & Rapp, K. A. (1997). Taking stock: A practical guide to evaluating your own programs. Horizon Research, Inc.
Burke Smith, N., & Gabriel Works, E. (2012). The complete book of grant writing. Sourcebooks.
Floersch, B. (2014). How to evaluate a grant development proposal. The Grantsmanship Center. https://www.tgci.com/blog/2014/06/how-evaluate-grant-development-professional
Frechtling Westat, J. (2002). The 2002 user friendly handbook for project evaluation. National Science Foundation.
Hug, S. E., & Aeschbach, M. (2020). Criteria for assessing grant applications: A systematic review. Palgrave Communications, 6, 37. https://doi.org/10.1057/s41599-020-0412-9
McCawley, P. F. (2001). The logic model for program planning and evaluation. University of Idaho Extension.
National Institute of Environmental Health Sciences & U.S. Department of Health and Human Services. (2012). Partnerships for environmental public health evaluation metrics manual (NIH Publication No. 12-7825).
Scherer, S. (2016). Developing a framework for grant evaluation: Integrating accountability and learning. The Foundation Review, 8(2). https://doi.org/10.9707/1944-5660.1297
Shackman, G. (2009). What is program evaluation? A beginner's guide. The Global Social Change Research Project. https://www.ebhoward.com/evaluating-grant-funded-programs-and-projects-a-guide-for-grantees/
Yuen, F. K. O., Terao, K. L., & Schmidt, A. M. (2013). Effective Grant Writing and Program Evaluation for Human Service Professionals. Wiley: Hoboken, N.J.