
Evaluation Toolkit
Evaluation Primer for Public Health Programs
Introduction
This primer contains basic definitions and resources on evaluation types, methodologies, steps, reporting, and standards as they relate to public health and as referenced in the Evaluation Choose and Use Guide.
For more detailed guides on evaluation, please see the list of key resources in the Evaluation Toolkit.
Evaluation Types
- Context Evaluation. An investigation of how a program interacts with social, political, physical, or economic environments. This type of evaluation could include a community needs or organizational assessment.1 Sample question: What are the environmental barriers to accessing program services?2
- Formative Evaluation. A strategy of assessing needs that a new program should fulfill,3 examining the early stages of a program's development,4 or testing a program on a small scale before broad dissemination.5 Sample question: Who is the intended audience for the program?6
- Impact Evaluation. A process that assesses the intended and unintended changes that can be attributed to a particular intervention, such as a project, program, or policy.7 Some evaluators limit these changes to those occurring immediately.8 Sample question: Did participant knowledge change after attending the program?9
- Longitudinal Evaluation. A study that captures data over a period of time to track the long-term effects of changes in products, processes, or environment. A longitudinal study involves repeated observations of a group of participants over time, at regular intervals, with respect to one or more study variables, and is mainly conducted to follow changes in perceptions, behaviors, attitudes, and motivation.10
- Outcome Evaluation. A process of assessing the short- and long-term results of a program. It provides an "assessment of the effects of a program on the ultimate objectives, including changes in health and social benefits or quality of life."11 Sample question: What are the long-term positive effects of program participation?12
- Performance or Program Evaluation. A systematic method for collecting, analyzing, and using information to answer questions about projects, policies, and programs, particularly about their effectiveness and efficiency.13 This method of evaluation attempts to determine whether the program is having the intended effect as planned and how the program could be improved.14 It is similar to process evaluation, differing only in that it provides regular updates of evaluation results to stakeholders rather than summarizing results at the evaluation's conclusion.15
- Process Evaluation. An examination of the implementation and operation of program components "that assesses whether the program is performing as intended or according to some appropriate standard."16 Sample question: Was the program administered as planned?17
- Quasi-Experimental Evaluation. Designs that use comparison groups rather than randomly assigned control groups as the baseline against which to measure net program impacts. Evaluations using these kinds of comparison groups can effectively test for the effects of program participation on outcomes under certain conditions.18
- Statistical Evaluation. The use of data gathered from surveys, observation, or data mining, with the goal of highlighting useful trends and suggesting conclusions based on mathematical or computational techniques (see the sketch following this list).19
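To make the impact and statistical evaluation entries above more concrete, the sketch below shows one way a pre/post knowledge comparison might be computed. The scores, sample size, and use of the SciPy library are illustrative assumptions, not part of any method cited in this primer.

```python
# A minimal sketch of a statistical evaluation of pre/post knowledge scores.
# All scores below are illustrative placeholders, not real program data.
from scipy import stats

# Hypothetical knowledge-test scores for the same ten participants,
# measured before and after attending the program.
pre_scores  = [55, 60, 48, 72, 65, 58, 70, 62, 50, 67]
post_scores = [68, 66, 55, 80, 72, 64, 78, 70, 61, 74]

# Paired t-test: did mean knowledge change after the program?
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)

mean_change = sum(b - a for a, b in zip(pre_scores, post_scores)) / len(pre_scores)
print(f"Mean change in score: {mean_change:.1f} points")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```

A real evaluation would pair a calculation like this with an appropriate study design and sample size; the code only illustrates the computational step.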
References
- W.K. Kellogg Foundation (1998). Evaluation Handbook.
- Puett R. (2000). Program Evaluation 101. The Medical University of South Carolina: National Violence Against Women Prevention Research Center. Accessed July 9, 2013.
- Short L, Hennessy M, Campbell J. (1996). "Tracking the Work". In Family Violence: Building a Coordinated Community Response: A Guide for Communities.
- Rossi PH, Freeman HE. (1993). Evaluation: A Systematic Approach (5th ed.). Newbury Park, CA: Sage Publications, Inc.
- Coyle SL, Boruch RF, Turner F. (Eds.). (1991). Evaluating AIDS prevention programs: Expanded edition. Washington, DC: National Academy Press.
- Puett R. (2000). Program Evaluation 101. The Medical University of South Carolina: National Violence Against Women Prevention Research Center. Accessed July 9, 2013.
- Khandker SR, Koolwal GB, Samad HA. (2010). Handbook on Impact Evaluation: Quantitative Methods and Practices. The World Bank.
- Green LW, Kreuter MW. (1991). Health Promotion Planning: An Educational and Environmental Approach (2nd ed.). Mountain View, CA: Mayfield Publishing Company.
- Puett R. (2000).
- User Experience Professionals' Association. Usability Body of Knowledge. Longitudinal Study. Accessed July 9, 2013.
- Green LW, Kreuter MW. (1991).
- Puett R. (2000).
- Administration for Children and Families. (2018). The Program Manager's Guide to Evaluation (2nd ed.). Chapter 2: What is program evaluation?
- Shackman G. "What Is Program Evaluation: A Beginner's Guide". The Global Social Change Research Project.
- Rossi PH, Lipsey MW, Freeman HE. (2004). Evaluation: A Systematic Approach (7th ed.). London: Sage Publications.
- Ibid.
- Puett R. (2000).
- Heckman JJ, Hotz VJ, Dabos M. (1987). Do we need experimental data to evaluate the impact of manpower training on earnings? Evaluation Review, 11, 395-427.
- Given LM. (2008). The Sage encyclopedia of qualitative research methods. Los Angeles, CA: Sage Publications.
Evaluation Methodologies
- Case study. An analysis of persons, events, decisions, periods, projects, policies, institutions, or other systems studied holistically by one or more methods. "The case that is the subject of the inquiry will be an instance of a class of phenomena that provides an analytical frame — an object — within which the study is conducted and which the case illuminates and explicates."1
- Chart audit. A systematic process of reviewing medical records to measure performance, often for the purposes of quality improvement.2
- Data analysis. The process of inspecting, cleaning, transforming, and modeling data with the goal of highlighting useful information, suggesting conclusions, and supporting decision making (see the sketch following this list).
- Document review. The collection of data by reviewing existing documents. "The documents may be internal to a program or organization (such as records of what components of an asthma management program were implemented in schools) or may be external (such as records of emergency room visits by students served by an asthma management program)."3
- Focus group. A group interview used to obtain information about how a topic is perceived socially; participants are asked to share their perceptions, opinions, and attitudes in an interactive setting that encourages them to respond to one another.4
- Interview. A conversation, either in person or over communication media, between two or more people in which the interviewer asks questions of the interviewee to elicit facts and opinions.
- Key informant interview. The collection of opinions from members of a community who are knowledgeable about a topic through working or living within that community or health care system. "The interviews provide structure and consistency to information-gathering and are especially suited to getting a picture of a particular environment and how it works – a local health system, political relationships, community organization."5
- Logic model. A depiction of a program that shows what it will do and what it aims to accomplish, presenting the relationships between investments, activities, and results as a series of "if-then" relationships that, if implemented as intended, lead to the desired outcomes.6
- Needs assessment. A systematic process to identify the strengths and weaknesses of a group, community, or organization in order to meet existing and future challenges and make ongoing improvements.7
- Observation. The collection of data to document activities, behavior, and physical actions/attributes without depending on subjects' willingness and ability to respond to questions.8
- Questionnaire/survey/checklist. Research instruments consisting of questions or other prompts to gather information (often quantitative) from respondents.
- Site visit. An in-person visit, often structured and conducted in an official capacity, to examine a site, program, or organization for evaluation purposes.
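The sketch below illustrates the inspect, clean, transform, and summarize steps described under "Data analysis" above, applied to a handful of hypothetical questionnaire responses. The column names, the 1-5 satisfaction scale, and the use of the pandas library are assumptions made for the example, not a required approach.

```python
# A minimal sketch of the inspect/clean/transform/summarize cycle for
# hypothetical questionnaire data; values and columns are illustrative only.
import pandas as pd

# Hypothetical survey responses; some entries are missing or out of range.
responses = pd.DataFrame({
    "site":         ["A", "A", "B", "B", "B", "C"],
    "satisfaction": [4, 5, None, 3, 7, 4],   # intended scale: 1-5
    "attended":     [True, True, False, True, True, True],
})

# Inspect: where are values missing?
print(responses.isna().sum())

# Clean: drop missing answers and values outside the 1-5 scale.
clean = responses.dropna(subset=["satisfaction"])
clean = clean[clean["satisfaction"].between(1, 5)]

# Summarize: mean satisfaction among attendees, by site.
print(clean[clean["attended"]].groupby("site")["satisfaction"].mean())
```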
References
- Thomas G. (2011). A Typology for the Case Study in Social Science Following a Review of Definition, Discourse and Structure. Qualitative Inquiry, 17(6), 511-521.
- Kaprielian V et al. (2003). Chart Audit: The How's and Why's. Durham, NC: Duke University Medical Center.
- Centers for Disease Control and Prevention. Data Collection Methods for Evaluation: Document Review. Evaluation Briefs. Number 18, January 2009.
- Community Health Education Concepts. Focus Group Guide for Public Health Professionals.
- Sherry ST, Marlow A. (1999). Getting the Lay of the Land on Health: A Guide for Using Interviews to Gather Information (Key Informant Interviews). The Access Project with Brandeis University's Heller Graduate School and the Collaborative for Community Health Development.
- Haverkate R. (2013). Why All the Excitement About Logic Models? Rockville, MD: Office of Minority Health Resource Center.
- Peterson DJ, Alexander GR. (2001). Needs Assessment in Public Health: A Practical Guide for Students and Professionals. Hingham, MA: Kluwer Academic Publishers.
- Taylor-Powell E, Steele S. (1996). Collecting Evaluation Data: Direct Observation. Madison, WI: University of Wisconsin.
Evaluation Steps and Effective Reporting
The Centers for Disease Control and Prevention's (CDC's) Office of the Associate Director for Program/Program Evaluation has developed six steps for conducting public health evaluations:
- Engaging stakeholders.
- Describing the program.
- Focusing the evaluation design.
- Gathering credible evidence.
- Justifying conclusions.
- Ensuring use and sharing lessons learned.
In addition, a checklist for ensuring effective evaluation reports has been adapted from Worthen BR, Sanders JR, Fitzpatrick JL. Program evaluation: alternative approaches and practical guidelines. 2nd ed. New York, NY: Addison Wesley Longman, Inc.; 1997.
Evaluation Standards
CDC's Office of the Associate Director for Program/Program Evaluation has developed a set of standards adopted from the Joint Committee on Standards for Educational Evaluation. These 30 standards are organized into four groups:
- Utility standards ensure that evaluations meet the needs of intended users.
- Feasibility standards ensure that evaluations are realistic, thoughtful, and fiscally responsible.
- Propriety standards ensure that evaluations are conducted legally, ethically, and with appropriate concern for the welfare of those involved with and affected by such evaluations.
- Accuracy standards ensure that evaluations convey appropriate information about the features that determine the worth or merit of the evaluated program (adapted from CDC's Standards).
Evaluation Primer for Public Health Programs (July 2013; updated May 2024)
Author: John Richards, M.A., AITP, MCH Digital Library
Editor: Ruth Barzel, M.A., MCH Digital Library