
SOCW 6311: Social Work Research in Practice II (September 2023)

Please note that this is a master's-level course, so master's-level work is expected. Please check your grammar, use APA format, and use the readings that I have provided. You must answer all the questions that I post. Thank you. Please note that my school checks for plagiarism through SafeAssign.

Week 10 

Readings

• Dudley, J. R. (2014). Social work evaluation: Enhancing what we do. (2nd ed.) Chicago, IL: Lyceum Books.

o Chapter 9, “Is the Intervention Effective?” (pp. 213–250)

o Chapter 10, “Analyzing Evaluation Data” (pp. 255-275)

• McNamara, C. (2006a). Contents of an evaluation plan. In Basic guide to program evaluation (including outcomes evaluation). Retrieved from http://managementhelp.org/evaluation/program-evaluation-guide.htm#anchor1586742


Basic Guide to Program Evaluation (Including Outcomes Evaluation)

© Copyright Carter McNamara, MBA, PhD, Authenticity Consulting, LLC.


This document provides guidance for planning and implementing an evaluation process for for-profit or nonprofit programs. There are many kinds of evaluations that can be applied to programs, for example, goals-based, process-based and outcomes-based. Nonprofit organizations are increasingly interested in outcomes-based evaluation. If you are interested in learning more about outcomes-based evaluation, see the sections Outcomes-Evaluation and Outcomes-Based Evaluations in Nonprofit Organizations.

Sections of This Topic Include

Program Evaluation: carefully getting information to make decisions about programs
Where Program Evaluation is Helpful
Basic Ingredients (you need an organization and program(s))
Planning Program Evaluation (what do you want to learn about, what info is needed)
Major Types of Program Evaluation (evaluating program processes, goals, outcomes, etc.)
Overview of Methods to Collect Information (questionnaires, interviews, focus groups, etc.)
Selecting Which Methods to Use (which methods work best to get needed info from audiences)
Analyzing and Interpreting Information
Reporting Evaluation Results
Who Should Carry Out the Evaluation?
Contents of an Evaluation Plan
Pitfalls to Avoid

Online Guides, etc.
Outcomes-Evaluation
General Resources



A Brief Introduction …

Note that the concept of program evaluation can include a wide variety of methods to evaluate many aspects of programs in nonprofit or for-profit organizations. There are numerous books and other materials that provide in-depth analysis of evaluations, their designs, methods, combination of methods and techniques of analysis. However, personnel do not have to be experts in these topics to carry out a useful program evaluation. The “20-80” rule applies here, that 20% of effort generates 80% of the needed results. It’s better to do what might turn out to be an average effort at evaluation than to do no evaluation at all. (Besides, if you resort to bringing in an evaluation consultant, you should be a smart consumer. Far too many program evaluations generate information that is either impractical or irrelevant — if the information is understood at all.) This document orients personnel to the nature of program evaluation and how it can be carried out in a realistic and practical fashion.

Note that much of the information in this section was gleaned from various works of Michael Quinn Patton.

Program Evaluation
Some Myths About Program Evaluation

1. Many people believe evaluation is a useless activity that generates lots of boring data with useless conclusions. This was a problem with evaluations in the past when program evaluation methods were chosen largely on the basis of achieving complete scientific accuracy, reliability and validity. This approach often generated extensive data from which very carefully chosen conclusions were drawn. Generalizations and recommendations were avoided. As a result, evaluation reports tended to reiterate the obvious and left program administrators disappointed and skeptical about the value of evaluation in general. More recently (especially as a result of Michael Patton’s development of utilization-focused evaluation), evaluation has focused on utility, relevance and practicality at least as much as scientific validity.

2. Many people believe that evaluation is about proving the success or failure of a program. This myth assumes that success is implementing the perfect program and never having to hear from employees, customers or clients again — the program will now run itself perfectly. This doesn’t happen in real life. Success is remaining open to continuing feedback and adjusting the program accordingly. Evaluation gives you this continuing feedback.

3. Many believe that evaluation is a highly unique and complex process that occurs at a certain time in a certain way, and almost always includes the use of outside experts. Many people believe they must completely understand terms such as validity and reliability. They don’t have to. They do have to consider what information they need in order to make current decisions about program issues or needs. And they have to be willing to commit to understanding what is really going on. Note that many people regularly undertake some form of program evaluation — they just don’t do it in a formal fashion, so they don’t get the most out of their efforts, or they draw conclusions that are inaccurate (some evaluators would disagree that this is program evaluation if not done methodically). Consequently, they miss precious opportunities to make more of a difference for their customers and clients, or to get a bigger bang for their buck.

So What is Program Evaluation?

First, we’ll consider “what is a program?” Typically, organizations work from their mission to identify several overall goals which must be reached to accomplish their mission. In nonprofits, each of these goals often becomes a program. Nonprofit programs are organized methods to provide certain related services to constituents, e.g., clients, customers, patients, etc. Programs must be evaluated to decide if the programs are indeed useful to constituents. In a for-profit, a program is often a one-time effort to produce a new product or line of products.

So, still, what is program evaluation? Program evaluation is carefully collecting information about a program, or some aspect of a program, in order to make necessary decisions about the program. Program evaluation can include any of at least 35 different types of evaluation, such as needs assessment, accreditation, cost/benefit analysis, effectiveness, efficiency, formative, summative, goal-based, process, outcomes, etc. The type of evaluation you undertake depends on what you want to learn about the program. Don’t worry about what type of evaluation you need or are doing — worry about what you need to know to make the program decisions you need to make, and worry about how you can accurately collect and understand that information.

Where Program Evaluation is Helpful
Frequent Reasons:

Program evaluation can:
1. Understand, verify or increase the impact of products or services on customers or clients – These “outcomes” evaluations are increasingly required by nonprofit funders as verification that the nonprofits are indeed helping their constituents. Too often, service providers (for-profit or nonprofit) rely on their own instincts and passions to conclude what their customers or clients really need and whether the products or services are providing what is needed. Over time, these organizations find themselves doing a lot of guessing about what would be a good product or service, and a lot of trial and error about how new products or services could be delivered.
2. Improve delivery mechanisms to be more efficient and less costly – Over time, product or service delivery can become an inefficient collection of activities that are less efficient and more costly than they need to be. Evaluations can identify program strengths and weaknesses to improve the program.
3. Verify that you’re doing what you think you’re doing – Typically, plans about how to deliver services end up changing substantially as those plans are put into place. Evaluations can verify whether the program is really running as originally planned.

Other Reasons:

Program evaluation can:
4. Facilitate management’s really thinking about what their program is all about, including its goals, how it meets its goals and how it will know whether it has met its goals or not.
5. Produce data or verify results that can be used for public relations and promoting services in the community.
6. Produce valid comparisons between programs to decide which should be retained, e.g., in the face of pending budget cuts.
7. Fully examine and describe effective programs for duplication elsewhere.

Basic Ingredients: Organization and Program(s)
You Need An Organization:

This may seem too obvious to discuss, but before an organization embarks on evaluating a program, it should have well established means to conduct itself as an organization, e.g., (in the case of a nonprofit) the board should be in good working order, the organization should be staffed and organized to conduct activities to work toward the mission of the organization, and there should be no current crisis that is clearly more important to address than evaluating programs.

You Need Program(s):

To effectively conduct program evaluation, you should first have programs. That is, you need a strong impression of what your customers or clients actually need. (You may have used a needs assessment to determine these needs — itself a form of evaluation, but usually the first step in a good marketing plan.) Next, you need some effective methods to meet each of those needs. These methods are usually in the form of programs.

It often helps to think of your programs in terms of inputs, process, outputs and outcomes. Inputs are the various resources needed to run the program, e.g., money, facilities, customers, clients, program staff, etc. The process is how the program is carried out, e.g., customers are served, clients are counseled, children are cared for, art is created, association members are supported, etc. The outputs are the units of service, e.g., number of customers serviced, number of clients counseled, children cared for, artistic pieces produced, or members in the association. Outcomes are the impacts on the customers or on clients receiving services, e.g., increased mental health, safe and secure development, richer artistic appreciation and perspectives in life, increased effectiveness among members, etc.
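
To make the inputs, process, outputs and outcomes distinction concrete, here is a minimal sketch in Python (a hypothetical illustration, not part of McNamara's guide) of a program described as a simple record; the counseling program and its values are invented:

    from dataclasses import dataclass, field

    @dataclass
    class ProgramLogicModel:
        """One program described by its inputs, process, outputs and outcomes."""
        name: str
        inputs: list[str] = field(default_factory=list)    # resources: money, staff, facilities
        process: list[str] = field(default_factory=list)   # how the program is carried out
        outputs: list[str] = field(default_factory=list)   # units of service delivered
        outcomes: list[str] = field(default_factory=list)  # impacts on the clients served

    # Hypothetical example: a client counseling program
    counseling = ProgramLogicModel(
        name="Client Counseling",
        inputs=["grant funding", "two counselors", "office space"],
        process=["clients are screened", "clients receive weekly counseling"],
        outputs=["120 clients counseled per year"],
        outcomes=["increased mental health", "greater client self-reliance"],
    )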

Planning Your Program Evaluation
Depends on What Information You Need to Make Your Decisions and On Your Resources.

Often, management wants to know everything about their products, services or programs. However, limited resources usually force managers to prioritize what they need to know to make current decisions.

Your program evaluation plans depend on what information you need to collect in order to make major decisions. Usually, management is faced with having to make major decisions due to decreased funding, ongoing complaints, unmet needs among customers and clients, the need to polish service delivery, etc. For example, do you want to know more about what is actually going on in your programs, whether your programs are meeting their goals, the impact of your programs on customers, etc? You may want other information or a combination of these. Ultimately, it’s up to you.

But the more focused you are about what you want to examine by the evaluation, the more efficient you can be in your evaluation, the shorter the time it will take you and ultimately the less it will cost you (whether in your own time, the time of your employees and/or the time of a consultant).

There are trade-offs, too, in the breadth and depth of information you get. The more breadth you want, usually the less depth you get (unless you have a great deal of resources to carry out the evaluation). On the other hand, if you want to examine a certain aspect of a program in great detail, you will likely not get as much information about other aspects of the program.

Those starting out in program evaluation, or who have very limited resources, can use various methods to get a good mix of breadth and depth of information. They can both understand more about certain areas of their programs and not go bankrupt doing so.

Key Considerations:

Consider the following key questions when designing a program evaluation.
1. For what purposes is the evaluation being done, i.e., what do you want to be able to decide as a result of the evaluation?
2. Who are the audiences for the information from the evaluation, e.g., bankers, funders, board, management, staff, customers, clients, etc.?
3. What kinds of information are needed to make the decision you need to make and/or enlighten your intended audiences, e.g., information to really understand the process of the product or program (its inputs, activities and outputs), the customers or clients who experience the product or program, strengths and weaknesses of the product or program, benefits to customers or clients (outcomes), how the product or program failed and why, etc.
4. From what sources should the information be collected, e.g., employees, customers, clients, groups of customers or clients and employees together, program documentation, etc.
5. How can that information be collected in a reasonable fashion, e.g., questionnaires, interviews, examining documentation, observing customers or employees, conducting focus groups among customers or employees, etc.
6. When is the information needed (so, by when must it be collected)?
7. What resources are available to collect the information?

Some Major Types of Program Evaluation

When designing your evaluation approach, it may be helpful to review the following three types of evaluations, which are rather common in organizations. Note that you should not design your evaluation approach simply by choosing which of the following three types you will use — you should design your evaluation approach by carefully addressing the above key considerations.

Goals-Based Evaluation

Often programs are established to meet one or more specific goals. These goals are often described in the original program plans.

Goals-based evaluations assess the extent to which programs are meeting predetermined goals or objectives. Questions to ask yourself when designing an evaluation to see if you reached your goals are:
1. How were the program goals (and objectives, if applicable) established? Was the process effective?
2. What is the status of the program’s progress toward achieving the goals?
3. Will the goals be achieved according to the timelines specified in the program implementation or operations plan? If not, then why?
4. Do personnel have adequate resources (money, equipment, facilities, training, etc.) to achieve the goals?
5. How should priorities be changed to put more focus on achieving the goals? (Depending on the context, this question might be viewed as a program management decision, more than an evaluation question.)
6. How should timelines be changed (be careful about making these changes – know why efforts are behind schedule before timelines are changed)?
7. How should goals be changed (be careful about making these changes – know why efforts are not achieving the goals before changing the goals)? Should any goals be added or removed? Why?
8. How should goals be established in the future?

Process-Based Evaluations

Process-based evaluations are geared to fully understanding how a program works — how it produces the results that it does. These evaluations are useful when programs are long-standing and have changed over the years, when employees or customers report a large number of complaints about the program, or when there appear to be large inefficiencies in delivering program services. They are also useful for accurately portraying to outside parties how a program truly operates (e.g., for replication elsewhere).

There are numerous questions that might be addressed in a process evaluation. These questions can be selected by carefully considering what is important to know about the program. Examples of questions to ask yourself when designing an evaluation to understand and/or closely examine the processes in your programs are:
1. On what basis do employees and/or the customers decide that products or services are needed?
2. What is required of employees in order to deliver the product or services?
3. How are employees trained about how to deliver the product or services?
4. How do customers or clients come into the program?
5. What is required of customers or clients?
6. How do employees select which products or services will be provided to the customer or client?
7. What is the general process that customers or clients go through with the product or program?
8. What do customers or clients consider to be strengths of the program?
9. What do staff consider to be strengths of the product or program?
10. What typical complaints are heard from employees and/or customers?
11. What do employees and/or customers recommend to improve the product or program?
12. On what basis do employees and/or customers decide that the products or services are no longer needed?

Outcomes-Based Evaluation

Program evaluation with an outcomes focus is increasingly important for nonprofits and is increasingly asked for by funders. An outcomes-based evaluation facilitates your asking whether your organization is really doing the right program activities to bring about the outcomes you believe (or better yet, you’ve verified) to be needed by your clients (rather than just engaging in busy activities which seem reasonable to do at the time). Outcomes are benefits to clients from participation in the program. Outcomes are usually in terms of enhanced learning (knowledge, perceptions/attitudes or skills) or conditions, e.g., increased literacy, self-reliance, etc. Outcomes are often confused with program outputs, or units of service, e.g., the number of clients who went through a program.

The United Way of America (http://www.unitedway.org/outcomes/) provides an excellent overview of outcomes-based evaluation, including introduction to outcomes measurement, a program outcome model, why to measure outcomes, use of program outcome findings by agencies, eight steps to success for measuring outcomes, examples of outcomes and outcome indicators for various programs and the resources needed for measuring outcomes. The following information is a top-level summary of information from this site.

To accomplish an outcomes-based evaluation, you should first pilot, or test, this evaluation approach on one or two programs at most (before doing all programs).

The general steps to accomplish an outcomes-based evaluation include to:
1. Identify the major outcomes that you want to examine or verify for the program under evaluation. You might reflect on your mission (the overall purpose of your organization) and ask yourself what impacts you will have on your clients as you work towards your mission. For example, if your overall mission is to provide shelter and resources to abused women, then ask yourself what benefits this will have on those women if you effectively provide them shelter and other services or resources. As a last resort, you might ask yourself, “What major activities are we doing now?” and then for each activity, ask “Why are we doing that?” The answer to this “Why?” question is usually an outcome. This “last resort” approach, though, may just end up justifying ineffective activities you are doing now, rather than examining what you should be doing in the first place.
2. Choose the outcomes that you want to examine, prioritize the outcomes and, if your time and resources are limited, pick the top two to four most important outcomes to examine for now.
3. For each outcome, specify what observable measures, or indicators, will suggest that you’re achieving that key outcome with your clients. This is often the most important and enlightening step in outcomes-based evaluation. However, it is often the most challenging and even confusing step, too, because you’re suddenly going from a rather intangible concept, e.g., increased self-reliance, to specific activities, e.g., supporting clients to get themselves to and from work, staying off drugs and alcohol, etc. It helps to have a “devil’s advocate” during this phase of identifying indicators, i.e., someone who can question why you can assume that an outcome was reached because certain associated indicators were present.
4. Specify a “target” goal of clients, i.e., what number or percent of clients you commit to achieving specific outcomes with, e.g., “increased self-reliance (an outcome) for 70% of adult, African American women living in the inner city of Minneapolis as evidenced by the following measures (indicators) …” (A worked sketch follows this list.)
5. Identify what information is needed to show these indicators, e.g., you’ll need to know how many clients in the target group went through the program, how many of them reliably undertook their own transportation to work and stayed off drugs, etc. If your program is new, you may need to evaluate the process in the program to verify that the program is indeed carried out according to your original plans. (Michael Patton, prominent researcher, writer and consultant in evaluation, suggests that the most important type of evaluation to carry out may be this implementation evaluation to verify that your program ended up to be implemented as you originally planned.)
6. Decide how that information can be efficiently and realistically gathered (see Selecting Which Methods to Use below). Consider program documentation, observation of program personnel and clients in the program, questionnaires and interviews about clients’ perceived benefits from the program, case studies of program failures and successes, etc. You may not need all of the above (see Overview of Methods to Collect Information below).
7. Analyze and report the findings (see Analyzing and Interpreting Information below).
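
As a hedged illustration of steps 4 and 5 above, the following Python sketch checks measured results against a stated target; the 70% target echoes the example in step 4, while the client counts are invented:

    # Hypothetical numbers: 90 clients went through the program and 58 showed
    # the indicators for "increased self-reliance"; the target was 70%.
    clients_served = 90
    clients_achieving_outcome = 58
    target_percent = 70.0

    achieved_percent = 100.0 * clients_achieving_outcome / clients_served
    print(f"{achieved_percent:.1f}% of clients achieved the outcome "
          f"(target: {target_percent:.0f}%)")
    print("Target met" if achieved_percent >= target_percent else "Target not yet met")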

Overview of Methods to Collect Information

The following table provides an overview of the major methods used for collecting data during evaluations.

Method: Questionnaires, surveys, checklists
Overall purpose: when you need to quickly and/or easily get lots of information from people in a non-threatening way
Advantages: can be completed anonymously; inexpensive to administer; easy to compare and analyze; can be administered to many people; can get lots of data; many sample questionnaires already exist
Challenges: might not get careful feedback; wording can bias clients’ responses; are impersonal; in surveys, may need a sampling expert; doesn’t get the full story

Method: Interviews
Overall purpose: when you want to fully understand someone’s impressions or experiences, or learn more about their answers to questionnaires
Advantages: get the full range and depth of information; develops a relationship with the client; can be flexible with the client
Challenges: can take much time; can be hard to analyze and compare; can be costly; the interviewer can bias the client’s responses

Method: Documentation review
Overall purpose: when you want an impression of how the program operates without interrupting the program; based on a review of applications, finances, memos, minutes, etc.
Advantages: get comprehensive and historical information; doesn’t interrupt the program or the client’s routine in the program; information already exists; few biases about the information
Challenges: often takes much time; info may be incomplete; need to be quite clear about what you are looking for; not a flexible means to get data; data are restricted to what already exists

Method: Observation
Overall purpose: to gather accurate information about how a program actually operates, particularly about processes
Advantages: view the operations of a program as they are actually occurring; can adapt to events as they occur
Challenges: can be difficult to interpret observed behaviors; can be complex to categorize observations; can influence the behaviors of program participants; can be expensive

Method: Focus groups
Overall purpose: to explore a topic in depth through group discussion, e.g., reactions to an experience or suggestion, understanding common complaints, etc.; useful in evaluation and marketing
Advantages: quickly and reliably get common impressions; can be an efficient way to get a wide range and depth of information in a short time; can convey key information about programs
Challenges: can be hard to analyze responses; need a good facilitator for safety and closure; difficult to schedule 6-8 people together

Method: Case studies
Overall purpose: to fully understand or depict a client’s experience in a program, and to conduct a comprehensive examination through cross-comparison of cases
Advantages: fully depicts the client’s experience in program input, process and results; a powerful means to portray the program to outsiders
Challenges: usually quite time-consuming to collect, organize and describe; represents depth of information rather than breadth

Also consider
Appreciative Inquiry
Survey Design

Ethics: Informed Consent from Program Participants

Note that if your evaluation will focus on and report personal information about the customers or clients participating in the evaluation, then you should first gain their consent to do so. They should understand what you’re asking of them in the evaluation and how any information associated with them will be reported. You should clearly convey the terms of confidentiality regarding access to evaluation results. They should have the right to participate or not. Have participants review and sign an informed consent form. See the sample informed-consent form.

How to Apply Certain Methods

Purposes and Formats of Questions
Developing Questionnaires
Conducting Interviews
Conducting Focus Groups
Developing Case Studies

Selecting Which Methods to Use
Overall Goal in Selecting Methods:

The overall goal in selecting evaluation method(s) is to get the most useful information to key decision makers in the most cost-effective and realistic fashion. Consider the following questions:
1. What information is needed to make current decisions about a product or program?
2. Of this information, how much can be collected and analyzed in a low-cost and practical manner, e.g., using questionnaires, surveys and checklists?
3. How accurate will the information be (reference the above table for disadvantages of methods)?
4. Will the methods get all of the needed information?
5. What additional methods should and could be used if additional information is needed?
6. Will the information appear as credible to decision makers, e.g., to funders or top management?
7. Will the nature of the audience conform to the methods, e.g., will they fill out questionnaires carefully, engage in interviews or focus groups, let you examine their documentation, etc.?
8. Who can administer the methods now or is training required?
9. How can the information be analyzed?

Note that, ideally, the evaluator uses a combination of methods, for example, a questionnaire to quickly collect a great deal of information from a lot of people, and then interviews to get more in-depth information from certain respondents to the questionnaires. Perhaps case studies could then be used for more in-depth analysis of unique and notable cases, e.g., those who benefited or not from the program, those who quit the program, etc.

Four Levels of Evaluation:

There are four levels of evaluation information that can be gathered from clients, including getting their:
1. reactions and feelings (feelings are often poor indicators that your service made a lasting impact)
2. learning (enhanced attitudes, perceptions or knowledge)
3. changes in skills (applied the learning to enhance behaviors)
4. effectiveness (improved performance because of enhanced behaviors)

Usually, the farther down the list your evaluation information gets, the more useful your evaluation is. Unfortunately, it is quite difficult to reliably get information about effectiveness. Still, information about learning and skills is quite useful.

Analyzing and Interpreting Information

Analyzing quantitative and qualitative data is often the topic of advanced research and evaluation methods. There are certain basics which can help to make sense of reams of data.

Always start with your evaluation goals:
When analyzing data (whether from questionnaires, interviews, focus groups, or whatever), always start from review of your evaluation goals, i.e., the reason you undertook the evaluation in the first place. This will help you organize your data and focus your analysis. For example, if you wanted to improve your program by identifying its strengths and weaknesses, you can organize data into program strengths, weaknesses and suggestions to improve the program. If you wanted to fully understand how your program works, you could organize data in the chronological order in which clients go through your program. If you are conducting an outcomes-based evaluation, you can categorize data according to the indicators for each outcome.

Basic analysis of “quantitative” information (for information other than commentary, e.g., ratings, rankings, yes’s, no’s, etc.):
1. Make copies of your data and store the master copy away. Use the copy for making edits, cutting and pasting, etc.
2. Tabulate the information, i.e., add up the number of ratings, rankings, yes’s, no’s for each question.
3. For ratings and rankings, consider computing a mean, or average, for each question. For example, “For question #1, the average ranking was 2.4”. This is more meaningful than indicating, e.g., how many respondents ranked 1, 2, or 3.
4. Consider conveying the range of answers, e.g., 20 people ranked “1”, 30 ranked “2”, and 20 people ranked “3”.
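
The tabulation in steps 2 through 4 above can be sketched in a few lines of Python; the ratings below are invented for illustration:

    from collections import Counter
    from statistics import mean

    # Hypothetical ratings for question #1 on a 1-3 scale
    ratings = [1] * 20 + [2] * 30 + [3] * 20

    tally = Counter(ratings)                        # step 2: tabulate the answers
    print(f"Average ranking: {mean(ratings):.1f}")  # step 3: mean per question
    for value in sorted(tally):                     # step 4: range of answers
        print(f"{tally[value]} people ranked '{value}'")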

Basic analysis of “qualitative” information (respondents’ verbal answers in interviews, focus groups, or written commentary on questionnaires):
1. Read through all the data.
2. Organize comments into similar categories, e.g., concerns, suggestions, strengths, weaknesses, similar experiences, program inputs, recommendations, outputs, outcome indicators, etc.
3. Label the categories or themes, e.g., concerns, suggestions, etc.
4. Attempt to identify patterns, or associations and causal relationships in the themes, e.g., all people who attended programs in the evening had similar concerns, most people came from the same geographic area, most people were in the same salary range, what processes or events respondents experience during the program, etc.
5. Keep all commentary for several years after completion in case it is needed for future reference.
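
As a rough sketch of steps 2 through 4 for qualitative data, the following Python groups written comments into labeled categories by simple keyword matching; in practice this coding is usually done by hand, and the comments and keywords here are invented:

    from collections import defaultdict

    # Hypothetical written comments from questionnaires
    comments = [
        "The evening sessions were hard to attend",
        "My counselor was a real strength of the program",
        "Please offer more evening childcare",
        "Staff were supportive and well prepared",
    ]

    # Hypothetical theme labels (step 3) and keywords that signal them (step 2)
    categories = {
        "concerns": ["hard", "difficult", "problem"],
        "strengths": ["strength", "supportive", "well prepared"],
        "suggestions": ["please", "offer more"],
    }

    themes = defaultdict(list)
    for comment in comments:
        for label, keywords in categories.items():
            if any(kw in comment.lower() for kw in keywords):
                themes[label].append(comment)

    # Step 4: look for patterns within and across the themes
    for label, grouped in themes.items():
        print(f"{label}: {len(grouped)} comment(s)")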

Interpreting Information:

1. Attempt to put the information in perspective, e.g., compare results to what you expected or promised; to feedback from management or program staff; to any common standards for your services; to original program goals (especially if you’re conducting a program evaluation); to indications of accomplishing outcomes (especially if you’re conducting an outcomes evaluation); or to descriptions of the program’s experiences, strengths, weaknesses, etc. (especially if you’re conducting a process evaluation).
2. Consider recommendations to help program staff improve the program, conclusions about program operations or meeting goals, etc.
3. Record conclusions and recommendations in a report document, and associate interpretations to justify your conclusions or recommendations.

Reporting Evaluation Results

1. The level and scope of content depend on the audience for whom the report is intended, e.g., bankers, funders, employees, customers, clients, the public, etc.
2. Be sure employees have a chance to carefully review and discuss the report. Translate recommendations to action plans, including who is going to do what about the program and by when.
3. Bankers or funders will likely require a report that includes an executive summary (this is a summary of conclusions and recommendations, not a listing of what sections of information are in the report — that’s a table of contents); description of the organization and the program under evaluation; explanation of the evaluation goals, methods, and analysis procedures; listing of conclusions and recommendations; and any relevant attachments, e.g., inclusion of evaluation questionnaires, interview guides, etc. The banker or funder may want the report to be delivered as a presentation, accompanied by an overview of the report. Or, the banker or funder may want to review the report alone.
4. Be sure to record the evaluation plans and activities in an evaluation plan which can be referenced when a similar program evaluation is needed in the future.

Contents of an Evaluation Report — Example

An example of the contents of an evaluation report is included below in this document; see Contents of an Evaluation Plan. But don’t forget to look at the next section, “Who Should Carry Out the Evaluation?”

Who Should Carry Out the Evaluation?

Ideally, management decides what the evaluation goals should be. Then an evaluation expert helps the organization to determine what the evaluation methods should be, and how the resulting data will be analyzed and reported back to the organization. Most organizations do not have the resources to carry out the ideal evaluation.

Still, they can do the 20% of effort needed to generate 80% of what they need to know to make a decision about a program. If they can afford any outside help at all, it should be for identifying the appropriate evaluation methods and how the data can be collected. The organization might find a less expensive resource to apply the methods, e.g., conduct interviews, send out and analyze results of questionnaires, etc.

If no outside help can be obtained, the organization can still learn a great deal by applying the methods and analyzing results themselves. However, there is a strong chance that data about the strengths and weaknesses of a program will not be interpreted fairly if the data are analyzed by the people responsible for ensuring the program is a good one. Program managers will be “policing” themselves. This caution is not to fault program managers, but to recognize the strong biases inherent in trying to objectively look at and publicly (at least within the organization) report about their programs. Therefore, if at all possible, have someone other than the program managers look at and determine evaluation results.

Contents of an Evaluation Plan

Develop an evaluation plan to ensure your program evaluations are carried out efficiently in the future. Note that bankers or funders may want or benefit from a copy of this plan.

Ensure your evaluation plan is documented so you can regularly and efficiently carry out your evaluation activities. Record enough information in the plan so that someone outside of the organization can understand what you’re evaluating and how. Consider the following format for your report:
1. Title Page (name of the organization that is being, or has a product/service/program that is being, evaluated; date)
2. Table of Contents
3. Executive Summary (one-page, concise overview of findings and recommendations)
4. Purpose of the Report (what type of evaluation(s) was conducted, what decisions are being aided by the findings of the evaluation, who is making the decision, etc.)
5. Background About Organization and Product/Service/Program that is being evaluated
a) Organization Description/History
b) Product/Service/Program Description (that is being evaluated)
i) Problem Statement (in the case of nonprofits, description of the community need that is being met by the product/service/program)
ii) Overall Goal(s) of Product/Service/Program
iii) Outcomes (or client/customer impacts) and Performance Measures (that can be measured as indicators toward the outcomes)
iv) Activities/Technologies of the Product/Service/Program (general description of how the product/service/program is developed and delivered)
v) Staffing (description of the number of personnel and roles in the organization that are relevant to developing and delivering the product/service/program)
6. Overall Evaluation Goals (e.g., what questions are being answered by the evaluation)
7. Methodology
a) Types of data/information that were collected
b) How data/information were collected (what instruments were used, etc.)
c) How data/information were analyzed
d) Limitations of the evaluation (e.g., cautions about findings/conclusions and how to use the findings/conclusions, etc.)
8. Interpretations and Conclusions (from analysis of the data/information)
9. Recommendations (regarding the decisions that must be made about the product/service/program)
Appendices: content of the appendices depends on the goals of the evaluation report, e.g.:
a) Instruments used to collect data/information
b) Data, e.g., in tabular format, etc.
c) Testimonials, comments made by users of the product/service/program
d) Case studies of users of the product/service/program
e) Any related literature

Pitfalls to Avoid

1. Don’t balk at evaluation because it seems far too “scientific.” It’s not. Usually the first 20% of effort will generate the first 80% of the plan, and this is far better than nothing.
2. There is no “perfect” evaluation design. Don’t worry about the plan being perfect. It’s far more important to do something, than to wait until every last detail has been tested.
3. Work hard to include some interviews in your evaluation methods. Questionnaires don’t capture “the story,” and the story is usually the most powerful depiction of the benefits of your services.
4. Don’t interview just the successes. You’ll learn a great deal about the program by understanding its failures, dropouts, etc.
5. Don’t throw away evaluation results once a report has been generated. Results don’t take up much room, and they can provide precious information later when trying to understand changes in the program.

Online Guides

Program Manager’s Guide to Evaluation
Basic Guide to Program Evaluation
What is a Program Logic Model? (logic model captures inputs, activities, outputs, outcomes)
Outcome Measurement: Showing Results (wonderful overview of outcomes, myths, etc.)
W.K. Kellogg Foundation Evaluation Handbook

Outcomes-Evaluation

Basic Guide to Outcomes-Based Evaluation for Nonprofit Organizations with Very Limited Resources
What is a Program Logic Model? (logic model captures inputs, activities, outputs, outcomes)
Basic Guide to Outcomes-Based Evaluation
Paul Duignan on DoView and Visually Representing Outcomes

General Resources

(Thanks to Gene Shackman for suggesting many of the following resources.)
American Evaluation Association
Online library of resources for evaluation of mental health programs
Free Resources for Program Evaluation and Social Research Methods
Evaluation portal and links collection
Guides for many types of evaluation
Susan Kistler on Online Journals for Evaluators
Program Manager’s Guide to Evaluation
What is program evaluation: A set of beginners guides
Hiring and Working With an Evaluator
Our Ineffectiveness at Measuring Effectiveness
Outcome Indicators Project
Sudharshan Seshadri on Resources for Program Evaluation
The Critical Need for Program Accountability & Evaluation
How to Address Fears about Program Evaluation
How to Maximize Funding by Tapping into Hidden Potential: Program Evaluation
How Traditional Planning and Evaluation Interact
Four Differences between Research and Program Evaluation
Which Is More Important—the Means or the Ends? Process, Impact and Outcome Evaluations
A Guide to Navigating the Evaluation Maze: “A Framework for Evaluation” from the CDC, Part 1
A Guide to Navigating the Evaluation Maze: “A Framework for Evaluation” from the CDC, Part 2
Tips on How to Conduct Interviews for Program Evaluation (part 1)
Tips on How to Conduct Interviews for Program Evaluation (Part 2)
Measuring and/or Estimating Social Value Creation: Insights Into Eight Integrated Cost Approaches
How to Evaluate on a Budget: DIY or Outsource?


• McNamara, C. (2006b). Reasons for priority on implementing an outcomes-based evaluation. In Basic guide to outcomes-based evaluation for nonprofit organizations with very limited resources. Retrieved from http://managementhelp.org/evaluation/outcomes-evaluation-guide.htm#anchor30249

• Plummer, S.-B., Makris, S., & Brocksen, S. (Eds.). (2014b). Social work case studies: Concentration year. Baltimore, MD: Laureate International Universities Publishing. [Vital Source e-reader].

Read the following section:

o “Social Work Research: Planning a Program Evaluation”


Basic Guide to Outcomes-Based Evaluation for Nonprofit Organizations with Very Limited Resources

© Copyright Carter McNamara, MBA, PhD, Authenticity Consulting, LLC.


Description

This document provides guidance for basic planning and implementation of an outcomes-based evaluation process (also called outcomes evaluation) in nonprofit organizations, particularly small nonprofits with very limited resources.

NOTE: This free, basic, online guide makes occasional references to certain pages in the United Way of America’s book, Measuring Program Outcomes: A Practical Approach (1996). That United Way book is an excellent resource! However, it can be somewhat overwhelming for nonprofits that have very limited resources. This free online guide (that you are reading now) can help nonprofits carry out their own basic outcomes evaluation planning. This online guide can also help small nonprofits make the most of that United Way book — however, you do not have to have the United Way book in order to carry out your own basic outcomes evaluation plan using this online guide. (Still, small nonprofits are encouraged to get the United Way book, for example, to later round out basic evaluation plans developed from this online guide and/or to learn more about outcomes evaluation than this basic guide provides. To get the United Way book, call 703-212-6300 and ask about item #0989.)

NOTE: Outcomes-based evaluation is but one type of evaluation — there are many types of evaluations. The reader would gain a deeper understanding of outcomes-based evaluation by reading about the broader topic of evaluation. To do so, read the Basic Guide to Program Evaluation. This online basic guide about outcomes-based evaluation was designed by modifying the Basic Guide to Program Evaluation.

Table of Contents

Reasons for Priority on Outcomes-Based Evaluation
Basic Principles for Small Nonprofits to Remember Before Starting Outcomes Planning
What is Outcomes-Based Evaluation?
Myths to Get Out of the Way Before You Start Your Outcomes Planning
Planning Any Type of Evaluation Includes Answers to These Very Basic Questions
Planning Your Outcomes Evaluation — Step 1: Getting Ready
Planning Your Outcomes Evaluation — Step 2: Choosing Outcomes
Planning Your Outcomes Evaluation — Step 3: Selecting Indicators
Planning Your Outcomes Evaluation — Step 4: Planning Data/Info Collection
Planning Your Outcomes Evaluation — Step 5: Piloting/Testing
Planning Your Outcomes Evaluation — Step 6: Analyzing/Reporting Results
Useful Online Resources


Reasons for Priority on Implementing Outcomes-Based Evaluation
There are decreasing funds for nonprofits
Yet there are increasing community needs
Thus, there is more focus on whether nonprofit programs are really making a difference — and outcomes evaluation focuses on whether programs are really making a difference for clients
Previous evaluation measures focused on, for example, how much money was spent, the number of people served, and client satisfaction — these measures don’t really assess impacts on clients
Outcomes evaluation looks at impacts/benefits to clients during and after participation in your programs
Basic Principles for Small Nonprofits to Remember Before Starting

Nonprofit personnel do not have to be experts in outcomes-based evaluation in order to carry out a useful outcomes evaluation plan.

In most major activities in life and work, there is a “20% of effort that generates 80% of the results”. This basic guide will give you the direction to accomplish that 20% needed to develop an outcomes evaluation plan for your organization.
Once you’ve carried out the guidelines in this basic guide, you can probably let experience and funders help you with the rest of your outcomes evaluation planning, particularly as you implement your evaluation plan during its first year.
In life (particularly for us adults), problems often exist because we’re making things far too complex, not because we’re making things far too simple. Often, people who are new to evaluation get “mindcramp”, that is, they think too hard about evaluation. It’s actually a fairly simple notion — just don’t think so hard about it!
Start small, start now and grow as you’re able.
Ready, fire, aim!
What is Outcomes-Based Evaluation?
A Basic Definition

As noted above, outcomes evaluation looks at impacts/benefits/changes to your clients (as a result of your program(s) efforts) during and/or after their participation in your programs. Outcomes evaluation can examine these changes in the short-term, intermediate term and long-term (we’ll talk more about this later on below.)

Basic Components and Key Terms in Outcomes Evaluation

Outcomes evaluation is often described first by looking at its basic components. Outcomes evaluation looks at programs as systems that have inputs, activities/processes, outputs and outcomes — this system’s view is useful in examining any program!

Inputs –
These are materials and resources that the program uses in its activities, or processes, to serve clients, e.g., equipment, staff, volunteers, facilities, money, etc. These are often easy to identify, and many of the inputs are common to many organizations and programs.
Activities –
These are the activities, or processes, that the program undertakes with/to the client in order to meet the clients’ needs, for example, teaching, counseling, sheltering, feeding, clothing, etc. Note that when identifying the activities in a program, the focus is still pretty much on the organization or program itself, and not so much on actual changes in the client.
Outputs –
These are the units of service regarding your program, for example, the number of people taught, counseled, sheltered, fed, clothed, etc. The number of clients served, books published, etc., very often indicates nothing at all about the actual impacts/benefits/changes in the clients who went through the program — it merely indicates how many clients went through your program.
Outcomes –
These are actual impacts/benefits/changes for participants during or after your program
— for example, for a smoking cessation program, an outcome might be “participants quit smoking” (notice that this outcome is quite different than outputs, such as the “number of clients who went through the cessation program”)
— These changes, or outcomes, are usually expressed in terms of:
— — knowledge and skills (these are often considered to be rather short-term outcomes)
— — behaviors (these are often considered to be rather intermediate-term outcomes)
— — values, conditions and status (these are often considered to be rather long-term outcomes)
Outcome targets –
These are the number and percent of participants that you want to achieve the outcome, for example, an outcome goal of 5,000 teens (10% of teens in Indianapolis) who quit smoking over the next year
Outcome indicators –
These are observable and measurable “milestones” toward an outcome target. These are what you’d see, hear, read, etc., that would indicate to you whether you’re making any progress toward your outcome target or not, for example, the number and percent of teen participants who quit smoking right after the program and six months after the program — these indicators give you a strong impression as to whether 5,000 teens will quit or not over the next year from completing your program.

NOTE: Take a few minutes and really notice the differences between:
— Outputs (which indicate hardly anything about the changes in clients — they’re usually just numbers)
— Outcomes (which indicate true changes in your clients)
— Outcome targets (which specify how much of your outcome you hope to achieve)
— Outcome indicators (which you can see, hear, read, etc. and suggest that you’re making progress toward your outcome target or not)
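
To tie the smoking-cessation example together, here is a small worked sketch in Python; only the 5,000-teen outcome target comes from the text above, and the participant counts are invented:

    # Outcome target from the example above: 5,000 teens (10% of teens in
    # Indianapolis) quit smoking over the next year.
    target_quitters = 5000

    # Hypothetical indicator data collected so far
    participants = 1200
    quit_at_program_end = 540    # indicator: quit right after the program
    quit_six_months_later = 410  # indicator: still not smoking at 6 months

    print(f"Quit at program end: {100 * quit_at_program_end / participants:.0f}%")
    print(f"Not smoking at 6 months: {100 * quit_six_months_later / participants:.0f}%")

    # The sustained quit rate suggests how many participants per year the
    # program would need in order to approach the outcome target:
    needed = target_quitters / (quit_six_months_later / participants)
    print(f"Participants needed per year at this rate: {needed:.0f}")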

Typically, the above concepts are organized into a logic model, which depicts the general order in which the concepts are integrated with each other. For more clarity, see Guidelines and Framework for Developing a Basic Logic Model.

Common Myths to Get Out of the Way Before You Start Planning
Myth: Evaluation is a complex science. I don’t have time to learn it!

No! It’s a practical activity. If you can run an organization, you can surely implement an evaluation process!

Myth: It’s an event to get over with and then move on!

No! Outcomes evaluation is an ongoing process. It takes months to develop, test and polish — however, many of the activities required to carry out outcomes evaluation are activities that you’re either already doing or you should be doing. Read on …

Myth: Evaluation is a whole new set of activities – we don’t have the resources

No! Most of these activities in the outcomes evaluation process are normal management activities that need to be carried out anyway in order to evolve your organization to the next level.

Myth: There’s a “right” way to do outcomes evaluation. What if I don’t get it right?

No! Each outcomes evaluation process is somewhat different, depending on the needs and nature of the nonprofit organization and its programs. Consequently, each nonprofit is the “expert” at their outcomes plan. Therefore, start simple, but start and learn as you go along in your outcomes planning and implementation.

Myth: Funders will accept or reject my outcomes plan

No! Enlightened funders will (at least, should?) work with you, for example, to polish your outcomes, indicators and outcome targets. Especially if yours is a new nonprofit and/or a new program, you will very likely need some help — and time — to develop and polish your outcomes plan.

Myth: I always know what my clients need – I don’t need outcomes evaluation to tell me if I’m really meeting the needs of my clients or not

You don’t always know what you don’t know about the needs of your clients – outcomes evaluation helps ensure that you always know the needs of your clients. Outcomes evaluation sets up structures in your organization so that you and your organization are very likely always focused on the current needs of your clients. Also, you won’t always be around – outcomes help ensure that your organization is always focused on the most appropriate, current needs of clients even after you’ve left your organization.

Planning Any Type of Evaluation Includes Answers to These Very Basic Questions

Evaluation often seems like a “heavy”, complex activity to those who are not familiar with the real nature of evaluation. Actually, planning any kind of evaluation often requires answers to some very basic questions, including:

What decisions do you want to be able to make as a result of your evaluation?
Who are primary audiences for the results?
What kinds of info are needed?
When is info needed?
Where and how will you get that info?
What resources are available to get the info, analyze it and report it?
How will you report that info in a useful fashion?
Planning Your Outcomes Evaluation — Step 1: Getting Ready
Read Step 1 (Chapter 1) of UW book Measuring Program Outcomes: A Practical Approach (1996) if you have it (otherwise, you’ll still benefit from this section on this web page)
You can very likely draft your own version of most of your outcomes evaluation plan and then have others review your drafts of those sections of the plan. (This “short-cut” approach to outcomes evaluation planning might be questioned by some experts on outcomes — but then small nonprofits rarely have the resources to fully carry out the comprehensive and detailed steps often recommended by outcomes evaluation resources.)
Remember that you don’t have to be an expert to start the planning process — each plan is different — ultimately, you’re the expert at your process and your plan
Do consider getting a grant to support development of your plan, e.g., maybe $3,000 to $5,000, particularly to have evaluation expertise to review your plans and your methods of data collection — if you can’t get this grant, you can still proceed with your plan
DO tap the many resources available to help you (useful online resources are listed below)
Now pick one program to evaluate that has a reasonably clear group of clients and clear methods to provide services to them — in other words, make sure that you have a program to evaluate!
NOTE: Soon, you should train at least one board member and staff member about outcomes — consider using this very basic online guide
Planning Your Outcomes Evaluation — Step 2: Choosing Outcomes
Preparation
Note that a logic model for your program is a depiction of the inputs, activities, outputs and outcomes (short-term, intermediate and long-term) regarding your program. Take a look at the information in Introduction to Program Logic Model
Reread the myths listed above – don’t worry about completing the “perfect” logic model – ultimately, you’re the expert here
Now Identify Your Outcomes (including short-term, intermediate and long-term)
Now fill in a logic model for the program to which you want to apply outcomes-based evaluation — see the example logic model and framework — BUT first read the next several bullets below in this section:
To identify outcomes, consider: “enhanced …”, “increased …”, “more …”, “new …”, “altered …”, etc.
Note that it can be quite a challenge to identify outcomes for some types of programs, including those that are preventative (health programs, etc.), developmental (educational, etc.), or “one-time” or anonymous (food shelves, etc.) in nature. In these cases, it’s fair to give your best shot to outcomes planning and then learn more as you actually apply your outcomes evaluation plan. Also seek help and ideas about outcomes from other nonprofits that provide services similar to yours. Programs that are remedial in nature (that is, that are geared to address current and observable problems, such as teen delinquency, etc.) are often easier to associate with outcomes.
Start with short-term outcomes
Regarding identifying short-term outcomes, think 0-6 months:
— Imagine your client in the program or a day after leaving the program
— What knowledge and skills would you prefer to see? What do you actually see?
Regarding identifying intermediate outcomes, think 3-9 months:
— Imagine your client 3-9 months after leaving the program
— What behaviors would you prefer to see? What do you actually see?
Regarding identifying long-term outcomes, think 6-12 months:
— Imagine your client 6-12 months after leaving the program
— What values, attitudes, or status would you prefer to see as the fullest extent of benefit for the client? What do you actually see?
Now “chain” the short-term, intermediate, and long-term outcomes by applying the following sentence to them (a small sketch of such a chain, written down as plain data, follows below):
— “If this short-term outcome occurs, then the intermediate outcome occurs; and if this intermediate outcome occurs, then this long-term outcome occurs.” Again, don’t worry about getting it perfect; trust your intuition
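The chaining above can be made concrete by writing it down explicitly. Below is a minimal sketch in Python, offered purely as an illustration (the guide itself prescribes no code), of recording a logic model and its chained outcomes as plain data that staff can review and revise. The smoking-cessation program, its inputs and activities, and the outcome wording are all hypothetical.

    # Minimal sketch: a logic model recorded as plain data.
    # All program details below are hypothetical examples.
    logic_model = {
        "program": "Smoking-Cessation Support Group",  # hypothetical program
        "inputs": ["2 trained facilitators", "meeting space", "workbooks"],
        "activities": ["weekly 90-minute group sessions", "one-on-one coaching"],
        "outputs": ["12 sessions delivered", "40 participants enrolled"],
        "outcomes": {
            # Chained: if the short-term outcome occurs, the intermediate
            # should follow; if the intermediate occurs, the long-term should.
            "short_term": "Participants can name 3 coping strategies (0-6 months)",
            "intermediate": "Participants reduce daily cigarette use (3-9 months)",
            "long_term": "Participants remain smoke-free (6-12 months)",
        },
    }

    # Print the outcome chain as the "if ... then ..." sentence from this step.
    o = logic_model["outcomes"]
    print(f"If '{o['short_term']}' occurs, then '{o['intermediate']}' occurs; "
          f"if that occurs, then '{o['long_term']}' occurs.")

A plain document works just as well; the point is simply that the chain is stated explicitly enough for board, staff, and clients to check it.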
Planning Your Outcomes Evaluation — Step 3: Selecting Indicators
Preparation
Read Step 3 in the United Way book Measuring Program Outcomes: A Practical Approach (1996) if you have it (otherwise, you’ll still benefit from this section on this web page); especially look at the examples on pages 66-67.
Identify at least one indicator per outcome (note that sometimes indicators are called performance standards)
When selecting indicators, ask:
— What would I see, hear, or read about clients that means progress toward the outcome?
— Include numbers and percentages regarding clients’ behavior, e.g., “2,000 (50%) of our participants will quit smoking by the end of the program” and “3,000 (75%) of our participants will quit smoking one month after the program” (a small calculation sketch follows at the end of this step)
— If this is the first outcomes plan you’ve ever done, or the program is just getting started, then don’t spend a great deal of time trying to find the perfect numbers and percentages for your indicators
Fill in your indicators in the Framework for a Basic Outcomes-Based Evaluation Plan. Also, carry over the outcomes you identified from the example logic model to the basic evaluation plan.
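To sanity-check indicator numbers like those above, a small calculation helps. Here is a minimal sketch, reusing the hypothetical smoking-cessation figures from the example (2,000 of 4,000 participants as the 50% target); the attainment helper and the “actual” count are illustrative assumptions, not part of the guide.

    # Minimal sketch: comparing an indicator's target with a measured result.
    # All counts are hypothetical.
    def attainment(count: int, total: int) -> float:
        """Return count as a percentage of total participants."""
        return 100.0 * count / total

    total_participants = 4000   # hypothetical program size
    target_quit_end = 2000      # target: quit smoking by end of program (50%)
    actual_quit_end = 1850      # hypothetical measured result

    print(f"Target: {attainment(target_quit_end, total_participants):.0f}%")
    print(f"Actual: {attainment(actual_quit_end, total_participants):.1f}%")
    # Prints: Target: 50% and Actual: 46.2%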
Planning Your Outcomes Evaluation — Step 4: Planning Data/Information
Preparation
Read Step 4 in the United Way book Measuring Program Outcomes: A Practical Approach (1996) if you have it (otherwise, you’ll still benefit from this section on this web page) — especially look at:
— Page 86 (pros and cons of data sources)
— Page 88 (major data collection methods)
— Pages 90-93
A useful resource at this point might be Overview of Useful Methods to Collect Information
Now might be the best time to get some evaluation expertise, for example, a consultant or a local nonprofit service provider, to help you review your drafted outcomes and indicators. An expert is also worth their “weight in gold” when reviewing methods to collect data.
Get Your Work Reviewed Now By Others
If you’ve drafted outcomes and indicators yourself, get them reviewed by:
— Board members
— Staff
— Clients currently in the program? Clients who have finished the program?
— Evaluation consultant?
Identify Data Sources and Methods to Collect Data
For each indicator, identify what information you will need to collect/measure to assess that indicator. Consider:
— Current program records and data collection
— What you see during the program
— Ask staff for ideas
Is it practical to get that data?
— What will it cost?
— Who will do it?
— How can you make the time?
When to collect data?
— Depends on indicator
— Consider: before/after program, 6 months after, 12 months after
Data collection methods:
— Questionnaires?
— Interviews?
— Surveys?
— Document review?
— Other(s)?
Get evaluation consultant/expertise?
Pretest your data collection methods (e.g., have a few staff quickly answer the questionnaires to ensure the questions are understandable)
Write a brief procedure (see the sketch after this list) to specify:
— What data is collected?
— Who collects it?
— How do they collect it?
— When do they collect it?
— What do they do with it?
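As one illustration, the brief procedure might be recorded as structured data so that it survives staff turnover; a plain written paragraph serves the same purpose. Every field value below is a hypothetical example, not a prescription from the guide.

    # Minimal sketch: a data-collection procedure recorded as structured data.
    # All field values are hypothetical.
    procedure = {
        "what": "Post-program questionnaire on quit status and coping skills",
        "who":  "Program coordinator",
        "how":  "Paper questionnaire handed out at the final session",
        "when": "Last session, then by mail 6 and 12 months after the program",
        "then": "Coordinator enters responses into a spreadsheet, files originals",
    }

    for step, detail in procedure.items():
        print(f"{step.upper():5s} -> {detail}")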
Planning Your Outcomes Evaluation — Step 5: Piloting/Testing
If yours is a small nonprofit, it’s very likely that you don’t have nearly the resources to invest in applying your complete outcomes evaluation process just to test it out.
In that case, the first year of applying your outcomes process serves as the pilot of your process.
During the first year, notice problems, improvements, etc.
Document these in your evaluation plan.
Be sure to write down any suggestions for improving the plan; if you were to leave the organization, it should not have to completely recreate the outcomes plan.
Planning Your Outcomes Evaluation — Step 6: Analyzing/Reporting
Preparation
Strongly consider getting evaluation expertise now to review not only your methods of data collection mentioned above, but also how you can analyze the data that you collect and how to report the results of that analysis.
Before you analyze your data, always make and retain copies of your data.
Analyzing Your Data
For numerical data (ratings, rankings, etc.):
— Tabulate the information, i.e., add up the ratings, rankings, yes’s, no’s for each question.
— For ratings and rankings, consider computing a mean, or average, for each question.
— Consider conveying the range of answers, e.g., 20 people ranked “1”, 30 ranked “2”, and 20 people ranked “3”.
To analyze comments, etc. (that is, data that is not numerical in nature):
— Read through all the data
— Organize comments into similar categories, e.g., concerns, suggestions, strengths, etc.
— Label the categories or themes, e.g., concerns, suggestions, etc.
— Attempt to identify patterns, associations, and causal relationships in the themes (a small tabulation sketch follows below)
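To make the tabulation above concrete, here is a minimal sketch assuming 70 hypothetical responses on a 1-to-3 scale (matching the 20/30/20 distribution in the example) plus a few hand-labeled comment themes; none of the data comes from a real program.

    # Minimal sketch: tabulating ratings and counting comment themes.
    # The responses below are hypothetical.
    from collections import Counter
    from statistics import mean

    ratings = [1] * 20 + [2] * 30 + [3] * 20   # 70 hypothetical answers

    counts = Counter(ratings)                  # distribution of answers
    print("Distribution:", dict(sorted(counts.items())))   # {1: 20, 2: 30, 3: 20}
    print(f"Mean rating: {mean(ratings):.2f}")             # 2.00

    # For non-numerical comments, a first pass can simply count the
    # categories (themes) assigned by hand while reading the data.
    comment_themes = ["concern", "suggestion", "concern", "strength", "suggestion"]
    print("Theme counts:", Counter(comment_themes))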
Reporting Your Evaluation Results
The level and scope of information in the report depend on for whom the report is intended, e.g., funders, board, staff, clients, etc.
Be sure employees have a chance to carefully review and discuss the report before it is sent out
Funders will likely require a report that includes an executive summary; the summary should highlight key points from the evaluation, not merely restate the table of contents
Example of Evaluation Report Contents
Title Page (name of the organization being evaluated, or whose product/service/program is being evaluated; date)
Table of Contents
Executive Summary (one-page, concise overview of findings and recommendations)
Purpose of the Report (what type of evaluation(s) was conducted, what decisions are being aided by the findings of the evaluation, who is making the decision, etc.)
Background About Organization and Product/Service/Program that is being evaluated
— a) Organization Description/History
— b) Product/Service/Program Description (that is being evaluated)
— — i) Problem Statement (in the case of nonprofits, description of the community need that is being met by the product/service/program)
— — ii) Overall Goal(s) of Product/Service/Program
— — iii) Outcomes (or client/customer impacts) and Performance Measures (that can be measured as indicators toward the outcomes)
— — iv) Activities/Technologies of the Product/Service/Program (general description of how the product/service/program is developed and delivered)
— — v) Staffing (description of the number of personnel and roles in the organization that are relevant to developing and delivering the product/service/program)
Overall Evaluation Goals (e.g., what questions are being answered by the evaluation)
Methodology
— a) Types of data/information that were collected
— b) How data/information were collected (what instruments were used, etc.)
— c) How data/information were analyzed
— d) Limitations of the evaluation (e.g., cautions about findings/conclusions and how to use them)
Interpretations and Conclusions (from analysis of the data/information)
Recommendations (regarding the decisions that must be made about the product/service/program)
Appendices: content of the appendices depends on the goals of the evaluation report, e.g.:
— a) Instruments used to collect data/information
— b) Data, e.g., in tabular format
— c) Testimonials, comments made by users of the product/service/program
— d) Case studies of users of the product/service/program
— e) Logic model
— f) Evaluation plan with specified outcomes, sources to collect data, data collection methods, who will collect data, etc.
Useful Online Resources

Note that specific online resources are listed above in the sections in which those resources are most appropriate.

General Resources

Program Evaluation
What is a Program Logic Model? (logic model captures inputs, activities, outputs, outcomes)
Program Manager’s Guide to Evaluation
Outcome Indicators Project
Maran Subramain on Communications Challenges in Evaluation
Measuring Outcomes
Developing a Plan for Outcomes Measurement


Assignment: Designing a Plan for Outcome Evaluation

Social workers can apply knowledge and skills learned from conducting one type of evaluation to others. Moreover, evaluations themselves can inform and complement each other throughout the life of a program. This week, you apply all that you have learned about program evaluation throughout this course to design a plan for an outcome evaluation.

To prepare for this Assignment, review the “Basic Guide to Program Evaluation (Including Outcomes Evaluation)” from this week’s resources and Plummer, S.-B., Makris, S., & Brocksen, S. (Eds.). (2014b). Social work case studies: Concentration year. Retrieved from http://www.vitalsource.com. Pay particular attention to the sections titled “Outcomes-Based Evaluation” and “Contents of an Evaluation Plan.” Then, select a program that you would like to evaluate. You should build on work that you have done in previous assignments, but be sure to self-cite any written work that you have already submitted. Complete as many areas of the “Contents of an Evaluation Plan” as possible, leaving out items that assume you have already collected and analyzed the data.

Designing a Plan for Outcome Evaluation

Submit a 4- to 5-page paper that outlines a plan for a program evaluation focused on outcomes. Be specific and elaborate. Include the following information:

• The purpose of the evaluation, including specific questions to be answered

• The outcomes to be evaluated

• The indicators or instruments to be used to measure those outcomes, including the strengths and limitations of those measures to be used to evaluate the outcomes

• A rationale for selecting among the six group research designs

• The methods for collecting, organizing and analyzing data

 
