
Report and support use


Although reporting may be one of the last tasks in an evaluation, plan for it from the first step of the process: explicitly discuss the content, sharing, and use of reports during the initial planning of the evaluation, and return to that discussion regularly thereafter. Most importantly, identify who your primary intended users are. Use of the evaluation often depends on how well the report meets the needs and learning gaps of the primary intended users.

 

Besides the primary intended users (identified as part of framing the evaluation), your findings can be communicated to others for different reasons. For example, lessons learned from the evaluation can be helpful to other evaluators or project staff working in the same field, or it may be worthwhile remolding some of the findings into articles or stories to attract wider attention to an organisation's work, or to spread news about a particular situation.

You will share the findings of the evaluation with the primary intended users and also other evaluation stakeholders.

Don’t limit yourself to thinking of sharing evaluation findings only through a report. Although a final evaluation report is important, it is not the only way to distribute findings. Depending on your audience and budget, it may be worth considering different ways of delivering evaluation findings:

  • Presenting findings at staff forums and subject matter conferences
  • Developing a short video version of findings
  • Sharing findings on the organisation's intranet
  • Sharing stories, pictures and drawings from the evaluation (depending on what options you have used to gather data)
  • Creating large posters or infographics of findings for display
  • Producing a series of short memos

Tasks

Tasks related to this component include:

1. Identify Reporting Requirements

Identify the primary intended stakeholders and determine their reporting needs, including their decision-making timelines. Develop a communication plan.

2. Develop Reporting Media

Produce appropriate written, visual, and/or verbal products that communicate the findings.

3. Ensure Accessibility

Plan the reporting products to make sure they are accessible, including addressing issues such as limited time, low literacy, and disabilities.

4. Develop Recommendations

Draw on the findings and an understanding of the implementation environment to make recommendations such as how the programme can be improved, how the risk of programme failure can be reduced or whether the programme should continue. (This is often useful but not always needed).

5. Support Use

Plan processes to support primary intended users to make decisions and take action on the basis of the findings.


Identify reporting requirements


Before you begin to gather and analyze your data, consider how you can ensure your collection efforts will meet the reporting needs of your primary intended users.


From the very beginning, reporting is an integral part of evaluation which allows you to:

  • communicate what you do;
  • monitor and track progress;
  • demonstrate impact;
  • document lessons learned;
  • and be accountable and transparent to donors, partners and benefiting communities.

"Evaluation reports may be the only lasting record of a programme or project, including the results achieved and the lessons that were learned from its implementation" (Oxfam Evaluation Guidelines p.11).

Different groups of primary intended users will have varying needs for the evaluation report. When your evaluation plan was developed at the beginning of the process, you should have determined the different groups of primary intended users and begun to ask questions about how the report could be most useful. This information should then be reviewed periodically. Once the reporting deadline nears, ensure there is clarity on each stakeholder group’s reporting requirements (what needs to be reported and when).

Some questions that may arise include:

  • What do you need to include in different kinds of reports?
  • At what point do you need to get feedback on your findings - and from whom?
  • Will your findings be presented in draft form?
  • Are you willing to share draft findings?
  • Will you have any influence over the way the findings are re-presented?

Reporting timelines often present a major constraint on the evaluation plan. In particular, the need to report findings in time to inform funding decisions for the next phase of a program often means that reports are needed before impacts can be observed. In these situations, it will be necessary to report on interim outcomes, and to present any research evidence that shows how these are important predictors of, or prerequisites for, the final impacts. (See the tasks Develop Program Theory/Logic Model and Collect and/or Retrieve Data for more information on this.)

Work with the intended users to determine key points in their own reporting and project cycle. For example, the evaluation may be a necessary part of their legislative requirement for an annual review. If that is the case, you need to know their timelines and internal pressures. Alternatively, they may be presenting at a major conference and want an update from the evaluation team.

With the primary intended users, their learning needs, and their timelines in mind, develop a communication plan to guide the evaluation reporting process. A communication plan can be as simple as a table that organizes this information. Use the communication plan to align data collection activities with reporting needs and to prioritize the time spent on reporting. (Consider the full range of reporting media before finalizing the plan. Not everyone will want a full technical report. For ideas on how to make your report more creative, go to the Develop Reporting Media task page.)
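As an illustration only – the users, products, and timings below are hypothetical rather than drawn from any particular evaluation – such a table might look like this:

Primary intended user | Learning need / decision | Reporting product | When needed
Program manager | Adjust delivery during implementation | Two-page interim memo | End of each quarter
Funding body | Decide on funding for the next phase | Full report plus oral briefing | Two months before the budget round
Community partners | Understand findings and implications | Poster and community presentation | Within a month of the final report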

Options

  • Communication plan: developing a plan that outlines the strategies which will be used to communicate the results of your evaluation.
  • Reporting needs analysis: working with your client to determine their reporting needs.

Resources

Guides

Sources

Oxfam GB (nd) Oxfam GB Evaluation Guidelines, Oxfam, London. Retrieved from http://policy-practice.oxfam.org.uk/~/media/Files/policy_and_practice/methods_approaches/monitoring_evaluation/ogb_evaluation_guidelines.ashx

 

 


An Executive Summary is Not Enough: Reporting Alternatives for Evaluators


You Will Learn

  • the role of reporting in good evaluation practice
  • 3 principles for effectively communicating your evaluation results
  • at least 3 alternatives to writing a final report.

Who Should Take this Webinar

Professional evaluators and individuals tasked with doing program evaluation in their organization. 

Event details: Online webinar (paid), 10th to 11th December, 2014. Suggested by Kylie Hutchinson.

Evaluation Reporting: A Guide to Help Ensure Use of Evaluation Findings


This guide addresses the issue of ensuring that evaluation findings are used by stakeholders. It takes readers through the process of creating effective evaluation reports, focusing on the key considerations that need to be taken into account, the essential elements of reports, and the importance of dissemination, and it offers tools and resources to help with this task. Although created to assist evaluators of heart disease and stroke prevention activities, this guide will be useful for program managers, evaluators, and other stakeholders who wish to identify appropriate evaluation products, effectively communicate findings, and plan effective dissemination efforts.

Centers for Disease Control and Prevention. Evaluation Reporting: A Guide to Help Ensure Use of Evaluation Findings. Atlanta, GA: US Dept of Health and Human Services; 2013.

http://www.cdc.gov/dhdsp/docs/Evaluation_Reporting_Guide.pdf
2013

Extract

"Importance of Evaluation Reporting to Ensure Use 

There are various aspects of evaluation reporting that can affect how information is used. Stakeholder needs, the evaluation purpose, and target audience should be considered when communicating results. Evaluation reporting should not only identify what, when, how, and to what extent information should be shared but take into account how information might be received and used.

In a 2006 survey of American Evaluation Association members, 68% self-reported that their evaluation results were not used. Findings such as this suggest a greater need for evaluation results to make it off the bookshelf and into the hands of intended audiences. Similarly in the CDC Framework for Program Evaluation, the “utility evaluation standard” charges evaluators to carry out evaluations that lead to actionable findings for intended users. This commitment to conducting evaluations that improve the lives of participants serves as the inspiration for this guide." (CDC 2013, 2).

Contents

  • PURPOSE OF THE EVALUATION GUIDES    1
  • INTRODUCTION    2
    • Importance of Evaluation Reporting to Ensure Use    2
  • KEY CONSIDERATIONS FOR EFFECTIVELY REPORTING EVALUATION FINDINGS    3
    • Engage Stakeholders    3
    • Revisit the Evaluation Purpose    4
    • Define Your Target Audience    6
  • MAKING EVALUATION REPORTS WORK FOR YOU    7
    • Types of Evaluation Reports    7
    • The Anatomy of a Report    10
  • KEEPING IT OFF THE BOOKSHELF—THE IMPORTANCE OF DISSEMINATION    13
    • Step 1: Create a Dissemination Plan    13
    • Step 2: Identify a Person to Oversee the Dissemination Plan    14
    • Step 3: Know the Current Landscape    14
    • Step 4: Consider the Timing and Frequency    14
    • Step 5: Stay Involved    14
  • CONCLUSION    15
  • RESOURCES    16
  • APPENDIX    17
    • Checklist for Evaluation Reporting: A Guide to Help Ensure Use of Evaluation Findings    17
  • REFERENCES    18
Resource type: Guide
Contributed by: Alice Macfarlan

AEA eStudy 053: Monitoring and Evaluation (M&E) Planning for Programs/Projects


Details

This eStudy is based upon the Results Based Management (RBM) approach to programs/projects. It will therefore review the initial needs assessment and program/project design that inform the M&E planning, as well as the other stages of the project/program cycle and their corresponding M&E activities (events).

Particular emphasis will be given to planning for data collection and management using an example M&E planning table for indicators and assumptions. The eStudy will walk through how to develop an M&E planning table that builds upon a project/program’s logframe to detail key M&E requirements for each indicator and assumption. Like the logframe, M&E planning tables are becoming common practice in both international and domestic programming – and with good reason. They not only make data collection and reporting more efficient and reliable, but they also help to better plan and manage projects/programs through careful consideration of what is being implemented and measured.
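For illustration only, a single row of such a planning table might look like the sketch below. The column headings follow a common pattern but vary between organisations, and the indicator shown is hypothetical:

Indicator | Indicator definition | Data collection method / source | Frequency | Person responsible | Information use
% of trained volunteers still active after six months | ‘Active’ means taking part in at least one project activity in the past month | Volunteer database; follow-up phone survey | Quarterly | M&E officer | Quarterly progress report to management and the donor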

Course Structure

Class times:

12:30-2:00 EST

6 contact hours total

Day 1: Tuesday, February 17th: Identify the purpose and scope of the M&E system; this builds upon the initial assessment and project design (logical framework).

Day 2: Thursday, February 19th: Plan for data collection.

Day 3: Tuesday, February 24th: Plan for data management and analysis.

Day 4: Thursday, February 26th: Plan for data reporting and use, and human and financial resources.

Important Dates

Register before 10am on February 17th (EST)

eStudy Registration Fees

American Evaluation Association Member: $150

Nonmember: $200

Student: $80

Student Nonmember: $110

Event details: Online webinar (paid), 16th February, 2015 to 25th February, 2015.

eStudy 054: Reporting Alternatives for Evaluators

Canva


Canva is a very simple, free-to-use online infographic creation platform. It has a drag-and-drop interface and a range of templates that you can adapt. You can upload your own images and choose from a large number of pre-configured layouts.

Canva (2015). Canva [Website]. Retrieved from https://www.canva.com/create/infographics/

https://www.canva.com/create/infographics/
2015

This resource and the following information were contributed to BetterEvaluation by Alice Macfarlan (BetterEvaluation and ANZSOG).

Authors and their affiliation

Canva.com 

Year of release

2012

Type of launch

Tool

Key features

  • A range of templates for designing things such as: flyers, social media images, infographics (including templates for timelines, informational/advocacy (charity), business reporting), banners, brochures, logos
  • Click-and-drag interface
  • Ability to upload own images or purchase stock images from within platform
  • Paid upgrade available which enables uploading of brand fonts
  • Multiple file types available when downloading graphics

Who is this resource useful for?

  • Evaluators
  • Those involved in evaluation capacity strengthening
  • Communications teams

How have you used or intend on using this resource?

I typically use InDesign if I need to make something particularly design-heavy; however, if I just need to quickly resize or combine two images, or add a bit of text, Canva can be an easier and quicker option.

Why would you recommend it to other people?

I've recommended this tool to people who aren't design experts as an easy option for creating graphics. The template library is a great starting point for design inspiration and the templates are easy to edit. 

As with all infographic tools, while they can offer a helpful way to create your graphics, they are only as powerful as the thought that has gone into the process and design, so I'd probably also recommend that people take a look at Joitske Hulsebosch's BetterEvaluation guest blog on creating infographics to make your results go viral.


Effective Reporting for Public Health Evaluation


Learning Outcomes

At the end of this webinar you will be able to:

  • describe the role of reporting in good public health evaluation practice
  • list 3 principles for effectively communicating your evaluation results
  • state 3 alternatives to writing a final report.

Help us tailor the webinar content even more to your needs by answering a needs assessment survey beforehand.

When

September 22nd and 24th, 1:00 - 2:30 pm ET

This is a three-hour webinar conveniently split into two 1.5-hour sessions.

Cost

$95 Cdn plus GST

Event details: Online webinar (paid), 22nd to 24th September, 2015. Suggested by Kylie Hutchinson.

7 Strategies to improve evaluation use and influence - Part 2


This is the second of a two-part blog on strategies to support the use of evaluation, building on a session the BetterEvaluation team facilitated at the American Evaluation Association conference last year. While the session focused particularly on strategies to use after an evaluation report has been produced, it is important to address use before and during an evaluation.

In last week’s blog I discussed 3 strategies:

  1. Identify intended users and uses early on
  2. Anticipate barriers to use
  3. Identify key processes and times when findings are needed - and consider a series of analysis and reporting cycles

Here are 4 more strategies to consider using and building into organisational processes:

4. Choose appropriate reporting formats and ensure accessibility

There are many exciting new options for reporting the findings from evaluations. Making the right choices can increase the likelihood that intended users will know about the findings, understand what they mean, and appreciate why they are important.

It’s likely that a variety of different knowledge products and reporting processes will be needed throughout the evaluation period and after its formal completion.

For example, evaluation managers who are using an evaluation for symbolic purposes might want a large report with substantial technical appendices to demonstrate its credibility. A brief, plain-language summary of findings might be appropriate to support discussion with non-technical stakeholders, including community members, about the implications of findings for changes to processes and policies or resource allocation.

At our session at the AEA conference, Nick Petten suggested some innovative ways of reporting results, including:

  • developing an interactive page on the evaluation client’s website presenting evaluation results, with passive, ongoing data collection to substantiate results
  • a public exhibition of the results for the community, such as a permanent or semi-permanent mural in a public space

Some other options discussed included:

  • producing a video for reporting back to the community
  • doing joint conference presentations that involve the evaluator, the evaluation commissioner and ideally other stakeholders (such as community members, or program staff).

Read more

The BetterEvaluation site has information about a wide range of reporting formats and strategies to improve accessibility, including accommodating literacy and disability requirements.  [We’re in the process of expanding these – please share suggestions on how we can improve them]

Check out the new book by Kylie Hutchinson on Innovative Evaluation Reporting, which includes even more options, including Graphic Recording, Slidedocs, and Podcasts. You can read some pages for free through Amazon.

5. Actively and visibly follow up what happens after the evaluation

There are a number of strategies that can be embedded in organisational processes to ensure that the process of doing an evaluation (or having a evaluation done) does not end with reporting findings.  Some of these include:

  • Developing a management response to the findings, which can then be included in an evaluation report
  • Tracking responses to recommendations, including whether or not (and how) accepted recommendations have been implemented

At our AEA session, Stephen Axelrod suggested that evaluation could learn from the new field of implementation science, which looks at how findings from research can be applied in practice. This includes identifying the changes needed to existing practices on the basis of new information, and what is needed to produce and maintain these changes. It can include doing developmental formative evaluation on effectiveness trials to identify and overcome barriers to the adoption of evidence-based practices.

These activities are not necessarily undertaken by an evaluator or an evaluation team. Instead, there might be a transition process from an external evaluation that produces findings to internal processes that support change.

Read more

For more information on implementation science, this free access BMJ article (authored by Mark Bauer, Laura Damschroder, Hildi Hagedorn, Jeffrey Smith and Amy Kilbourne) provides a useful overview. There are also links to options that are useful for this in the Support Use task in the Rainbow Framework.

6. Ensure there are adequate resources to support follow-up activities and the development of additional knowledge products

One of the liveliest discussions at the AEA conference session was about how feasible or reasonable it was for evaluators to undertake additional work after acceptance of the last deliverable, such as producing additional reports or engaging in other processes. 

Some of the options to mitigate this issue might be:

  • Building in a notional number of days for the evaluator to be engaged after the final report, with these days allocated to particular processes or to developing additional material as required – or, if not needed, the funding allocation is simply not used
  • Funding a subsequent project that produces additional knowledge products and/or works with people to think through specific implications of findings for their practice
  • Allocating the time of internal people to undertake these activities as part of their role in the evaluation

For example, some years ago I led a major evaluation for the Australian government which produced a number of reports about the sustainability of projects with short-term funding, including an issues paper and a report of research into the sustained impacts of completed projects, which found that projects with a plan for sustainability were more likely to have sustained impacts even if the project ended. My group was then engaged under a separate contract to develop a plain language version of the issues paper to be used by projects, do additional research with local projects and report this, and then work with new projects to develop sustainability plans, drawing on these documents.

Share your experience

Do you have examples of Terms of Reference for an evaluation that include resources for multiple types of reporting, or for activities to support use after a report has been produced? Do you have examples of Terms of Reference for post-evaluation projects to repackage findings into new knowledge products or to conduct learning events?

7. Document these strategies in a formal communication and dissemination plan – and update it as needed

Some organisations now require that evaluation plans include a plan for communicating and disseminating the findings, including providing interim results. 

Share your experience

Do you have examples of communication plans for evaluations that you could share?  What has been your experience of developing and using these plans?

The limitations of advance planning

Despite the emphasis on planning for use from the beginning, I don't want to suggest that it is possible to anticipate all the ways that evaluation findings might be useful in the future.

In a comment on last week’s blog, BetterEvaluation member Bob Williams cautioned against thinking about planning for use as if this were possible:

Is anyone else feeling uneasy about the concept of 'intended use for intended users'? It's become a mantra, but to me is an idea stuck in the 90's when we pretended that interventions operated in simple predictable environments.   …  These days, I try not to start at intended use for intended users, but start at the desired consequences (outcome) of an evaluation.  Then we work out the influences needed to achieve those consequences (generally using backcasting approaches) and then identify who could be the main people who ought to use the evaluation in an influential way.  It's not easy, and is a work in progress, but it's no different from what the designers and managers of interventions have to do.

Bob Williams,
comment on Part 1 of this series

As Bob reminds us, the process of identifying and prioritising the primary intended uses of an evaluation is not a simple, linear process that can be done at the beginning of an evaluation and then used to develop a static evaluation communication plan.   For a start, it can be difficult to identify all the potential uses for an evaluation.  Using iterative processes of reporting some data, and discussing its interpretation and implications, can help to build more capacity to use evaluations and ideally help to shape what kinds of information are being generated and how and when they are being made available.  And evaluations can have more use and impact when they are supported by internal champions who can connect potential users opportunistically.

Share your experience

What do you think of these suggested strategies? Do you have additional strategies to recommend, or good examples? How can our evaluation practices and systems address staff turnover and changing information needs, which can produce big changes in what are seen as ‘intended uses’ and ‘intended users’?

 

In case you missed it, you can read part 1 of this two-part series here:

7 Strategies to improve evaluation use and influence- Part 1

25th January 2018

 

What can be done to support the use of evaluation? How can evaluators, evaluation managers and others involved in or affected by evaluations support the constructive use of findings and evaluation processes?  

Tags: AEA, Reporting, supporting_use

The Psychology of Climate Change Communication: A Guide for Scientists, Journalists, Educators, Political Aides, and the Interested Public


This guide by the Center for Research on Environmental Decisions, while focused on communicating research on climate change, will be useful for anyone interested in the theory behind communication and behaviour change and those who need to communicate evaluation results effectively to specific target audiences or the general public.


Center for Research on Environmental Decisions. (2009). The Psychology of Climate Change Communication: A Guide for Scientists, Journalists, Educators, Political Aides, and the Interested Public. New York. Retrieved from: http://guide.cred.columbia.edu/pdfs/CREDguide_low-res.pdf

http://guide.cred.columbia.edu/pdfs/CREDguide_low-res.pdf
2009

This resource and the following information were contributed by Alice Macfarlan.

Authors and their affiliation

Debra Shome and Sabine Marx, The Center for Research on Environmental Decisions

Year of publication

2009

Type of resource

  • Guide

Key features

This guide covers a number of topics related to scientific communication. Key chapters include:

  • Know your audience - which focuses on understanding what mental models are and how these affect the way that people take new information on board.
  • Get your audience's attention - a discussion of the 'framing' of messages, including what a frame is and types of framing (e.g. Gain vs. Loss, Now vs. Future)
  • Translate scientific data into concrete experience - this chapter deals specifically with the different ways of presenting scientific messages so that they have a better chance of cutting through. It touches briefly on dataviz, but focuses primarily on analytical messages vs. emotional messages.
  • Address scientific and climate uncertainties - a useful discussion of the ways scientific communicators have tried to make the probability of uncertain predictions clearer to their audiences, and of how these audiences have interpreted this. Includes some useful tips about best practices.

There are also chapters that deal more directly with how to persuade groups of people, touching on when best to use emotional appeals, the dynamics of processing information in groups as opposed to individually, and using different sorts of incentives for behaviour change.

Who is this resource useful for?

  • Advocates for evaluation;
  • Commissioners/managers of evaluation;
  • Evaluation users;
  • Other – communicators and advocates

How have you used or intend on using this resource?

I have always been interested in communication generally, and specifically how to best communicate evaluation results. This guide has been useful in giving me an overview of some of the theories behind communicating messages to audiences that may be resistant to this, as well as some practical guidance about language and presenting evidence.

Why would you recommend it to other people?

This guide tackles the trickier aspects of communication - moving away from which reporting formats work well to dive deep into what makes people change their minds about their beliefs. It won't be useful for everyone - not every evaluator or evaluation team is responsible for communicating evaluation findings beyond delivering a report. But for those whose roles do include this, and those who commission evaluations and use evidence, I think this will be a useful, quick read. There is a brief overview of the key points and guidance at the end of the guide, as well as a reading list for those who want to go deeper.

Resource type: Guide
Resource suggested by: Alice Macfarlan



