Evaluating Science Communication Projects
Published on SciDev.Net, this article provides straightforward, practical advice on evaluating science communication projects that are often part of a wider campaign to engage people in science. Drawing on a variety of communication strategies, these projects may include public talks, debates, exhibitions, publications, science theatre, television documentaries, and citizen-centred initiatives like consensus conferences. The goals of such projects may include encouraging young people to consider science careers, supporting dialogue and informed decision-making about science and society issues, promoting public acceptance of new technologies, changing attitudes or behaviour, or simply fostering a science "culture".
For author Marina Joubert, the "why" of evaluating such projects is clear: evaluation allows one to improve a project as it develops, document what does and does not work, and assess how effectively goals have been met. It is the "how" - the actual strategy for measuring the achievement of such science communication goals - that shapes Joubert's guidance, which is divided into the following sections:
- Planning for evaluation - Key points include: Develop a credible evaluation plan from the very start, considering carefully what needs to be evaluated and why, how the evaluation strategy will be implemented, and how the information will be used. Setting clear and measurable objectives is crucial ("don't be too ambitious"). Participation and collaboration can help here; Joubert suggests involving co-workers in designing the evaluation plan, and also drawing on evaluation reports from other science communicators.
- Types of evaluation - Key points include: Both quantitative and qualitative evaluations can be important in determining a project's effectiveness - a determination which needs to happen before, during, and after a project. That is, early feedback, continuous monitoring/periodic checks, and a summative evaluation are all useful tools in guiding and assessing a science communication project.
- Evaluation tools - Key points include: The appropriateness of the specific tools chosen depends on the nature and context of a project, but may incorporate:
- observational tools such as photographs and video footage, which can be used to observe people's behaviour in science centres/museums, or to understand how they interact/respond during a public debate
- website monitoring, which can be used to assess how people navigate a science communication website and to incorporate user-friendly response options to gather online feedback
- simple event questionnaires (to document what worked) and/or attendance registers (to obtain a demographic profile of the audience)
- interviews and/or focus group discussions to seek targeted feedback on a project's relevance
- analysis of press clippings and/or radio/television coverage of an event or science communication programme to help judge a project's wider impact
- Evaluation in practice - Key points include: Keep the evaluation simple, practical, and user-friendly. Following pre-testing, choose tools that are easy to implement, tailored both to address a specific project's unique challenges and objectives and to fit the intended audience (e.g., if working with teenagers, use young interviewers and avoid a formal "clipboard" approach to evaluation). Incentives can be offered to encourage people to provide feedback.
- Using evaluation results - Key points include: Make sure all collaborators know how the evaluation results will be used; position the evaluation as an empowerment tool; feed results (both positive and negative) into future activities; share results broadly (including mistakes); and communicate outcomes to team members, partners, and sponsors.
Editor's note: This article is part of SciDev.Net's broader E-Guide to Science Communication.
SciDev.Net Weekly Update, January 6-12 2007.