PCST Network

Public Communication of Science and Technology

 

Accountability in science outreach
Aligning impact evaluation with objectives in science outreach to schools

Kira Husher   The University of Newcastle

John O'Connor   The University of Newcastle

Sid Bourke   The University of Newcastle

Adrian Page   The University of Newcastle

It is impossible to maximise effectiveness if effectiveness is not first defined and then measured. Science, technology, engineering and mathematics (STEM) outreach programs have long enjoyed a reputation of assumed positive impact; programs are often credited with producing many benefits, such as improving attitudes towards STEM fields, raising awareness of and engagement in STEM careers, increasing enrolments in STEM subjects and generally re-engaging students with STEM. However, much of the evidence for these impacts is anecdotal. Increasingly, program providers are being asked to demonstrate their program’s effectiveness and impact empirically; that is, they are being held accountable.

Empirically evaluating the impact of these programs is challenging. Impact is multifaceted, and it is difficult to account for all of the relevant variables. It is hard to obtain large samples and control groups, and to avoid self-selection of participants. Even knowing which impacts or outcomes to look for can be problematic.

With these challenges in mind, a research project was undertaken to map STEM school-based outreach programs across Australia in terms of their aims, reach, model and evaluation, while also conducting case study evaluations of the impact of two outreach programs. The research used a mixed methods design and included participants (n = 1,335) across questionnaires, focus groups, interviews and field observations.

This research highlighted a concerning misalignment between the stated objectives of many outreach programs and their evaluative measures of success. The most commonly cited program objectives were to ‘inspire’ or ‘engage’ students and to ‘encourage the pursuit of science careers or studies’. However, the evaluation approach of many programs was limited to attendance numbers and informal feedback. Most of the programs detailed in the research cited vague objective statements about intangible outcomes. None reported measurable program objectives that adequately identified target audiences, intended outcomes or parameters on those outcomes, such as the direction of change, its extent or the timeframe.
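As a purely illustrative sketch (not drawn from any program in the study; the fields and example values are assumptions), a measurable objective of the kind found lacking could be recorded as a small structured entry that names its target audience, intended outcome, direction and extent of change, and timeframe:

```python
from dataclasses import dataclass

@dataclass
class MeasurableObjective:
    """Hypothetical structure for a measurable outreach objective.

    The fields mirror the parameters the research found missing from
    reported objectives: audience, outcome, direction, extent, timeframe.
    """
    target_audience: str      # who the program intends to reach
    intended_outcome: str     # what should change
    direction_of_change: str  # e.g. "increase" or "decrease"
    extent_of_change: str     # how much change counts as success
    timeframe: str            # by when the change should be observed

# Example values are invented for illustration only
objective = MeasurableObjective(
    target_audience="Year 9 students at participating schools",
    intended_outcome="proportion electing senior science subjects",
    direction_of_change="increase",
    extent_of_change="by 10 percentage points over baseline",
    timeframe="within two years of program participation",
)
print(objective)
```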

There is a need to encourage STEM outreach programs to adopt objectives-driven, evidence-based decision making in program management. This managerial approach, if adopted across the field, could improve program efficiency and effectiveness, and provide programs, sponsors and participants with greater accountability for the resources utilised.

One approach to facilitating evaluation across a program management cycle is the development of an evaluation toolkit for common use among STEM outreach providers, with the potential to be tailored to individual program needs. Taut & Alkin (2003) asked outreach program staff to identify the biggest logistical barriers to program evaluation; the most commonly cited barriers were a lack of time, budget and the expertise required to access the data. A common evaluation toolkit would not only drastically reduce the evaluation resource requirements of individual providers, it would also encourage more consistent evaluative measures across the field of STEM outreach. This in turn would allow meta-analyses of the impact of the globally expanding field of STEM outreach. It would also hold great marketing potential for individual providers specifically and for the field of STEM outreach generally.
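As a rough illustration of how a shared toolkit could standardise a single evaluative measure across providers, the minimal sketch below computes a simple pre/post attitude change from matched questionnaire scores. The function name, data format and choice of a mean difference are our assumptions, not features of any toolkit described above; the example only shows how a common measure could reduce per-provider evaluation effort.

```python
from statistics import mean

def mean_attitude_change(pre_scores, post_scores):
    """Mean pre/post change on a shared attitude scale.

    Assumes matched per-participant scores collected with the same
    questionnaire before and after an outreach program; a positive
    value indicates attitudes moved in the intended direction.
    """
    if len(pre_scores) != len(post_scores):
        raise ValueError("pre and post scores must be matched per participant")
    return mean(post - pre for pre, post in zip(pre_scores, post_scores))

# Invented example data for illustration only
pre = [3.1, 2.8, 4.0, 3.5]
post = [3.6, 3.0, 4.2, 3.9]
print(f"Mean change: {mean_attitude_change(pre, post):+.2f}")
```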

A copy of the full paper has not yet been submitted.
