PCST Network

Public Communication of Science and Technology


A question of quality
Criteria for the evaluation of science and medical reporting and testing their applicability

Holger Wormer   Institute of Journalism, Chair of Science Journalism, Dortmund University

Marcus Anhäuser   Institute of Journalism, Chair of Science Journalism, Dortmund University

The evaluation of quality in science journalism and communication has often focused on the question of accuracy. However, opinions about what constitutes accuracy may differ between scientists and journalists, and as a result the acceptance among journalists of purely scientifically based advice for better science reporting is low. In recent years, however, several monitoring projects have emerged that try to judge the quality of medical reporting on (new) treatments, tests and procedures. These projects use a set of defined criteria that focus on questions such as: Is the magnitude of the benefit reported? Are the associated risks and costs mentioned? What is the quality of the sources (studies and experts)? But also: Is a second opinion mentioned, and does the report go beyond a press release?
 
Based mainly on the work of Moynihan (2000), the Australian “Media Doctor” (www.mediadoctor.org.au) started in 2004 as the first such project, followed by monitoring projects in Canada and Hong Kong as well as in the USA (www.healthnewsreview.org). In November 2010 the German “Medien-Doktor” (www.medien-doktor.de) started as the first European project in this tradition. In Germany, however, the ten criteria used in other countries were extended by three purely journalistic criteria, such as topicality and quality of presentation. In our work we report on the journalistic review process in these projects, which was developed along the lines of scientific peer review but additionally includes journalistic criteria. All criteria are discussed, and the results of the first 100 evaluations of articles and stories in the German mass media are presented. Interestingly, the data on medien-doktor.de show about as many highly rated stories as stories of poor quality. Journalists mainly fail to mention risks and to explain the quality of the evidence behind a scientific result (about 76% each). In many cases they do not cite independent experts (63%). Journalists seem to have fewer problems with explaining the novelty of a therapy (22%). Although these results are preliminary, comparing them with US data allows some suggestions on how to improve reporting on the medical sciences. Finally, it is discussed to what extent the existing criteria could be adapted in order to evaluate other fields of science journalism and communication, such as physics or the environmental sciences.

