PCST Network

Public Communication of Science and Technology

 

A question of quality
Criteria for the evaluation of science and medical reporting and testing their applicability

Marcus Anhäuser   Dortmund University, Germany

Holger Wormer   Dortmund University, Germany

The evaluation of quality in science journalism and science communication has often focused on the question of accuracy. But opinions on what constitutes accuracy may differ between scientists and journalists. This may be one reason why the acceptance of purely scientifically based advice for better science reporting is low among journalists (e.g. Oxman 1993). In recent years, however, different monitoring projects have emerged that try to judge the quality of medical reporting on (new) treatments, tests and procedures. These projects use a set of defined criteria which focus on questions like: Is the magnitude of the benefit reported? Are the associated risks and costs mentioned? What is the quality of the sources (studies and experts)? But also: Is a second opinion mentioned, and does the report go beyond a press release?

Mainly based on the work of Moynihan (2000), the Australian “Media Doctor” started as the first of such projects in 2004 (www.mediadoctor.org.au), followed by monitoring projects in Canada and Hong Kong as well as in the USA (www.healthnewsreview.org). In November 2010 the German Medien-Doktor – The German HealthNewsReview (www.medien-doktor.de) started as the first European project in this tradition. However, the 10 criteria used in the other countries were extended by three purely journalistic criteria: topicality and relevance of the topic, quality of presentation, and journalistic accuracy. These criteria were implemented in a journalistic review process modelled on scientific peer review, but with reputable science journalists (instead of mainly scientists or physicians) acting as reviewers. This pure “science journalistic peer review” may be regarded as another innovation in our project.
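To make the criteria-based approach concrete, the sketch below shows how such a review might be represented as a simple checklist in Python: each criterion is rated satisfactory, not satisfactory or not applicable, and an overall star score is derived from the share of satisfactory ratings. The criterion names, the three-level rating and the five-star scoring rule are illustrative assumptions paraphrased from the questions above, not the exact instrument used by the Media Doctor or Medien-Doktor projects.

```python
from enum import Enum

class Rating(Enum):
    SATISFACTORY = "satisfactory"
    NOT_SATISFACTORY = "not satisfactory"
    NOT_APPLICABLE = "not applicable"

# Illustrative criteria paraphrasing the questions in the abstract;
# the real criteria catalogues differ in wording and number.
CRITERIA = [
    "Magnitude of the benefit reported",
    "Associated risks mentioned",
    "Costs mentioned",
    "Quality of the sources (studies and experts)",
    "Second opinion / independent source cited",
    "Report goes beyond the press release",
    # Three additional journalistic criteria of the German project:
    "Topicality and relevance of the topic",
    "Quality of presentation",
    "Journalistic accuracy",
]

def star_score(review: dict[str, Rating]) -> float:
    """Hypothetical scoring rule: share of satisfactory ratings among
    the applicable criteria, mapped to a 0-5 star scale."""
    applicable = [r for r in review.values() if r is not Rating.NOT_APPLICABLE]
    if not applicable:
        return 0.0
    share = sum(r is Rating.SATISFACTORY for r in applicable) / len(applicable)
    return round(share * 5, 1)

# Example review of a single article
review = {c: Rating.SATISFACTORY for c in CRITERIA}
review["Costs mentioned"] = Rating.NOT_SATISFACTORY
review["Second opinion / independent source cited"] = Rating.NOT_APPLICABLE
print(star_score(review))  # 4.4 stars under this illustrative rule
```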
