(Essay found in Nesselroade & Grimm, 2019, pp. 57–58)
In Box 1.1 we began a series asking whether the scientific method is broken. Public polling suggests that most Americans do not have a ‘great deal of confidence’ in the scientific community (Confidence in Institutions, 2016). Part of the problem may be the misrepresentation of scientific data and findings.
Data misrepresentation can occur in a number of ways. One concerns how science writers interpret scientific findings for the general public. Since most people get their scientific information from the media, those who translate research for a popular audience bear a tremendous responsibility to convey investigators’ findings accurately. However, many science writers are not sufficiently familiar with the scientific process or with the subtleties of conducting and interpreting research. Furthermore, there is no getting around the fact that eye-catching headlines carry a financial incentive. The result is often an oversimplified description of findings. A recent example concerns a team of psychologists who, in 2013, reported no cognitive improvement for preschoolers briefly exposed to a music enrichment experience (Mehr, Schachner, Katz, & Spelke, 2013). It was a limited study, designed only to see whether effects could be found in young children after a single, transient exposure to music, and the authors took great pains to clarify its limits. Nonetheless, headlines soon appeared like this one from The Times of London: ‘Academic benefits of music a myth’ (Devlin, 2013), clearly overstating the study’s modest conclusions, not to mention bucking most people’s strong intuitions to the contrary. Indeed, other research published just a year later suggests that children from disadvantaged backgrounds show improved neuroplasticity and language development with exposure to community music classes (Kraus, Hornickel, Strait, Slater, & Thompson, 2014). Some of the public’s distrust of science results from the careless way many popular interpreters of science report findings, ‘findings’ that are oftentimes stated in far too simplistic terms.
Another form of data misrepresentation concerns the researchers themselves, whether through data collection or interpretation. Assuming, for the moment, the purest of motives, researchers can unintentionally bias participant responses through the ordering of questions (which question comes first, which second, and so on), the limited number of response options available, or even the specific wording of the questions. For example, a 2005 Pew Research survey (Pew Research Center, n.d.) found that while 51% of respondents favored “making it legal for doctors to give terminally ill patients the means to end their lives,” support dropped to 44% when the practice was described as “making it legal for doctors to assist terminally ill patients in committing suicide.” Phrases that seem identical to the researcher may be interpreted differently by respondents. In addition, there are hard-to-answer questions about how to treat data that do not fit and seem to have been gathered incorrectly, the so-called ‘outliers.’ (Should they be discarded? What if they really are good data?) Some researchers also selectively report findings, publishing only the relationships that stand out even though numerous relationships were compared. Sometimes a finding can be properly understood only when placed in a broader context, a context some researchers choose to leave out of their report. For instance, would we be impressed by people who claimed such mastery over coin flipping that they could control which side of a coin comes up? What if they said they once got a coin to land on ‘heads’ nine times in a row? Seems impressive, does it not? However, our amazement might be dulled a bit if we found out their string of nine heads-in-a-row was dug out of the middle of a series of 4000 coin flips; a simulation, sketched after this paragraph, makes the point concrete. Context matters. (This topic will be explored more in Box 8.1.) Unfortunately, a number of scientific articles, many of which misrepresented findings unintentionally, are retracted by academic journals every year. Retractionwatch.com is one website that monitors these retractions.
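To see why context matters in the coin-flip example, here is a quick Monte Carlo sketch (written in Python; the function names, the 2,000-trial count, and the fair-coin assumption are our own illustrative choices, not anything from the essay or the studies it cites). It estimates the probability that a fair coin shows at least one run of nine heads somewhere in a series of 4000 flips.

    import random

    def has_heads_run(flips, run_length=9):
        # Scan the sequence, tracking the current streak of heads, and
        # report True as soon as the streak reaches run_length.
        streak = 0
        for flip in flips:
            streak = streak + 1 if flip == "H" else 0
            if streak >= run_length:
                return True
        return False

    def estimate_run_probability(n_flips=4000, run_length=9, n_trials=2_000):
        # Simulate n_trials independent series of n_flips fair flips and
        # return the fraction of series containing at least one such run.
        hits = sum(
            has_heads_run((random.choice("HT") for _ in range(n_flips)), run_length)
            for _ in range(n_trials)
        )
        return hits / n_trials

    print(estimate_run_probability())  # typically prints a value near 0.98

A rough back-of-the-envelope check agrees with the simulation: a fresh run of nine heads can begin at roughly 4000 positions, each with probability about (1/2)^10 (a tail followed by nine heads), so we expect nearly four such runs per series, making the chance of seeing at least one close to 98%. Far from being remarkable, a string of nine heads is almost guaranteed somewhere in a series that long.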
Finally, there is the issue of academic fraud (e.g., Carey, 2016). Science, we must remember, is not practiced by purely objective robots or angels but by people, people possessing the frailties, temptations, and pressures common to us all. Science is also a cultural enterprise, with its own hierarchy of authority, internal reward structure, and value system, a value system that places a premium on new findings, new ideas, and numerous publications. Researchers who do not make original discoveries, propose interesting and innovative theories, or publish frequently often find themselves out of a job. Given this reality, we should not be surprised to learn that, just as professional sports, financial investment, politics, and virtually every other human enterprise deal with cheating scandals, cheating can and does take place within the world of scientific investigation. Thankfully, just as in these other professions, science has correcting mechanisms, mechanisms designed to ferret out falsehoods and eventually get at the truth. Nonetheless, when the public learns that a headline is incorrect, that a journal article must be retracted, that the journal itself is fake, or that a scientist has committed fraud, we should not be surprised that to some people it feels as if ‘science’ is broken.
Find this and other essays regarding “Is the Scientific Method Broken?” in the Nesselroade & Grimm textbook.
Carey, K. (2016, December 29). A peek inside the strange world of fake academia. The New York Times, p. A3. Retrieved from http://www.nytimes.com/2016/12/29/upshot/fake-academe-looking-much-like-the-real-thing.html?_r=1
Confidence in Institutions: Trends in Americans’ Attitudes toward Government, Media, and Business. (2016). The Associated Press–NORC Center for Public Affairs Research. Retrieved from http://www.apnorc.org/projects/Pages/HTML%20Reports/confidence-in-institutions-trends-in-americans-attitudes-toward-government-media-and-business0310-2333.aspx
Devlin, H. (2013, December 12). Academic benefits of music ‘a myth.’ The Times of London. Retrieved from http://www.thetimes.co.uk/tto/science/article3946476.ece
Kraus, N., Hornickel, J., Strait, D., Slater, J., & Thompson, E. (2014). Engagement in community music classes sparks neuroplasticity and language development in children from disadvantaged backgrounds. Frontiers in Psychology, 5, 1403. doi: 10.3389/fpsyg.2014.01403
Mehr, S. A., Schachner, A., Katz, R. C., & Spelke, E. S. (2013). Two randomized trials provide no consistent evidence for nonmusical cognitive benefits of brief preschool music enrichment. PLoS ONE, 8(12), e82007. doi: 10.1371/journal.pone.0082007
Pew Research Center (n.d.). Questionnaire design. Retrieved from http://www.pewresearch.org/methodology/u-s-survey-research/questionnaire-design/