Uncommon Descent Serving The Intelligent Design Community

Can too much publishing be bad for science?


Geek Psychologist wonders:

At a time when more research studies are being published than ever before and vast quantities of information are available on the internet to be mined, synthesized, and analyzed, we need to implement changes to popular science communication – changes that emphasize a data-driven approach to communicating research and that make it easier for members of the general public to synthesize a large volume of research findings. After all, most people understand that no scientific study is without limitations, and that each individual finding must be taken with a grain of salt. Therefore, focusing exclusively on new individual research findings likely does little to change minds, aid decision-making, and support scientific literacy in the general public. If we don’t do more to support a primarily data-driven approach that emphasizes communication of general scientific trends rather than merely the latest disparate collection of eye-catching findings, then arguably we are failing as science communicators and as advocates of an informed citizenry. More.

It’s not the amount of publishing that’s the problem. When a firehose of data is coming at us without any mediation, we heed the first blast only, or as the geek said, “the latest disparate collection of eye-catching findings.”

Lots of eye-catchers: Hey, evolutionary psychology knows why you tip too much.

Don’t let Mars fool you. Those exoplanets teem with life!

We found a theory of the origin of life that works … okay, for five minutes

We’ve come up with another reason why human beings are bipeds …

See also: “RNA makes palladium” paper to be retracted? One wonders, if the paper hadn’t been cited so often, might it have just fallen into oblivion? Raised now and then in support of a hypothesis about RNA, but never replicated?

GM crops data, cited by Italian lawmakers, manipulated? Investigator: Image sections obliterated, and apparently identical images linked with different experiments

and

Replication as key science reform

Follow UD News at Twitter!

Comments
#1 addendum: If the pointed observation is confirmed as an error, could a possible explanation for its having gone under the peer-review radar be that the reviewers were experts who could read the given paper quickly, without paying attention to details? Then it may take an ignorant outsider to detect the potential mistake, right?

Dionisio
January 25, 2016 at 02:42 PM PDT
This was posted in another thread (https://uncommondescent.com/peer-review/is-peer-review-a-sacred-cow/#comment-593408) over a month ago, but no one answered the questions. Let's try it again here:

This paper: http://journal.frontiersin.org/article/10.3389/fcell.2015.00008/full#h1 seems to have a terminology error in the conclusion. On the first eight pages the term “post-translational modifications (PTMs)” (both plural and singular) appears around 10 times. The term “post-transcriptional modifications” doesn’t seem to be mentioned even once. However, on the ninth page the “Conclusion” refers to “post-transcriptional modifications (PTMs)” instead. That seems like an error, doesn’t it? If that’s the case, then how did that error pass the review? How did it go unnoticed by the reviewers? Maybe that’s not an error after all? Can someone read it and tell us whether that’s an error or not? Thanks. BTW, note the article shows who reviewed the given paper and how long it took for the paper to get through peer review.

Dionisio
January 25, 2016 at 02:26 PM PDT
When it gets to the point where publishers are competing for articles, there is too much publishing. The top-of-the-line publications can generally stay above the fray, but the mid-level and bottom-feeding ones will publish anything.

Ginger Grant
January 25, 2016 at 01:28 PM PDT
