There is No Scientific News II. The hunt for the newest of the newest research threatens the accumulation of knowledge
The desperate hunt for the latest and newest scientific findings, by journalists as well as scientists, frustrates the accumulation of knowledge. Preprints and Google Scholar only make it worse. This is my second call for slow science…
Previously, I argued that the growth of psychological science is thwarted because, owing to weak literature search strategies, researchers often do not know what the state of the art is in their own field. As a solution, I proposed an advanced search technique that I explained further in clips. Another threat to the accumulation of scientific knowledge, however, is the current frantic hunt for the very latest scientific discovery. That's what this blog is about.
“Published last night!”
Both journalists and scientists themselves seem to be focusing more and more on what has just been published. Of course, journalists are always looking for 'news', and “published last night!” is a great eye-catcher. Newspapers have to make money; TV channels want high ratings; and internet sites crave clicks. But the latest discovery is hardly ever the best or the most important one. Most research (certainly in sciences such as psychology) must first be replicated a few times, after which a systematic review (preferably a meta-analysis) can provide a preliminary state of the art. That takes months or even years. So, genuinely newsworthy results are not new, but 'old'! Science is a slow process, and a ‘hot finding’ is not yet knowledge.
Nevertheless, even scientists often appear to get their knowledge updates from the news. A revealing study showed that papers covered by the New York Times were cited much more often in the scientific (!) literature, even many years afterwards. And it has only gotten worse with social media. You would expect scientists to cite all relevant studies, or a review of them. But no: they cite the latest and the hottest. It underlines my point that scientists do not do adequate literature searches in their own specialty.
Preprints: no longer peer review, but democratic science
The hottest new finding is one that has not even been published yet: a so-called preprint, a paper posted on the internet before peer review. And that is trickier still, of course. Scientists understandably wish to see the latest findings in their area quickly, even if these have not yet been accepted for publication. This used to happen at conferences, for example, and preprints are a welcome acceleration of this. But such preprints are still not established knowledge; they have not been peer reviewed. Despite its flaws, peer review is the gold standard for the recognition of scientific knowledge. There is as yet no better alternative. Nevertheless, the scientific standing of preprints seems to be growing rapidly, even amongst scientists.
Recently, a professor of Quantitative Science Studies announced his intention to publish only preprints from now on. Peer review, he said, is “merely tradition”. In the peer review process, you may be “rejected because of your language”, or “because people think the research was not carried out properly”. To me these seem justifiable reasons to reject a paper: research must be clearly articulated and done well. Or a paper may be rejected because “it does not fit in with the latest trend”. In that case, I would say, send it to another journal; there are so many. The professor goes on to state that through preprints “people can talk about your findings on social media, and journalists can write about them and policymakers can already think about what they can do with these new findings”. Science as an ultra-democratic process? But who actually controls this, to prevent it from becoming dilettante science? Who is the arbiter? The whole interview seemed like a satire, but I'm afraid many agree with him. Am I fighting a losing battle?
Something new four times every minute?
Back to the poor journalists. What on earth should they write about? There are occasional exceptions to my rule that ‘scientific news should be scientific olds’: some findings in physics or archaeology, for instance. But how are journalists to distinguish them from the bulk of findings that still need replication and review? How are they to pick them out from among the thousands of findings published every day? Psychology alone now has more than 4 million publications, a new one every three minutes. The whole ideal of Open Access suddenly strikes us as not very practical.
Many journalists seem to keep up with just a few of the over 1000 psychology journals, or simply check out the press releases of universities or conferences. They pick out some remarkable findings, which often make for good reading and I enjoy them too… but again, it’s seldom state-of-the-art science. Many journalists actively search for a topic, only to face the fact that for most topics of typical interest there are thousands of articles to choose from. Looking for up-to-date info on consciousness, for example? The term ‘consciousness’ yields >90,000 articles in Web of Science (if the journalist has access to this database to start with). Or what about ‘worry*’? >40,000. A little more specific perhaps: ‘stress and heart disease’? >70,000. Most topics will yield far too many hits to handle. Searching the literature for specific topics is astonishingly difficult, and I don't think many journalists have the advanced skills required – especially since scientists generally don't either, as I explained in my first blog. So… what’s to be done?
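Before I get to that: to make the hit-count problem concrete, here is a minimal sketch of how one might compare the raw hit counts for a broad topic versus a more restricted Boolean query. It uses PubMed's free E-utilities interface as a stand-in (the counts quoted above come from Web of Science, which requires a subscription), and the queries themselves are only illustrative, not a recommended search strategy.

```python
# A minimal sketch (not the author's method): compare raw hit counts for a
# broad query versus a title/abstract-restricted Boolean query.
# Uses PubMed's free E-utilities "esearch" endpoint as a stand-in for
# Web of Science; the example queries are illustrative only.
import json
import urllib.parse
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def hit_count(query: str) -> int:
    """Return the number of PubMed records matching a Boolean query."""
    params = urllib.parse.urlencode(
        {"db": "pubmed", "term": query, "retmode": "json"}
    )
    with urllib.request.urlopen(f"{EUTILS}?{params}") as resp:
        data = json.load(resp)
    return int(data["esearchresult"]["count"])

# A bare topic versus a query restricted to title/abstract ([tiab]):
# the broad version yields far more hits than anyone can read.
queries = [
    "stress AND heart disease",
    'stress[tiab] AND "coronary heart disease"[tiab] AND prospective[tiab]',
]
for q in queries:
    print(f"{q!r}: {hit_count(q)} hits")
```

The point is not the tool but the pattern: a bare topic term returns far more records than anyone can read, and even a much narrower query still needs replication-aware reading and, ideally, a systematic review before it says anything about the state of the art.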
Of course… Google Scholar(ish)
Use what everyone uses, even (oh horror!) many scientists: Google Scholar! Admittedly, I too find GS super useful for quickly 'getting some information', but it is absolutely unsuitable for gaining real state-of-the-art knowledge. First of all: it searches everything that seems remotely scientific (books, conference presentations, scientists’ personal websites, preprints, etc.), not just peer-reviewed articles. You get way too much material, especially a lot of irrelevant stuff. Let’s take an example: the combined search for ‘stress* and “facebook use”’ yields 48 hits in Web of Science… and >20,000 hits in GS. Even nonsensical combinations like ‘accountant and brosschot’ generate >2000 sources, and the less fuzzy search for ‘“accountant” and brosschot’ still yields 36 irrelevant hits. GS tries to 'help' you by sorting by 'relevance', using an opaque algorithm in which the number of citations plays a major role. In short, what you see is what gets the most clicks. Is that science? Moreover, results are 'personalized', which is pretty much the opposite of scientific. Not to mention the fact that you can't use more than 200 characters (far too few for a good search), or that GS searches the entire text of articles instead of just the title and abstract (which is asking for trouble), and more besides. That’s why GS has sometimes been referred to as Google Scholarish.
Notwithstanding all of this, GS citation scores are increasingly used by scientific organizations as an indication of the quality of individual researchers. Again, am I fighting a losing battle?
Even more AI
There are of course all kinds of other AI apps for finding 'relevant' research, but they are not sufficiently accurate, or worse. CoCites, for example, is based on citations and is therefore - no matter how creative and sophisticated it is - dependent on scientists’ citation practices, and hence on their search practices, which are seriously inadequate. With citation-based apps (including GS), studies that are difficult to find sink even further into oblivion – and these may include potentially very important studies. Such apps lead to scientific amnesia. Another app, evidencehunt.com, appears to be severely limited in the keywords it suggests. But if you know of better ones, let me know! And I needn’t even mention ChatGPT, of course: it just makes up citations.
The solution: bite the bullet and learn better literature search techniques. And above all, take your time. No quick fixes: slow science.
Photo by Dan Dimmock on Unsplash