Pilgor the goat -
The replication crisis goes beyond psychology. Biology and its subfields (microbiology, etc.) are also affected, though to a lesser extent. The scientific method needs to be respected at all times; otherwise, don't call your field a science. I first heard about this when a paper was published (maybe even a PhD thesis, I forget) in which the authors tried to reproduce 100 famous recent psychology experiments. Fewer than 50 were reproducible.
Danny Kahneman and Sam Harris talked about this at length. Kahneman is a very famous psychologist who wrote a very influential book that relied heavily on a few very memorable but ultimately underpowered experiments. He took the criticism to heart and acknowledged the problem, which I think is the first step in solving the crisis. At its heart, it's a crisis of wishful thinking and confirmation bias, but if people can be skeptical of their own work and sources, then the bar will inevitably rise.
Here's an article that explains it better than I can:
From your link:
Researchers are competing for positions, grant money, and status. In this competition, researchers can gain an unfair advantage by using questionable research practices (QRPs) that inflate effect sizes and increase the chances of obtaining stunning and statistically significant results. To ensure fair competition that benefits the greater good, it is necessary to detect and discourage the use of QRPs. To this aim, I introduce a doping test for science; the replicability index (R-index). The R-Index is a quantitative measure of research integrity that can be used to evaluate the statistical replicability of a set of studies (e.g., journals, individual researchers’ publications). A comparison of the R-Index for the Journal of Abnormal and Social Psychology in 1960 and the Attitudes and Social Cognition section of the Journal of Social and Personality Psychology in 2011 shows an increase in the use of QRPs. Like doping tests in sports, the availability of a scientific doping test should deter researchers from engaging in practices that advance their careers at the expense of everybody else. Demonstrating replicability should become an important criterion of research excellence that can be used by funding agencies and other stakeholders to allocate resources to research that advances science.
Keywords: Power, Publication Bias, Significance, Credibility, Sample Size, Questionable Research Methods, Replicability, Statistical Methods
This is pretty cool: an R-Index, a replicability index. I wonder how widely it will be adopted.
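If I'm reading the abstract right, the R-Index boils down to: median observed (post-hoc) power of a set of studies, minus an "inflation" term (the success rate minus that median power). Here's a rough sketch in Python; the normal approximation for observed power and the function names are my own simplifications, not the paper's exact procedure:

```python
from statistics import NormalDist, median

def observed_power(z, crit=1.96):
    # Post-hoc power under a normal approximation: treat the observed
    # z-score as the true effect and ask how often it would clear the
    # significance threshold (one-sided; ignores the far tail).
    return NormalDist().cdf(z - crit)

def r_index(z_scores, crit=1.96):
    # R-Index = median observed power - inflation,
    # where inflation = success rate - median observed power.
    powers = [observed_power(z, crit) for z in z_scores]
    med_power = median(powers)
    success_rate = sum(z > crit for z in z_scores) / len(z_scores)
    inflation = success_rate - med_power
    return med_power - inflation  # equivalently: 2 * med_power - success_rate

# A journal full of barely-significant results (all z just above 1.96)
# gets a terrible R-Index, even though every study "succeeded":
print(r_index([2.0, 2.1, 1.98]))  # close to zero
# Strong, well-powered results score near 1:
print(r_index([5.0, 4.8, 5.2]))
```

The intuition: if every published result is significant but the typical study only had ~50% power, roughly half the "successes" shouldn't be there, which is exactly the kind of inflation the index penalizes.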
On another note, there was another famous psychology paper that was debunked, but it had been cited as dogma in thousands of papers (it was an old paper). Think of the amount of garbage generated over decades that was based on garbage.
Yup, it's like building a house on a bad foundation. The house can never be stronger than what it's built upon, and it's the same problem we see with post-modernism taking hold in the humanities, with a strangely inverse outcome.
I've done some work in mechanical systems engineering, and post-modernism (as a fundamentally skeptical idea) resembles a typical feedback correction circuit. Very useful for stopping and correcting emergent errors in the logic, but the humanities have tried to elevate it into the fundamental nature of reality itself. Oddly, it's the opposite of what's going on here: instead of fundamental skepticism of all truth claims, there is overconfidence. I don't think that's a coincidence, though; I think it's what's actually driving the schism between the humanities and STEM fields, and ironically, it very closely mirrors the metaphysical logic of the modern political divide. Both sides need to get their shit together, apparently.
Edit: So that editing glitch is never going to be fixed? I've seen other people post the same gibberish, so it's not just me. If you don't press space after moving the cursor and deleting characters, it just jumbles everything into one word. I'm sick of writing everything 10 times and then later seeing that I missed a couple, so my post looks ridiculous. Jesus Christ.