The Reproducibility Crisis

  • Published 30 May 2024
  • This is an interview with Dorothy Bishop, Professor of Psychology at the University of Oxford, UK. We speak about the reproducibility crisis in psychology and other disciplines. What is the reproducibility crisis? How bad is it? What can be done about it, and what has been done about it?
    You can read Prof Bishop's comment for Nature magazine (which is mentioned in the video) here:
    www.nature.com/articles/d4158...
  • Science & Technology

Comments • 850

  • @brianfoley4328 • 4 years ago +387

    An outstanding interview... this should be mandatory viewing for graduate and postgraduate students.

    • @buzz-es • 4 years ago +8

      And climatologists

    • @myothersoul1953 • 4 years ago +15

      @@buzz-es And climate change skeptics.

    • @buzz-es • 4 years ago +6

      @@LuisAldamiz Which makes one wonder why all of their predictions and models are consistently wrong.

    • @tofu-munchingCoalition.ofChaos • 4 years ago +10

      @@buzz-es Cite a scientific source which proves exactly this claim (peer reviewed).
      Should be no problem if you are informed.

    • @buzz-es • 4 years ago +4

      @@tofu-munchingCoalition.ofChaos Why don't you watch the video a second time and draw some conclusions about the validity of the "peer review process"? Better yet, there's a good book you can read about the rampant academic fraud going on: "Intellectual Impostures". You guys are turning science into ideological dogma.

  • @isabelab6851 • 4 years ago +69

    My college professor in mathematical statistics class... said to be careful... not to torture the data until it confesses.

    • @u.v.s.5583 • 3 years ago +10

      Here in Statistical Gestapo we have methods and means to extract exactly the information that we want to have!

    • @Juscz • 3 years ago +4

      THAT is brilliant!!!!

    • @The_Worst_Guy_Ever • 8 months ago

      The data was under duress

  • @AndrewBlucher • 4 years ago +149

    It's ubiquitous. A student of mine in IT wanted to reproduce some interesting published results. We wrote to the authors and asked if we could have their software. They replied that their paper was a thought experiment! Absolutely not what their paper said.

    • @SabineHossenfelder • 4 years ago +64

      !!

    • @bernhardschmalhofer855 • 4 years ago +22

      @rrobertt13 Your comment confused me. The double slit experiment has been done many times. Here is a paper showing diffraction patterns using molecules with a mass of over 10,000 hydrogen atoms: pubs.rsc.org/en/content/articlelanding/2013/CP/c3cp51500a#!divAbstract

    • @vicore5647 • 4 years ago +5

      Dr Ioannidis weighed in on this issue 15 years ago:
      Why Most Published Research Findings Are False
      journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124

    • @cottawalla • 4 years ago +7

      Interesting. The few times I've been contracted to translate a research scientist's model from, for example, a spreadsheet into another form, I've always found fundamental and sometimes significant math or logic errors that emerged from the coding itself, not necessarily from the maths they had developed on paper. These models had already been used as the basis for published papers.
      Although I'd always note the errors and their consequences in detail and provide what I considered a correct version for them to review, the version I delivered had to be consistent with the model as provided, of course.

    • @robharwood3538 • 3 years ago +21

      @@benschreyer8295 Yes, certainly! The point is that you run it on *new data* that you've collected yourself. The software encodes the 'method' by which the original researchers analyzed *their data.* So, if you run the exact same, deterministic analysis program on new (not pre-determined) data, you should (i.e. will probably, within a calculable margin of error) get very similar results. If there was some methodological problem with the initial research, however, and their data was biased in some way, then the 'results' they found may very well not be 'reproducible'. The analysis software is deterministic, but the experimental data which it is used to analyze is not! Hope that clears things up a bit. Cheers! 😊
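
      A minimal sketch of that point (Python; the effect size, sample sizes, and seed are illustrative assumptions, not from any study):

      ```python
      import numpy as np
      from scipy import stats

      def analysis(treatment, control):
          """The 'method' encoded as code: a fixed, deterministic analysis."""
          t, p = stats.ttest_ind(treatment, control)
          return {"t": t, "p": p, "diff": treatment.mean() - control.mean()}

      rng = np.random.default_rng(1)

      # Original study: samples drawn from a world with a true effect of 0.5.
      original = analysis(rng.normal(0.5, 1, 50), rng.normal(0, 1, 50))

      # Replication: the SAME deterministic code, but NEW data.
      replication = analysis(rng.normal(0.5, 1, 50), rng.normal(0, 1, 50))

      print(original)     # a real effect, recovered up to sampling noise
      print(replication)  # similar but not identical: the data vary, not the code
      ```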

  • @Prayukth • 4 years ago +213

    Dr. Sabine is such a wonderful interviewer. She hardly interrupts Prof. Dorothy who brings up some interesting points on why this has become such a huge problem.

    • @yareps • 4 years ago +11

      I came to say the same thing. It is the rare interviewer who lets their guest tell their story uninterrupted. At most, Sabine merely asked pertinent questions to spur Professor Bishop.

    • @rupertchappelle5303 • 4 years ago +6

      I agree, she allows the person to talk. The big problem is that when these things are not reproducible, we are loath to dismiss them as bogus, fake and wrong. Convince enough people that they are mentally ill and get them taking powerful drugs and your society fails. Podlings are not very useful. Especially to science.

    • @jasperlawrence5361 • 4 years ago +2

      @@rupertchappelle5303 hear, hear

    • @kyberuserid • 4 years ago +5

      Even more impressive considering that she has well-known similar issues with the physics community, which were barely referred to at all.

    • @theultimatereductionist7592 • 3 years ago +2

      @@rupertchappelle5303 Drug consumption is individual liberty & freedom of choice. It doesn't physically hurt others.
      Eating murdered, tortured animals by refusing to go vegan, and burning fossil fuels, forcing Anthropogenic Global Warming (AGW) onto others, physically hurts others.

  • @bazoo513 • 4 years ago +152

    The significance of this cannot be overestimated. Thank you, Drs. Hossenfelder and Bishop!

  • @JonFrumTheFirst • 4 years ago +88

    When I studied genetics and evolution in the late 1990s, my browsing through journals eventually led me to review articles in ecology that pointed out most of these problems. In my stats classes, power was never mentioned. In our lab, I saw data gone over with different statistical tests until the p threshold was passed. It was clear to me then that there were many problems with the papers that were being published in my field. The truth is that those who knew just kept their mouths shut, and those who didn't know made sure they stayed innocent.
    Within psychology, they had a different problem - what I call the plausibility problem. That is, they claim to have 'proven' things that didn't pass the 'sniff test' in the first place. Before you do a statistical test, you have to know you're dealing with a hypothesis that is at least plausible - I'm thinking of the famous 'leaning in' study, for instance. Ridiculous on the face of it. What they had was lemmings leading each other off the cliff, all going happily, because 'that sounds cool.'
    The truth is that many people got Ph.Ds with no training in scientific rigor at all. As we said in grad school, 'there is no science in social science.' But of course, there has been little science in much of academic science for decades. Careerism has reigned supreme.
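
    A hedged simulation of the 'go over the data with different tests until the p threshold is passed' failure mode described above (Python; the sample sizes and simulation count are arbitrary choices): on pure noise, shopping among several tests inflates the false-positive rate well past the nominal 5%.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_sims, false_positives = 2000, 0

    for _ in range(n_sims):
        a, b = rng.normal(0, 1, 30), rng.normal(0, 1, 30)  # no true difference
        p_values = [
            stats.ttest_ind(a, b).pvalue,     # parametric test
            stats.mannwhitneyu(a, b).pvalue,  # rank-based test
            stats.ks_2samp(a, b).pvalue,      # distribution test
        ]
        if min(p_values) < 0.05:              # report whichever test 'worked'
            false_positives += 1

    print(false_positives / n_sims)  # noticeably above the nominal 0.05
    ```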

    • @oo88oo • 4 years ago +7

      JonFrumTheFirst,
      I look forward to universities being bypassed by other ways of learning. Obviously subjects like computer programming can already be completely studied outside the university system.

    • @simonxag • 4 years ago +5

      I disagree about the "sniff test" - most real science fails it: the spinning earth (when first proposed), Newton's laws (if you quiz most students you'll find they don't believe them even now!), etc. But your last 2 sentences are truer than they ought to be - I can remember stuff from the early 1980s :-(((((

    • @GregerWikstrand • 4 years ago +1

      If it doesn't pass the sniff test - what stops you from falsifying the claim?

    • @JonFrumTheFirst • 4 years ago

      @@simonxag The best way for me to reply is to send you to this article about a controversial 'priming' study in psychology: statmodeling.stat.columbia.edu/2016/02/12/priming-effects-replicate-just-fine-thanks/ Much of this kind of priming work just makes no sense - there's no explanation for how you get from here to there - it just happens, due to statistical magic.

    • @kreek22 • 4 years ago +9

      The careerism issue is akin to the ancient problem of bureaucratization: bureaucracies are always interested foremost in expanding their size and power. Politicization of research also reinforces the careerism problem. If, as in sociology, everyone is a Leftist, there is no one to call out the "politically desirable" but unsupported research papers.

  • @MrJesseBell • 4 years ago +47

    Sabine, you are quickly becoming my favourite YouTube personality.

    • @keithtait3214 • 4 years ago +3

      She certainly is unique. She always seems to have a sceptical view of any well-founded data, and seems doubtful of accepted science like dark matter, dark energy and the Big Bang. She is a muck-raker in the best way possible. Go girl, go!

  • @rc5989 • 4 years ago +33

    Back in university a long time ago, my sociology professor had only 2 rules for our questionnaire project:
    1. Hypothesis comes first; only then run the data against that hypothesis.
    2. NEVER run 'everything against everything' (fishing for correlations - see the sketch below).
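
    A minimal sketch of why rule 2 exists (Python; the column count and sample size are arbitrary): correlate 20 columns of pure noise against each other and 'significant' results appear anyway.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    data = rng.normal(size=(100, 20))  # 100 subjects, 20 variables of pure noise

    hits, pairs = 0, 0
    for i in range(20):
        for j in range(i + 1, 20):
            r, p = stats.pearsonr(data[:, i], data[:, j])
            pairs += 1
            if p < 0.05:
                hits += 1

    print(f"{hits} of {pairs} pairs 'significant' at p < .05")
    # With 190 pairs, roughly 5% (about 10) come up 'significant' by chance alone.
    ```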

    • @kreek22 • 4 years ago +4

      Fishing for correlations strikes me as a useful way to find new research directions - but it is corrupt to hunt for correlations unrelated to the original conception of the project and then publish those.

    • @mirkomueller3412 • 4 years ago +2

      At least number 2 sometimes generates very funny correlations...

  • @xqt39a • 4 years ago +24

    In my probability class in the 1960s, we were told that the misapplication of statistics was a systemic problem in medical studies. Maybe this information will become understood better because of videos like this.

    • @Number6_ • 9 months ago

      People misapply statistics to cover up, to protect, or to gain. Very rarely is it a mistake. Professional researchers cover up fraud to protect reputations and to gain money and position. This is not going to change until scientists stop pussyfooting around this issue.

  • @buybuydandavis • 4 years ago +27

    In the internet age, *raw data* should be published and made universally available.
    Post hoc data analysis is extremely useful for pointing to questions for further studies.

    • @chuckschillingvideos • 3 years ago +3

      Even that is only a partial solution because the public has no idea what subsets of raw data may have been filtered/excluded.

    • @dtibor5903 • 5 months ago

      @@chuckschillingvideos Data should be organised and labelled correctly.

    • @chuckschillingvideos • 5 months ago

      @@dtibor5903 Yes, but if it has been excluded, it has been excluded. You cannot collate what has already been excluded.

  • @DavidSmith-kd8mw • 4 years ago +30

    Congratulations to Dorothy Bishop. I imagine you are not greeted warmly in all quarters.

    • @MaryAnnNytowl • 2 years ago

      Women who bring up issues with a situation, event, or field generally aren't, even more so than men who do the same. Having spent over 2 decades in a very male-heavy job, I can attest to that.

  • @JonnyDeRico • 4 years ago +13

    Anyone who has ever tried to build or reproduce something published in a paper knows the problem: even the most important parameters are often wrong or completely missing. Pressures, flow rates, temperatures, voltages - are we just supposed to guess everything? 😤

  • @logaandm • 4 years ago +37

    Michelson-Morley was a null result. One of the most important experiments in all of science.

    • @RockHoward • 4 years ago +11

      A null result that went against existing theory, though. That made it important.

    • @StefanTravis • 4 years ago

      So a null result is one that goes against the null hypothesis?

    • @StefanTravis • 4 years ago +2

      @Cosmic Landscape Just for clarity, why don't you tell us what you think the null hypothesis of the experiment was - and why.

    • @WestOfEarth • 4 years ago

      You touch on the question I had. Suppose you submit your pre-proposal and it's approved, but in gathering and analyzing the data you discover your original hypothesis and proposal were incorrect. What happens with publication in this scenario?

    • @fewwiggle • 4 years ago

      @@WestOfEarth The results are still published (if you are talking about her new paradigm)

  • @mello.b3373 • 4 years ago +83

    Exactly the same problems exist in nutritional science.

    • @johnathancorgan3994 • 4 years ago +18

      Nutritional science is arguably worse, depending so much on observational studies and "questionnaire"-based self-reporting.

    • @chuckschillingvideos • 3 years ago +5

      The problem rears its ugly head in ALL science where financial remuneration is a consideration.

    • @gbBakuryu • 3 years ago +2

      @Tarzan Well, social sciences are skewed by politics, I'm not sure which is worse.

    • @anthonymccarthy4164 • 3 years ago

      @@johnathancorgan3994 I doubt that nutrition has worse methods than psychology regularly allows itself to get away with. Though it's not especially successful.

    • @anthonymccarthy4164 • 2 years ago

      @@SimonWoodburyForget That's true, but it's no less true in psychology and especially psychiatry which is intimately tied to the pharmaceutical industry. While it is difficult to do science around something as complex as nutrition, where there are actual physical manifestations that could be studied if it were possible, that isn't true in psychology, never was and never will be. Psychology is science due to the fact that science is whatever scientists let be called that but it can never follow valid scientific methods, nor does it seem to even try to.

  • @coppice2778 • 4 years ago +15

    It is quite bizarre that statistics is not a core course in so many areas of science. These people are doing work they lack a core competence to do. In my daughter's biology degree course she was taught to use R, but wasn't taught statistics. This is insane.

    • @MyMy-tv7fd • 2 years ago

      deep statistical concepts require real, deep thinking and wrangling - following a 'recipe' is infinitely easier

    • @annamyob4624 • 1 year ago +2

      Every scientist should have intro stats and methodology, but not every scientist has a head for it. What we need is for all research to be overseen and reviewed by stats/methodology experts. I've worked at major institutions that required this, but apparently it is not a widespread practice. It should be mandatory. Both the institutions and the funders need to take responsibility for this.

    • @maalikserebryakov • 1 year ago +1

      @@annamyob4624 You are correct, and this is an excellent idea. I wonder if the institutions have not thought of this before?

    • @dislikepineapples • 6 months ago

      @@maalikserebryakov I think there is a lack of statisticians as well. At my university, almost all courses not closely related to psychology have much worse statistics classes. At least that is my impression.

  • @kubhlaikhan2015 • 1 year ago +2

    When I did my Psychology degree, the reproducibility problem was solved by penalising you for doing genuine research and rewarding you for making the results up in the library.

  • @uhmnope4787 • 3 years ago +5

    I'm glad that my psychology textbook had an entire section devoted to explaining why the study we just learned about doesn't explain this or that, reminding us that psychology is full of theories, and that finding substantial proof that something works one way and not another is incredibly difficult.

    • @dougm275 • 2 years ago +1

      They did this in my organizational psych book as well.

  • @hrperformance • 4 years ago +8

    I actually think data/information analysis is something that should be taught much earlier than university.
    It is a key skill for sidestepping the bullshit of the many people and organisations that try to misinform others in order to get what they want. Hopefully then we can raise a generation truly capable of seeing the world as it is!

    • @dawnrock2391 • 2 years ago

      Hey, could you elaborate a bit? How is your writing above relevant to the topic of the reproducibility crisis? I just don't understand exactly what you mean, but am curious to see more of your logic.

    • @icecreamcancer • 9 months ago

      The bosses will just fire those academics who try to maintain truth seeking principles

  • @jaimeriveras • 4 years ago +37

    Excellent interview. Confidence is a delicate thing; it can be easily lost. Efforts to preserve it, such as this one, are well worthwhile.

    • @celiacresswell6909 • 1 year ago +1

      You're not wrong! After the last 2 years, what can we trust? Tortured stats, obscure and unpublished methods and data, complacent professionals... in many situations I think anecdote and personal experience are more reliable.

  • @jdenmark1287 • 2 years ago +4

    Thank you, thank you, thank you. I found this so distressing as a non-traditional undergrad, seeing this going on every day in my psychology department. I still have to deal with it at the graduate level in public health, mostly with people who think referencing a paper is an adequate argument without ever having actually read and understood the methods section of the paper.

  • @bhangrafan4480 • 4 years ago +4

    As someone who worked in scientific research (molecular biology) for 10 years, I can say I fully expected this problem to arise. I'm just surprised it's taken so long to be recognised. The scientific community are a community of human beings like any other; they have pressures: pressures to publish, pressures to produce positive results and, foremost of all, pressures to get funding. The ways these problems come about vary from sloppy science, i.e. selection bias, to out-and-out fraud. I often worked with people who would reject experiments which did not give the results they wanted ("Something went wrong with that one.") while keeping the results that did what they wanted ("It worked properly that time."). By this approach, very weak, in fact borderline, supporting evidence can be winnowed into very strong supporting evidence for some hypothesis. Some people are just dishonest and know they have to get results if they want to stay on in their careers, so they make results happen.
    The problem in science is that in theory negative results are just as important as positive results, but in terms of career development the 'great scientist' is the one who comes up with the big idea which is 'proved' correct by experiment, not some nameless person who spent years disproving things but never got a theory or effect named after themselves. Careerism and financial pressures are reasons why people bend their results; it is a structural problem in science. Added to this is the greatly increased commercialism of society, especially as experienced in universities, which are now highly PR-focused and want positive stories to publish.
    And it is not even only careerism and financial pressures. When I switched to teacher training, I mixed with a whole cadre of people from social sciences backgrounds who seemed to share the view that it is okay to fake results as long as you do it for the right political reasons.

    • @ho-mw6qp • 3 years ago

      Wonderfully put. This is exactly what is happening.

  • @williammorton8555 • 4 years ago +6

    The issue of reproducibility is one of the major problems facing not only psychology but many different fields of study such as sociology, nutrition and economics; all of the soft sciences.
    The soft sciences have wreaked havoc on cultures in the last 100 years.
    Excellent interview scratching the surface of a profound problem.

    • @judgeomega • 4 years ago

      hence why they are called 'soft', and why some have such disdain for them

    • @MaeV808 • 4 years ago

      I've read that even biomedical science and medical/pharmaceutical studies are affected, not just the soft sciences. It's good people are willing to acknowledge and address it. I fear some people use this crisis as a means to dismiss science, soft or not.

  • @LouisGedo • 4 years ago +33

    I agree with the other commenters who praise the importance of this interview

    • @buzz-es • 4 years ago +1

      LOL.....I see what you did there

    • @LouisGedo • 4 years ago +1

      @@buzz-es Darn.....you caught me! :)

  • @rbarnes4076 • 4 years ago +5

    It was highly apparent to me years ago that very little actual science was done in certain branches. I'm glad there is a push to improve rigor in some of these disciplines. We might actually start making real progress.
    Sabine, thanks for shedding light on this issue. It is one of the best things I've seen in years in science.

  • @dbudelsky • 4 years ago +7

    Reminds me of the "Study Finds that Childhood Leukemia Rates Double Near Nuclear Power Stations" hype from the beginning of 2012. The hysterical reports never stated what "doubled" meant, only that the stats were bad. I think in fact it was 4 cases instead of 2 in a comparison group. The study was done in France, where there were 2,700 total cases of the type of childhood leukemia in question over 5(?) years, so the correct answer would simply be "not measurable".
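
    A quick check of why such small counts mean nothing (Python; this assumes two comparable groups with equal exposure, which is the commenter's framing): conditional on 6 total cases, test whether a 4/2 split departs from 50:50.

    ```python
    from scipy.stats import binomtest

    result = binomtest(k=4, n=6, p=0.5)  # 4 of the 6 cases in the exposed group
    print(result.pvalue)                 # ~0.69: nowhere near significance
    ```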

    • @keithtait3214 • 4 years ago +4

      Kaspar Hauser, without sensationalism the newspapers wouldn't have anything to print.

    • @TheEulerID • 3 years ago

      There were several such studies (one around the nuclear facility at Windscale, aka Seascale). One of the main problems is that the researchers would adjust the size of the area around the facility to find something which appeared to be statistically significant. What's more, they didn't include control areas; when that was done, it was found that there were other childhood leukaemia clusters not associated with nuclear facilities, and the same leukaemia clusters were not found around other nuclear facilities.
      www.ncbi.nlm.nih.gov/pmc/articles/PMC4146329/

  • @mikelouis9389 • 4 years ago +43

    My good gods, Isaac Asimov basically predicted this decades ago. Or perhaps he merely incorporated his own experiences in academia into his fiction. I remember this being a major issue in the decadent, soon-to-implode empire that Hari Seldon (oddly enough, a psychologist/mathematician, lol), the protagonist of his magnificent Foundation series, was struggling against. I think I need to reread his masterpiece with wiser eyes.
    By the way, Mr Asimov was also a professor of biochemistry at Boston University; his insights are not to be taken lightly.

    • @thorin1045 • 4 years ago +9

      It happened in physics, and to a lesser degree in other sciences, at the end of the 19th century: the idea that we know everything. Then quantum mechanics and relativity happened, and most of science realized we know almost nothing. Psychology has the added issue of having no hard facts based on unchanging rules; it works with humans, who change over time without any control.

    • @vikramkrishnan6414 • 3 years ago +3

      @@thorin1045 Not really. Physics had new experimental facts come in that upended theory. Here the experiments don't even reproduce

    • @dextrodemon • 3 years ago +2

      he's a psychohistorian not a psychologist :p

    • @jwarmstrong • 2 years ago +2

      @@dextrodemon Hari was a mathematician who developed and worked on psychohistory; in the real world, Lloyd deMause developed a formal psychohistorical approach from 1974 onwards.

  • @wanderingquestions7501

    Sabine continues to demonstrate that she is not only a good speaker but also a very good listener.

  • @benwitt6902 • 4 years ago +41

    There was a time when a person got respect for the achievement of becoming a professor. Not any more; putting politics ahead of science has squandered public trust.
    Sabine seems fearless and has no sacred cows; she's restoring the faith.

    • @hankhafliger482 • 4 years ago +3

      Are you kidding me? She clearly states her sacred cow is environmentalism and "climate change", somehow trying to steer the conversation about the failure of science away from the abomination of truth that was once "global cooling", then "global warming", now rebranded as the all-encompassing "climate change".

    • @obsidianjane2267 • 4 years ago +1

      The problem is that a large portion of the public accepts the politics and takes the "science" claims at face value without any critical thought. So anyone can make any claim they want today, and it won't receive any question or criticism as long as it doesn't counter the political narrative in both academia and public policy.

    • @cyberf1sh • 4 years ago +3

      @@hankhafliger482 Hank, you sound like a well-meaning guy who has been misled by fake news. There was never any scientific consensus for "global cooling". In fact, most predictions made about global warming in the 1970s by climate scientists have been proven true. Source: journals.ametsoc.org/doi/abs/10.1175/2008bams2370.1
      I hope you educate yourself on the topic and stop spreading this misinformation.

    • @obsidianjane2267 • 4 years ago +1

      @@cyberf1sh That paper exhibits the same selection bias as Hank and the Deniers (cool band name). There were plenty of theories published, so it's easy to pick and choose based upon hindsight and political bias.
      The reason why (some) '70s climate predictions have proven more accurate was that they were conservative estimates based on an acknowledgement of the error inherent in contemporary and geological climate measurement. That is no longer true. Prior work that stated that its data was abstracted from indirect measurement, or was even completely speculative, is now cited as absolute bedrock fact.
      It's quite apropos that the paper characterizes modern climate science as an "enterprise". When climate science became its own thing in the '80s and '90s, its politicization led to ever-increasing hyperbole, propelled by both the derivative nature of historical climate data, which is easy to "interpret", and the need to publish and find funding. And so we find ourselves here, where people are polarized to either completely dismiss climate change or hyperventilate about a crisis that is always a decade and an election cycle away.

    • @jasperlawrence5361 • 4 years ago

      @@hankhafliger482 You are wrong about climate change; a past error has no bearing on the theory that follows, except perhaps to make the scientists more careful to get it right the next time.

    • @robharwood3538 • 3 years ago +3

    Prof. Hossenfelder, I just watched this video a second time (after already having given it a thumbs up some months ago, but being a bit hazy on what had been covered in it) and I wish I could give it *another* couple of thumbs up! Wonderful interview!
    I actually used the Closed Captions (CC) to read along to make sure I would retain all the information better in the future. 😅 I'm glad I did, because it really gave me a much deeper appreciation for Prof. Bishop's points and various ideas for solutions.
    I feel like at this point in human/scientific history, we have so many journals and so many scientific articles going through the publication process that it may now be worthwhile to start to *formalize* the study of the scientific method (especially the whole peer-review and publication aspect of it) into a kind of Scientific Study of the Methods of Generation of Scientific Knowledge from Scientific Research -- whatever such a field should be called: perhaps something like Meta-Science, or Scientific Epistemology, or -Scientolo- (err, forget I said that! 😉), or Researchology, or something or other.
    This comment got kind of long. 😅 Skip ahead to "TL;DR: Main Idea" if you prefer to get to the main idea first. Otherwise, a little bit of ...
    *Preamble and Justification*
    I know there's already a couple of existing fields, Philosophy of Science, and History of Science which purport to study the methods of science, but what I'm proposing is something more like a well-grounded _Science_ of Science, based on actual scientific research into observations of the natural phenomenon of 'knowledge generation via various well-defined methods'.
    Of course initially this would be focused on how we humans ourselves perform the scientific method, but presumably the same 'meta-science' could be applied to:
    * machine learning and automated science (which is already a thing),
    * potentially to other animal species developing their own cultural knowledge base(s) (which arguably already occurs at least in chimps, though of course far more limited than human science, but still a potentially interesting field for comparative study), and,
    * who knows?, maybe one day to the methods of science we observe in some extra-terrestrial intelligent species/culture (even if it happens to have gone extinct long ago and all we have to go on are some artifacts) discovered somewhere out in the universe, assuming we ever get there.
    Personally, I think Scientific Epistemology would be a decent name, though perhaps it doesn't emphasize the specific kinds of 'meta-research' I'm envisioning quite enough. Which leads to the ...
    *TL;DR: Main Idea*
    For example, my main idea of what could be done, inspired mostly by this specific interview, is to start a *journal* (online, printed, or perhaps/probably both) which a) itself employs the 'best practices' of research that the new field of (for the sake of discussion) Scientific Epistemology has established, and b) publishes various actual studies of *other* past, present, and future scientific publications (journals and papers) in various existing scientific sub-fields: Physics, Chem, Bio, Psych, Sociology, Anthropology, Archaeology, etc., and perhaps even borderline subjects like History (ref. "Note on History" below).
    Such studies in this new journal could, for example, specifically perform 'reproducibility reviews' of specific fields (like Psychology, which sparked this interview), or specific journals or publishers, to establish a kind of 'credibility' or 'reproducibility' rating for different fields or publications.
    Such studies might be done by randomly sampling from among the previously published papers in a specific field or publication, and then performing reproduction 'sub-studies' on each of the results.
    I believe Prof. Bishop mentioned a study which did just this within Psychology, and that's where the 35% reproducibility figure came from, if I'm not mistaken. My main idea is thus to expand this into an entire field of science on its own, specifically with the establishment of a new journal (or many) specialized around publishing such studies (and many other kinds of inter-related studies besides) for *all fields of science,* not just ones that have such glaring problems like Psychology.
    It could even perform the same kinds of studies on Scientific Epistemology *itself,* so that we can eventually learn how well and/or effective Scientific Epistemology itself is!
    *Conclusion*
    Thus, this new field (whatever it's called) and new journal(s) focused specifically within this field would be able to *quantify* and even develop working hypotheses and theories about the natural phenomenon of *science* itself, and all of its sub-fields, and thus be able to provide us with insight on: which sub-fields have reproducibility crises, how bad (or mild) they might be, and (especially) what changes might be undertaken to improve reproducibility within those specific fields (different sub-fields may be in need of, or simply are more amenable to, different methodological 'fixes').
    Of course, developing such a field and journal(s) would require significant public investment, but I could imagine a near future where governments and/or research institutes would become sufficiently aware of the reproducibility crisis (and not just within psychology), and could be motivated enough to invest in further study.
    Also, there's the issue that some studies are initially so time consuming and/or expensive that literally 'reproducing' their results would be far outside the budgets of a fledgling new scientific field. However, as the field itself develops, and establishes new ideas about how the concept of 'reproducibility' can be achieved in different ways (I imagine the field of Information Theory (e.g. information entropy, Bayesian Inference, etc.) could help untangle this), I would wager that even establishing the theoretical 'reproducibility' (or some analog or approximation of it) of hugely expensive experiments such as from the LHC, or even unique historical events (see "Note on History" below) could be adequately established.
    Anyway, I could go on with many more ideas about the broader aspects and applications of 'Scientific Epistemology', beyond just the one issue of reproducibility crises, but I think the comment is too long already. 😅 Hope you found it worthwhile to read, and apologies if not. Cheers! 😊
    *Note on History*
    Check out Dr. Richard Carrier's exploration of applications of Bayes' Theorem to resolving questions about unique historical events and reforming the traditional methods and criteria of the study of History; in his books _Proving History_ and _On the Historicity of Jesus,_ for example.

  • @yv6eda • 4 years ago +1

    Great conversation! Thanks Sabine!

  • @erichodge567 • 4 years ago +1

    Absolutely first-rate, and very much needed. Thank you!

  • @katg-gk5ox • 3 years ago +1

    It is so good to see topics and interviews like this at the level you can bring to them. I am so tired of seeing topics about the sciences written by people who have a journalism degree and nothing else.

  • @raminagrobis6112 • 4 years ago +19

    I wholeheartedly endorse this treatment of data reproducibility, Sabine. I have started noting trends in medical science where the "conclusions" (not just the method) of a given research project are announced in advance. Such bogus research is merely confirmation of an already prevalent or intuitive notion of the answers to a prospective question and is completely antithetical to "good" research. The worst part is that such projects may succeed better in getting funded, since they appear like a "safer" bet on where to inject money into various disciplines.

    • @raminagrobis6112 • 4 years ago +3

      @@thealienrobotanthropologist
      You are absolutely right in your assessment. Being a biophysics- and biochemistry-trained biologist with a strong interest in physical chemistry as a framework for analyzing questions in cell biology, I have often faced (1) deep misunderstanding of the true nature of the problems at hand among colleagues with a "softer" training in chemistry, and (2) a flawed evaluation (imho) of the relevance of the topics on which I published. What I most regret for people who respond this way is how much excitement and sheer satisfaction they miss when one obtains a "hard" answer to a complex biological phenomenon thanks to a "pure science" approach, instead of the mumbo jumbo and buzzwords that drain all the attention publicly. Because that's how the real world works: easy answers are preferred because they hurt your brain less when you struggle to put some order into a cluttered mind, which sometimes describes biologists' minds rather accurately as compared to mechanistically oriented ones...

    • @custos3249 • 4 years ago +5

      Welcome to what happens when business is allowed to pollute and displace objectivity. A result is "right" not because of its verified correspondence with reality but because of how much pizzazz marketing can smear on it.

    • @chuckschillingvideos • 3 years ago +4

      @@custos3249 BUSINESS? Try gummint. The vast, vast, vast majority of pure science research (ie physics, for example) is funded by government research grants. These grants are always awarded to scientists conducting certain types of research considered likely to reach politically desirable conclusions.

  • @RonnieD1970 • 4 years ago +1

    Excellent discussion and VERY IMPORTANT interview! Two superstar scientists!

  • @tofu-munchingCoalition.ofChaos

    A very simple hack:
    Consider a first paper (usual p-values...) not as a "proof" (not as statistical inference) but as a paper doing exploratory statistics.
    Then papers testing - rejecting or failing to reject - the hypothesis of the exploratory paper are the "proofs" or "disproofs" of the hypothesis.
    Beneficial side effect:
    robustness of the result.
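
    A minimal sketch of that exploratory/confirmatory split (Python; the measures, sample sizes, and the 'no real effect' setup are illustrative assumptions):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)

    # Study 1 (exploratory): 10 candidate measures vs. an outcome - all pure noise.
    measures = rng.normal(size=(80, 10))
    outcome = rng.normal(size=80)
    pvals = [stats.pearsonr(measures[:, j], outcome)[1] for j in range(10)]
    best = int(np.argmin(pvals))
    print("exploratory p for 'best' measure:", pvals[best])  # often small by luck

    # Study 2 (confirmatory): test ONLY that pre-declared hypothesis on new
    # subjects. (Every measure here is noise, so fresh data for the chosen
    # measure is simply a new noise sample.)
    new_measure, new_outcome = rng.normal(size=80), rng.normal(size=80)
    r, p = stats.pearsonr(new_measure, new_outcome)
    print("confirmatory p:", p)  # usually > .05: the exploratory 'finding' fades
    ```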

  • @tom23rd • 3 years ago +1

    This video restores quite a lot of my regard for scientists, and thus science. Tackling such problems in an open way tells me that science does indeed hold itself accountable and seeks integrity. Thank you for this.

  • @nova9651 • 4 years ago +1

    I love how this is a perfect intro for your next video, Sabine

  • @mikehart6708 • 1 year ago +1

    I am not a scientist, but I grew up in a time when science was highly revered, and it has become virtually a religion for many of us lay people. It is what we believe in. It is encouraging that some scientists, such as yourselves, are honest enough and concerned enough to try to deal with this problem. I guess it is naïve to think that if a scientist sticks steadfastly to the scientific method, then all will be well; and yet some of this problem seems to stem from very basic violations of the scientific method that we were all taught.

  • @esmenhamaire6398 • 1 year ago +1

    Superb! Thank you so much for this, Drs Bishop and Hossenfelder!

  • @djbabbotstown • 4 years ago +4

    Thanks for the vids, Sab. I know you're busy; keep them coming.

  • @mirkomueller3412 • 4 years ago +1

    I didn't even notice that this kind of problem exists - even though I always had the feeling that sometimes the whole story wasn't told, or at least was embellished towards desired results. A great step forward. This will make the world a better place. Hope in humanity restored...

  • @canonwright8397 • 4 years ago +2

    Thanks for this. Glad to know someone somewhere is working on this problem.

  • @MarkSMays • 4 years ago +3

    I like your science posts. This review of credibility is very timely. Thanks.

  • @theultimatereductionist7592

    I agree with your hatred of the wrongness of calling an experiment that confirms the null hypothesis a "failed experiment".
    I hate that stupidity too. However, I am highly skeptical of all your other claims about what all other scientists do.

  • @kennethcastle847 • 4 years ago +6

    This was a wonderful discussion. I had no idea that there were problems of this sort affecting scientific papers; I had naively assumed that refereed journals would correct them. I agree that it is better to talk about it openly, as it shows how science does try to make sure that results are valid and reproducible, as a process unto itself. I like the idea of defining the problem first, with the analysis methods defined ahead of the data collection, as a good way to re-program the thinking on fallibility. It is OK to fail, in the sense that you don't get the result you thought you should. This is how science moves forward, as it then makes you think more deeply about the problem to see why the original hypothesis was erroneous.

    • @zero132132 • 4 years ago

      Some kinds of problems aren't things a journal can fix. You can't ensure that everyone announces that they lost the lottery, and it'd be hard to sort out the odds of winning or losing when you don't know how the lottery is organized and you don't know how many losers there are.

    • @grahamnumber7123 • 3 years ago

      Yes, it's the big climb-down; gently does it. These people have reputations to keep ;-)

  • @threedot141 • 4 years ago +1

    Fantastic interview. I think every working scientist should watch this. It certainly makes me think more carefully about my own work.

  • @TheTrumanZoo • 4 years ago +2

    Great interview. Amazing honesty.

  • @joba1560 • 4 years ago +1

    Thanks for tackling this issue. As a member of the public, I always assumed science tries to get rid of bias as well as possible. Once that's done, it will surely boost progress, even if it is harder at first, because knowledge will be built on hard facts, not conscious or unconscious wishful thinking.

  • @toddq6443 • 4 years ago +2

    "We have a problem, but we're working on it."
    Yes my brilliant friend, that's why we keep watching your videos. 😃
    TNQ

  • @kevalan1042 • 2 years ago +2

    Very good! The technique of pre-publishing based on methods is an excellent idea.

  • @hyrocoaster • 4 years ago +4

    19:56 Wonderful. I remember discussing this weird practice with my fellow students during my bachelor's. We thought we were expected to deliver something "exciting" rather than something reliable that could really be backed up with data or disproved by it. Negative outcomes were not wished for. So everyone either avoided choosing a topic that might not produce the desired results, or we just randomly collected information first and then created a story around it.

  • @dennisdonovan4837 • 4 years ago +1

    It’s great to see, listen and learn from two (obviously) well educated women … ❤️👍🏽❤️

  • @wesbaumguardner8829 • 3 years ago

    I love Sabine's courage and audacity. She is a real truth seeker.

  • @geanderbacheti2724 • 8 months ago

    Nice interview, love your videos Sabine, thanks!

  • @Knervik • 1 year ago

    There is no understanding of general psychology without respect for parametric statistics. Confirmation bias and institutions' expectations of quick results sully our understanding. It's good to hear that this is changing.

  • @rickfearn3663 • 4 years ago +6

    Excellent interview, thanks Dorothy and Sabine. Now, about the pharmaceutical industry and their drug testing: when will the truth ever come out about the vested interests throughout the development, distribution and medical sectors? Thanks to both of you. Sincerely, Rick

  • @MarkRaymondLuce • 4 years ago +3

    A great interview with Dr. Bishop at Oxford, Sabine. I had no idea the reproducibility crisis was also rampant in the field of psychology. I did know it was rampant in physics and medical studies, but it only makes sense that it is found in psychology as well, and now I suspect it is in all sciences. I get a very strong feeling that this interview is laying the foundation for a very revealing, well-thought-out, in-depth statement on the state of particle physics in one of your upcoming videos.
    It's nice to see and hear you laugh.

    • @toddq6443 • 4 years ago

      I wouldn't be altogether surprised if your suspicion is correct. TNQ

    • @keithtait3214 • 4 years ago

      Small sample sizes and the cost of getting complex data... sounds like they need more money.

  • @jasperlawrence5361 • 4 years ago

    Sabine is a wonderful interviewer, and presenter for that matter.

  • @sonomabob • 4 years ago

    It is so pleasant to listen to intelligent people speak. Soothing. And rare.

  • @DoctorZaeus • 4 years ago +3

    This was excellent. Thank you.

  • @steveroberts • 4 years ago

    Fantastic stuff Sabine. Thank you

  • @mylespowers3965 • 3 years ago

    Thank you. Hopefully more light on this topic, in all disciplines, will bring significant changes.

  • @billwehrmacher3842 • 4 years ago

    How refreshing to listen to really intelligent people.

  • @PaulBrassington_flutter_expert

    Great video, and another example of how difficult the truth is to find. Well done!

  • @xreed8 • 4 years ago +2

    Thank you for answering why there's a reproducibility crisis early on and very succinctly.

  • @kerryjlynch1 • 3 years ago

    Thank you for exploring this important issue.

  • @matthewbrightman3398 • 3 years ago

    Couldn’t get past first base? Not a phrase I would have expected! Love it!

  • @applewoodcourt • 4 years ago +1

    As an undergrad, majoring in psychology, I did a field study class with two professors. The experience turned me off to psychological research and studies, and caused me to have a skeptical view of any scientific studies. It was obvious to me, with only basic statistics classes under my belt, that the study was fundamentally flawed, yet our study and results made it into a journal publication. My "lowly" associate professor (sarcasm) taught Statistics 101 with the companion book "How to Lie With Statistics" and it profoundly influenced my view of research studies. There is also a "peer review crisis" but that is a topic for another video. Cheers!

  • @lawrencejohnson3259 • 4 years ago

    Excellent video, interview!

  • @KeithCooper-Albuquerque • 4 years ago +1

    This was an excellent interview! As several people have commented, the significance of this problem cannot be overestimated or overstated. It should be mandatory viewing at all student levels. With all of the science-denying, flat-earth-believing sites on the web, these issues become fodder for their arguments. Scientists need to clean house for the good of the scientific method and get on board with improving reproducibility, ensuring that up-and-coming scientists take these points to heart and approach their papers with this in mind.

  • @sebastianschulz6531 • 4 years ago +2

    Thanks Sabine,
    great video. I'm glad I didn't continue after my FH (university of applied sciences) degree; doing a doctorate would have driven me mad. I spoke with several people who did doctorates, and the fact that a paper was only written once you had a success, with the research question then constructed around it afterwards, was bizarre. To me that's 1984, where the speaker can switch enemies in the middle of a speech - or like Jeopardy on TV.

  • @FB0102 • 3 years ago

    super important work, great job!!!

  • @dr.nityasagarjena4512 • 8 months ago +1

    Publication of negative results should also be encouraged and counted as scientific work. Hopefully that would reduce the pressure to make everything look positive even when it isn't.

  • @haroldbridges515 • 3 years ago

    What's striking is how unusual such a discussion between two scientists from different fields is. Much better than listening to science journalists whose grasp of the subject may in fact be tenuous.

  • @patrickegan8866 • 2 years ago +1

    You get a sense of how bad the problem is from how much resistance there is to registering studies with open science platforms. I know one single researcher who uses them, and I've been on non-psych research teams where publishing the paper funded by a government grant was reliant on finding a significant result.
    We should be registering studies on a blockchain so that anyone can look up which research questions (RQs) have been investigated already, to narrow down what needs to be done next (replicate, modify, or just move on). There are tens of thousands of dissertations around the world sitting in drawers or on hard drives doing nothing.
    Also, we learn lots of great stats during undergrad, but then for our dissertations we're only allowed to run surveys by the university, and then we go try to publish it to tick KPIs.
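
    A minimal sketch of the registration idea (Python; the field names are hypothetical): publishing just the SHA-256 of a preregistration document - on a blockchain, a public ledger, or a journal's log - makes it tamper-evident without revealing the hypotheses prematurely.

    ```python
    import hashlib
    import json
    import time

    prereg = {  # hypothetical preregistration document
        "research_question": "Does X improve Y?",
        "n": 120,
        "primary_outcome": "Y at 6 weeks",
        "analysis": "two-sided t-test, alpha = 0.05",
    }

    # Canonical serialization, then hash; publish only the hash for now.
    digest = hashlib.sha256(
        json.dumps(prereg, sort_keys=True).encode()
    ).hexdigest()
    print({"sha256": digest, "registered_at": time.time()})
    # At publication time, reveal the document; anyone can re-hash and verify.
    ```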

  • @iramkumar78 • 2 years ago

    Thanks Sabine for talking to a Psychologist. This is good.

  • @Alexander_Sannikov • 4 years ago +1

    This "in-principle acceptance" does sound like a very interesting way to eliminate publication bias. I hope something like it becomes popular in other areas of science.

  • @williamkacensky1069 • 4 years ago +2

    Excellent reference point.

  • @CaptainGuntu • 4 years ago +10

    I would also add that in psychology, there has been an excessive focus on central tendencies in data (primarily as indexed by the mean, but also the median and so-called robust measures of central tendency), when that may be the least interesting and informative aspect of the data for many types of distributions.
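
    A small illustration of that point (Python; the distributions are invented for the demo): three samples with essentially the same mean but very different shapes, which a means-only analysis would conflate.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    samples = {
        "normal":  rng.normal(10, 1, 10_000),
        "bimodal": np.concatenate([rng.normal(7, 1, 5_000),
                                   rng.normal(13, 1, 5_000)]),
        "skewed":  rng.exponential(10, 10_000),
    }
    for name, x in samples.items():
        print(f"{name}: mean={x.mean():.2f}  sd={x.std():.2f}  "
              f"median={np.median(x):.2f}")
    # All three means are ~10, yet the distributions tell different stories.
    ```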

  • @elitav5491 • 3 years ago

    Thank you, a great and interesting interview!

  • @fraemme9379 • 2 years ago +1

    Thank you for this interesting video! I'm a physicist, not a psychologist, but I might try to add another point to all those mentioned (which are all definitely relevant). I might attempt to say (also quoting from a talk by U. Galimberti to the Italian society of psychology) that psychology is not, and cannot be, entirely a science, meaning that it cannot completely rely on the scientific method. This is because the scientific method has the premises of reproducibility and statistics, which can be very useful on average (and, as was said in this talk, on the very large samples which are needed for the method to be applicable), based on the fact that all human beings partially share a common 'human' structure. However, each human being is also different from any other (unlike, for example, electrons, photons, or elementary particles, which are thought to be indistinguishable) and lives only once in all the history of the universe. Psychology also has to deal with single human beings, who can have preferences, behaviors etc. which cannot be entirely reproduced even in principle.
    One could say that this is true for science too, as of course the history of the universe also happens only once. However, it appears that certain events (such as physical events like the motion of planets and so on) show more regularity and are less chaotic or less sensitive to small changes in their internal structure or environment, while the psychology of an individual seems much more sensitive to individual differences and/or circumstances. There is also, of course, the 'observer' problem, which here is even more relevant than in physics, and which we can summarize through the sentence: "Can I predict what I will decide?" (If yes, then I don't decide.) Similarly for predicting what another person will decide: if I can ALWAYS predict it, so that it is a law of nature, then they don't decide either, and since they can make the same prediction about me, I never decide either...

  • @whoopstheregoesgravitas • 4 years ago +1

    I'm not sure if it's already been mentioned, or even done, but a quick way to reference studies where reproducibility has been achieved would surely help. The mention that _some_ low-quality work made it into textbooks had me thinking: it would be nice to verify whether the problem exists in the textbooks to the same degree as in journals. If not, it might be worthwhile having a sound way to prove that case.

  • @jwrosenbury • 3 years ago +1

    It's nice to watch the research community finally addressing these issues. They were obvious 40 years ago.
    Science doesn't advance by scientists changing their minds. It changes as old scientists die out. Still, 40-50 years?
    This raises a new problem. How do we keep old scientists occupied as they lose their relevance? Are scientists good at anything else? Or are they doomed to work at Starbucks? (I knew a physics post doc who ended up as a security guard. Sad.)

  • @MrPrime2357 • 7 months ago +1

    One of the big issues I find with the peer-review process, as a PhD in materials science, is that the reviewers are not interested in the materials & experimental part. They certainly check the data & results somewhat, but I assume nobody even reads the materials section. I would confidently say that >50% of the published work is not reproducible, simply because important details are left out (sometimes on purpose) or necessary values are not given (e.g., is the % given in wt%, vol%, ...?). And it's easy to explain: the reviewers get nearly nothing back, so why invest unnecessary time? Additionally, the last time they actually did anything in the lab is often >25 years ago.

  • @ishhanian • 1 year ago

    I am a UX designer, and I was thinking that the research and user testing we do is completely unscientific most of the time, yet we swear by it... And this research frequently informs a lot of important decisions about how different platforms work. If real researchers in an academic setting can fall for confirmation bias and spin a story after the fact, then I am afraid to even think about the quality of our research.

  • @georgekhumalo5283 • 4 years ago +15

    5 Seconds in and I already clicked like.

  • @rapauli • 3 years ago

    Certainly an important question. Wish I could hear from a psychology researcher.

  • @js3883 • 3 years ago

    A good, honest interview and responses. It is good Bishop is doing this. Many large biology companies require properly designed experiments in the discovery phase, long before anything has to go to the FDA. Wasting time = wasting money.

    • @js3883 • 3 years ago

      Of course, there was the time a researcher came to me with hundreds of repeated measurements over time on two chimps, each with a different treatment. Chimps are very expensive. Did I say the researcher was also a chimp? I used him as the control. Don't let chimps run experiments!

  • @hrishikeshbalakrishnan3762

    Wonderful interview

  • @josdelijster4505 • 4 years ago +1

    Very well done, very interesting

  • @jmitteld • 4 years ago +2

    People in the parapsychology community should be consulted on this one. They are ultra-careful about testing hypotheses, declaring their criteria in advance, etc., because they have been put through the wringer so many times. There is a small community of very careful parapsychologists, some with physics or statistics degrees, who have thought more deeply about reproducibility than anyone else in science.
    One of the things known to these people is the experimenter effect: results in psychology, and to some extent in medicine generally, tend to conform to the expectations of the experimenter. This is not only about p-hacking or cutting corners to get the result you want. It's real, and I'd wager it affects mainstream psychology research, and it is the "fifth horseman" that Dorothy Bishop (for obvious reasons) doesn't want to mention.

  • @richb2229 • 4 years ago

    Great interview, and an issue that's been obvious in science for some time.

  • @NomenNescio99 • 4 years ago +12

    I would love to see Sabine have a discussion with Sean Carroll about quantum physics!

    • @soumyaranjanpanda8865 • 4 years ago +2

      Exactly.

    • @keithtait3214 • 4 years ago +2

      She's already savaged Sean's multiverse theory; he wouldn't turn up. It would ruin his career. He's only interested in keeping Mr Feynman's seat.

    • @NomenNescio99 • 4 years ago +2

      @@keithtait3214 And I believe he not only sits in Feynman's seat, but literally at Feynman's very desk.
      Maybe the conversation would be productive? I think a lot of the criticism Sean Carroll has directed toward the physics institutions is very much congruent with Sabine's way of thinking.
      The fact that the many-worlds interpretation of quantum mechanics is a big pile of male bovine fecal matter is another issue.

    • @clayz1 • 3 years ago

      Worst academic interviewing fear: lay people suggesting interviews as if the scientists were just gladiators in the arena.

  • @SimonSozzi7258 • 4 years ago

    This is really important.

  • @davidturner9827 • 2 years ago +1

    In a particular program (to remain unnamed) the first thing I was taught was, when reading a paper, to look at the references first and “see if they’re respectable”. This attitude is a major problem. While the views of the big names in a field may on average correlate more closely with the truth, the positive feedback loop that trust networks create can steer an entire discipline awry. Some prominent examples in psychology come to mind.

  • @keithtait3214 • 4 years ago +1

    I am heartened by all the comments; it just shows that if you have intellectual content, no fake news or internet trolls will be attracted to the site. PS: What has this to do with your singing career? PPS: My daughter has a rock-star boyfriend, so I am in no way biased against multi-talented people.

  • @MyKharli • 4 years ago +1

    It's so nice to see scientific philosophers face the human condition to try to get better results.

  • @nziom • 4 years ago

    That's a really good interview

  • @brianarbenz7206 • 3 years ago +1

    Many institutions today are under pressure to "score a big hit," meaning to get their college's name in sound bites and social media posts. Hence, a finding's evident impact is treated as more important than the quality of the methods.

  • @jansoroczynski9950 • 4 years ago

    Great interview, Dr. Sabine, thank you! Was this filmed in the Kendrew Quad at St John's College? It looks very familiar.