Is Most Published Research Wrong?

  • Date added: Aug 10, 2016
  • Mounting evidence suggests a lot of published research is false.
    Check out Audible: bit.ly/AudibleVe
    Support Veritasium on Patreon: bit.ly/VePatreon
    Patreon supporters:
    Bryan Baker, Donal Botkin, Tony Fadell, Jason Buster, Saeed Alghamdi
    More information on this topic: wke.lt/w/s/z0wmO
    The Preregistration Challenge: cos.io/prereg/
    Resources used in the making of this video:
    Why Most Published Research Findings Are False:
    journals.plos.org/plosmedicine...
    Trouble at the Lab:
    www.economist.com/news/briefin...
    Science isn't broken:
    fivethirtyeight.com/features/s...
    Visual effects by Gustavo Rosa

Comments • 10K

  • @raznaot8399 • 2 years ago • +18136

    As the famous statistical saying goes, "If you torture data long enough, it will confess to anything"

  • @Campusanis • 5 years ago • +4781

    The most shocking thing to me in this video was the fact that some journals would blindly refuse replication studies.

    • @krane15 • 5 years ago • +56

      Maybe their system is flawed?

    • @spiritofmatter1881 • 5 years ago • +61

      The Economist publishes blindly. Evidence-Based Medicine as a whole does not yield dosages for the medicines tested - that includes your painkillers, your blood pressure pills, your mood stabilisers and your anesthesia.
      But this is not science's fault, nor is it a conspiracy of the medical system.
      The scientific method is limited in its ability to give us data.
      Yet, practically, we need to understand the world. The only way out is innovative methods whose pros and cons people can actually understand, and which get published without trying to imitate the scientific method to be listened to.
      Oh, and developing right-hemisphere science. Which is what I experiment with.

    • @ewqdsacxz765 • 5 years ago • +17

      We should boycott that unscrupulous journal. Does anyone know the name?

    • @krane15 • 5 years ago • +100

      @@spiritofmatter1881 No, the scientific method can be skewed to provide misleading data; and if you don't believe there's a conspiracy in the medical/pharmaceutical system, there's a bridge in NY I can sell you for a good price. The pharmaceutical industry is mostly shady -- and medical doctors are their extended hand of national snake oil salesmen.

    • @fuNaN89 • 5 years ago • +124

      Journals are just newspapers for scientists.
      Publication houses want to publish things that people want to read. And people want to read interesting and new things, not replication studies.
      Even so, in modern times there are journals that accept and publish replication studies. These journals usually incur a much larger publication fee.
      Nothing is perfect. But at least everyone is trying.

  • @ModernGolfer • 1 year ago • +2266

    As a very wise man once stated, "It's not the figures lyin'. It's the liars figurin'". Very true.

  • @14MCDLXXXVIII88 • 1 year ago • +1374

    This happens because of the "publish or perish" mentality. I hate writing scientific papers because it is too much of a hassle. I love the clinic work and reading those papers, not writing them. In this day and age it is almost an obligation that EVERYBODY HAS TO PUBLISH. If you force everyone to write manuscripts, a flood of trash is inevitable. Only certain people who are motivated should do this kind of work; it should not be forced upon everyone.

    • @migenpeposhi6881 • 9 months ago • +74

      Indeed. You are totally right. I also feel that this over-publishing phenomenon has led these researchers into manipulating the data because it feels like a competition. In my opinion, researching is hard and sometimes frustrating, but you should always stay loyal to the facts. Otherwise, you are cheating for profit. Totally unethical. There are people who devoted their lives to this but still, they don't get as much recognition as these 'cheaters' do.

    • @karlrovey • 8 months ago • +21

      And not only is it "publish or perish," but you also have to pay the journals to publish your work once it is approved for publication.

    • @arvidbergman • 8 months ago • +35

      @@migenpeposhi6881 Certain people don't realise that "research" doesn't mean "prove something is true" but instead "see _if_ something is true"

    • @HyperVectra • 8 months ago • +3

      @karlrovey I choose perish

    • @justsayin5018 • 8 months ago

      I'll tell you exactly why. The academia is controlled by rabid looney slavemasons and rabid looney jesus. Since they are both mentally and intellectually retarded rendering them incapable of any intellectual manifestation, they force grad students to do it in the publish or perish way. They actually steal and collect it for their own records because of the following reasons: 1. They think that these are valuable intellectual property which should only belong to them. Hoarding basically 2. They will pass it off as their own intellectual property subsequently in a post apocalyptic era or a different realm 3. Being intellectually retarded, they prefer quantity over quality. More the better 4. They are basically mining your intellectual property by exploiting you for their use 5. Commercial reasons just in case your papers somehow fetch money
      This is actually good. You should publish more crap for the slavemasons to prevent then from getting hold of some actual research

  • @MrMakae90 • 7 years ago • +9959

    For people freaking out in the comments: we don't need to change the scientific method, we need to change the publication strategies that incentivize scientific behavior.

    • @2adamast • 7 years ago • +98

      You *believe* that we don't need to change the method

    • @MrMakae90 • 7 years ago • +550

      Adamast I did not once state my belief. I stated the point of the video, which many seem to have missed. Ironically, you have now missed the point of my comment.

    • @slendy9600 • 7 years ago • +433

      +Adamast No, we don't need to change the method. The problem (which was explained VERY clearly in the video) is that people aren't USING THE METHOD PROPERLY. Like when someone crashes a car because they were drunk: the car isn't broken, it's just being used incorrectly.

    • @2adamast • 7 years ago • +45

      You do state your beliefs. You think you know the point of the video, where you see other people just freak out. Personally I read: _Mounting evidence suggests a lot of published research is false._ Nothing more. There is, I admit, a short faith message at the end of the video.

    • @MrBrew4321 • 7 years ago • +299

      The scientific method: you go out and observe things, develop a hypothesis, and test the hypothesis. If you run a bunch of tests and come out with the wrong deductions, that is called flawed research methodology. Flawed research doesn't imply the concept of going out and observing, coupled with experimentation, is flawed; it just means you suck at being a scientist.

  • @qwerty9170x • 2 years ago • +5865

    I really think undergrads should be replicating constantly. They don't need to publish or perish, step-by-step replication is great for learning, and any disproving by an undergrad can be rewarded (honors, graduate school admissions, etc.) more easily than publication incentives can change.

    • @jatmo6991 • 2 years ago • +370

      Agreed. The peer review portion of the scientific method is its weakest link IMO.

    • @lucaslopez2091 • 2 years ago • +283

      I agree. Undergrads replicating classic experiments can also help with their education.

    • @chucknorris3752 • 2 years ago • +80

      Undergrads do perform classic experiments ^

    • @sYnSilentStorm • 2 years ago • +158

      When I was an undergrad in the physics department, we were (and still are) required to reproduce quite a few experiments that led to Nobel prizes. It's a fairly common practice, and you do this while working in a research group that is producing new research.

    • @qwerty9170x • 2 years ago • +181

      @@sYnSilentStorm How common are undergrads in research groups / replicating newer works? Well-trodden, once-groundbreaking, now-foundational Nobel prize works make sense for learning, but we would benefit from a more institutionalized path of replicating newer claims. Especially before graduate school, which is what populates research groups in my mind.

  • @etanben-ami8305 • 8 months ago • +279

    When I was in grad school for applied psychology, my supervising professor wrote the discussion section of a paper before the data was all gathered. He told me to do whatever I needed to do in order to get those results. The paper was delivered at the Midwestern Psychology Conference. I left grad school, stressed to the max by overwork and conscience.

    • @stevekru6518 • 8 months ago • +5

      What was the thesis? Asking to learn if the topic was especially controversial or important to particular interests.

    • @ButtersCCookie • 8 months ago • +8

      Why are you the minority? How can people who have the power to create Utopia choose self interest? It's like I'm in the Twilight Zone. Everybody knows and nothing is done. I wish I was never born and I hope I never am again.

    • @MidwesternCracker_2000 • 4 months ago • +2

      Why not mention the subject of the paper then, brother? You lying…?

    • @jougetsu • 3 months ago • +3

      Chances are high that the paper wasn't going to reveal anything new

    • @pinkz7911 • 3 months ago

      @@ButtersCCookie ??? nihilist looking goober

  • @-30h-work-week • 7 months ago • +10

    Sabine Hossenfelder: "Most science websites just repeat press releases. The press releases are written by people who get paid to make their institution look good, and who for the most part don't understand the content of the paper. They're usually informed by the authors of the paper, but the authors have an interest in making their institution happy. The result is that almost all science headlines vastly exaggerate the novelty and relevance of the research they report on."

  • @GiRR007 • 2 years ago • +4541

    "There is no cost to getting things wrong, the cost is not getting them published"
    It's a shame this applies to news media as well.

    • @NaatClark • 2 years ago • +4

      books are media...

    • @fernando4959 • 2 years ago • +92

      @@NaatClark maybe they meant news

    • @GiRR007 • 2 years ago • +7

      @@fernando4959 I did, I fixed it

    • @theintolerantape • 2 years ago • +44

      @@GiRR007 At this point I think it's indisputable that mainstream news is literally state propaganda.

    • @zehirmhann9326 • 2 years ago • +6

      @@NaatClark I don't know why it wouldn't apply to books

  • @josephmoya5098 • 2 years ago • +3039

    As a former grad student, the real issue is the pressure universities put on their professors to publish. When my dad got his PhD, he said being published 5 times in his graduate career was considered top notch. He was practically guaranteed to get a tenure track position. Now I have my Masters and will be published twice. No one would consider giving you a postdoc position without being published 5-10 times, and you are unlikely to get a tenure track position without being published 30 or so times. And speaking as a grad student who worked on a couple of major projects, it is impossible to be published thirty times in your life and have meaningful data. The modern scientific process takes years. It takes months of proposal writing, followed by months of modeling, followed by months or years of experimentation, followed by months of poring over massive data sets. To be published thirty times before you get your first tenure track position means your name is on somewhere between 25-28 meaningless papers. You'll be lucky to have one significant one.

    • @apolloandartemis4605 • 2 years ago • +124

      Damn. I really want to be a researcher in the natural sciences one day, with hopefully a Master's or PhD, but I must say seeing this is a little unnerving. Would you happen to have any advice for aspiring researchers?

    • @mohdhazwan9578 • 2 years ago • +147

      I studied in Japan. Japan has now changed its view of research: not to get to the top of the world, but to ask how the research can be applied and contribute to society. If you look at the rankings today, most Japanese univs are not at the top as they used to be. Now, univs from Korea, HK, China, and Singapore are climbing for top ranking. But every year these univs have suicide cases.

    • @cutefidgety • 2 years ago • +65

      @@mohdhazwan9578 friend, I was totally with you until you mentioned suicide. How is that even relevant to how research can be applied to society?

    • @ShapedByMusic • 2 years ago • +35

      Not really accurate though. None of my professors have been published 30 times and they've taught at Yale, Texas A&M, Brown; that's not really true at all.

    • @josephmoya5098 • 2 years ago • +39

      @@ShapedByMusic they might not have, but it is beginning to become a standard for new hires. 30 might be a bit of an exaggeration, but there is no way you are getting hired for a tenure track position with under 15-20.

  • @noirekuroraigami2270 • 1 year ago • +91

    The problem is people are supposed to be able to replicate the results by doing the experiment over again. If I can't find multiple experiments of a study, it's hard for me not to be skeptical.

    • @88porpoise • 9 months ago • +20

      The big problem with that is noted in this video: replicating work already done generally has no rewards.
      The time, money, and need to publish to advance their careers mean even the best-intentioned researchers are likely to avoid redoing someone else's study.

    • @zy9662 • 8 days ago

      Some experiments are very expensive in terms of time and money, but we shouldn't worry too much about our money going into publishing false positives. At the end of the day only the true positives will be the basis for further advancements; experimental science is built on previous results, and if those results are spurious, nature will stop you from discovering further real relationships. That's what this video fails to point out: the incremental nature of scientific knowledge in the natural sciences is a natural peer review system, and the best we could ever have hoped for. So keep funding science; in the end only the true relationships will stand the test of time.

  • @callumc9426 • 1 year ago • +376

    As someone who studies theoretical statistics and data science, this really resonates with me. I see students in other science disciplines such as psychology or biology taking a single, compulsory (and quite basic) statistics paper, who are then expected to undertake statistical analysis for all their research, without really knowing what they're doing. Statistics is so important, but can also be extremely deceiving, so to the untrained eye a good p-value = correct hypothesis, when in reality it's important to scrutinise all results. Despite it being so pertinent, statistics education in higher education and research is obviously lacking, but making it a more fundamental part of the scientific method would make research much more reliable and accurate.

    • @romanbucharist4708 • 1 year ago • +12

      I barely passed my statistics class, and I'm in biology. Even now I fear that I might not be able to interpret my data.

    • @Guizambaldi • 1 year ago

      Observational studies in the social sciences and health sciences are mostly garbage. People who don't do experiments or RCTs need to study a hell of a lot of statistics to get things right. And only recently did we get natural experiments to help us with good research design for those areas.
      Until my master's, I was relatively well trained in traditional statistics (I'm an economist) but unaware of natural experiments. I was completely disheartened by how awful my research was, given that different specifications in my observational studies were giving me different results. I only gained renewed enthusiasm in a much better quality PhD that taught me much better research designs.

    • @mihailmilev9909 • 1 year ago • +1

      @@romanbucharist4708 dang. So how's it going now?

    • @mihailmilev9909 • 1 year ago • +1

      Also happy new year soon, guys! 2023 greetings from Florida

    • @mihailmilev9909 • 1 year ago • +1

      @@romanbucharist4708 Also happy new year soon, guys! 2023 greetings from Florida
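    The "good p-value = correct hypothesis" trap described in this thread can be simulated directly. A minimal sketch in pure Python (the normal-approximation test and the sample size of 30 are illustrative choices, not anything from the video): draw both groups from the same population, so every "significant" result is a false positive.

    ```python
    import math
    import random

    def welch_z_pvalue(a, b):
        """Two-sided p-value from a normal approximation to the two-sample
        Welch test. Fine for illustration at n = 30; a real analysis would
        use a t distribution."""
        na, nb = len(a), len(b)
        ma, mb = sum(a) / na, sum(b) / nb
        va = sum((x - ma) ** 2 for x in a) / (na - 1)
        vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
        z = (ma - mb) / math.sqrt(va / na + vb / nb)
        return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

    random.seed(42)
    false_positives = 0
    for _ in range(100):
        # Both groups come from the SAME population: any "effect" is noise.
        a = [random.gauss(0, 1) for _ in range(30)]
        b = [random.gauss(0, 1) for _ in range(30)]
        if welch_z_pvalue(a, b) < 0.05:
            false_positives += 1

    print(false_positives)  # a handful of the 100 null experiments "succeed"
    ```

    With the conventional 0.05 threshold, roughly 1 in 20 null experiments clears it; a journal that publishes only positive results sees exactly these.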

  • @2ndEarth • 2 years ago • +1965

    My favorite BAD EXPERIMENT is when mainstream news began claiming that OATMEAL gives you CANCER. The study was so poorly constructed that it didn't account for the confounding variable that old people eat oatmeal more often and also tend to have higher incidences of cancer (nodding and slapping my head as I type this).

    • @hydrolito • 2 years ago • +70

      Maybe don't stand so close to the microwave oven when you cook it.

    • @kdanagger6894 • 2 years ago • +40

      Perhaps oatmeal isn't the problem. The problem could be a change in the way it is grown and produced. Contamination, soil depletion, pesticides, etc. There are always unaccounted-for variables in scientific research that invalidate the conclusions.

    • @NoTimeForThatNow • 2 years ago • +178

      @@kdanagger6894 the true answer was a lot simpler than that.

    • @hamsterdam1942 • 2 years ago • +65

      Don't forget the "vaccines-autism" one

    • @NoTimeForThatNow • 2 years ago • +27

      My favorite is probably the story of how it was proven that stomach ulcers are caused by bacteria and not by stress and spicy food. Big arguments for decades between the established science and its supporters vs the scientists discovering the truth. It illustrates how poorly established science treats scientists with new ideas, no matter how valid.

  • @Deupey445 • 3 years ago • +1789

    Gotta love when a published research article states that most published research findings are false

    • @cinegraphics • 2 years ago • +123

      Research has found that 73.2% of all statistics are made up.

    • @genepozniak • 2 years ago • +5

      @@cinegraphics Rumor, not research, found that.

    • @genepozniak • 2 years ago • +41

      It's a complete misunderstanding of how science research works. "Eureka" moments are rare. Instead, the truth is eked out little by little: many rounds of test, falsify, retest, improve until the truth is arrived at.

    • @4bidn1 • 2 years ago • +18

      @@genepozniak Can't tell if you're taking the piss or not........

    • @genepozniak • 2 years ago • +5

      @@4bidn1 If I gather your meaning correctly, no, I'm deadly serious. Think about it. If published research was the crap he said it is, where is all this successful bio-tech coming from?

  • @karldavis7392 • 1 year ago • +158

    This has influenced my thinking more than any other video I have ever seen; literally it's #1. I always wondered how the news could have one "surprising study" result after another, often contradicting one another, and why experts and professionals didn't change their practices in response to recent studies. Now I understand.

    • @thewolfin • 1 year ago • +10

      Weird how popular this video is recently (most top comments are from ~1yr ago)...
      None of these problems apply to the fields of virology or immunology... right?

    • @karldavis7392 • 1 year ago • +2

      @@thewolfin I have no idea how much or little different areas of study are affected. I assume the very worst ones are those that ask people what they eat, and ask how healthy they are. Beyond that, no clue.

    • @roddbroward9876 • 1 year ago • +7

      Yeah, I remember how eggs were bad for your health, then they were good, then they were bad again. Not even sure where the consensus on that is at this point.

    • @bridaw8557 • 11 months ago • +5

      Meta-analyses are difficult to conduct, but help weed out bad data and contradictory findings. Not enough of these are done.

    • @karldavis7392 • 11 months ago • +5

      @@bridaw8557 Some meta-analyses weed out bad data, while others average it in. The trick is carefully reviewing how the original studies were done.

  • @kunk8789 • 1 year ago • +18

    “p

  • @Vathorst2 • 7 years ago • +4896

    Research shows lots of research is actually wrong
    _spoopy_

    • @thulyblu5486 • 7 years ago • +234

      Science can actually falsify science... makes more sense than you might think

    • @RichieHendrixx • 7 years ago • +123

      Science is a battlefield of ideas, the Darwinism of theories, if you will. Only the best ideas will survive. That's why the scientific method is so powerful.

    • @nal8503 • 7 years ago • +100

      Unfortunately the "best" ideas today are those that will result in profitable gadgets and not exactly those that would best propel human knowledge forward.

    • @Ludix147 • 7 years ago • +58

      +Nal but that isn't science's fault, nor capitalism's. It is the fault of the consumers who value gadgets so highly.

    • @P07H34D • 7 years ago • +42

      Research shows that most statistics and published research are false, statistically speaking.

  • @psychalogy • 4 years ago • +2248

    It's almost impossible to publish negative results. This majorly screws with the top tier of evidence, the meta-analysis. Meta-analyses can only include information contained in studies that have actually been published. This bias to preferentially publish only the new and positive skews scientific understanding enormously. I've been an author on several replication studies that came up negative. Reviewers sometimes went to quite silly lengths to avoid recommending publication. Just last week a paper was rejected because it both 1. didn't add anything new to the field, and 2. disagreed with previous research in the area. These two things cannot simultaneously be true.

    • @shreksthongg • 4 years ago • +181

      This is very frustrating to hear.

    • @notme-ji5uo • 4 years ago • +20

      damn

    • @BestAnimeFreak • 4 years ago • +77

      "These two things cannot simultaneously be true."
      Yeah, I just wanted to say it.
      If 1. is true, 2. can't be, and vice versa.

    • @alanbarnett718 • 4 years ago • +56

      Don't get me started about meta-analyses. I've never heard of one being undertaken except where the backers have some kind of agenda. And I've never heard of one whose results didn't support that agenda. The entire concept is deeply flawed.

    • @spike4850 • 4 years ago • +13

      BestAnimeFreak that's what he said...

  • @StructEdOrg • 2 years ago • +78

    This is huge in my field, structural engineering, as people get way too lax about sample size. Thanks to testing things like full-sized bridge girders being incredibly expensive, sample sizes of 1-3 have become all too common, and no one does replication studies... Then that mentality bleeds over to things like anchor bolts that can be had for $5 apiece at any big box hardware store. It's getting dangerous out there!

    • @SuperSlayer76 • 2 months ago • +3

      I took a course on rigging moving loads. There I learned the Working Load Limit is 10% of the Minimum Breaking Strength.
      That makes me feel safe.

    • @leeduke5746 • 19 days ago

      @@SuperSlayer76 can you explain what you mean by rigging moving loads? Does this refer to loading up delivery vehicles and such, or something else? Or is this 10% rule used for many different applications?

    • @jj4791 • 5 days ago

      This is crane operator stuff.
      There is a certain "strain" that is allowed for fatigue reasons. (Strain is material stretch vs applied stress.) Exceeding that strain, while still below the breaking strength, will weaken the material with repeated use, causing failure below its published minimum strength.
      E.g. steel might have a tensile strength of 110,000 psi but a fatigue strength of only 63,000 psi (63 ksi / 110 ksi = only 57.27%).
      So, for conservative use, most industries require robust safety factors to account for fatigue, use, damage, etc.
      Commercial airliners are rated for +3.0 g x 1.5 safety factor at maximum weight.
      Bridges vary, depending on seismic requirements, etc. But it's not a good idea to cross an old country road bridge rated for 6 tons with a 12-ton vehicle. You might survive, but the bridge will be damaged.

    • @erenfe • 4 days ago

      @@jj4791 Strain is stretch per original length; if you stretch 2" in what was originally a 100" rod, you've got 2% strain, or 0.02

  • @LincolnDWard • 10 months ago • +18

    Science isn't the initial idea; it's the dozens of people who come along and test the idea afterwards.

    • @zy9662 • 8 days ago

      Agree. We shouldn't worry too much about our money going into publishing false positives. At the end of the day only the true positives will be the basis for further advancements; experimental science is built on previous results, and if those results are spurious, nature will stop you from discovering further real relationships. That's what this video fails to point out: the incremental nature of scientific knowledge in the natural sciences is a natural peer review system, and the best we could ever have hoped for. So keep funding science; in the end only the true relationships will stand the test of time.

  • @ColeJT • 7 years ago • +862

    An engineer with a masters in nuclear engineering, a mathematician with PhDs in both theoretical and applied mathematics, and a recent graduate with a bachelors in statistics are all applying for a job at a highly classified ballistics laboratory. Even being given the opportunity to interview for the job meant that each candidate was amply qualified, so the interviewers ask each the simple question, "what's one third plus two thirds?"
    The engineer quickly, and quite smugly, calls out, "ONE! How did you people get assigned to interview me!?"
    The mathematician's eyes get wide, and he takes a page of paper to prove to the interviewers that the answer is both .999... and one, without saying a word.
    The statistician carefully looks around the room, locks the door, closes the blinds, cups his hands around his mouth, and whispers as quietly as he can, "what do you want it to be?"

    • @janshegers7667 • 5 years ago • +31

      That is a good one!

    • @42Lailoken • 5 years ago • +34

      I thought the punchline was going to be 5/9

    • @ImperatorMo • 5 years ago • +11

      there is no bachelor in statistics -.-

    • @common_undead • 5 years ago • +12

      @@ImperatorMo there is, right??

    • @acetate909 • 5 years ago • +9

      @@ImperatorMo
      It's a joke. I can't tell if you're making a joke as well or trying to insert asinine information into his humorous comment.

  • @joshualevan • 4 years ago • +2586

    When something is published, it's proof that someone with the ability to publish things decided to publish that thing.

    • @pmj_studio4065 • 4 years ago • +138

      _...or is it?_

    • @edit3891 • 4 years ago • +92

      *Vsauce music starts playing*
      Also oh gosh why are there so many Vsauce videos in the recommendation bar to my right?

    • @marchdarkenotp3346 • 4 years ago • +8

      Sounds Marxian.

    • @austineberhart8535 • 3 years ago • +13

      The human aspect ruins statistics

    • @jaywoodley6639 • 3 years ago • +4

      Or are the funders of said research looking for certain results

  • @lynnemullen2278 • 2 years ago • +16

    It would be great to have a mention in this video of statistical methods that are being used to prevent issues with multiple hypothesis testing. The two most common methods control for the familywise error rate, like the Bonferroni correction, or control for the false discovery rate (aka the expected proportion of false positives), such as the Benjamini-Hochberg method. Reviewers for journals nowadays, at least in my field, are asking scientists who submit manuscripts to account for multiple hypothesis testing using one of these statistical methods.
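    The two corrections named in this comment can be written out in a few lines. A minimal pure-Python sketch (the p-values are made-up example numbers, not from any study):

    ```python
    def bonferroni(pvals, alpha=0.05):
        """Reject p_i when p_i <= alpha / m: controls the familywise
        error rate (the chance of ANY false positive)."""
        m = len(pvals)
        return [p <= alpha / m for p in pvals]

    def benjamini_hochberg(pvals, alpha=0.05):
        """Step-up procedure: find the largest k with p_(k) <= (k/m)*alpha
        and reject the k smallest p-values. Controls the false discovery
        rate (the expected proportion of false positives among rejections)."""
        m = len(pvals)
        order = sorted(range(m), key=lambda i: pvals[i])
        k_max = 0
        for rank, i in enumerate(order, start=1):
            if pvals[i] <= rank / m * alpha:
                k_max = rank
        rejected = [False] * m
        for rank, i in enumerate(order, start=1):
            if rank <= k_max:
                rejected[i] = True
        return rejected

    # Made-up example p-values from m = 8 hypothetical tests.
    pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
    print(sum(bonferroni(pvals)))          # 1 rejection (0.001 only)
    print(sum(benjamini_hochberg(pvals)))  # 2 rejections (0.001 and 0.008)
    ```

    Bonferroni guards against any single false positive and so rejects the least; Benjamini-Hochberg tolerates a controlled proportion of false discoveries and so keeps more findings.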

  • @jakebayer3497 • 10 months ago • +58

    I wanted to thank you for speaking up on this issue. The state of science today is a travesty, and I'm glad to finally hear someone acknowledge it, as I have been alone in the dark with these troubles for far too long. I know we are creating the foundation of something great, but the fact that the current state of science is not something we can rely on is simply not said or acknowledged. I'm so happy and so grateful that you have spoken about this issue and brought it to the public's attention. Thank you for your work, and congratulations.

    • @HyperVectra • 8 months ago • +1

      The model we have is great; the problem is anything can be hacked if that is your goal. If you build a better mousetrap, nature will build a better mouse. The problem is the incentive. There is not enough money to go around for your own research, and tenure is disappearing, so to do the work you want to do you need either to be well known/respected in your field OR to take funds to do work you don't want to do so you can do the work you do want to.

    • @zy9662 • 8 days ago

      We shouldn't worry too much about our money going into publishing false positives. At the end of the day only the true positives will be the basis for further advancements; experimental science is built on previous results, and if those results are spurious, nature will stop you from discovering further real relationships. That's what this video fails to point out: the incremental nature of scientific knowledge in the natural sciences is a natural peer review system, and the best we could ever have hoped for. So keep funding science; in the end only the true relationships will stand the test of time.

  • @saeedbaig4249
    @saeedbaig4249 Před 7 lety +1845

    This is why statistics should be a mandatory course for anyone studying science at university.
    Knowing how to properly interpret data can be just as important as the data itself.

    • @David-ud9ju
      @David-ud9ju Před 6 lety +71

      It's generally not done at undergraduate, but a massive part of a PhD is understanding the statistical analysis that is used in research. It is extremely complicated and would be way too advanced for an undergraduate stats course for, say, a biology student.

    • @Mirabell97
      @Mirabell97 Před 5 lety +100

      David, that's usually not taught in undergrad in the US? Wow, that surprises me, as a biology student from Germany, where we have to take a class in statistics in our bachelor's.
      It might be easier to understand during your PhD if you've heard about it before.

    • @forgotaboutbre
      @forgotaboutbre Před 5 lety +19

      As a graduate MS student in A.I., I found my research statistics course to be probably the most relevant in terms of learning to think properly as someone with an advanced degree. I was very much taken by surprise by statistics, pleasantly so.

    • @forgotaboutbre
      @forgotaboutbre Před 5 lety +17

      Mirabell97 I took an Engineering statistics class in undergrad in America. I've also taken graduate-level research statistics as a Comp Sci student, which was taught at a much, much higher and more relevant level.
      There are also high school statistics classes, which are even more watered down. So, as you say, many have indeed heard about it before.

    • @Mirabell97
      @Mirabell97 Před 5 lety +3

      forgotaboutbre glad to hear that :)

  • @darth0tator
    @darth0tator Před 5 lety +828

    we should open up a journal for replication studies only

    • @mashotoshaku
      @mashotoshaku Před 4 lety +48

      With full time staff who never take any compromised funding.

    • @warwolf6359
      @warwolf6359 Před 4 lety +29

      Who's gonna pay for a journal no one reads?

    • @Krystalmyth
      @Krystalmyth Před 4 lety +48

      Call it "Well yes, but also No"

    • @daveyjones3016
      @daveyjones3016 Před 4 lety +2

      darth tator definitely

    • @matthewvaughan8192
      @matthewvaughan8192 Před 4 lety +35

      warwolf6 Anyone who believes in the importance of replication studies? It's not like everyone who reads it will have read every study. Far from it. It could be multi-disciplinary as well, which could make it interesting to learn about other sciences.

  • @yourdiytechlife
    @yourdiytechlife Před 7 měsíci +4

    As a person who loves science but is not in the field, I've become quite disgusted by the lack of integrity shown by the university system. It has been corrupted to the core and needs to be cleaned out. It's become big business now, and it can't be trusted if profit is the driving motivation; that's not what universities are for.
    I have no issue with for-profit companies doing research and development, as long as everyone knows where it's coming from, knows it's driven solely by profit, and treats it as such.

    • @terrencedent3071
      @terrencedent3071 Před měsícem

      I definitely understand that feeling. As a scientist who has spent a disenchantingly long time in academia, I still have faith in individual scientists and the prevailing winds of science overall. Look how far the world has come in such a short span of time (for good and bad). That progress is built largely on a basis of good science; the bad stuff ends up getting filtered out. Universities absolutely operate for profit, but not everything that makes a profit is without merit in my eyes.

  • @adam-k
    @adam-k Před měsícem +4

    The problem is that academia is incentivized to publish rubbish. Every year 15 million people graduate and 3 million finish postgraduate studies, and every year they need to generate millions of papers of "new and novel" research. And if you work in academia, your livelihood depends on publishing.

  • @sarvaniv621
    @sarvaniv621 Před 5 lety +1541

    "The greatest enemy of knowledge is not ignorance, but the illusion of truth"
    - After reading replies I have no idea who the author is

    • @danielsonski
      @danielsonski Před 4 lety +135

      "The problem with quotes on the Internet is that it is hard to verify their authenticity." Abraham Lincoln (source: the Internet)
      (quote is not from Hawking; probably from Daniel J Boorstin)

    • @Acetyl53
      @Acetyl53 Před 4 lety +12

      I've seen a variant of this attributed to Voltaire.

    • @Alistair
      @Alistair Před 4 lety +47

      as a counterpoint, this quote has the illusion of truth about it. I like it!

    • @PhyreI3ird
      @PhyreI3ird Před 4 lety +62

      I remember seeing one like this attributed to Benjamin Franklin. Long story short, quoting individuals on the internet doesn't really seem too productive if you ask me. The idea should stand on its own and not hinge on someone seen as intelligent having said it. Then again, what do I know? I'm just Elon Musk.

    • @kyrlics6515
      @kyrlics6515 Před 4 lety +11

      @@danielsonski the authenticity doesn't matter, just the meaning behind it

  • @Christi3443
    @Christi3443 Před 2 lety +547

    As a PhD student, I can fully agree with this. I have come to hate the word "novel". No matter how correct and in-depth an analysis is, anything that doesn't turn the world upside down is always gladly dismissed with "not novel (enough)" as a killer argument. By now I've decided for myself that I don't want to have anything more to do with the academic world after the PhD. I love research, but I HATE academic publishing.

    • @rjyoungling220
      @rjyoungling220 Před 2 lety +31

      Honestly, it's such a shame that the system is losing out on terrific men like you. I hope (if you don't go into the workforce) you can continue as an independent researcher. Hopefully one day this system will change.

    • @boblynch2802
      @boblynch2802 Před 2 lety +8

      Consider starting your own publication journal?

    • @apolloandartemis4605
      @apolloandartemis4605 Před 2 lety +3

      Is there any way to pursue a research or research-like career without the problematic issues of academia?

    • @branchcovidian2001
      @branchcovidian2001 Před 2 lety +20

      Academic publishing is a nepotistic and simultaneously cannibalistic _industry._

    • @TheBartgry
      @TheBartgry Před 2 lety +7

      Nah Christian, just go work in industry or at a big company. Better money HA

  • @Goldcrusty
    @Goldcrusty Před rokem +16

    Thanks for making these videos they are such an eye opener for me. I never thought this would be an issue at all, now I understand.

  • @JesseHBrewer
    @JesseHBrewer Před rokem

    Excellent explanation! Similarly, showing a frequency spectrum from an FFT and interpreting a peak is meaningless unless the caption includes an admission of whether the region displayed was selected before or after the peak was noticed!

  • @MrFritzthecatfish
    @MrFritzthecatfish Před 4 lety +891

    Publish or perish ... and quality goes down the drain

    • @Thermalions
      @Thermalions Před 4 lety +61

      So much research exists purely so someone can get their PhD, or bring funds into their University to keep themselves employed. When the pressure is on, no-one really cares whether the research is useful or even reliable - just got to fill the coffers and get your research published and referenced to drive up your University's rankings.

    • @UncleKennysPlace
      @UncleKennysPlace Před 4 lety +10

      @@Thermalions Well, certainly, there are millions of theses out there, all required for the PhD. No way around that. Most of them are garbage.

    • @arvind31459
      @arvind31459 Před 4 lety +8

      @@UncleKennysPlace I would say around 95% of them are garbage

    • @dbmail545
      @dbmail545 Před 4 lety +8

      It has been estimated that it takes $5 million in funding to make a Ph.D in a STEM field. The research community has been corrupted from the base.

    • @Metrion77
      @Metrion77 Před 4 lety +2

      money is the root of all evil

  • @MikeM8891
    @MikeM8891 Před 7 lety +671

    I have an hypothesis. I think getting in car accidents decreases your chances of dying from cancer
    ...but increases your chances of dying in a car accident.

    • @noahwilliams8996
      @noahwilliams8996 Před 7 lety +11

      "I shall test this! >8/ " -Hopefully some scientist out there.

    • @7781kathy
      @7781kathy Před 7 lety +7

      Good analogy.

    • @IAMDIMITRI
      @IAMDIMITRI Před 7 lety +78

      False. Somebody just published a paper about that. You have a 100% chance of dying from cancer if you were in a car accident. It was a small sample size, about 1 man. He was a truck driver in Chernobyl and he had been in a small accident once. He died from cancer.

    • @Andromedon777
      @Andromedon777 Před 7 lety +12

      "You have a 100% chance of dying from cancer if you were in a car accident."
      so if you get in an accident, you will for sure die from cancer!

    • @7781kathy
      @7781kathy Před 7 lety +2

      ***** xD

  • @davidmackie3497
    @davidmackie3497 Před měsícem +2

    This 12 minutes should be mandatory viewing for every course that touches the slightest bit on any kind of science, engineering, statistics, political science, or journalism. Starting in junior high school.

  • @tamia8298
    @tamia8298 Před 7 měsíci +10

    Lying is at the center of so many American systems, it’s quite depressing

  • @briancreech9990
    @briancreech9990 Před 2 lety +852

    This seems more like a problem with the publishing system over the scientific method.

    • @nosson77
      @nosson77 Před 2 lety +13

      Yes, you are right, but the bottom line is that any new scientific theory is completely unreliable, since there is no other way to do science today than the peer-review method.

    • @davidwebb2318
      @davidwebb2318 Před 2 lety +110

      I think the problem is actually quite deeply embedded in academic research. Right from the selection of which projects get grant funding and resources onwards there is bias to show the result that the department head wants to be true. Their career, prestige and income relies on this. The careers, prestige and income of every person in every academic research department relies on only ever finding 'convenient' results.

    • @harrycooper5231
      @harrycooper5231 Před 2 lety +2

      It's human nature. That's what publishing is all about, exposing the study to other scientists, and seeing if it survives.

    • @blitzofchaosgaming6737
      @blitzofchaosgaming6737 Před 2 lety +18

      Publishing is about making money, so they have the exact same problem as scientists do. Go for money or go for truth. Since the publications wouldn't exist without money, they are making the only choice they can.

    • @viktorvondoom9119
      @viktorvondoom9119 Před 2 lety +2

      ​@@blitzofchaosgaming6737 Publishers earn (among other ways) money by selling subscriptions. They could publish anything

  • @NurseKillam
    @NurseKillam Před 6 lety +1853

    Interesting. I am adding this video to my research courses. My students don't always understand why we need to be critical of research.

    • @nosferatu5
      @nosferatu5 Před 6 lety +28

      Modern science is science. Just because you're emotionally upset over something in the news doesn't invalidate the scientific method.

    • @alittlelifeleft8232
      @alittlelifeleft8232 Před 6 lety +20

      J Thorsson you can't talk about the "lack of self critical thinking" after trying to say that you're smarter than someone else because your daddy is a manager at a science institute... you can't contract knowledge lol

    • @tibfulv
      @tibfulv Před 5 lety +43

      + nosferatu5
      The scientific method is indeed unsurpassed. But what this video is about (though it doesn't say so) is a relative newcomer in academia, the NHST, and its use is based on anti-science. Does fitting evidence to theory sound like science to you? I hope not; it most certainly does not to me. But that, plus anecdotes, is what a young academic told me is the norm in many of these fields. I'd order a retest or reinterpretation of every NHST study from 1940 onwards using the actual scientific method, complete with logic and falsifications, given these results. A failure rate of 64% is abysmal, yet predicted by Ioannidis.

    • @jmanfiji
      @jmanfiji Před 5 lety +16

      Don't forget something Derek left out, though: re-sampling fixes a lot of these issues. Run a sampling test including replications to get your standard deviations and get a p-value less than 0.05 (or even better, less than 0.01 or 0.001). Then rerun the sampling tests multiple times to see if you can repeat the p-value. THEN (most importantly) report ALL your experimental runs with p-values. If even one out of (at least) three to five separate independent runs has a non-significant p-value, take the entire study with a huge pinch of salt. Most reputable journals nowadays insist on this; at the very least, the peer reviewers worth anything will.
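
The multiple-runs check described in the comment above can be quantified: under a true null hypothesis, p-values are uniformly distributed, so a single run comes out "significant" at p < 0.05 about 5% of the time, while three independent runs all doing so happens only about 0.05³ ≈ 0.0000125 of the time. A minimal Python simulation of this (not from the comment; it assumes independent runs and a true null throughout):

```python
import random

random.seed(42)

ALPHA = 0.05
TRIALS = 200_000

# Under a true null hypothesis, p-values are uniform on [0, 1].
single_hits = 0   # experiments where the first run alone looks "significant"
triple_hits = 0   # experiments where three independent runs all do

for _ in range(TRIALS):
    runs = [random.random() for _ in range(3)]
    if runs[0] < ALPHA:
        single_hits += 1
    if all(p < ALPHA for p in runs):
        triple_hits += 1

print(f"False-positive rate, single run:  {single_hits / TRIALS:.4f}")   # ~0.05
print(f"False-positive rate, three runs:  {triple_hits / TRIALS:.6f}")   # ~0.0000125
```

Requiring replication before publication is, in effect, a way of demanding a far smaller effective false-positive rate without changing the nominal 0.05 threshold.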

    • @PP-yx4rv
      @PP-yx4rv Před 5 lety +7

      Your students have the same "publication incentives" as those publishing these "findings".

  • @tonym6566
    @tonym6566 Před 8 měsíci

    Given recent high profile scandals in the field, an updated version of the vid would be greatly appreciated

  • @robrobason
    @robrobason Před 9 měsíci +24

    Thanks for the analytical look at this topic. It seems timely with the recent resignation at Stanford University. It reminds me of a former colleague who shared the quip "publish or perish." In today's political world, the phrase "follow the science" is frequently and ignorantly applied, I'm glad to see science influencers such as yourself shedding light on this topic.

  • @danknfrshtv
    @danknfrshtv Před 2 lety +792

    Just started my PhD. This video has inspired me to call in consultants outside of my supervisory team to check my methods. I don't want to be wasting my time or anyone else's with nonsense research, and I'm honestly feeling a little nervous about it now.

    • @chertfoot1500
      @chertfoot1500 Před 2 lety +10

      What is your research area?

    • @angrydragonslayer
      @angrydragonslayer Před rokem +15

      Have you had a bad time with trying to be honest in science yet?

    • @whyplaypiano2844
      @whyplaypiano2844 Před rokem +27

      @@angrydragonslayer What does this even mean?

    • @angrydragonslayer
      @angrydragonslayer Před rokem +8

      @@whyplaypiano2844 what part do you not get?

    • @whyplaypiano2844
      @whyplaypiano2844 Před rokem +39

      @@angrydragonslayer The whole comment? It isn't phrased very well. Are you being sarcastic, or serious? If you're being sarcastic, it's either because you were trying to be funny, or because you're--for lack of a better word--salty that people aren't honest in science. If you're being serious, it's either a sincere question, or you genuinely think scientists are dishonest on purpose. Explain the mentality you had when you made the comment, I guess?

  • @pouncebaratheon4178
    @pouncebaratheon4178 Před 7 lety +599

    P values of 0.05 are a joke.
    Look, I'm going to sound biased, and that's because I am.
    This is a much bigger problem in fields like Psychology than in fields like Physics. The emphasis on constant publication and on positive results is still a massive problem. Researcher bias is still a massive problem (although still, not as much as in Psych/Sociology). The existence of tenure helps a little since researchers become able to research whatever they want rather than what the system wants.
    But we aren't claiming world-changing discoveries with P=.05. Derek brushed right past this like he was afraid of sounding biased but I'll repeat: 5 sigma is roughly a 1 in 3.5 million chance of getting a false positive purely by chance. Every physicist "knew" the Higgs had been discovered years before we finally announced it and started celebrating. But we still waited for 5 sigma.
    I did some research with one of my Psych professors in my freshman year. She was actually quite careful outside of the fact that her sample sizes were pathetic. We went to a convention where we saw several dozen researchers presenting the results of their studies, and it was the most masturbatory display I could have imagined. There were some decent scientists there, no doubt, but the *majority* of them were making claims too grandiose for their P-values and sample sizes, confusing correlation with causation, and most of all *failing to isolate variables.* If a freshman is noticing glaring problems in your research method, your research method sucks.
    The next year I had a Physics prof. who had a friend of mine and his grad students run an experiment 40,000 times. There is no comparison. We need a lot more rigor in the soft sciences than we have right now. Mostly because science. (But also because they're making us all look bad...)
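
The 5-sigma convention mentioned in the comment above can be checked directly: it is the one-sided upper-tail probability of a standard normal distribution, which works out to roughly 1 in 3.5 million, versus 1 in 20 for the p < 0.05 convention. A quick sketch in Python (not from the comment):

```python
import math

def one_sided_p(sigma: float) -> float:
    """Upper-tail probability of a standard normal beyond `sigma` deviations."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

# Particle physics' 5-sigma discovery threshold:
p5 = one_sided_p(5)
print(f"p at 5 sigma: {p5:.2e}")       # ~2.87e-07
print(f"about 1 in {1 / p5:,.0f}")     # ~1 in 3.5 million

# The common p < 0.05 threshold corresponds to roughly 1.96 sigma (two-sided):
print(f"two-sided p at 1.96 sigma: {2 * one_sided_p(1.96):.3f}")
```

The gap between the two thresholds is the commenter's point: a 0.05 cutoff tolerates false positives at a rate about 170,000 times higher than the physics discovery standard.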

    • @Madsy9
      @Madsy9 Před 7 lety +65

      And there's also the problem that experiments might be difficult to perform in fields outside physics. It can be expensive, and it requires a lot of planning and logistics. Not to mention that ethical dilemmas might stand in the way, which happens a lot in medicine. In a way, the physics field is blessed by not depending on studying people, and overall physics experiments are cheap; expensive particle accelerators notwithstanding.
      One thing I think Derek missed is to emphasize that one shouldn't be looking at single studies anyway. You look at multiple studies for trends and toss out the flawed ones and the outliers. Or even better, look for meta-studies.
      I'm also unsure if changing your model / what you measure *after* you have looked at the data is p-hacking. Such a mistake seems way more serious to me, as you're basically making your model fit a specific data set. Give me any data set and I can make a polynomial fit all the points. Basically, reusing the data after changing the model should be a crime :)

    • @bangboom123
      @bangboom123 Před 7 lety +64

      On this note, simply because you mention my discipline (Psychology), I will point out that Psychology lacks any kind of unifying theory that organises the predictions it makes. It's a lot easier to be a physicist trying to confirm the predictions of Einstein and Newton than a psychologist guessing at what the underlying mechanics of the mind are.

    • @erikziak1249
      @erikziak1249 Před 7 lety +6

      We are all biased, but even the ones who admit it find it hard to fight it. I guess we can never win.

    • @themightyleek
      @themightyleek Před 7 lety +34

      Another issue is that Sociology, Psychology and Economics are all black boxes that we don't know nearly enough about. In Physics, we can lower the temperature to close to absolute zero, and do the experiment in a vacuum. It is currently impossible to have that level of rigour in Sociology, Psychology and Economics. We still have a while to go.

    • @-Gorbi-
      @-Gorbi- Před 7 lety +9

      I don't see why you need to single out psychology, even this video gives examples of neuroscience and physiology research holding even lower rates of reproducibility. When you look at the success of psychotherapy for individuals, you will find most people find it an indispensable resource in their lives, unlike the health tips or the vague claims about tiny brain regions coming out of neurology and physiology.

  • @dziltener
    @dziltener Před rokem +1

    I love your background music. It's so early-2000s-techy in the best possible way

  • @nabarcheto7950
    @nabarcheto7950 Před 2 lety +12

    Science might be our best tool to study nature and find correct answers, but it definitely needs to be improved a lot! Sadly, to date there are still people refusing to publish negative results, journals refusing to publish replications of experiments, and the whole publishing system is very subjective... sometimes it runs on friendly relationships or established connections... now that is not the right scientific way! Thank you so much for acknowledging this problem and making people aware of it!

    • @AbhiDaBeatTheSecond
      @AbhiDaBeatTheSecond Před 7 měsíci

      The problem is not with science. The problem is with peer-reviewed journals and publications. They have unintentionally ruined science. Edit: there is a reason the scientists responsible for the progress of science are fringe theorists like Albert Einstein who did not come from the mainstream scientific community. There is no diversity of thought and expression allowed through peer-reviewed journals. Scientific progress is unintentionally halted by them.

  • @JavierBacon
    @JavierBacon Před 7 lety +383

    The lack of incentives for replication studies is obviously the biggest problem. The fact that some of those "landmark" studies were only attempted again recently...
    Hopefully, as people become more aware of this (it's happening), all those journals will change their mind about replications. They should release a separate issue for them, even.

    • @jarrethcutestory
      @jarrethcutestory Před 7 lety +6

      Agree. At some stage we will almost need to press "reset" and start again.

    • @Khanryu
      @Khanryu Před 7 lety +5

      Yea, especially since almost every article in their conclusion implies that "further research in the area is needed" :p

    • @JavierBacon
      @JavierBacon Před 7 lety

      ***** Significant or not... It's always significant in some way

    • @GOBIAS.INDUSTRIES.
      @GOBIAS.INDUSTRIES. Před 7 lety +3

      +JavierBacon I get what you're saying and agree. Even though your testing/conclusions don't have statistical significance, the findings are still significant. In most cases, it would still help increase our understanding of a subject if null results were published.

    • @Bourinos02
      @Bourinos02 Před 7 lety +1

      The best way to start is to get rid of journals telling us what is worth publishing and what isn't. Then kill the h-index/impact-factor that are genuine SHITS. Then put everything in open access, the universities have all the infrastructure necessary and could even save millions $ in subscription fees that are frankly incredibly stupid to begin with...

  • @Neo_to
    @Neo_to Před 2 lety +860

    I had so much trouble publishing when I corrected the p-values to counteract "p-hacking", or alpha inflation. Since I tested for multiple variables, I adjusted the models to minimise false positives and, lo and behold, almost all hypotheses that would have shown p < 0.05 were no longer significant.
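
One common way to do the adjustment the commenter describes is the Bonferroni correction, which counteracts alpha inflation by scaling each p-value by the number of tests performed. The comment doesn't name the exact procedure used, so this Python sketch and its numbers are purely illustrative:

```python
def bonferroni(p_values):
    """Bonferroni correction: multiply each p-value by the number of
    tests performed, capping at 1.0. Controls the family-wise error rate."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# Hypothetical raw p-values from testing 4 variables at once:
raw = [0.010, 0.030, 0.040, 0.200]
adjusted = bonferroni(raw)
print(adjusted)  # [0.04, 0.12, 0.16, 0.8]

# Only the first result survives the 0.05 threshold after correction:
print([p < 0.05 for p in adjusted])  # [True, False, False, False]
```

Without the correction, three of the four raw p-values would have looked "significant"; with it, only one does, which matches the commenter's experience of results evaporating once alpha inflation is handled honestly.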

    • @aravindpallippara1577
      @aravindpallippara1577 Před 2 lety +87

      Ugh, apparently negative results are so damn untouchable - the publication system really needs to change

    • @IndigoIndustrial
      @IndigoIndustrial Před 2 lety +28

      Stick all those negative findings in the supplementary figures of a somewhat related paper.

    • @Neo_to
      @Neo_to Před 2 lety +54

      @@aravindpallippara1577 Just imagine: it's not compulsory to adjust the p-values. It's not mandatory to counteract alpha inflation. How much published research must be (intentionally or not) non-significant, yet published as significant?

    • @ryaandnice
      @ryaandnice Před 2 lety +21

      But you kept your integrity.

    • @Sol-fu4nm
      @Sol-fu4nm Před 2 lety +44

      Imagine all the human time saved by being able to get information from someone else's research without having to do it yourself.
      Now imagine all the time lost by all the people who have to re-research what has already been done, because negative results don't show up in books.

  • @FreeWaves9
    @FreeWaves9 Před 9 měsíci +4

    There is pressure to publish significant results. As a research assistant, I know for a fact my professors engage in this. I was preparing the data I collected on a crop, and somehow the paper was published a week after I finished the data... it didn't make sense.

    • @terrencedent3071
      @terrencedent3071 Před měsícem

      Definitely doesn't make sense, as the peer review process alone takes months. Could it be that you were reproducing some past experiments, or gathering the same data to be used in a future publication?

  • @zhuoyuelyu
    @zhuoyuelyu Před rokem +15

    I'm so glad someone finally spoke out about something I have been worried about for so long. I thought maybe it was just my misunderstanding because everyone else seemed to be deeply engaging in this toxic/false process 🤔, but now I know it is indeed problematic. Thank you so much!

    • @MichaelLeightonsKarlyPilkboys
      @MichaelLeightonsKarlyPilkboys Před 7 měsíci

      @@anchordrop1476 That's very interesting. Could you tell me the other ways your professor would want you to draw conclusions? Also, do you know which types of models statisticians are favouring?

  • @kcwidman
    @kcwidman Před 7 lety +1866

    I feel like everyone in the world needs to watch this video. There's so much crap out there, and no one ever thinks past what they want to hear. This should help.
    This should be a Ted-Ed.

    • @AlexKnauth
      @AlexKnauth Před 7 lety +2

      +

    • @oM477o
      @oM477o Před 7 lety +45

      Do you love science and all its complexity, but wish it could be a little less complex, and a lot less scientific?
      Introducing TODD Talks...

    • @fatsquirrel75
      @fatsquirrel75 Před 7 lety +32

      You're right, people often only hear what they want, so this video would likely make that even worse. It gives people ammunition to discredit others with an informed view. People are going to see that if this is the result of honest science, then what happens with paid and biased science?
      To a wider audience, I think this video would likely do a lot more harm than good.

    • @fatsquirrel75
      @fatsquirrel75 Před 7 lety +10

      For me, If there was a video I'd like everyone to watch it'd be one purely on the benefits of science. The last thing we need to throw out to the general public is something that might look at first glace to highlight its flaws.

    • @EvilNeonETC
      @EvilNeonETC Před 7 lety +2

      Oh I "Heard" that if you eat butter, you'll be healthier than those who don't eat butter. Therefore it is correct. /sarcasm

  • @Qba86
    @Qba86 Před 2 lety +497

    Well, as a wise man once said, "Some people use statistics like a drunk would use a streetlamp -- not for illumination but for support".
    That being said, the most frustrating bit is that the journals and financing agencies actively encourage p-hacking and discourage replicating dubious studies.
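
The p-hacking incentive described above is easy to quantify: test enough independent hypotheses at the 0.05 level and a "significant" result is nearly guaranteed even when every null hypothesis is true (roughly a 64% chance by 20 tests). A short Python sketch (assuming independent tests):

```python
def familywise_error(n_tests: int, alpha: float = 0.05) -> float:
    """Probability of at least one false positive across n independent
    tests of true null hypotheses, each at significance level alpha."""
    return 1 - (1 - alpha) ** n_tests

for n in (1, 5, 20, 60):
    print(f"{n:3d} tests -> P(at least one 'significant' result) = "
          f"{familywise_error(n):.2f}")
```

This is why journals that reward only positive findings, and researchers free to slice their data many ways, will reliably produce spurious "discoveries" even with no one acting in bad faith.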

    • @PrzemyslawDolata
      @PrzemyslawDolata Před 2 lety +3

      I'm stealing this quote, it's amazing.

    • @MrJdsenior
      @MrJdsenior Před rokem +9

      There is nothing wrong with using statistics for support, as long as they are accurate and honest AND you don't cherry-pick them. That last part is often the biggest problem. I don't think pharma changes numbers any more, but they most definitely fund several studies and pick and choose what they want from each. That is not research; that is advertising. It's also changed a bit for them now that they have to have conclusions that at least have SOMETHING to do with the data collected. There was no requirement for that before, as I understand it.

    • @whatisahandle221
      @whatisahandle221 Před rokem +7

      The point about discouraging replication of dubious (or any) studies is important. There just aren't incentives to duplicate or refute someone else's findings, rather than come up with something "original".
      On a similar note, as an engineer who frequently volunteers as a judge at elementary through high school science fairs, I'm constantly dismayed at the emphasis that other judges (both somewhat "lay" and professional STEM judges) place on "originality"... at the elementary and middle school level, even, not to mention the high school level! (OK, maybe a district or regional winner at HS needs to be decently original, but...) Many people place originality and presentation skills (not to be entirely discounted, of course, but still not #1) above scientific inquiry, larger data trials, strict controls, and even just a good, solid use of the basic fundamentals of an experiment as taught in elementary science class.

    • @Qba86
      @Qba86 Před rokem +1

      @@whatisahandle221 I believe that in experimental physics it is customary to publish independent replications of breakthrough studies in comparatively high-impact journals (as well as to cite replication studies along with the original ones in future papers). Sadly this is more of an exception that proves the rule.
      In life sciences on the other hand there are so many subfields and so much competition, that far too many "original" yet shoddy papers (methodologically speaking) get published. My subjective impression is that this problem is slightly smaller in niche and/or "old-fashioned" subfields, where the odds of getting a reviewer who knows all the ins and outs of the topic are relatively high.

    • @stevenr5149
      @stevenr5149 Před 11 měsíci

      ​@@MrJdsenior They still do-and still do so much more. It is what it is.

  • @derikjbrandt
    @derikjbrandt Před 2 měsíci +2

    There's a newer peer reviewed study that shows that the study "Why Most Published Research Findings Are False" is actually false. The author of the new study, however, says that disproving the original study in fact proves that the original study was correct - most published research is false. So he's apologized for his error.

  • @paulster185
    @paulster185 Před 11 měsíci +5

    Years afterwards, the problem is still in no way solved; not much, if anything, has improved, but it's no longer talked about.
    I guess silencing critics worked well.

  • @johndriscoll7803
    @johndriscoll7803 Před 3 lety +818

    “Science is the interpretation of data. The data is usually crap.”
    Liam Scheff, science journalist and author

    • @siddharthrawat7205
      @siddharthrawat7205 Před 2 lety +2

      Ever heard of data wranglers?

    • @lixloon
      @lixloon Před 2 lety +8

      Science journalist and author and he doesn't know that "data" is a plural noun? FYI, "datum" is the singular.

    • @rickross9829
      @rickross9829 Před 2 lety +66

      @@lixloon Why exactly did you assume he is talking about one point of datum? It's the less logical explanation. I'll just assume you're a moron who wanted to let the world know something that makes you feel smart.

    • @kholofelolebepe9637
      @kholofelolebepe9637 Před 2 lety

      Sugasphere and the Lancet concur

    • @aphroditesaphrodisiac3272
      @aphroditesaphrodisiac3272 Před 2 lety +37

      @@lixloon data is grammatically correct. It's not possible to interpret a single datum.

  • @glenmartin2437
    @glenmartin2437 Před 2 lety +281

    Years ago, I questioned some chemistry methodologies. It was very frustrating, because nobody was listening. Then a publication came out discrediting the methods used and discrediting many journal articles. Somebody had listened, or came to the same conclusions I did. Corrections were made.

    • @aunieafifahabdulmutalib7410
      @aunieafifahabdulmutalib7410 Před 2 lety +10

      TRUE, I HAVE THE SAME EXPERIENCE.

    • @MrJdsenior
      @MrJdsenior Před rokem +4

      Vindicated!

    • @whatisahandle221
      @whatisahandle221 Před rokem +7

      When you say you questioned, did you establish a line of communication/collaboration with any of the authors or users of the method, working to test its limits, improve it, or compare it to other methods?

    • @voskresenie-
      @voskresenie- Před rokem

      What were the methodologies, and were any significant findings overturned / discredited as a result? Or did it only affect small findings, with larger findings still being correct (or considered correct) in spite of some methodological errors?

    • @yaroslavsobolev9514
      @yaroslavsobolev9514 a year ago +13

      You should have used an approach developed in cybersecurity research a long time ago for the same issue: notify the authors that in 3 months you are going to publish your findings about all their mistakes no matter what they do. Then the authors have 3 months to retract their papers on their own and/or correct them. This solution is called "responsible disclosure" of vulnerabilities. You see, in cybersecurity the problem of "nobody listens unless you publish" has been acknowledged a long time ago. You can do this anonymously as well: from my experience, scientists are not more ethical than the average human, and when you threaten their career and self-image they quite often freak out and try to hurt you back by all imaginable means -- just as many normal humans would in such a situation.

  • @Coreykoon
    @Coreykoon 2 years ago +9

    When he used a study to study the unreliability of studies…my brain raised its hand and waited patiently to be called on.

  • @rodrigocarnier8035
    @rodrigocarnier8035 8 months ago +1

    Nice video and a very important commentary on the flaws of research standards.
    However, it is fruitful to remember the lessons of "Structure of Scientific Revolutions" by Thomas Kuhn (the most cited work of social sciences in human history, by the way - for good or bad XD). A result is generally understood as wrong in hindsight, after failure not only of verification but also of the methodology itself (because usually it is the carpenter who is blamed, not the tools a.k.a. methodology, correct or wrong). The book discusses how the normal phase of science is necessarily a very focused study on limited topics, defined by the current paradigm. The paradigm informs not only the "right" theory, tools and criteria to do research, but also the "right" questions, i.e. the open problems of any field that are almost guaranteed to yield a solution after more "problem solving", using the standards of research in the current paradigm.
    It is only when the paradigm starts failing consistently to solve new problems and becomes all patched up by "corrections", that researchers start adventuring on new topics outside of the "right" questions. It is like getting out of the green box in the upper left in Veritasium's figure and exploring the red outside, but only because the green itself is proving to be much redder than previously thought, and researchers now have little to lose by "incorrectly" making the "wrong questions".

  • @DaveGarber1975
    @DaveGarber1975 2 years ago +267

    It's definitely bad in medicine. John Ioannidis has conducted "meta-research" into the quality of medical research and concluded that most medical research is severely flawed; in fact, "80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials." Wow. There are also problems with dealing with complex systems, and with challenging scientific orthodoxy into which some scientists have invested their entire careers.

    • @theninja4137
      @theninja4137 2 years ago +4

      As an engineering student specializing in med tech, I have the strong impression that med publications are less elaborate, lower quality and contain less explanation than eng ones

    • @nmarbletoe8210
      @nmarbletoe8210 a year ago +6

      I'd say 25% wrong is really, really good for something as complex as medicine

    • @BobKerns4111
      @BobKerns4111 a year ago

      I just *knew* John Ioannidis would come up here. If you want to see "mostly wrong", take a look at his record on COVID-19 predictions, off by two orders of magnitude.
      His sensationalized contrarian kick has gotten people killed. There are many better, more thoughtful critics of the state of research.
      I'm not saying he's always wrong. But he does go for the sensational, and is often sensationally wrong, and doggedly so.
      A lot of progress has been made, especially in medicine, with pre-registration of trials, data & code repositories, etc, and I'll give him credit for helping kick-start some of that. (Preprints seem to me to be a move simultaneously in the right and wrong directions!)
      But statements like "80% of non-randomized studies turn out to be wrong" isn't even well-defined enough to be falsifiable. It's a non-scientific statement. And meta-research, like meta-analysis, is itself extremely subject to selection bias. Each need to be approached with great care and skepticism.
      A lot of what he says is not controversial. I'm not here to demolish John Ioannidis, but to urge people to steer clear of his broad sensationalized generalizations, and look carefully at the arguments he makes. Apply the same critical standards that he urges to his own research.
      Sometimes the kettle calls the pot black, but black is still black.

    • @MrJdsenior
      @MrJdsenior a year ago +4

      There is also the problem that you can test one medication, to some degree, but if you start talking about interactions between different medications in different people, most of the bets are definitely off. People discount "anecdotal" data completely, but if that data comes from doctors reporting on those medications, it definitely has value, as well, IMHO.

    • @squirrelpatrick3670
      @squirrelpatrick3670 a year ago +1

      The vast majority of medical research is a shell game, run by pharma. You can tell little by study conclusions: you can actually tell something by the set of study parameters. Where study parameters will produce an unwanted conclusion, the research doesn't happen or isn't published. Example: no clinical or epidemiological evidence for the safety of aluminium adjuvants in vaccines. Draw your own conclusion.

  • @brianjonker510
    @brianjonker510 2 years ago +169

    It should be one of the requirements for getting a Bachelor's in a field of science to do a replication study. Even with small sample sizes.
    It is a useful experience and pattern of thinking to carry into adulthood.
    Furthermore a meta-analysis using dozens or hundreds of further experiments would shake out all incorrect P values

    • @user-fk8zw5js2p
      @user-fk8zw5js2p 7 months ago +8

      And honestly, meta-analyses should be just as important to publish as novel studies, as there is so much data out there and it has never been easier to analyze quickly.

    • @Heyu7her3
      @Heyu7her3 3 months ago +2

      That's essentially what they do in a lab class

  • @DanWhe
    @DanWhe 8 months ago +2

    The fact that null results are not published also skews studies, because the data in those is never referenced, which would otherwise dilute the influence of the positive results that do get referenced.

  • @merylmarceline9687
    @merylmarceline9687 11 months ago +2

    Thank you so much, it's such a thought-provoking video.
    The point is clear and striking, the cost of being wrong is always low but the real cost is not getting published (cruel scholar world).
    The most interesting question raised here is "how often we delude ourselves when there's no statistics and peer review", which again reminds us that most of us just live in our self-constructed little bubbles and continue building things based on what we subjectively believe.

  • @AuliaAF
    @AuliaAF 2 years ago +108

    In Indonesia, many supervisors in medicine reject replication studies, expecting new studies and publications, and therefore cause us to have nearly zero epidemiological data. We prefer "good-looking research" to actually researching anything. Better not to research than to not look good

    • @Szszymon14
      @Szszymon14 2 years ago +1

      Your supervisors are speaking the language of gods

  • @Boahemaa
    @Boahemaa 3 years ago +67

    "Data doesn't speak for itself, it must be interpreted"~ and there we have it people the point of this thesis.

  • @Marc-uw4lw
    @Marc-uw4lw 10 months ago +1

    I believe a good example of selective interpretation of data is in the small print at the bottom of TV ads for things like cosmetics and pharmaceuticals where it says “83 of 127 people agree”. It’s a section of a larger study group that has been narrowed down to the most favourable ratio possible, hence the small, odd sample number.
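The "most favourable subgroup" trick described above is easy to demonstrate in a few lines. This is a sketch with made-up numbers: the 500-response survey, the candidate subgroup sizes, and the 50% true agreement rate are all hypothetical.

```python
import random

random.seed(1)

# 500 hypothetical survey responses where the true agreement rate is 50%.
responses = [random.random() < 0.5 for _ in range(500)]
overall = sum(responses) / len(responses)

# Scan many overlapping subgroups and keep the most flattering ratio.
best_rate, best_start, best_size = 0.0, 0, 0
for start in range(0, 351, 10):
    for size in (100, 127, 150):
        group = responses[start:start + size]
        rate = sum(group) / size
        if rate > best_rate:
            best_rate, best_start, best_size = rate, start, size

agree = sum(responses[best_start:best_start + best_size])
print(f"honest rate: {overall:.0%}")
print(f"cherry-picked: {agree} of {best_size} agree ({best_rate:.0%})")
```

Even though every response is a fair coin flip, scanning enough overlapping subgroups reliably turns up one with a ratio well above the honest rate, which is exactly what a small, odd sample number in the fine print can hide.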

  • @saiwolaw2842
    @saiwolaw2842 16 days ago

    These things have always been in my head. Thanks for saying it out loud and being articulate about this.

  • @atomicnolxix
    @atomicnolxix 2 years ago +81

    When the contestants found out one of the walls would contain an erotic image, they enabled their inner chakras to get it right

  • @SpruceOaks
    @SpruceOaks 2 years ago +346

    Short answer: yes. That was a real wake-up call when I was doing my Masters degree literature review - how often university professors push publications using "academic standard" statistical analysis to come to a demonstrably wrong conclusion. It is scary, not only how often this was the case, but how often these studies would be cited and their misinformation spread through academic circles without question.

    • @davidwebb2318
      @davidwebb2318 2 years ago +27

      Most academics doing the research are young and inexperienced in the real world. The people managing the research departments have a vested interest in only promoting research that finds 'convenient' results that will enhance their chance of getting bigger budgets next year.
      Maybe we should take people with 30 years of industry experience and put them in charge of research in academic institutions.....

    • @haraldtopfer5732
      @haraldtopfer5732 2 years ago +16

      @@davidwebb2318 Unfortunately true. If, as a young scientist, you talk to the head of your lab or department about your work and what your ideals are or what your idea of good science is, you will quickly be taught. You don't know anything! No, you really don't know what is important in science. What you know even less about is what "good work" is and what is expected of you. The most important thing is neither "good science" nor a prestigious publication. At the very top of the hierarchy is an accepted proposal letter! No funding, no research. All other output must be directed towards this goal and is just a means to an end. The larger the organisation (Pareto Principle), the greater the pressure to meet this requirement. Exceptions exist.

    • @davidwebb2318
      @davidwebb2318 2 years ago +22

      @@haraldtopfer5732 I agree. Academia has become a big industry with big careers to support. The priority of the people heading up departments is to build bigger empires, secure bigger budgets and increase their personal exposure/status. This secures their jobs and the jobs of their colleagues/friends. That trumps everything else in many cases.
      It is really obvious in the climate change industry where nobody ever proposes or approves any budget for spending on anything that doesn't support the pre-existing narrative. They carefully choose and support only work that adds weight to the doom stories because this expands the 'importance' of their industry. Their future careers and their salary depends on doing it so they embrace it and steer all the research in one direction. The system is really flawed and has created a monster where half the world are intent on economic suicide to cure a problem that is relatively minor and will only have any impact over generations.

    • @aravindpallippara1577
      @aravindpallippara1577 2 years ago +20

      @@davidwebb2318 Well the thing is, virtually every study that disputes climate change is usually well funded itself. There is a vested interest among the folks with resources to push that narrative as well; they have resources and profits they can lose. Not to mention these studies also have to perform pretty big mental gymnastics as the evidence mounts.
      Money does make the world go around after all.
      Wouldn't you agree?

    • @davidwebb2318
      @davidwebb2318 2 years ago +8

      @@aravindpallippara1577 No, I wouldn't agree. The climate change industry is mostly based on an emotional sales pitch pushed by celebrities and political activists who haven't got the first clue about the actual data concerning the climate.
      This is obvious because the main activists are pushing the idea that humans will be extinct in under 10 years. Politicians who are too weak-minded to work out this is complete lunacy have simply demonstrated their lack of intellectual horsepower by going along with it.
      Money does not make the world go round. It is just a convenient method of exchange used to buy and sell goods and services. Of course, the political activists that are using the climate change narrative to promote their political agenda will try to persuade you that money is evil (or that only evil people have money so they should take it and give it to people they consider more worthy).

  • @jonathanadams8255
    @jonathanadams8255 8 months ago +1

    The philosophical theory of underdetermination: we can never know what the answer truly is, but with science we can slowly eliminate some things that are not true. As a chemist working in pharmaceuticals, it's a hard truth to swallow.

  • @salamatunnafiah9033
    @salamatunnafiah9033 10 months ago +16

    When I was in undergrad, I was so obsessed with doing postgrad for the sake of being "in search of knowledge". Now I am a master's student, and although I love the university library, I just feel "empty" inside, knowing that we cannot rely on "human knowledge"... and I am even more interested in religious studies. Looking for "pure" knowledge is hard today...

    • @bubblesbomb8949
      @bubblesbomb8949 4 months ago

      As someone interested in postgrad, would you recommend it to someone fully expecting the absurdity of human knowledge as an absolute truth?

    • @salamatunnafiah9033
      @salamatunnafiah9033 4 months ago

      @@bubblesbomb8949 yes, just do it. I just realized that, it's part of the learning journey....

  • @jossbox4794
    @jossbox4794 7 years ago +137

    How can we tell this research isn't wrong?

    • @veritasium
      @veritasium  7 years ago +85

      oh the endless loop - there have been a fair number of attempted replications recently that have found pretty dismal results. When you consider they are all in agreement, that biases exist, that incentives are skewed, that .05 is not all that low, that p-hacking occurs, it is fairly unambiguous that a sizeable fraction (if not a majority) of research is actually false.

    • @leonardokallin9135
      @leonardokallin9135 7 years ago +1

      +Veritasium Wouldn't the odds of the exception being wrong be higher than the odds of the norm being wrong? There's a reason why there's such a thing as peer review, after all. The scientific model is there to make sure you can replicate the results and methods of published papers. If something doesn't stand up to peer review, it's bad science, as it means something didn't add up.

    • @user-yd6qq5pr7c
      @user-yd6qq5pr7c 7 years ago +1

      you don't

    • @LethalSword666
      @LethalSword666 7 years ago +2

      sadly low sample sizes are a very common problem due to lack of finances or various other reasons.

    • @IceMetalPunk
      @IceMetalPunk 7 years ago +1

      That's the point: when deciding which papers to publish, the scientific method isn't being respected. There's selection bias tending toward publishing mostly positive results and not the inconclusive ones, and there's a complete lack of respect for replication since those studies are often rejected outright.
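Veritasium's reply above mentions p-hacking and the weakness of the 0.05 threshold. One common form of p-hacking, peeking at the data and stopping as soon as p dips below 0.05, can be simulated directly. This is a sketch: the fair-coin "study", batch sizes, and number of peeks are arbitrary assumptions, and the p value uses a plain two-sided normal approximation.

```python
import math
import random

def p_value(successes, n):
    # Two-sided z-test against a fair-coin null (normal approximation).
    if n == 0:
        return 1.0
    z = (successes - 0.5 * n) / math.sqrt(0.25 * n)
    return math.erfc(abs(z) / math.sqrt(2))

def run_study(peek, batches=20, batch_size=10):
    # Data generated under the null: a fair coin, so any "effect" is spurious.
    heads = n = 0
    for _ in range(batches):
        heads += sum(random.random() < 0.5 for _ in range(batch_size))
        n += batch_size
        if peek and p_value(heads, n) < 0.05:
            return True          # stop early and declare a "finding"
    return p_value(heads, n) < 0.05

random.seed(0)
trials = 2000
honest = sum(run_study(peek=False) for _ in range(trials)) / trials
peeking = sum(run_study(peek=True) for _ in range(trials)) / trials
print(f"false-positive rate, fixed n:       {honest:.1%}")   # near the nominal 5%
print(f"false-positive rate, peek-and-stop: {peeking:.1%}")  # inflated well above 5%
```

The only difference between the two arms is the freedom to stop collecting data at a convenient moment, yet that alone multiplies the false-positive rate.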

  • @Superfluous.
    @Superfluous. 3 years ago +457

    As a researcher, I find those numbers very conservative, even when I'm 4 years late to the video.
    I also feel like there's a reason missing for the false-positive results category which is a deviation from the main objective. Some true positive results shouldn't be considered as such when you make a detailed analysis of their methods, statistics and final findings, for the pure reason that, mid-study, some parts of the objective were changed to accommodate the findings. This is also an issue that pisses me off, especially in my research field where there's such a huge mix of different scientific areas that it's next to impossible to verify anything at all in detail because everyone just pulls the results their way.
    As some people already mentioned here, some authors do withhold critical pieces of information for citation boosts. If people can't reproduce something from a study, they can neither be proved wrong by the paper's information alone (as long as it checks out in theory) nor can they be denied authorships and citations from other papers, which effectively boosts their 'worth'. The fact that researchers are evaluated using citation/authorship numbers is also one of the leading reasons why false positives exist in such large numbers (I don't believe false positives are only ~30% for a damn second, but this is my biased opinion) and why some papers, even though everything checks out in theory, can never be truly reviewed in a peer-to-peer manner on the practical-results side of things.
    Anyone who works in research knows there's a lot of... misbehaving on most published works, regardless of the results. Therefore I have to disagree with the fact that researchers are fixing some of the problems. It's not that we don't want to fix them, but because the system itself, as it stands, is essentially rigged.
    We can sift through p-hacked results. We can't, however, sift through p-hacked results if the objective is mismatched with the reported findings (if someone told me that was involuntary, I'd believe them because I know how easy it is to deviate from it) nor from a paper which withholds critical information. And the worst part about it is that this is further fueled by higher-degree theses such as master's or PhDs where it's mandatory to cite other people for their work to be 'accepted' as 'valid'.
    You have to approach published works with a very high level of cynicism and with some time and patience on your hands if you're even dreaming of finding a published work that remotely fits your needs and actually shows a positive result on most scientific areas.

    • @lordspongebobofhousesquare1616
      @lordspongebobofhousesquare1616 3 years ago +27

      I hope someday a scientist gets very rich and decides to devote his/her money and time to creating a healthier scientific publishing environment.

    • @pattygould8240
      @pattygould8240 3 years ago +25

      When I finished my undergrad, I worked compiling a database for a retired professor. One day he asked me to find an article that had been recommended by one of his peers during review. He already had the author and subject so it was pretty easy to find and got me a nod in the paper for my invaluable research assistance. The paper was on how long bones had been drawn incorrectly in every medical text forever. Someone had drawn it incorrectly once and everyone had copied the original mistake.

    • @ejipuh
      @ejipuh 3 years ago +4

      @@pattygould8240 What happened with the paper? Is it available?

    • @pattygould8240
      @pattygould8240 3 years ago +6

      @@ejipuh I have a copy that he gave me when it was published but it's packed away somewhere and I frankly don't remember what journal it was published in. I worked for him summer and fall 2004 so that's when it was published.

    • @pattygould8240
      @pattygould8240 3 years ago +6

      @Luís Andrade doctors have been learning from those textbooks for over a century, the mistake in the drawing didn't have an impact or someone would have pointed it out sooner. It took a scientist studying bones to point out the error.

  • @MikkoRantalainen
    @MikkoRantalainen 2 years ago +2

    I think a simple fix could be that *publishers would be held accountable for any research they publish.* As a direct result, they would be *forced to publish any replication research that produces results different* from previously published research.
    If publications do not want to publish lots of replication research, they should improve their own quality checking (basically a more elaborate peer review) before publishing.

  • @andreamarino95
    @andreamarino95 2 months ago

    Thank you very much for the insight. As an academic, it is important to keep that in mind. I would like to say the situation can be much worse than you outlined, because science grows on what has been discovered before. Important findings are likely to be used in subsequent experiments, even if just as biases, invalidating an exponential (logistic?) number of results. Replication should occur quickly, with a speed proportional to the importance of the study (the exponential growth estimate).

  • @hunterterrell9930
    @hunterterrell9930 4 years ago +97

    This is the kind of material YouTube needs more of

    • @rodrigo-vl7bi
      @rodrigo-vl7bi 2 years ago +1

      It's extremely ironic, but YouTube encourages certain types of content, just like journalism encourages certain kinds of results in science

  • @SustainableHuman
    @SustainableHuman 7 years ago +362

    I'm curious about the comment you made at the end that "as flawed as our science may be, it is far and away more reliable than any other way of knowing that we have."
    I'd love to see a video on:
    1) What are the "other ways of knowing that we have?"
    2) A critical evaluation on why science is better than those "other ways of knowing"
    ~ A loyal fan

    • @alveolate
      @alveolate 7 years ago +34

      well, there's using logical deduction to eliminate improbable causes.

    • @SkizzlePiano
      @SkizzlePiano 7 years ago +7

      have you ever heard of IB theory of knowledge? These are exactly the type of questions we discussed in class in high school, it really opens your mind

    • @SustainableHuman
      @SustainableHuman 7 years ago

      That's the scientific way of knowing, isn't it?

    • @SustainableHuman
      @SustainableHuman 7 years ago +2

      What does the "IB" stand for?

    • @Krashoan
      @Krashoan 7 years ago +3

      International Baccalaureate

  • @amphernee
    @amphernee a month ago +1

    While getting a psych BA, I wondered why journals are pretty much unregulated. The fact that a journal can publish findings then refuse to publish studies that disprove or refute them is troubling to say the least.

  • @nickjohnson410
    @nickjohnson410 6 months ago +2

    This video needs to be revisited in 2023 in light of the Marc Tessier-Lavigne scandal and the Greg Semenza retractions, just to name two of MANY academic frauds that have been exposed since this video was published.

  • @samsonlovesyou
    @samsonlovesyou 5 years ago +238

    Outstanding video. It wasn't until I really started getting into research at MSc level that I began to realise so much of the research I was appraising was deeply flawed. At undergrad, I assumed that it was ME who was flawed every time I saw a glaring error. At that level, you don't have the confidence to criticise the work of experienced researchers.

    • @Blirre
      @Blirre 5 years ago +30

      We had to write a literature review on a chosen subject for our B.Sc. I read through dozens of articles on my subject and to my horror I realized that the results weren't in line at all. It seemed that some scientists had worked with rats and some with mice and they got different results. Still, many sources quoted each other regardless. It was difficult to piece through that mess and know who to trust.

    • @alessiodebonis2710
      @alessiodebonis2710 4 years ago +9

      industry influence is everywhere unfortunately. Climate science is an example of that. It's sad because you grow up learning to trust others. Now it seems so confused that we are starting to rely on religion, faith, myths, and so on. In Italy the misinformation campaign is tragic 😷

    • @kevinbyrne4538
      @kevinbyrne4538 4 years ago +30

      An undergraduate whom I knew, spent months trying to replicate a chemical synthesis that had been published in a journal. He failed repeatedly. Finally he contacted the authors. They told him that there was a typographical error in the article: the concentration of one chemical was listed as being 10 times higher than it was supposed to be. With that correction, his synthesis worked on the first attempt.

    • @ephemera...
      @ephemera... 4 years ago

      Kevin Byrne Does this mean that scientific journals don’t publish errata?

    • @kevinbyrne4538
      @kevinbyrne4538 4 years ago +1

      @@ephemera... -- The errata often don't appear until months after the original article. And the errata are often buried. It would also be helpful if authors checked the galleys.

  • @Iuwl
    @Iuwl 4 years ago +181

    Each and every time I see some article that says "According to studies by scientists...", I always read it with skepticism.

    • @chriszeeman5647
      @chriszeeman5647 3 years ago +42

      Good! Always read with skepticism. That only benefits science.

    • @alvinlepik5265
      @alvinlepik5265 3 years ago +5

      Yes, that's the point ;)

    • @AndrewDRoyappa
      @AndrewDRoyappa 3 years ago +21

      which means how much more skeptical we should be of everything else, "alternative news" sites, alternative medicine, health blogs, mom blogs, etc etc...

    • @ScientificReview
      @ScientificReview 3 years ago +1

      Read with skepticism and report them to the authorities!

    • @sarenareth689
      @sarenareth689 3 years ago +6

      And then to think that scientists are bound to produce more truth than anyone else, you need to question everything and everyone around you

  • @RenegadeRacoon
    @RenegadeRacoon 7 months ago

    I was reviewing statistics in articles for my wife's thesis. In two of those articles, the team used the sexagesimal system to measure time, then computed statistical markers as if the values were in the decimal system. It hurt my eyes to see "an average time of 15min 89sec" in the article, not because I'm an engineer, but because anyone with an elementary education would notice something was wrong.

  • @ninjakiwigames5418
    @ninjakiwigames5418 9 months ago +5

    Well this aged like wine

  • @violetlavender9504
    @violetlavender9504 7 years ago +89

    Anyone who reads articles online about "new research" needs to watch this

    • @ps3master72
      @ps3master72 7 years ago +4

      or people who hear science quoted (sometimes incorrectly) by Today Show, Dr. Oz, even Time Magazine etc.

  • @Talik13
    @Talik13 7 years ago +253

    I'd like to point out, as he hints at near the end, that the underlying reason for many of these "p-hacked" studies is human nature and not the scientific process itself. Stopping at a sample size you find convenient, counter-studies not getting published, people only being interested in unique findings; these are all human failings, a manipulation of the scientific method.

    • @d4n4nable
      @d4n4nable 5 years ago +15

      There's no "the scientific method." That's a complete myth. You should read "Against Method" by Feyerabend. Even if he goes overboard in his argument (which I wouldn't necessarily agree he does), it's naive to think of a defined, precise method in which science *is* done, or *ought to be* done. It's really "anything goes," as long as you convince your peers. Hopefully truth is convincing.

    • @neurofiedyamato8763
      @neurofiedyamato8763 5 years ago +19

      @@d4n4nable That's wrong. There are no set methods, but there is a general guideline for seeking the truth. As OP said, if it weren't for bias in publishing, the system would work fine. The scientific method is more a way of thinking and a general guideline for how truth can be determined.

    • @d4n4nable
      @d4n4nable 5 years ago +7

      @@neurofiedyamato8763 You act as if epistemology were solved. There's no consensus as to how to get to "truth." There are various methodologies implemented in various fields of research.

    • @arhamshahid5015
      @arhamshahid5015 5 years ago +6

      @@d4n4nable I can just smell the narcissism from across the screen.

    • @d4n4nable
      @d4n4nable 5 years ago +5

      @@arhamshahid5015 Narcissism? Why? Because I'm pointing to a classic contribution to the philosophy of science? It's not that I wrote it. I just read it, like thousands of others. How in the world is that narcissistic?

  • @garychap8384
    @garychap8384 a year ago +3

    "Is most published research wrong?"
    According to most published studies on the matter.... yes!

  • @Bobby-yagus
    @Bobby-yagus 2 months ago

    Thank you for putting this together and sharing in a way that will reach more people!! Kudos 😊

  • @PLAboy64
    @PLAboy64 7 years ago +1080

    I think people without any knowledge of statistics will not get the point of this video. And that is most people.

    • @veritasium
      @veritasium  7 years ago +377

      I disagree and that's why I made it!

    • @fieryimmortal
      @fieryimmortal 7 years ago +132

      I think it's the other way around. I think people with a lot of knowledge of statistics actually don't learn anything new from this, but people who don't will learn so much more.

    • @PLAboy64
      @PLAboy64 7 years ago

      Veritasium Thank you for making this video! Big fan. I just think it is a good opportunity to introduce statistics to more people. Come to think of it, maybe squeezing in how the p value works in this video is not a good idea.

    • @CoconutJewce
      @CoconutJewce 7 years ago +4

      Not really. It's not that hard of a concept to understand, and I would hope that of the 3.6 million subscribers (and the fraction of those that actually watch his videos consistently), most of the people would know what a p-value and statistical significance is.

    • @H0A0B123
      @H0A0B123 7 years ago +1

      people who finished high school should understand, and of course if they are interested in science they are more likely to understand.

  • @polarisgemini52
    @polarisgemini52 3 years ago +759

    When I first came across this problem, I wanted to become a scientist who simply redoes old experiments. I am still very far away from becoming a scientist, but I hope this becomes a legitimate job: having a subset of scientists who simply redo experiments with little or no tweaking.

    • @galanoth17
      @galanoth17 3 years ago +147

      Problem is who will pay you for it.

    • @heathertoomey7068
      @heathertoomey7068 3 years ago +37

      We need this. Can someone start an organization that does this? Not me, I have another thing to start. :P
      Also, there are AIs to analyze the data of experiments regardless of the human conclusion. I think those are pretty helpful in sorting out truth from falsehood.

    • @KT-pv3kl
      @KT-pv3kl 3 years ago +81

      there is almost ZERO funding for this important task. More money is spent each year to study the mating behaviour of saltwater mud worms. I'm not even kidding ....

    • @xXWorldgamefunXx
      @xXWorldgamefunXx 3 years ago +9

      "scientist" What does that even mean?
      You have to study a certain field and then you can get a job at a university where they'll pay you for your research.

    • @JacobRy
      @JacobRy 3 years ago +73

      @LazicStefan If you're talking about climate change, it's real and the effects are observable outside of papers

  • @stevenr5149
    @stevenr5149 11 months ago +1

    Too much money is now involved. It is serious business now. I was a research assistant involved at every level of research at a prestigious institution. I have had multiple top physicians tell me that if you can't produce results, the pharmaceutical companies will find someone else who can. It is not fringe researchers doing it. People who won't or can't play ball just don't get promoted to leadership positions in research.
    It is more flawed than Veritasium understands, to the point that I seriously question his conclusion. :( :(
    I appreciate that Veritasium looked at this issue.

  • @HearMichaelRoar
    @HearMichaelRoar a year ago +2

    Caveat:
    That is technically not what the p-value means. It's NOT the probability that the hypothesis is true; it's about whether you can or cannot reject the null. The p-value is the probability of finding an effect equal to or greater than the observed one, assuming the null hypothesis is true.
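    The distinction is easy to see in a worked example. Here is a minimal Python sketch (standard library only; the coin-flip numbers are purely illustrative) that computes an exact two-sided p-value under a fair-coin null hypothesis. The number it returns is the probability of data at least this extreme if the null is true, not the probability that any hypothesis is true:

```python
from math import comb

def p_value_fair_coin(k, n):
    """Exact two-sided p-value for observing k heads in n flips,
    under the null hypothesis that the coin is fair (p = 0.5)."""
    dist = abs(k - n / 2)  # how far the observation sits from the expected n/2
    # Sum the probability of every outcome at least as far from n/2 as k is.
    total = sum(comb(n, i) for i in range(n + 1) if abs(i - n / 2) >= dist)
    return total / 2 ** n

# 61 heads in 100 flips gives p below the conventional 0.05 threshold,
# so the fair-coin null can be rejected; 55 heads gives p well above it.
# Neither number is "the probability that the coin is biased".
print(p_value_fair_coin(61, 100))
print(p_value_fair_coin(55, 100))
```

    Note that 61 heads clears the conventional 0.05 bar while 55 does not, yet neither result directly states how likely it is that the coin is actually biased.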

    • @nmarbletoe8210
      @nmarbletoe8210 a year ago +1

      Oooooh man that's a good point. This whole discussion is predicated on the false concept that papers are somehow declaring what is true and what is not true.

  • @erichoceans
    @erichoceans 3 years ago +483

    Why would anyone give this a thumbs down?
    Spent most of my life in research, painful yet true....

    • @jamese9283
      @jamese9283 3 years ago +24

      Ignore most of the thumbs down. 10-year-olds and trolls will down-vote a good video just to agitate people. It doesn't mean anything.

    • @astrobiojoe7283
      @astrobiojoe7283 3 years ago +6

      Life gave them a thumb down. Ignore 😂

    • @charanckck
      @charanckck 3 years ago +4

      Reason 1: Someone worked so much only to add to the number of papers published, but not their quality. Then some other person points out a mistake in those papers.

    • @silvervirio3642
      @silvervirio3642 3 years ago +1

      Why not? Maybe they don't know about the "don't recommend" function, so they thought thumbing down this video would stop similar videos from being featured on their homepage.

    • @neofromthewarnerbrothersic145
      @neofromthewarnerbrothersic145 3 years ago +5

      9 times out of 10, the answer to this question is BOTS. They have to like/dislike videos at random to try and fool the algorithm. That's all it is. I'm so tired of seeing "how could anyone dislike this GREAT video??" IT. IS. BOTS.

  • @djayjp
    @djayjp 7 years ago +234

    Yes. Results are not science until verified/replicated! This is the scientific method.

    • @djayjp
      @djayjp 7 years ago +5

      Very informative video, thank you.

    • @AwesomeSauce7176
      @AwesomeSauce7176 7 years ago +15

      Too bad the studies you see on Doctor Oz (the studies most of the sheep enjoy listening to) are never fact-checked because that would cut into profits.

    • @ThePseudomancer
      @ThePseudomancer 7 years ago +17

      Tell that to sociology majors and they'll call you a bigot.

    • @irvalfirestar6265
      @irvalfirestar6265 7 years ago +3

      Sadly though, the same can be true to a greater extent for legitimate science. Replication studies weren't getting funded much back then by the government or other sources, precisely because they're not bombastic or groundbreaking enough to advance the field, so basically only a tiny fraction of replication studies ever got funded and published.
      In short, landmark studies didn't get fact-checked and replicated much because doing so would cut into researchers' grant applications and prevent them from conducting studies in the first place.
      Good thing it's changing nowadays though.

    • @boxhead6177
      @boxhead6177 7 years ago +3

      Sorry, can't afford to replicate this experiment; the client didn't give us enough of their product to do further testing beyond the results they requested we deliver. We are a private laboratory and need to be profitable.

  • @Rayowag
    @Rayowag 6 months ago +1

    I feel like the structure of how studies happen in general gives way for them to be messed up. It's a classist environment, because somehow you usually only get to do research like this if you have the privilege of doing/having a PhD. The number of extra studies we would have, checking each other, if more people with just a B.A. or M.A. got to do their thing (properly supervised & peer-reviewed, which once again is hard because of how the field is structured) is wild. Psychology would probably look very different when it comes to brief therapy interventions (currently very under-researched).

  • @otiebrown9999
    @otiebrown9999 7 months ago +1

    Always excellent and insightful!!
    If you submit a concept that the "Optometrist majority" does "NOT LIKE", well, you will never get published.
    Been there ... DONE THAT!

  • @bhp1719
    @bhp1719 7 years ago +357

    I've been a world-class AI researcher for almost three decades now. I have personally, during this time, witnessed much deliberate scientific fraud, including rigged demos, fake results, and outright lies. Additionally, numerous colleagues have admitted to committing scientific fraud, and I've even been ordered to do so myself. I have always refused. I will not, as a scientist, report results I know or suspect to be misleading. My family and I have been severely punished for this. So I recently returned to mathematics, where true and false still seem to reign. And lo and behold, instead of abusive rejection letters, written on non-scientific grounds, I get best-paper nominations. PS: don't believe any of the current hype around AI.

    • @carolalvarez3728
      @carolalvarez3728 7 years ago +26

      That's terrible; there are many stories like this that keep popping up. Stay strong, this crap will change soon.

    • @DdotTindall
      @DdotTindall 6 years ago +2

      Could we talk? I'd love to hear your thoughts on this, Christer.

    • @Mirabell97
      @Mirabell97 5 years ago +25

      Christer Samuelsson why would I believe this?

    • @forgotaboutbre
      @forgotaboutbre 5 years ago +30

      Dang man, I think you got out of A.I. at the wrong time lol. People don't have to fudge their results anymore because the results are real and improving every day now.

    • @jacobbellamy7640
      @jacobbellamy7640 5 years ago +2

      Let me guess- it was natural language processing wasn't it?

  • @jordanblatter1595
    @jordanblatter1595 7 years ago +320

    I intend to live forever. So far, so good.

    • @nicokuhne3255
      @nicokuhne3255 7 years ago +6

      i like that haha

    • @imveryangryitsnotbutter
      @imveryangryitsnotbutter 7 years ago +5

      Who wants to live forever?

    • @grampton
      @grampton 7 years ago +1

      If you live forever you'll see everyone you know die, and then everything you know die, because the universe will end.

    • @StretchReality
      @StretchReality 7 years ago

      +RedEyes Cat dude no way! That must be a record

    • @nal8503
      @nal8503 7 years ago +2

      "If you live forever you'll see everyone you know die and then everything you know die because the universe willend."
      If the universe ends, something will come along to replace it. I'd be quite excited to see that.
      Plus, it's not necessarily true that "the universe" will end, although that's a widely spread myth so I can't really blame you for assuming that.

  • @meltinginmissoula7044
    @meltinginmissoula7044 8 months ago +1

    With most of the World’s activities trying diligently to habitually and continuously ascribe to its own peculiarities? Why would the presumed outcome ever need to lie?

  • @MrMouzaki
    @MrMouzaki a year ago +1

    As a PhD student this is so true. Our university, as an example, forces PhD students to publish 2 or more articles (to get their PhD), and in journals indexed in databases such as Scopus or Web of Science, and this makes a lot of students p-hack and tweak results or falsify them completely. It's the same in the research job market, or when trying to become a professor: the competition is by the number of published articles. Some students have over 20 or 30 articles, which is making people do whatever possible to publish.
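    The arithmetic behind why p-hacking works is simple. As a sketch (Python, assuming independent tests of true null hypotheses at the usual 0.05 threshold; the test counts are hypothetical), the chance of at least one spurious "significant" result grows quickly with the number of things you test:

```python
# Probability of at least one false positive when running several
# independent significance tests on pure noise, each at alpha = 0.05.
alpha = 0.05

for n_tests in (1, 5, 20, 60):
    p_any_hit = 1 - (1 - alpha) ** n_tests
    print(f"{n_tests:2d} tests: {p_any_hit:.0%} chance of a spurious 'finding'")
```

    With 20 tested outcomes there is already roughly a 64% chance of something "publishable" even when every real effect is zero, which is why preregistering the single outcome you will test (as in the Preregistration Challenge linked above) matters.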

  • @MetalMachine131
    @MetalMachine131 2 years ago +443

    The real problem here is the journals. They have established themselves as the primary way of publishing. There are other ways, but in the end, the journals get you recognition and jobs.
    That results in many studies being done with the intent of publishing. Scientists can't be blamed for that. After all, they not only do the research but also have to constantly beg for money.
    The actual goal of obtaining information gets lost along the way.

    • @IndigoIndustrial
      @IndigoIndustrial 2 years ago +10

      Exactly. One high-impact publication can set up a career, and leads to 'light-touch' peer review at other good journals, soft authorships on colleague's papers and requests to be co-investigators on other people's grants. More publications leads to more funding. Even as a Co-I that doesn't actually get money from a grant, you have demonstrated 'grant funding' success. The incentives to join that group are high.

    • @WesternUranus
      @WesternUranus 2 years ago +4

      It seems even more absurd to still have these gatekeepers publishing a limited number of papers when we live in the era of long tail economics

    • @AngryReptileKeeper
      @AngryReptileKeeper 2 years ago +7

      Researchers also have to pay the journals to publish their work, who in turn often charge you to read them.

    • @newagain9964
      @newagain9964 2 years ago +3

      Like all other systems and institutions, scholarly research and the academy is a game, with numerous irrational inputs and agents in pursuit of self serving interests.

    • @mohdhazwan9578
      @mohdhazwan9578 2 years ago +1

      Indeed. I lost interest in pursuing a PhD because of this.

  • @themadcommenter6867
    @themadcommenter6867 7 years ago +86

    3:32
    but
    what if that paper is false

    • @ampjam
      @ampjam 7 years ago +2

      bacon

    • @oxjmanxo
      @oxjmanxo 7 years ago +11

      best get started at trying to verify their results then

    • @lilacosmanthus
      @lilacosmanthus 7 years ago +24

      if that particular paper were false, that would mean most published research is true, which would mean that paper is most likely true.

    • @hamzerpanzer
      @hamzerpanzer 7 years ago +4

      Indigo Osmanthus Paradoxical...

    • @lilacosmanthus
      @lilacosmanthus 7 years ago +3

      Hamza Aman
      and round and round we go...

  • @Nature_Quixote
    @Nature_Quixote 9 months ago

    best Veritasium in a long time! More data analysis!!!

  • @SofiaStark-be4jw
    @SofiaStark-be4jw 14 days ago +1

    thanks, now i don't know what to do with my life. i'm a senior in high school wanting to study physics, but i have watched a ton of videos that explain research paper publication strategies, and the way academia works in general, and now i realise that the perfect knowledge-making science world i wanted to be a part of is nothing like i thought it was....