This Cornell Scientist Destroyed His Career in One Blog Post

  • Published 7 Jan 2024
  • For excellent book summaries, take advantage of a free trial and get 20% off by visiting www.Shortform.com/pete.
    Hey guys! Been wanting to talk about Brian Wansink for some time now; he's such a landmark case in my field. He basically made everyone extra aware of the p-hacking problem in psychology. What do you make of his story? Let me know in the comments below!
    Stephanie Lee Article: www.buzzfeednews.com/article/...
    My Website: petejudo.com
    Follow me:
    Behavioral Science Instagram: @petejudoofficial
    Instagram: @petejudo
    Twitter: @petejudo
    LinkedIn: Peter Judodihardjo
    Good tools I actually use:
    Shortform: www.Shortform.com/pete
    Ground News: ground.news/Pete

Comments • 1.3K

  • @PeteJudo1
    @PeteJudo1  Před 5 měsíci +69

    For excellent book summaries, take advantage of a free trial and get 20% off by visiting www.Shortform.com/pete.

    • @jesipohl6717
      @jesipohl6717 Před 5 měsíci +3

      I left the system because I felt that the organisation of academia, modeled on Catholic Latin schools coming out of the Dark Ages, is fundamentally flawed and probably not going to be repaired from the inside alone.
      I think hypothesis papers should be a regular thing before considering a results paper. I also think there needs to be an incentive to publish null results, and to punish antisocial journals with eugenic or manipulative focuses, like the Journal of Controversial Ideas, which tends to produce bad stuff.

    • @biggseye
      @biggseye Před 5 měsíci +1

      You can not "fix" the system. The system exists for the purpose of getting attention to scientific information, real or not. In the modern sense, attention is money and fame. Without it, the information does not matter.

    • @NuncNuncNuncNunc
      @NuncNuncNuncNunc Před 5 měsíci +4

      There's a mistaken belief that if the data does not fit the hypothesis then the experiment has "failed." The hypothesis may be unsupported by the data, but assuming all the procedures were followed and necessary controls implemented, the experiment was a success as new knowledge was gained.

    • @melissachartres3219
      @melissachartres3219 Před 5 měsíci +1

      03:01 Not based off of. Based ON. You cannot base something off of something else.

    • @relwalretep
      @relwalretep Před 5 měsíci +1

      Only one listed citation in the video description? That's bad science, and bad reporting.

  • @luszczi
    @luszczi Před 5 měsíci +3065

    When most people say "p-hacking", they mean something like "removing that pesky outlier to get from p

    • @doctorlolchicken7478
      @doctorlolchicken7478 Před 5 měsíci +177

      How about leaving the outlier in if it helps your case? That is also p-hacking. P-hacking is not manipulation of the data, it is biased manipulation of the data.

    • @dshin83
      @dshin83 Před 5 měsíci +146

      I always thought p-hacking originally referred to the process of enumerating multiple hypotheses *after* data collection. If you generate 100 random null hypotheses, 5% will have p-values less than 0.05 by chance.
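A quick way to see the arithmetic in the comment above is to simulate it. This is an illustrative sketch only (it assumes NumPy and SciPy and has nothing to do with Wansink's actual data): run 100 experiments where there is genuinely no effect and count how many still come out "significant" at the 0.05 level.

```python
# Illustrative only: 100 experiments with NO real effect; about 5 should still
# reach p < 0.05 purely by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
false_positives = 0
for _ in range(100):
    a = rng.normal(0.0, 1.0, size=30)   # "control" group
    b = rng.normal(0.0, 1.0, size=30)   # "treatment" group, drawn from the same distribution
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1

print(f"{false_positives} of 100 null experiments were 'significant' at p < 0.05")
```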

    • @WobblesandBean
      @WobblesandBean Před 5 měsíci +82

      That "pesky outlier" is a HUGE discrepancy in data. It's a little frustrating to see laypeople not getting it and saying such a small range is no big deal, but I counter by saying if you had diabetes and your doctor had a 5% chance of screwing up your dose so badly you went into a diabetic coma and died, is that still no big deal?

    • @Loreweavver
      @Loreweavver Před 5 měsíci +34

      I believe we used to call that the snowball effect.
      One starts with small things like removing the pesky outliers but every time we remove an outlier, another one appears. Keep taking out the outliers until there is a neat set that agrees with your hypothesis.

    • @Loreweavver
      @Loreweavver Před 5 měsíci +25

      ​@@WobblesandBean Another way to look at it is that the p scale accounts for the outlier and is why there is a margin for error in the first place.
      Removal of outliers gives a greater margin of error for the remaining data.

  • @obsoerver7272
    @obsoerver7272 Před 5 měsíci +1664

    As long as "publish or perish" reigns supreme and nice stories get celebrated while null results and replication studies go into the wastebin, stuff like this will be the main output of the scientific system, at least in the social sciences...

    • @WobblesandBean
      @WobblesandBean Před 5 měsíci +41

      Sigh. This. All of this.

    • @tadroid3858
      @tadroid3858 Před 5 měsíci +35

      Publish or perish doesn't apply to identity hires - clearly.

    • @hoperules8874
      @hoperules8874 Před 5 měsíci +60

      ...and don't forget the politics of the grant money$€£¥!!! You definitely cannot do any study that might give results that offends the politics of the "benefactors"!🤮

    • @wallacegrommet9343
      @wallacegrommet9343 Před 5 měsíci +5

      Science research abhors sloth. Justify your professional existence or choose another field.

    • @acetechnical6574
      @acetechnical6574 Před 5 měsíci +2

      Social Guesses you mean.

  • @Ntelec
    @Ntelec Před 5 měsíci +1152

    When I was working on my PhD in the early 90s, it was obvious to me the entire science foundation was broken. To get my doctorate, I needed a positive outcome from my model. I collected the data. I analyzed the data. I wrote up the results. Everything was biased to get a positive result. The same process occurs with getting hired and getting tenured. Positive outcomes are rewarded. Negative outcomes are ignored and cumulatively punished. I wrote a paper then suggesting that statisticians be paired with researchers to consult on methodologies (sufficient n to examine parameters, the right statistical methodology to examine data, etc.). As long as careers are determined by outcomes and researchers can cheat, they will.

    • @zippersocks
      @zippersocks Před 5 měsíci +25

      With all the labs in school I’ve taken so far and putting together the data, it’s not often that I’d get p

    • @ninehundreddollarluxuryyac5958
      @ninehundreddollarluxuryyac5958 Před 5 měsíci

      I refused to cheat, defended my thesis and got black-balled for not being "a team player". American science is fake news.

    • @snorman1911
      @snorman1911 Před 5 měsíci +50

      Imagine what people will do when millions of dollars are on the line to get the "correct" results.

    • @phoneuse4962
      @phoneuse4962 Před 4 měsíci

      Not only that:
      Many researchers, thinking they will help society, are subsidizing drug companies: "Pharmaceutical companies receive substantial U.S. government assistance in the form of publicly funded basic research and tax breaks, yet they continue to charge exorbitant prices for medications".
      The medical industry abuses the taxpayer by overcharging, and takes advantage of both the taxpayer and universities by exploiting publicly funded research.
      Why? It is by design. It is all to reap shareholder profits. The medical industry serves the shareholders and investors, and not the patients.

    • @EdMcF1
      @EdMcF1 Před 4 měsíci +10

      Proving a model wrong would advance science, by showing that something is wrong. Wolfgang Pauli: 'That is not even wrong' ('Das ist nicht nur nicht richtig, es ist nicht einmal falsch!').

  • @PimpinBassie2
    @PimpinBassie2 Před 5 měsíci +1246

    _"You pushing an unpaid PhD-student into salami slicing null-results intro 5 p-hacked papers and you shame a paid postdoc for saying 'no' to do the same."_ - quote of the week

    • @randalltilander6684
      @randalltilander6684 Před 5 měsíci +17

      “Shame” a paid postdoc. Spelling error.

    • @napoleonfeanor
      @napoleonfeanor Před 5 měsíci +34

      The postdoc was very proud of his long hair and beard before Wansik shaved him as punishment

    • @B3Band
      @B3Band Před 4 měsíci +4

      After he said that Orzge Sircher was a researcher from Turkey, I couldn't unhurr this enturr virdeo being rerd like the GersBermps merme

    • @ivoryas1696
      @ivoryas1696 Před 4 měsíci +1

      @@B3Band
      -Heh. Bri"ish people-

    • @leochen887
      @leochen887 Před 2 měsíci

      Soon AI will research and publish papers that will be peer reviewed and found that the AI results cannot be ignored or refuted as the research will be based on a database that is catholic or universal in scope.
      PhD's will mean sh^t by comparison. Social science will mean sh^t also. But you already sense that, if truth be told.

  • @Fran7842
    @Fran7842 Před 5 měsíci +272

    Fourth problem: researchers are not rewarded for publishing negative results, and are incentivised for original research but not for replicating prior studies.

    • @tessjuel
      @tessjuel Před 5 měsíci +26

      Yes and this may be the most crucial problem. One of the most fundamental principles in science is that a result is not valid unless it can be replicated.

  • @saldiven2009
    @saldiven2009 Před 5 měsíci +379

    The weird thing about the buffet study is that finding no relationship between cost and satisfaction is also an interesting finding. There is value in this kind of finding, too.

    • @dermick
      @dermick Před 5 měsíci +38

      Totally agree - you can learn something from almost every study if it's done properly. BTW, "p-hacking" is a skill that is highly rewarded in the corporate world, sadly.

    • @animula6908
      @animula6908 Před 5 měsíci +27

      Yup. I was wondering if it would be different if it was $10 vs $50 buffet. $4 and $8, obviously everyone still got a great bargain.

    • @nikkimcdonald4562
      @nikkimcdonald4562 Před 4 měsíci

      ​@@animula6908 I've had expensive buffets that I've enjoyed and I've had expensive buffets that were overhyped ripoffs.

    • @kaynkayn9870
      @kaynkayn9870 Před 4 měsíci +3

      but it isn't good for the headlines

    • @kaynkayn9870
      @kaynkayn9870 Před 4 měsíci

      @@animula6908 The issue is, if given the choice, most people would choose the $10 option. I wonder if they had a choice; that would make the data imbalanced toward $10 over $50.

  • @coAdjointTom
    @coAdjointTom Před 5 měsíci +771

    Pete, you're laughing when you read out the emails, but what he does is literally the bread-and-butter approach of data analysts and data scientists in businesses. I've been fighting this stuff for 13 years and it's exhausting.

    • @theondono
      @theondono Před 5 měsíci +85

      It’s standard academic practice at this point, even if no one wants to admit it publicly.
      I’ve seen things like this even in the “hard sciences”.

    • @coAdjointTom
      @coAdjointTom Před 5 měsíci +51

      @@theondono it's worse, they're proud of it! When I do experiments my boss wants a 20% type-I error rate and still ignores the result of the test 😂

    • @taoofjester4113
      @taoofjester4113 Před 5 měsíci

      It happens in a lot of fields that rely on data. I am even guilty of this, not p-hacking, but manipulating data to reach a desired result.
      I did real estate appraisals before and during the real estate crash. We routinely screwed with data sets and comparables in order to get a valuation the bank required for issuing a loan in order for the client to purchase the property.
      While we did have some wiggle room if the data said one number and the number that was required was only a percentage or two higher, a significant number of real estate appraisers would swing for double digit manipulation.
      Thankfully the trainer I had was fairly honest and we did not do as many bogus appraisals as others. Instead we would violate the portion where we gave a rough ballpark figure so the bank could decide if they wished to hire us or find another appraiser willing to be really shady. Unfortunately banks were notorious for blacklisting appraisers that tanked real estate deals by not "hitting numbers."
      I only had one property that I feel was extremely wrong. The person I did the appraisal for happened to be someone that was able to get us a client that sent us enough work to keep two people employed. We received about 200k a year in wages from that company.

    • @doctorlolchicken7478
      @doctorlolchicken7478 Před 5 měsíci +76

      Not sure if it will make you feel better or not, but I’ve been fighting this for 33 years. It’s beyond p-hacking, it’s the outright refusal of business people to accept facts that go against their opinions.

    • @xiaoliuwu8539
      @xiaoliuwu8539 Před 5 měsíci +32

      Yeah, in industry it's common practice. My boss and other collaborators always ask, "Why don't you do more slicing?" We literally run tens or hundreds of tests without adjustment. I tried to get them to pre-specify a few sub-populations, but that's futile. Whenever the experiment doesn't turn out as expected, the response I get is always more slicing.

  • @stischer47
    @stischer47 Před 5 měsíci +332

    For a number of years, I was the Chair of my university's Institutional Review Board (which reviews and approves/disapproves research involving humans). The amount of crap research that we had to review from the social sciences was appalling...we had a number in which N (the size of the research group) was 1-2, from which they drew "conclusions". If there was any pushback from the IRB, they just made it a "qualitative" study or "preliminary study" to not have to do statistics. And the disregard for Federal guidelines for using research involving humans was scary. Luckily, what the IRB said could not be overruled by anyone, including the president of the university. But I made a lot of enemies across campus.

    • @NemisCassander
      @NemisCassander Před 5 měsíci +32

      Yeah, reminds me of when I was a grad student and was the research assistant for an IE professor. He worked with a gem of a tenure-track mechanical engineering professor on a research topic, where the ME professor was responsible for the physical simulation and recording of results, and the IE professor (my boss) was responsible for the experimental design and data analysis, both of which he deferred to me.
      So they do a few sample runs so we can get a good idea of the variance of the results. (Two between-subject factors, and a repeated-measure, in case anyone cares.) We discovered that there was actually little variance, so we have a meeting where we (IE professor and me) are happy to tell the ME professor and the client that we can probably have as few as 3 runs per cell of our design. This ME professor then asks why we can't do just one run per cell. I was appalled that someone would say something THAT much out of left field in front of our client, who was quite knowledgeable on experimental design and, you know, basic calculations regarding degrees of freedom. I was afraid my professor was going to have a stroke or something, so I quickly just pointed out that the math wouldn't work out.
      I think that was the day that any respect I had for the title Ph.D. in itself died.

    • @scarlett8782
      @scarlett8782 Před 4 měsíci +26

      thank you for your work. I'm a female biomedical engineer who specializes in tissue and genetic engineering who did a lot of research during my time at my university, and we always had a running joke amongst engineers in my department, "biologists can't do math, and social scientists can't do anything period". its a real problem.
      social science students are NOT science students in terms of STEM education and scientific practice - I want to make that clear. there is nothing scientific about their "studies" about 90% of the time, and their curriculum in school is the embarrassment of the scientific community. most don't even take calculus, let alone physics, chemistry or advanced natural sciences. they take mostly fluff courses like very basic anthropology (which is mostly about cultural norms), literature based courses, sociology based courses (which are also cultural in content), and psychology courses, which can be practically anything in terms of content at course levels beyond the beginner level course PSY101/PSY111 which is standardized. so the base curriculum itself is severely lacking in terms of scientific education, especially when compared to the education other STEM students are receiving.
      due to the lack of a proper education in STEM, when it comes to social science studies, these people develop a foregone conclusion that they're trying to prove based on a fictional story they want to tell, instead of collecting existing data from a literature review or previous work THEN formulating a hypothesis. basically, they're developing an idea out of nowhere and then trying to prove it, which doesn't work, generally speaking. that's backwards. typically, you are simply studying a subject; let's say "the human behavior of purchase satisfaction". at this point, once you've decided on a subject of study, you conduct a literature review to see what has already been published on a subject. you may choose to peer review another study on that subject, or you may look through the data and methods of other previous studies to look for trends. once you have identified a study to peer review, or trends in previous literature, THEN you develop a hypothesis based on previous research on that subject or a similar subject, so that you are an educated expert on that subject before you start, and so you're making an educated guess with your hypothesis based on something tangible and real, not a wild guess based on your personal version of reality. this is also so that you don't inadvertently repeat a study that has already taken place, without realizing that you're peer reviewing, or repeating a study that has already been conducted and verified many times, so that resources are being spent to either strengthen previous findings, or develop new findings. we don't want to waste resources on useless subjects or on subjects that have already been extensively studied.
      then you build your study around what you've learned from all of humanity's collective previous knowledge on the subject, collect your data, and look for trends with a large randomized sample size. if you don't find trends, you go back to the literature, and try to understand what went wrong, and try again after making modifications, or you choose another niche/subject. if you do actually find significance, you perform similar studies to collect more data and publish.
      social science is doing the opposite. they are formulating a wild guess hypothesis based on their imagination (not anything concrete), putting together a poorly planned study due to their lack of scientific education with terrible sample sizes, no randomization, no controls, which is based on nothing but their fantasies, then collecting data in a way that doesn't make sense or is missing important aspects of data collection, then use Microsoft Excel to analyze the results for them because they don't understand the statistics themselves, and when they don't see anything significant (shocker!) due to the fact that they based their study on nothing but their daydreams, they delete outliers, eliminate dozens of data points until their sample size is tiny, or even more brazen methods of data manipulation until their study's data fits their pre-conceived narrative.
      if Dr. Wansink had reviewed previous literature, he would have found that human beings in the situation he set up for his study will likely experience the same level of satisfaction. why? because they don't realize that they paid more, or paid less, than other people did for the same item. this is a study where the participants are not told that they paid more or less. they are simply served a meal, at a given price, then polled on their satisfaction. two people given the same food will experience the same level of satisfaction, especially with the price difference being unknown to them. the previous literature indicates this fact. only when people are *told* that they over or under paid for an item or service do their feelings on satisfaction begin to shift. like so many other social scientists, Dr. Wansink simply wanted to write a good fiction "story" to get published and picked up by the media, instead of doing real, useful science. I've yet to find a social science study that is practically useful, fully replicated and peer reviewed, scientifically sound, and not just common sense.
      I've seen this more times than I can count out of both biologists and social scientists, but mostly social scientists. thank you for acting as a barrier between bad science and unwarranted funding/access to more resources including unwitting animal and human subjects. I'm sure you didn't make a lot of friends, but you did god's work. you should be proud of your integrity. cheers to you my friend.

    • @SmallSpoonBrigade
      @SmallSpoonBrigade Před 4 měsíci

      I'm not surprised, I've heard a bunch of excuses about the social sciences as to why garbage results are acceptable. Apparently, the hard sciences are just easier to do reliably and always have been. There certainly hasn't been an effort over the course of centuries to wring as much reliability out of the methods as possible. It seems to me that so much of this is in part the result of considering crap results like those with a P in the .7-.8 range as quality work, when really, that just indicates that there's potentially a lot more to what's going on and that there should be work done to get better results, as .75 is not that much better than 50/50. It's certainly not good enough to do much with.
      Yes, humans can and do vary, but that doesn't excuse the attitude that there's no need to figure out ways of wringing out more reliability and more reproducibility from the test participants you can get. Sure, the results will never be as precise or generalizable as you get from physics or chemistry, but there's a lot more that could be done if folks were expecting more when they designed and executed the experiments.

    • @ciarandoyle4349
      @ciarandoyle4349 Před 4 měsíci

      I agree with everything you said in 941 words. But I can easily say it in 751. Précis and word count are the engineer's friends. @@scarlett8782

    • @rickwrites2612
      @rickwrites2612 Před 4 měsíci +8

      ​LOTS of social scientists start with lit review, and form a hypothesis after years of study. What you are talking about sounds abnormal. Also the basic Anth as science is physical anthropology which ranges from evolution to genetics, biomechanics, forensics.

  • @danameyy
    @danameyy Před 5 měsíci +396

    The biggest positive change for academia (imo) is for journals to publish papers where the researchers’ hypothesis was not ultimately supported by their data (either there were no findings either way, or the data showed an effect completely different from what the researchers predicted). I know that this is less exciting for the news media but when science is so driven by exciting results and leaves out the “boring” stuff, it heavily incentivizes dishonesty in researchers.

    • @slappy8941
      @slappy8941 Před 5 měsíci +29

      You can learn as much from failure as from success.

    • @SigFigNewton
      @SigFigNewton Před 5 měsíci +17

      This requires the invention of an incentive.
      Say I run a journal. I want it to be a prestigious journal. I convince scientists that it is prestigious and that they want to submit their papers to my journal by maintaining a rate of citations. Lots of people cite papers published here, so publish with us!
      If I now begin accepting null hypothesis papers, I am accepting papers that will tend to receive far fewer citations. Makes my journal look bad. I run a business, and you’re proposing a money losing idea.
      Maybe journals could be required to have at least a certain percentage of their papers report null results? That way it won't punish the journals that are encouraging honest science.

    • @hoperules8874
      @hoperules8874 Před 5 měsíci +16

      I sort of agree with your publication stance, but being able to cite a failed experiment based on "x" data and "y" hypothesis, can work wonders for metadata studies. Variables have to be accounted for, of course, but it is far easier to get grant and research money when you can say, "it's been done this way and that and failed so many times-we should look at this other hypothesis instead."

    • @thekonkoe
      @thekonkoe Před 5 měsíci +4

      I reviewed a paper for a high impact journal once, where imho the graphs indicated results almost completely opposite to the text. This would not be obvious on a cursory read since classification of the data and the fits obscured it. I wrote this in my review but still saw the results in a less prestigious journal about a year later. To their credit they communicated more uncertainty in the final published product, but the motivated analysis was still super clear. I suspect I also engage in at least some motivated analysis, but when it becomes sufficiently widespread that a large community is chasing the same expectation it can get out of hand really fast.

    • @Loj84
      @Loj84 Před 5 měsíci +10

      Honestly, I think somebody should start journals dedicated to publishing repeat studies and rejected hypotheses.

  • @pedromenchik1961
    @pedromenchik1961 Před 5 měsíci +245

    Yay, you took my suggestion. I was a grad student at Cornell and had a class taught by Brian Wansink right before this story blew up. He came across as an “oblivious diva”, but tbh I think Cornell’s administration is also to blame, both for not firing him and for not helping his grad students into new labs/research once he “chose to retire”.

    • @mathijs58
      @mathijs58 Před 5 měsíci +40

      100x this. Exposing the academics who p-hack is one thing, another is asking the question: who was their department head for so many years and so many papers, and never bothered to do any quality control on the output of their own faculty. Once you dive into the data, it is often not too difficult to spot patterns of iffy science, especially if you are in the same building and hear the rumours about their postdocs refusing to do a certain project etc. etc. Quality control should be the job of the department heads, who in general should know the subject matter quite well and should be competent scientists themselves. Now the only control is quantity control, which can be done by the secretary of the department.... So a shaming of the department heads of cheating scientists would maybe help to create an environment where science quality rather than quantity is rewarded.

    • @Obladgolated
      @Obladgolated Před 5 měsíci +12

      At least Wansink didn't get Cornell to threaten legal action against the people who drew attention to his dumb blog post. Like another "academic diva" I could name.

    • @pedromenchik1961
      @pedromenchik1961 Před 5 měsíci +5

      @@Obladgolated there is always someone worse, still doesn’t make him a good person

    • @SigFigNewton
      @SigFigNewton Před 5 měsíci +3

      @@pedromenchik1961your hypothesis is that there are infinitely many people?
      I don’t think we even need to test that

    • @steveperreira5850
      @steveperreira5850 Před 5 měsíci

      Pretty much it takes an act of Congress to get someone fired from academia, literally!

  • @vfwh
    @vfwh Před 5 měsíci +228

    You scroll quickly past Wansink's response to the guy asking if the post was a joke. He says that he wishes his tutors had pushed him to do this when he was younger, as this way he would have published more and wouldn't "have been turned down for tenure twice."
    So beyond this doofus spilling the beans, isn't it also an indictment of the whole field? I mean if he's bragging about it, it probably means that this is utterly commonplace and even expected in his academic circles, no?

    • @napoleonfeanor
      @napoleonfeanor Před 5 měsíci +28

      I think he was completely oblivious that this is wrong

    • @TheThreatenedSwan
      @TheThreatenedSwan Před 5 měsíci +5

      It is as the institutions care more about outward trappings than the essence of science, and the people in charge who don't actually do especially are like this, but also it's a language problem in that these frauds can use the same language as genuine actors when that's the only level the former care about

    • @animula6908
      @animula6908 Před 5 měsíci +2

      He doesn’t seem like one of the bad ones really. It’s more how he looked down on his employees. I don’t think it’s bad to look for what unites those who the effect does hold true for. He should follow up with another experiment to see if it’s significant or just a one off coincidence, too, but it sounds like looking for something, not fabricating it to me.

    • @michaelfrankel8082
      @michaelfrankel8082 Před 4 měsíci +6

      @@animula6908 He's looking for something to justify his fabricated hypothesis. Same thing.

    • @mujtabaalam5907
      @mujtabaalam5907 Před 4 měsíci

      Correct. Wansink's mistake wasn't the fraud, it was failing to realize the fraud was in fact fraud and needed to be kept quiet. All of his colleagues will continue to do exactly what he does, but they know better than to open their mouths

  • @imightbebiased9311
    @imightbebiased9311 Před 5 měsíci +94

    I remember when this dude's shenanigans got revealed. Strengthened my resolve to ignore all "cutting edge" Psychology findings. If it holds up under rigorous scrutiny, I'll hear about it later. If it doesn't, I never needed to entertain the thought.

    • @TheJhtlag
      @TheJhtlag Před 5 měsíci +6

      Something tells me there's no such thing as "cutting edge" psychology findings. If someone comes up with surprising new findings you've found your cheater.

    • @wat5513
      @wat5513 Před 3 měsíci +5

      Interesting. Books and philosophies (and music) that are "trending" really aren't in my personal radar till after a few years. Time separates the wheat and the chaff.

  • @peterjones6507
    @peterjones6507 Před 5 měsíci +276

    It's great that we're learning how to trust the science, but not the scientists. For too long we've been woefully gullible.

    • @WobblesandBean
      @WobblesandBean Před 5 měsíci

      I think the recent surge in anti-science rhetoric has forced the scientific and academic community to FINALLY crack down on bad faith actors like Brian. How can we convince society that science is trustworthy if the establishment keeps on letting this kind of nonsense slide?

    • @sssspider
      @sssspider Před 5 měsíci

      I trust the scientific method. I most certainly do NOT trust The Science, which often has very little to do with the scientific method.

    • @sampletext9426
      @sampletext9426 Před 5 měsíci +4

      it doesn't matter how much awareness you spread, humans are always going to fall for it 😂

    • @sampletext9426
      @sampletext9426 Před 5 měsíci +2

      ​@@sssspider
      im not sure if "sc13nce" is c~nsored, but your comment doesn't show unless sorting by new

    • @sampletext9426
      @sampletext9426 Před 5 měsíci +2

      ​@@WobblesandBean
      not sure if "sc~ience" is c~ensored but your comment doesn't appear unless sorting by new

  • @dottyjyoung
    @dottyjyoung Před 5 měsíci +121

    It's not just him.
    It's the entire academic system--they're the ones who TELL THEM to do that.
    My husband is a mathematician, & they wanted him to publish papers, regardless of how good they were. This baffled my husband, & he eventually left academia for the public sector.
    It burns me up that honest people like him had to leave while con artists flourish.

    • @insu_na
      @insu_na Před 4 měsíci +5

      That's good tho. Publish data even when the results aren't what you expected or disprove your hypothesis. It means people in the future will know this has already been tried and they can make better decisions about what to try next

    • @likemy
      @likemy Před 4 měsíci +7

      I don't think the garbage p hacking has any bearing on the field of mathematical publishing. Theorems are either sound or unsound, there is no 'wiggle room'.

    • @funfungerman8401
      @funfungerman8401 Před 4 měsíci +4

      That's bad tho...
      The OG comment never said that those papers disproved his (the husband's) theories, just that they were bad - maybe too short to give a real answer to the question/problem the paper was about, or the funding was not sufficient, so some results were heavily influenced.
      Imagine posting a paper about how vegetables aren't healthy when the study itself only lasted one year and only had money for 2 test persons, and in the end everyone would think, "Damn, I always thought vegetables were healthy, but this paper said otherwise; guess I was wrong all along. And of course there's no need to redo such a study because it's already been done - maybe badly, but it has been done."
      And in the end the test person who ate no vegetables didn't smoke (privately) and the one who did smoked 4 packs each day, but because the project didn't have enough money, these 2 were the cheapest option, so you chose them.
      (Or the sugar/health sector/industry even encouraged wrong studies about it because it boosts their revenue.)
      This would be absolutely shitty and actually happened in the past.
      Like cholesterol (fat in red meat etc.) being blamed as soooo unhealthy and at fault for most heart diseases, rather than sugar/high-fructose syrup, and because it was backed by studies no one fought it for decades because, as you said, people had already "tested" it.

    • @insu_na
      @insu_na Před 4 měsíci

      @@funfungerman8401 Even such a bad paper should be published. It's up to the people who read and want to cite it to ascertain its veracity.

    • @dottyjyoung
      @dottyjyoung Před 4 měsíci +3

      @@likemy I didn't say P-hacking was in math. I said the pressure to publish was there, regardless of how good the paper was.

  • @sage5296
    @sage5296 Před 5 měsíci +112

    How do you fix the system? LET PEOPLE PUBLISH NULL RESULTS.
    the publish or perish system combined with having to have something significant to publish incentivizes people to "make the data significant" however they may. If a study can be published even if not "successful", this behavior will likely decrease significantly

    • @Callimo
      @Callimo Před 4 měsíci +6

      This is the way. It should be about accurate results, not results that soothe corporate interests.

    • @ahmataevo
      @ahmataevo Před 3 měsíci +1

      This would also need a way to easily look up the set of total results for a particular topic so researchers can get a better understanding of why something has null results vs something else having "significant" results.

    • @algebraiccontinuation9291
      @algebraiccontinuation9291 Před 3 měsíci

      Journals won't do this because it drives down citations, thereby impact factor, and thereby the journal's incoming high-quality research.

    • @kuanged
      @kuanged Před 3 měsíci

      How would you award grant money if everyone gets to publish?

    • @Callimo
      @Callimo Před 3 měsíci +1

      @@kuanged I suppose grant money could be bestowed upon truly life changing scientific results that would bring us closer to solving world problems. Instead of just giving grants to studies that just confirm biases, maybe be a bit more prejudiced about which studies it goes to.
      OR, the government could be more strict. Require the results of a study to be able to be reproduced by groups not connected with the primary study.
      There are so many ways to overhaul a system that seems to reward shoddy work. 💁‍♂️

  • @louisbrill891
    @louisbrill891 Před 5 měsíci +59

    Null results shouldn't be seen as bad imo. It's definitely possible to have a null result that's just as notable or surprising as a positive or negative result.

  • @lordsneed9418
    @lordsneed9418 Před 5 měsíci +330

    BTW for anyone watching, there is literally NOTHING wrong with "cutting up and slicing" the data to look for relationships and results, so long as you test whether those results hold on a different data set, ideally one gathered separately.
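A minimal sketch of the workflow this comment describes, using made-up data (the subgroup names and numbers below are hypothetical, not from the buffet study): explore freely on one sample, then keep only the findings that survive on a separately gathered sample.

```python
# Hypothetical sketch: explore on sample A, confirm on independently gathered sample B.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
GROUPS = ["men", "women", "lunch", "dinner"]

def subgroup_pvalues(scores, labels):
    """p-value of each subgroup vs. everyone else (two-sample t-test)."""
    return {g: stats.ttest_ind(scores[labels == g], scores[labels != g]).pvalue
            for g in GROUPS}

def fake_sample(n=200):
    """Satisfaction scores with no real subgroup effect."""
    return rng.normal(5.0, 1.0, size=n), rng.choice(GROUPS, size=n)

# Exploration: slice and dice as much as you like.
scores_a, labels_a = fake_sample()
candidates = [g for g, p in subgroup_pvalues(scores_a, labels_a).items() if p < 0.05]

# Confirmation: only the pre-specified candidates are tested, on fresh data.
scores_b, labels_b = fake_sample()
confirmed = [g for g, p in subgroup_pvalues(scores_b, labels_b).items()
             if g in candidates and p < 0.05]

print("found while exploring:", candidates)
print("held up on new data:  ", confirmed)   # usually empty, because there is no real effect
```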

    • @Nitsirtriscuit
      @Nitsirtriscuit Před 5 měsíci +89

      Precisely. Looking for interesting stuff is how we generate hypotheses, but finding a weird artifact and then creating a reason post-hoc is stupidly weak argumentation.

    • @andrewkandasamy
      @andrewkandasamy Před 5 měsíci +76

      I was just looking for this comment. It's an entirely valid method for identifying possible future hypotheses for further research, but shouldn't be considered conclusive in and of itself.

    • @dgarrard100
      @dgarrard100 Před 5 měsíci +20

      I was wondering about that. I'm not a scientist, but it seems like looking for patterns is what scientists _should_ be doing; albeit that should just be one step towards a future study, rather than trying to retroactively fix the previous one.

    • @marquisdemoo1792
      @marquisdemoo1792 Před 5 měsíci +13

      As a non-scientist but with a science based degree that was my immediate thought, so thanks for confirming it.

    • @lutusp
      @lutusp Před 4 měsíci +9

      > "for anyone watching, there is literally NOTHING wrong with "cutting up and slicing" the data"
      J. B. Rhine of Duke University used this method -- he picked the results that best agreed with his ideas about ESP. After he passed on, his spectacular confirmations for ESP evaporated once someone put the less exciting (and sometimes negative) results back into the data. That "someone" is now called a scientist.

  • @IsomerSoma
    @IsomerSoma Před 5 měsíci +151

    This is actually absolutely hilarious. He seemed to have been almost oblivious that his way of conducting research isn't legitimate at all, but that surely can't be right?

    • @hipsterbm5134
      @hipsterbm5134 Před 5 měsíci +11

      This happens at every university. There are multiple labs at every research university that do these things.

    • @Guishan_Lingyou
      @Guishan_Lingyou Před 5 měsíci +17

      @@hipsterbm5134 It's one thing to take part in shady behavior, it's another to brag about it on the Internet.

    • @albeit1
      @albeit1 Před 5 měsíci +11

      People lie to themselves all the time. In every walk of life.

    • @houndofzoltan
      @houndofzoltan Před 5 měsíci +11

      Not seemed to be, was; he starts his blog post by saying p-hacking is bad, but a deep data dive is good, so he clearly thought what he was doing was not p-hacking.

    • @TheThreatenedSwan
      @TheThreatenedSwan Před 5 měsíci

      That's how a lot of these frauds are, willfully ignorant and only dealing with the superficial language level of reality but they game the social system for status really well

  • @entangledmindcells9359
    @entangledmindcells9359 Před 4 měsíci +15

    The irony of a Behavior Scientist falling into bad behavior because of the "reward system".

  • @iPsychlops
    @iPsychlops Před 5 měsíci +23

    We need a journal of nonsignificant findings. Somewhere where the status quo, if supported, is documented. That would go a long way to giving credit to researchers for the work that they do even when the results aren't "interesting".

  • @kxjx
    @kxjx Před 5 měsíci +50

    My experience in academics was that there are enough people who are willing to simply follow the incentives that it becomes impossible for the ordinary person to participate. You either have to be a workaholic borderline genius or highly unethical.
    For example, my attempts at collaboration were quickly rewarded by a professor with a paper mill getting a grad student to publish my idea without me. I learned to always have a fake idea to tell people I am working on and keep my real idea secret until it's formed enough to get it onto the public record.
    People publishing stuff that barely worked was routine.

    • @marywang9318
      @marywang9318 Před 5 měsíci +9

      A good friend and former student (I had her as an undergrad) lost 2 years of her PhD work, delaying her own degree, when she and her fellow grad students turned in their neuroscience professor/advisor for manufacturing data. After working for several years in her field of expertise, she has decided to return to her home country and go back to being an accountant. She has had trouble finding work that doesn't leave her in poverty. I've wondered how much of that trouble is related to her reputation as someone who won't put up with fudged data.

    • @user-yl7kl7sl1g
      @user-yl7kl7sl1g Před 4 měsíci

      What field of science?

    • @paulw5039
      @paulw5039 Před 3 měsíci +5

      "You either have to be a workaholic borderline genius or highly unethical." Or a mix of both. My PhD supervisor was a genius AND highly unethical. He p-hacked his way through his entire career, because he saw that was the way to game the system (see: unethical).

    • @TheJustinJ
      @TheJustinJ Před 2 měsíci

      This is a matter of philosophy.

  • @ninehundreddollarluxuryyac5958
    @ninehundreddollarluxuryyac5958 Před 5 měsíci +53

    Rutgers University Psych department has a professor named Alan Gilchrist who told grad students that he would not sign a dissertation unless it produced results supporting his model. The department kept him for decades even though he worked with grad students but never graduated a single one until 1991 when I defended a thesis to the other three committee members. It was the first PhD granted against the recommendation of the faculty advisor. He made sure I could never get a job with my PhD. After several decades, I sent the PhD diploma to the landfill

    • @kap4020
      @kap4020 Před 4 měsíci +4

      so sorry to hear about what you had to go through :(

    • @WmTyndale
      @WmTyndale Před 4 měsíci

      sounds like a mentally sick man to me

    • @johng4093
      @johng4093 Před 3 měsíci +1

      Does any one person have such power to keep someone from getting a job in his field, anywhere in the country? And is the alleged victim so important to him that the director would take the trouble?

    • @TheJustinJ
      @TheJustinJ Před 2 měsíci

      😂

  • @rfolks92
    @rfolks92 Před 4 měsíci +21

    A few notes from a statistician.
    3:33 Having data before a hypothesis is not *necessarily* bad science. It just needs to be understood as retrospective research. This happens a lot in medicine and is a cheap way to see whether moving forward with a larger, prospective study would be worthwhile.
    4:20 One issue that perpetuates P hacking like this is referring to negative studies as "failed" studies. If we don't find a link between two things, that is as worthy of publication as finding a link.
    5:12 This caused me physical pain.

    • @WmTyndale
      @WmTyndale Před 4 měsíci

      For a statistician, you seem remarkably uninformed. With 90% error rates, doctors cannot successfully pass a basic test on probability and statistics ("Hypothesis Testing as Perverse Probabilistic Reasoning", Westover, Westover, Bianchi, BMC Medicine). Poor patients...

    • @stewartsiu1960
      @stewartsiu1960 Před 3 měsíci +1

      A lot of the issues with p-hacking would go away if we shifted to Bayesian methods. Hypotheses from cherry-picked data would have a lower prior. It's mind-boggling that in 2024 we're still talking about p-values.

  • @Chili_Rasbora
    @Chili_Rasbora Před 5 měsíci +12

    9:20 Absolutely INSANE that this guy just plainly put "hello, please do scientific malpractice on this paper and get back to me" in writing on his university e-mail and he didn't immediately face disciplinary action.

    • @WmTyndale
      @WmTyndale Před 4 měsíci

      all points to credentialed LOW IQ, very dangerous. But the other fools in the department did not catch it either. LOL

  • @jeskaaable
    @jeskaaable Před 5 měsíci +43

    I would be more trusting of a 0.06 value than a 0.05

    • @aDarklingEloi
      @aDarklingEloi Před 5 měsíci +2

      The 0.05 threshold itself was arbitrarily set, so...

    • @Kavafy
      @Kavafy Před 2 měsíci +1

      ​@@aDarklingEloi Not the point.

  • @samm7334
    @samm7334 Před 5 měsíci +40

    Preregistration of studies seems to be a good start. I wonder if this could be pushed further into some form of open journaling. Researchers would not just log their intentions but also their important steps. You could see how the value of n changed over the history of the study and could see their justifications for excluding samples. This could also be a good way to semi-publish negative results. Something like "tried X but did not work" could not only be valuable information for other researchers but also ease the pressure to publish positive results since there is still a public display of research being done.

  • @davidkobold5311
    @davidkobold5311 Před 5 měsíci +15

    During my own PhD program, I became discouraged and frustrated by the "Matthew Effect". Given that p

  • @01Aigul
    @01Aigul Před 5 měsíci +52

    I think a lot of p-hackers really don't realize they're doing anything wrong, although maybe that's been changing recently.

    • @doctorlolchicken7478
      @doctorlolchicken7478 Před 5 měsíci +16

      It’s pretty much institutionalized in pharma, finance, food science, energy, politics - likely many others. If someone told me this Cornell guy used to work for Kraft or similar it would all make sense.

    • @WobblesandBean
      @WobblesandBean Před 5 měsíci

      The root problem is with institutions incentivizing constant success. That's not realistic. They will only give grants and coveted tenured positions to people who are consistently churning out desirable results. But that's not realistic, the world doesn't work that way.
      It's like business investors demanding that profit margins must ALWAYS be increasing, every fiscal year, forever. As if inflation wasn't a thing and there were unlimited resources, manpower, and disposable income to throw around. It's delusional thinking.

    • @vampir753
      @vampir753 Před 5 měsíci +12

      There is a slippery slope between an exploratory study and p-hacking. It is legitimate to explore a dataset, look at it in different ways, and generate different hypotheses for later validation on another dataset. E.g. look at small dataset A, get an idea of what could be the case, then collect a large independent dataset B to test that hypothesis. After all, this is technically how all hypotheses are generated: at the very moment you decide to focus on one hypothesis to test, you exclude several other hypotheses you could have had. Publishing the step of looking at dataset A separately is not p-hacking.
      Similarly, you can also slice up dataset A and test for multiple things, but then you have to correct for multiple testing across literally everything you looked at. I.e., for something you find to still count as significant, the required p-value gets smaller and smaller the more things you tested for, as in the sketch below.
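The simplest version of that correction is Bonferroni: divide the significance threshold by the number of tests you ran. A toy sketch with made-up p-values (other corrections such as Holm or Benjamini-Hochberg are less conservative, but the idea is the same):

```python
# Toy example of a multiple-testing correction (hypothetical p-values).
alpha = 0.05
pvalues = [0.004, 0.03, 0.049, 0.20, 0.61]   # five slices of the same dataset
m = len(pvalues)

naive = [p for p in pvalues if p < alpha]            # uncorrected: 3 "hits"
bonferroni = [p for p in pvalues if p < alpha / m]   # corrected threshold 0.01: only 1 survives

print("uncorrected 'significant' results:", naive)
print("significant after Bonferroni:", bonferroni)
```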

    • @dshin83
      @dshin83 Před 5 měsíci

      @@doctorlolchicken7478 Say you work for a car insurance company and they ask you to find population segments where your rates are mispriced. The data is fixed, and there are endless possible hypotheses (age, gender, car color, number of previous accidents, etc). How do you proceed?

    • @kimpeater1
      @kimpeater1 Před 5 měsíci +5

      Well they were rewarded with money and fame for it. Why would they think it was wrong?

  • @ronwoodward716
    @ronwoodward716 Před 4 měsíci +10

    We should publish and reward people who get null results. Knowing what is not true and does not work is actually valuable.

    • @WmTyndale
      @WmTyndale Před 4 měsíci

      a single rejection of the null hypothesis establishes and proves nothing. It just casts suspicion. All inductive or statistical hypotheses must be established by repeated observation and testing, even the absolute rejection of the null.

    • @sycration
      @sycration Před 5 dny

      Pre registration of methods is a pretty simple system to implement and I think it would solve the problem of positive result bias pretty well

  • @halneufmille
    @halneufmille Před 5 měsíci +30

    Heard at an academic conference:
    Questioner: Your regression is terrible!
    Presenter: We have done worse together.

  • @hipsterbm5134
    @hipsterbm5134 Před 5 měsíci +22

    Lol, funny, my advisor told me to do all these things too, but never was dumb enough to send those requests in emails... only group meetings

    • @rainman1242
      @rainman1242 Před 5 měsíci

      yeah, I guess that how these so-called 'sciences' will solve the problem: learn to cheat better

  • @IsomerSoma
    @IsomerSoma Před 5 měsíci +44

    It should be industry standard for authors to first submit their hypothesis and methodology to a journal, be accepted on the basis of the quality of these, and then be guaranteed publication no matter whether the results are positive, inconclusive or negative. With today's practice the p-value isn't very meaningful; even if results aren't tampered with, the distortion happens anyway, as negatives are almost never published. I am astonished this isn't done yet, even though it isn't a new idea at all. It's such an obvious flaw in today's science and really not that hard to fix.

    • @sidharthghoshal
      @sidharthghoshal Před 5 měsíci +3

      I like this idea a lot

    • @RichardJActon
      @RichardJActon Před 5 měsíci +7

      This concept is known as 'registered reports' check out the centre for open science's pages on it for some more details. Quite a number of journals now accept submissions in this form (even high profile ones like Nature) but they are not yet the standard - unfortunately. There is also some talk of integrating this into the grant process - i.e. funding at least part of a study based on the proposed hypothesis and methodology. This avoids too much overlap with the review of grants and the study registrations as they may otherwise end up being partially redundant.

    • @lanceindependent
      @lanceindependent Před 5 měsíci

      Yes, registered reports are a great idea.

    • @samm7334
      @samm7334 Před 5 měsíci +1

      @@RichardJActon I wonder if this could be pushed further into some form of log. Where you not only register the study but also update the most important information like the value of n. Each change of n would require a justification that could be verified by editors (if they bothered to audit).

    • @RichardJActon
      @RichardJActon Před 5 měsíci +3

      @@samm7334 In short, yes. Using git and something like semantic versioning: major version number bumps only occur after a review, minors for small corrigenda/errata, patches for inconsequential stuff like typos; tie this in with versioned DOIs. Submit papers as literate programming documents where all the stats and graphs are generated by the code in the document. That way, to get the final paper, this document has to build and generate all the outputs from the inputs as a part of the submission process. The method submitted with the registered report would ideally include running the code on example data, where possible for both a positive and a negative case. Then once you generate the real data, all you do is swap the example data for the real data and re-run the code. Then you can add another section for incidental findings if needed, but it's clearly separate from the tested prediction.

  • @watsonwrote
    @watsonwrote Před 5 měsíci +35

    I wish we could reward research with valuable or interesting null results. Knowing that there may not be any relationship between the price of food and satisfaction should be worthwhile on its own.
    We should also incentivize replicating studies.
    I'm sure the worry is that people will not produce new science and will continuously repeat old studies or papers about null results without surprising insights. But we've gone too far in the other direction, where researchers are heavily incentivized to fake results, chase fads, and churn out lots of low-quality papers that are hopefully "new" or "interesting" enough to keep the researchers employed.

    • @angelikaskoroszyn8495
      @angelikaskoroszyn8495 Před 5 měsíci +3

      For me it was interesting. I thought that paying more would trigger some kind of sunk-cost reaction, but it looks like humans are more resilient than I thought.
      Which is great

  • @louisnemzer6801
    @louisnemzer6801 Před 5 měsíci +18

    Great job! Things will only get better once we call out bad practices, and not treat scientists as infallible

  • @sage5296
    @sage5296 Před 5 měsíci +9

    remember that by definition, a p value of 0.05 is 5% likely to happen just by pure chance even if your hypothesis is wrong. If you test your hypothesis on 20 subsets of your data, you might find one that's p < 0.05 just by chance.
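The numbers behind that are worth spelling out. A back-of-the-envelope sketch, assuming the 20 tests are independent:

```python
# If each of 20 independent tests has a 5% false-positive rate, the chance of at
# least one spurious "significant" result is 1 - 0.95**20, roughly 64%.
alpha, tests = 0.05, 20
print(f"P(at least one false positive) = {1 - (1 - alpha) ** tests:.2f}")   # ~0.64
```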

    • @petermarksteiner7754
      @petermarksteiner7754 Před 5 měsíci +1

      There is an xkcd comic about just this: it reports a correlation between jelly beans and acne.

    • @ps.2
      @ps.2 Před 2 měsíci +1

      If you do 20 tests _and make the appropriate statistical adjustment for multiple comparisons,_ and get a result with an adjusted _p'

    • @davidrosen2869
      @davidrosen2869 Před 2 měsíci

      Actually, the p-value starts with the assumption that your hypothesis is wrong (i.e. the null hypothesis is true) and tells you how frequently random chance would produce the results you have observed. So a p-value of 0.05 suggests that random chance would produce the observed results once in 20 identical experiments. The p-value tells you nothing about the probability that your results are due to chance. That is a Bayesian question and p-values are frequentist statistics.

  • @uroosaimtiaz
    @uroosaimtiaz Před 5 měsíci +4

    Pete, I seriously love your content and suggest it to all grad students and aspiring researchers I know. You deliver the content in a unique way, and the focus of your channel is so damn important. People need to know!!

  • @denyssheremet8496
    @denyssheremet8496 Před 4 měsíci +3

    Awesome job exposing and shaming these frauds! I think that is part of the solution to fixing the academic system. Because if nobody exposes and shames frauds, there is almost no downside to faking your data, and many people do it. On the other hand, the more fraudsters are exposed, shamed, and fired, the less enticing it will be for others to commit fraud. Keep up the good work!

  • @DocBree13
    @DocBree13 Před 4 měsíci +16

    As a former graduate student in statistics, this was very painful to watch. I believe that qualified statisticians should be consulted for statistical analysis of scientific studies, and that statisticians should be very involved in the peer review process prior to scientific journal publication.

    • @kap4020
      @kap4020 Před 4 měsíci +3

      Getting statisticians involved is a great idea, and unfortunately, won't work under the current framework.
      Delaying publishing, splitting authorship, and working with someone on equal footing whose primary job is to find gaps in my research?

    • @paulw5039
      @paulw5039 Před 3 měsíci

      @@kap4020 Finding gaps in your research is the primary goal of peer review. Done poorly in many journals, but most of the top ones take this seriously, fortunately. I submitted to Cell recently and the reviewers found many 'gaps', for which I am very grateful, because I actually want to write as accurate and scientifically rigorous a paper as possible. This is how science should be done.

    • @kubetail12
      @kubetail12 Před 3 měsíci

      I have mixed feelings on the idea because depending on the research one might need experts in other fields to vet one’s work. The lines between science and engineering subfields are blurred. So, the list of experts needed grows quickly. Plus, I believe most academics won’t bother to check someone’s work unless they are getting some of that research money and/or included in the author list.
      I can only speak for the physical sciences, and I know that scientists and engineers could brush up on their statistics. Either they need to take more statistics or take the standard introductory sequence that statistics majors take, where the fundamentals are explored in more depth. I regret taking the one-semester programming course that a lot of non-computer-science majors take instead of the two-semester sequence that CS majors took.
      But statistics can only go so far; bad data is bad data.

  • @LanceHKW
    @LanceHKW Před 5 měsíci +21

    My 12yo daughter loves research and your videos have opened her eyes! Thank you!

  • @alfredwong1489
    @alfredwong1489 Před 5 měsíci +7

    Preregistration of hypotheses and compulsory publication of non-findings.

  • @fernandomendez69
    @fernandomendez69 Před 5 měsíci +6

    Just like there's "jury duty", there should be "replication analysis duty" (in addition to reviewing). Right now, only competitors have incentives to critically analyze others' research, and when they do, there is suspicion of bias (which sometimes is real).
    The idea is that if you're on duty, you have to endorse, criticize, or say you cannot do either.

  • @marilyngail4010
    @marilyngail4010 Před 4 měsíci +2

    Thank you, Pete for sharing a refreshing point of view - what can we do to incentivize PROFESSORS to be truthful? - how ironic that we have to even ask this question - and good for you to not only share the problem, but move us toward brainstorming solutions - this is the kind of thinking that can help people grow toward healing and restoration.

  • @brucebaker810
    @brucebaker810 Před 5 měsíci +15

    Anyone noting the satisfaction of postdocs paid $8 and those paid $0?
    Those paid $0 seem to be more willing to participate.
    Is this consistent with his hypothesis?

    • @ohsweetmystery
      @ohsweetmystery Před 4 měsíci

      No. Postdocs and PhD students used to get paid much less, and corruption was not nearly the problem it is now. The younger generations have grown up in an environment where cheating and stealing (e.g. digital music and movies) carry no negative connotations; this is why there are so many corrupt people now.

    • @Alphabetatralala
      @Alphabetatralala Před 3 měsíci

      @@ohsweetmystery No.

  • @mrgmsrd
    @mrgmsrd Před 5 měsíci +5

    I'm a registered dietitian and Wansink's work was regarded as revolutionary when I was in grad school and shortly after. Very disappointing.

  • @catyatzee4143
    @catyatzee4143 Před 5 měsíci +13

    If the original hypothesis doesn't work out in a study, but a new revelation is found, why can't a scientist just roll with that? Or must they do a whole new study with new parameters to ensure it's valid?
    Just a non-scientist wondering!

    • @MR-ym3hg
      @MR-ym3hg Před 5 měsíci +18

      A great question! Going off the cuff here, but I think it comes down to what that p value really means. p = 0.05 means that if there were no real effect, you'd still see data like this about 5% of the time by coincidence. So if you slice up the data 20+ ways, you're essentially hunting for that coincidence, and then not telling people how many times you jumbled your analysis, so it seems like the new "result" is what you were looking for from the outset. You're right that there's a thread of a good idea here: it also might not be coincidence, but you would have to do a new study, with new data that you didn't slice and dice, to show that.
      Edit: scare quotes on the word 'result'
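
      A rough sketch of the "slice it 20+ ways" problem, assuming for simplicity 20 independent sub-analyses of data that contains no real effect; the group sizes and seed are arbitrary:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      n_studies, n_slices, lucky_studies = 2_000, 20, 0

      for _ in range(n_studies):
          # Each "slice" compares two groups drawn from the same distribution (nothing to find).
          pvals = [stats.ttest_ind(rng.normal(size=25), rng.normal(size=25)).pvalue
                   for _ in range(n_slices)]
          if min(pvals) < 0.05:
              lucky_studies += 1

      # Analytically 1 - 0.95**20 is about 0.64; the simulation agrees: most "studies" of pure
      # noise can report at least one "significant" finding if they only mention the hit.
      print(lucky_studies / n_studies)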

    • @Oler-yx7xj
      @Oler-yx7xj Před 5 měsíci +10

      The thing is that this revelation could be due to random chance, so you would have to account for all the possible revelations you looked for when calculating statistical significance, and you would need to be honest in the paper about what you did (which he wasn't) for the calculation to be done correctly.

    • @TickiMign
      @TickiMign Před 5 měsíci +5

      There are also ways to publish exploratory studies, but it has to be done honestly. If you want to describe a phenomenon/population/disorder/etc. that is not yet well known, and you do not have many hypotheses about the results, you can run a study where you gather a large amount of data, analyse it for exploratory relationships, and then publish a descriptive paper that says "we don't know a lot about this yet, here are some first explorations". The key part, however, is being honest about it, and not chopping your dataset into tens of subgroups just to find some effects. In truth, real exploratory research quite often produces some form of interesting result, even if there are no group differences or shiny new effects, because we still gain new knowledge about a phenomenon or a population. But if you design an experimental study testing something as specific as the effect of paying half price on the satisfaction given by a meal, and you base your hypothesis and method on flawed, p-hacked literature, it's not really surprising that nothing comes out of it "naturally".
      So yes, @catyatzee4143, it's true that new revelations are often found by accident, and it's exciting when that happens! But it should always be made clear when that is the case. Scientists should not try to pass these discoveries off as something they always knew would come out, based on their vast knowledge and great research expertise (lol). And more importantly, as @MR-ym3hg said, there have to be follow-up studies where researchers try to replicate the results and dig deeper into them.

    • @iantingen
      @iantingen Před 5 měsíci +1

      Everything that everyone said above 💯
      The problem with coincidental findings in an experiment is exactly that - they're coincidental!
      These kinds of findings are specifically atheoretical - the experiment was not testing for the random finding, it just sort of jumped into being.
      If you want to test for the coincidental finding later, and you replicate appropriately, that's great!
      But you can't say it's a genuine finding until it's tested.
      TL;DR: real science is resource intensive, and coincidental findings are not "bonus efficiency"

    • @rainman1242
      @rainman1242 Před 5 měsíci +5

      @@JS-oh2dp Nice explanation, a simple thumb up did not feel like enough of an appreciation for it :-)

  • @jorgeduardoardila
    @jorgeduardoardila Před 3 měsíci

    Great !!! Great !!! thanks Peter Judo, new to your channel! excellent content! Thanks, seriously, your content does a lot of good to humanity!!!

  • @eldrago19
    @eldrago19 Před 4 měsíci +2

    7:22 I think the best bit about this is that he was told how bad this looked, given an opportunity to say it was a joke, and he still replies "I meant it all as serious"!

  • @JohnRandomness105
    @JohnRandomness105 Před 5 měsíci +4

    4:20 Why was that called a failed experiment? An experiment that produces results opposite to those expected has not failed. An experiment fails only when it gives no answer, or when something renders the conclusion invalid, or the like.
    9:00 A two-sided P < 0.05 corresponds to just under two standard deviations from the mean under a normal distribution.
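
    A one-line check of that correspondence, assuming a two-sided test under a normal distribution:

    from scipy import stats
    print(stats.norm.ppf(0.975))   # about 1.96, so two-sided p < 0.05 means |z| > ~2 standard deviations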

  • @frednewman2162
    @frednewman2162 Před 5 měsíci +8

    I am looking forward to how you think it can be fixed! This whole topic goes far beyond even just academic issues, but hits all walks of life!

    • @sunway1374
      @sunway1374 Před 5 měsíci

      I think there are two ways. Both are not easy.
      The first is to change the current culture and system of reward (and punishment) in research. Our current culture celebrates and rewards researchers who work on trendy topics that break new ground, while ignoring the majority of hardworking researchers doing normal (Kuhn's terminology) science. Without the latter, the former could not rest on firm foundations, and the practical applications that benefit humanity could not be built from it. Both types of research are important.
      The second is to change, or at least emphasise, the good values that individuals hold themselves accountable to: commitment to the truth, service to others, doing no harm, sacrifice, etc.

  • @bakawaki
    @bakawaki Před 3 měsíci

    Thanks for making this video and getting this story heard

  • @DoctorMagoo111
    @DoctorMagoo111 Před 5 měsíci +4

    The wild part of the first dataset slicing example is that there are ways he could have looked at the data more ethically. If he did not want to just publish the null result, I could understand trimming the outliers and publishing both together in a transparent way. Or, he could have had the student investigate the data to try and inform the development of a different hypothesis for a follow up study as opposed to butchering the data for the sake of significance sausage.
    Null results are underrated anyways. A good experiment can still produce profound null results. One of my favorite papers I've had the pleasure of working on is largely a null result because of the value getting that result still provides.

  • @EricAwful313
    @EricAwful313 Před 5 měsíci +7

    How about exposing the journals that still allow these papers to be published? Don't they know what he's all about at this point?

  • @jdenmark1287
    @jdenmark1287 Před 5 měsíci +10

    I think you have to reform the attitudes of university administrators and professors. Too many plagiarized and hacked their way into positions of authority and very few are willing to give up that power.

    • @WmTyndale
      @WmTyndale Před 4 měsíci +1

      clever but brutish animals

  • @grahamvincent6977
    @grahamvincent6977 Před 2 měsíci

    Pete, this is the third or fourth presentation I've seen from you, and your sincerity comes across very convincingly, as does your superb diction. It is only in this film (the others dealt with the Gino scandal) that you turn to the possibility of a systemic problem, on which I agree with you. Clearly, solid data and reliable experimental work are the bedrock of your quest to clean up research science, and I cannot help but speculate about where the problems might lie. And that is precisely the kind of surmising that you want to eradicate. I wish you good fortune and will follow your work with interest. You have now earned a subscription. Well done.

  • @jeniferb183
    @jeniferb183 Před 5 měsíci +3

    Good stuff. Just because you asked for suggestions, I think you could maybe do a long episode with Chris Kavanaugh and Matt Browne from Decoding the Gurus, they do Decoding Academia episodes that are great. They had a panel that talked about open science initiatives, etc. Can’t seem to find it on CZcams.
    Thanks for all of your hard work.

  • @fatedtolive667
    @fatedtolive667 Před 5 měsíci +6

    Common practice when I was studying was for lecturers to force us to use source material published by writers I'd never heard of. Being a bit of a psychology nerd, I'd read work by famous, unknown, and infamous researchers simply out of interest, so being presented with quotable 'authorities' I'd never heard of made me curious, and I did some digging. The long and short of it was that I found a long-established mutual back-scratching circle: lecturers would plagiarise researchers' work, write poorly written books built on poorly understood psychology without crediting the originator (or crediting them in terms so vague it was more a polite mention than credit), and each lecturer in the circle would then force their students to cite other members of the circle as their quotable sources. Not surprisingly, the majority of my fellow students came away with the understanding that cheating is fine so long as you don't get caught, or have the power/connections to squash your accusers. Perhaps starting at this basic level might be the way to go?

    • @kap4020
      @kap4020 Před 4 měsíci

      was this a U.S university?

    • @fatedtolive667
      @fatedtolive667 Před 4 měsíci

      @@kap4020 I share awareness of a significant academic fraud, and your take away question is, where did it happen? OK.

    • @johng4093
      @johng4093 Před 3 měsíci

      ​@@fatedtolive667It seems like a reasonable question IMO.

  • @squarehead6c1
    @squarehead6c1 Před 5 měsíci +4

    I think universities should make sure that a graduating PhD candidate knows at least a minimum of research methodology and research ethics. It probably differs from institution to institution, and discipline to discipline, and maybe it has gotten better over the years, but when I graduated (in a STEM field) in the early 2000s these skills were optional, not compulsory. I am not even sure I could have taken a class on research ethics at all. Now, years later, at the agency where I work, an ethics board has recently been established and we have had courses and workshops on these issues to guarantee the quality of our research. Hopefully we as a global research community are maturing. Good work Pete, for bringing this up.

    • @RichardJActon
      @RichardJActon Před 5 měsíci +2

      My institution has research integrity seminars as part of the mandatory training for all new PhD students. I think this is becoming more common, but it won't fix the incentives problem.

    • @squarehead6c1
      @squarehead6c1 Před 5 měsíci +1

      @@RichardJActon Yeah, incentives are an additional dimension. Where I work now we are only vaguely affected by the "publish or perish" paradigm; it is secondary to solving the problems at hand for our clients, which are mostly other agencies.

  • @FergalByrne
    @FergalByrne Před 5 měsíci +2

    Glad to hear the plan, several others must have advised the same - use your platform to help fix this rubbish. You have the majority on your side

  • @Chichi-sl2mq
    @Chichi-sl2mq Před 5 měsíci

    I really love your content. I really learn a lot. Thank you for your work.

  • @lisboastory1212
    @lisboastory1212 Před 5 měsíci +3

    As an academic, you MUST publish whatever, and a LOT of it, in order to get a job or a scholarship. And since readers only "trust" what they read and exercise no critical thinking, you can only escape that by being an outsider and doing research on your own. Brian Wansink is not the problem; the problem is the system itself, the university as a whole, and also people who "trust science" (which is an oxymoron) or "trust" whatever is published in a "trusted" journal. It is also problematic to think you are a better researcher just because you have published X articles in Q1 or Q2 journals. Academia is broken. Thank God the internet exists; there is a lot of excellent material out there, in public repositories and foreign journals. Btw, this channel is cool, I have subscribed.

  • @abebuenodemesquita8111
    @abebuenodemesquita8111 Před 3 měsíci +3

    to be clear, he published that paper with his daughter obviously so that she can put it on her college applications

  • @prince.c8458
    @prince.c8458 Před 5 měsíci

    Thanks for making these videos !

  • @robertmatch6550
    @robertmatch6550 Před 2 měsíci

    Thanks to the comments explaining the term "p-hacking". This accelerates the crumbling of my respect for academic work done over the internet, as well as anything else done over the internet.

  • @lukas_613
    @lukas_613 Před 5 měsíci +31

    I think some percentage of all research funding should go towards a kind of science police. There are already policies in place for helping out underrepresented minorities in science. I think we should add "honest scientists" to that list😁

    • @RichardJActon
      @RichardJActon Před 5 měsíci +10

      I advocate for a version of this that I call 'review bounties', modeled on the concept of bug bounties in software development. It is designed to solve two problems: unpaid reviewing and the lack of incentives to find errors in published work. Instead of an article-processing fee, someone wanting to publish posts a review bounty; someone in a role like that of a journal editor in the current system arranges the review and hosts the resulting publication for a cut of the bounty; the reviewers each get a cut if the editor judges their review to be of sufficient quality; and a portion of the bounty is retained as a bug bounty.
      If someone later finds an error in the paper, they can claim the remaining funds, provided some combination of the reviewers, editor, and authors agree there is an error. If the bounty goes unclaimed, it accrues back to the authors.
      This would also let grant-awarding bodies, or other parties interested in the quality of a result, add to the bug-bounty pot, allowing funders to incentivize correctness and quality in the publications arising from their awards in a way they currently cannot. It could also let companies or investors with a financial interest in a result, e.g. one underpinning the development of a new drug, pay for additional outside scrutiny so they avoid sinking money into a project based on a flawed study.

    • @TheThreatenedSwan
      @TheThreatenedSwan Před 5 měsíci +2

      The problem is that the people who are already supposed to do this are bad at it; in academia, those who can't do are disproportionately bad, same as with this guy. You would basically need to force certain people into the job and build an institution with good incentive structures, though one of the clearest ways to get a good system is to strip out the bloat and let a good system emerge. People assume that because children today are taught to read at a certain age through schooling, children in the past were incapable of learning it; in the same way, people act as if you can't have science without all this top-down regulation and bureaucracy, when in reality it's the other way around: the bureaucracy is a sign that an area of society has become high status, so people try to get in on it for personal gain.

  • @ravenrose4869
    @ravenrose4869 Před 5 měsíci +8

    Researchers can be very competitive, which leads to discouragement if they think they can't win whatever game they're playing (get published). This is made worse by sponsors and journals only promoting positive results. There needs to be a way for null results to be published and celebrated, because knowing if an answer is wrong is just as important as knowing if it's right.

    • @georgehugh3455
      @georgehugh3455 Před 5 měsíci +2

      It's not just positive results the journals are looking for, but frequently there is an agenda (depending on the subject and what's "hot").

  • @callawaycass5148
    @callawaycass5148 Před 4 měsíci +1

    Great work illuminating only one of the problems with "science" today!

  • @icalrox
    @icalrox Před 5 měsíci

    Thanks for the heads up. 🙏🏾

  • @garrettbiehle9714
    @garrettbiehle9714 Před 5 měsíci +14

    Fascinating and distressing. And yet we also have the opposite problem. I have a colleague with an interesting yet speculative paper in physics. The physics and math are complicated and correct, but not in line with the mainstream. He cannot get it published in any journal. How do peer-reviewed journals publish trash and keep out interesting work? What criteria do they use? It's a rhetorical question.

    • @drelahnniedikhoff8005
      @drelahnniedikhoff8005 Před 4 měsíci +1

      “does it serve me” and “maybe do i like him”, i think its the former…

  • @2013Arcturus
    @2013Arcturus Před 5 měsíci +3

    Maybe if there was a way to set up a system that rewarded positive replication of your work by only unaffiliated researchers you'd fix the system. No clue how you'd set that up or what it would look like, but if somehow you could create a reward structure around that, you'd fix science in general.

  • @dennisweidner288
    @dennisweidner288 Před 2 měsíci

    Thanks for making this so clear.

  • @IslandHermit
    @IslandHermit Před 4 měsíci +4

    Not a particularly hot take, but we really need to do more to encourage the publication and rewarding of null results.

  • @jonahhekmatyar
    @jonahhekmatyar Před 5 měsíci +6

    So I have this guy to blame for shitty school lunches?

  • @jefft8597
    @jefft8597 Před 5 měsíci +15

    It seems to me that some of these studies like Elmo stickers on apples are pretty silly and Cornell researchers have way too much time on their hands.

  • @ruperterskin2117
    @ruperterskin2117 Před 5 měsíci

    Right on. Thanks for sharing.

  • @ivani3237
    @ivani3237 Před 4 měsíci

    I work in the data analytics sphere, and it is constantly the same issue everywhere: first, you have a hypothesis you want to prove; second, you find the piece of data which supports and proves your idea (and discard all the data which does not support your hypothesis).

  • @brandoncyoung
    @brandoncyoung Před 5 měsíci +3

    Man i love nerd drama, i have no idea who he is talking about but its so juicy.

  • @TheSimChannel
    @TheSimChannel Před 5 měsíci +3

    Two things I think could have been added to this video: (1) an explanation for non-scientists of why the salami-slicing of data is problematic (I don't think it's obvious to a layperson what exactly the problem is with this approach), and (2) the note that this is a decent approach for an exploratory study to develop ideas/hypotheses, but that you then need to collect a new dataset specifically for that hypothesis, to test whether it holds without p-hacking.

    • @johntippin
      @johntippin Před 5 měsíci

      Could you explain to me why salami-slicing is bad?

    • @TheSimChannel
      @TheSimChannel Před 5 měsíci +2

      @@johntippin here you go:
      There is randomness (noise) in every measurement you make; even clearly one-sided empirical observations can happen by chance in a given sample. Say you compare one variable between two groups: any difference you measure between the groups could be the result of random chance. Luckily, we can use math to determine the probability of getting an outcome at least as extreme as the one observed if there were actually no difference between the two groups at all. That probability is the p-value. Thus p = 0.05 means there is only a 5% chance of seeing data like this when no real difference exists, which is often taken as the criterion for accepting a hypothesis.
      The problem with salami-slicing is that you now perform this test not for one variable but for many variables, and you apply the 5% criterion to each of them. In other words, salami-slicing is another name for "multiple testing". If you look at 100 different variables in the dataset and apply the 5% criterion to each, then on average you will find about 5 variables that show a "significant" (p < 0.05) effect purely by chance, even when there is no real effect at all.

  • @inqwit1
    @inqwit1 Před 3 měsíci

    Subscribed. Looking forward to learning more. P-hacking is endemic in many of the "truths" and "facts" touted in support of current political stands.

  • @melaniehickey236
    @melaniehickey236 Před 3 měsíci

    You are quite scintillating in your delivery. You talk fast, but you analyze faster. Keep going. It is refreshing to see what happens when science becomes a con job instead of a way to a better future.

  • @etiennebruere1922
    @etiennebruere1922 Před 5 měsíci +15

    My teacher of Statistics called the people who adhere to p

  • @SaveThatMoney411
    @SaveThatMoney411 Před 5 měsíci +3

    Abolish the h-index

  • @dimitargueorguiev9088
    @dimitargueorguiev9088 Před 2 měsíci

    that was interesting. thanks for this report.

  • @Evergreen64
    @Evergreen64 Před 4 měsíci +2

    "How can we ever get anything published if you're going to be so damn ethical?" And. "What's P-Hacking?" Nothing like outing yourself. I guess we just need to keep a closer eye on this kind of thing.

  • @AncoraImparoPiper
    @AncoraImparoPiper Před 5 měsíci +3

    I feel sorry for his daughter being pulled into the vortex of bad science. I hope she realises before too long how she is being duped, but it seems she is already being groomed by her father to go down the same path of bad science, submitting his guided research projects to school science fairs and so on.

    • @Evanstonian60201
      @Evanstonian60201 Před 5 měsíci

      Sadly, even obviously ghostwritten papers in obviously worthless pay-to-publish journals can be a significant help in getting into certain programs at certain colleges. Some admissions offices really like to see "published research" without thinking much about the chances that a high-school student would, primarily by her own efforts, generate worthwhile research of general interest publishable in an academic journal. In that sense, sadly, for the immediate purpose she's being helped, not duped.

  • @greenfloatingtoad
    @greenfloatingtoad Před 5 měsíci +3

    What institutional changes could prevent incentivizing cheating?

    • @lanceindependent
      @lanceindependent Před 5 měsíci +3

      Career advancement not being based so much on journal publications, and accepting fewer people into PhDs so there's less competition.

  • @WTFIsThis4YT
    @WTFIsThis4YT Před 4 měsíci

    You have the common sense of a genuine and honest person and I appreciate that about these videos. Thank you and please keep going. (I'm not even an academic 🤭)

  • @allanrousselle
    @allanrousselle Před 4 měsíci

    I *love* the fact that you give props to the author of the source material that you based this episode on. It's almost like you're being... intellectually honest!

  • @eggchipsnbeans
    @eggchipsnbeans Před 5 měsíci +5

    Bad Science is excellent, read the whole book!

    • @RichardJActon
      @RichardJActon Před 5 měsíci +2

      As is Bad Pharma, it's also excellent. Ben was great on this stuff, but he seems to have been relatively quiet of late; I wonder what he's up to these days.

    • @eggchipsnbeans
      @eggchipsnbeans Před 5 měsíci

      @@RichardJActon I must read Bad Pharma

  • @boundedlyrational
    @boundedlyrational Před 5 měsíci +12

    By no means is this an excuse; rather, it is meant as a precaution. Having met Brian, he was more of an activist than a scientist. He believed in the cause behind his research (choosing healthy foods over unhealthy foods), which led him to rationalize his methods. To spell out the cautionary part of my comment, and perhaps the obvious: we need to be especially vigilant for bias and methodological shortcuts when the research is aligned with our beliefs.

    • @user-go4vz2ir6r
      @user-go4vz2ir6r Před 4 měsíci

      So..."The ends, justify the means"? smh

    • @boundedlyrational
      @boundedlyrational Před 4 měsíci

      @@user-go4vz2ir6r that is not what I said, nor do I believe it is what Brian would have thought he was doing. Reread what I wrote, or here read this briefer explanation: as an activist, he may have been naive to the (strong) role of bias in his method.

    • @user-go4vz2ir6r
      @user-go4vz2ir6r Před 4 měsíci

      Naive? F'er has a Ph.D and looks to be in his late 40's. Making excuses is exactly how stuff like this happens.@@boundedlyrational

    • @boundedlyrational
      @boundedlyrational Před 4 měsíci +2

      ​@@user-go4vz2ir6r You seem very angry and this appears to be affecting your reading comprehension. I am not making excuses for him. Let me try again, using even fewer words this time: a scientist who is an activist may be vulnerable to bias.

    • @user-go4vz2ir6r
      @user-go4vz2ir6r Před 4 měsíci

      So faking results is "OK" and get out with the concern bullying. If you love frauds so so much, ask your self how much of what you believe is truth, and how much is BS. Then look in the mirror.@@boundedlyrational

  • @playapapapa23
    @playapapapa23 Před 5 měsíci +1

    Fellow academic here. I love your vids man. Keep up the good work.

  • @l.w.paradis2108
    @l.w.paradis2108 Před dnem

    That there was no relationship between the price of a buffet and the satisfaction with the buffet IS a STUPENDOUS result.

  • @capt.bart.roberts4975
    @capt.bart.roberts4975 Před 5 měsíci +3

    There's never enough good Italian pizza, doesn't matter if you paid $4 or $8.

    • @capt.bart.roberts4975
      @capt.bart.roberts4975 Před 5 měsíci +1

      No pineapple!🙀

    • @lordchaa1598
      @lordchaa1598 Před 5 měsíci +3

      @@capt.bart.roberts4975, he said Italian Pizza, so the lack of Pineapple is implied, lol 😂

    • @georgehugh3455
      @georgehugh3455 Před 5 měsíci +1

      If you don't get any cannoli, the reviews are going to suffer (at least that's my hypothesis...).

  • @hoi-polloi1863
    @hoi-polloi1863 Před 5 měsíci +4

    Hey, nice video! Have a question; not being disingenuous, honest. What's actually wrong with p-slicing, if the slices are big enough to be interesting? Or put another way, is it so wrong to start with data and look for patterns that you hadn't anticipated?

    • @desertdude540
      @desertdude540 Před 5 měsíci +1

      There's nothing wrong with doing it to form a hypothesis, but any experiment to test that hypothesis must rely entirely on new data.

    • @hoi-polloi1863
      @hoi-polloi1863 Před 5 měsíci

      @@desertdude540 Does that really follow? I mean, other teams should test the theory later by getting new data, but in the case we saw in the video (pizza buffet goers) why can't the researcher say "Here is the data I gathered and here are the conclusions I drew from it"?

    • @angelikaskoroszyn8495
      @angelikaskoroszyn8495 Před 5 měsíci +1

      Carefully chosen data can show anything we want.
      For example, there are disproportionately fewer atheists in American prisons than Christians. Does that mean atheists have fewer criminal tendencies than Christians?
      Not really.
      Atheists tend to be wealthier and more educated, and both wealth and education are associated with lower incarceration regardless of someone's belief system. On the one hand, wealthy people don't have to get involved in crime to survive; on the other hand, a well-paid lawyer lets you avoid prison time whether you're guilty or not.
      Not to mention that participating in religious programs can get you an early release for "good behaviour". It's more worthwhile for an atheist to lie about being a believer than for a believer to lie about rejecting their God.

    • @Raen5
      @Raen5 Před 5 měsíci

      It's something of a statistical problem. If you look for one specific relation, there's a chance it shows up in your sample by coincidence even if that relation doesn't hold generally; that's what the p-value is about: how likely data this extreme would be to occur by chance. The problem with p-hacking is that if you keep looking for more and more things, one of them is likely to occur by chance, which makes the study much less statistically significant than it appears.

  • @LiquidAudio
    @LiquidAudio Před 5 měsíci

    Absolutely fascinating case, I’d not heard of this guy.

  • @margretrosenberg420
    @margretrosenberg420 Před 5 měsíci +1

    I had never encountered the term "p-hacking" before, but I had certainly encountered the concept; we used to call it "cherry-picking."
    I am not a scientist, merely an intelligent layperson, but it seems to me this brings up several issues.
    1. There should be (and I'm pretty sure there are) two kinds of experiments:
    • Experiments designed to test hypotheses, which are the kind we're talking about here.
    • Experiments designed to answer questions. Say, "We've seen a sudden rise in deaths from (fill in the blank); what is causing it?" Good question, and one that needs answering, but we don't have enough information yet to form a hypothesis, so we need to collect information (AKA "data") that will suggest a hypothesis we can then test.
    Am I wrong about this? Are scientists ALWAYS supposed to begin with a hypothesis?
    2. Scientists are judged by the number of papers they publish. Am I alone in thinking that this sounds lazy? It actively SELECTS for p-hacking and discriminates against ethical scientists, all because it's an easy statistic to get; it requires no one to dig deeper. I would be very interested to know what happened to the ethical grad student Wansink shamed in his blog post.
    3. This is related to my second point: Apparently journals aren't interested in publishing null results. I can certainly understand this; if they did publish null results they could easily be overwhelmed. But null results are important; a list of what doesn't work is a good first step to finding what does work. And if scientists had a way to publish their null results there would be less pressure to engage in p-hacking. Perhaps someone should establish a new journal that looks only at null results.

  • @AM-zy6wf
    @AM-zy6wf Před 5 měsíci +8

    Can you talk about the recent plagiarism accusations against Harvard's Claudine Gay? It is difficult for a normal person to follow.

    • @PeteJudo1
      @PeteJudo1  Před 5 měsíci +5

      Coming up!

    • @iantingen
      @iantingen Před 5 měsíci +1

      Highly recommend what James Heathers wrote about that situation; great resource

    • @lanceindependent
      @lanceindependent Před 5 měsíci

      ​@@iantingenHeathers is great. Where can I find that?

  • @DannyHatcherTech
    @DannyHatcherTech Před 5 měsíci +2

    Could you link the source in the description?

  • @Chiquepeace
    @Chiquepeace Před 4 měsíci +1

    another great vid, thanks for your excellent work covering and presenting these stories.

  • @diydantex6150
    @diydantex6150 Před 4 měsíci

    Thanks for the video