Academia has HOPE! Professor's reaction to Gino (feat. Katy Milkman)

  • Published Jul 1, 2024
  • Katy Milkman is one of the world's leading behavioral scientists. In this video I get her reaction to the shocking Harvard fake-data scandal involving Francesca Gino, and we also talk about megastudies, which are a really exciting way to do better research.
    If you want Katy's book: amzn.to/45lKSCA
    My Website: petejudo.com
    Follow me:
    Behavioral Science Instagram: @petejudoofficial
    Instagram: @petejudo
    Twitter: @petejudo
    LinkedIn: Peter Judodihardjo
    Good tools I actually use:
    For great book summaries: www.Shortform.com/pete
    For balanced news coverage: ground.news/Pete

Comments • 184

  • @heijd • 10 months ago +301

    It would be great if there were a 'null journal' which only publishes negative results. It might make it more acceptable to publish those studies.

    • @amrojjeh • 10 months ago +33

      I like that idea. But journals should still try to be more specialized, since even negative studies should be reviewed as normal

    • @Ganntrey • 10 months ago +30

      I cannot agree more. NEGATIVE RESULTS ARE RESULTS!!!

    • @SimplyWondering • 10 months ago +22

      @@Richrichy Except journals already deal with the quality of studies. Null results aren't bad just because they're null, and if a journal highlights good studies that have null results, it would be an interesting read.

    • @Coz131 • 10 months ago +5

      @@Richrichy Journals can choose good quality studies you know....

    • @haldanesghost • 10 months ago +14

      I had an inside joke with some people in my lab back in the day that I wanted to start up a journal called “The Journal of Failed Experiments and Bad Ideas”.

  • @markbayer1683 • 10 months ago +146

    One of the things we teach in Organizational Ethics is that you (managers) shouldn't put your people in situations where they are incentivized and/or tempted to cheat. If the situation is strong enough, they will behave unethically. The situation that top level academics have been in - for decades - is giving them strong, strong incentives to perform research fraudulently. The rewards for publication are far too tempting, and the checks and balances on how they produce data are far too weak. Given the situation academic leaders have put them in, we should not be surprised that these unethical behaviors are happening. And they are undoubtedly still happening. Gino is merely the highest status tip of what is probably a large iceberg. Probably.

    • @DerekCroxtonWestphalia • 10 months ago +4

      That is not, however, a realistic approach to academia. Pro athletes also have a huge incentive to cheat because the better you are, the more you get paid (and, really, even moreso amateur athletes, because just getting to the pros is the biggest pay bump you will ever get). Unless we make academia a low-status profession with small salaries, or stop rewarding the professors who produce the best results, the incentives to cheat will be there. What we need is a way to catch them or preempt the cheating.

    • @meneldal • 10 months ago +3

      I wouldn't say that the rewards are strong; it's more that if you don't have a good paper every x months, you'd better have tenure or you get fired. And when you've been stuck in academia and have a skill set not very useful for work outside, I get cheating to save your own job.

    • @masterdecats6418 • 10 months ago +2

      They should've had everyone in the field sign at the top of the page for honesty's sake when they publish lmao.
      Fake field. Fake results. Don't treat your employees like shit, and they won't rebel against you.

    • @TheThreatenedSwan • 9 months ago

      Where is such a mechanism, and if it were enforced, why would the semantics of "cheat" not just change?

    • @JavaScripting64 • 5 months ago

      “Probably”

  • @runningwithsimon • 10 months ago +79

    PhD in biomedical research here, but I left academia some time ago. There is hope, but let's not deny that there is a huge problem across all fields that goes much beyond Gino. I have tons of respect for some of my peers that stayed, but they themselves would be the first to admit it's a minefield.
    It's usually recognized that ~30% of publications will have major inconsistencies (i.e., something that can't be replicated independently, or even in the same lab by a different researcher). That may seem like a lot, but I'm sure it's similar in other fields - the biggest difference being that in biomedicine it's easier to replicate the exact same thing, and therefore find inconsistencies (vs., for example, behavioral science - not attacking it, just saying that not spotting the errors doesn't mean they aren't there). One would think that big publications from famous labs in prestigious journals would be immune to that - but it's the opposite. Why would you lie to publish in the Icelandic Journal of Whatever, vs. publishing in Nature or Science? It's NOT all fraud, however, but it'd be naïve to think fraud is not a major factor. I think anyone working in a big lab has seen some suspicious post-doc with results that are just TOO clean - you can't prove anything, but you suspect something is off and avoid collaborating with them.
    IMO there are two big issues in academia. The first is the strong, pervasive incentives (if you want to stay in academia, you NEED that high impact factor to have grants and a chair, etc.). But even for PhD candidates that want out - you need to publish, otherwise you'll stay forever. That doesn't necessarily mean fraud, but it could mean cutting corners and sloppy research. I think the incentive issue is biggest in fields where you have limited career options beyond academia, and even bigger in smaller fields where nobody can or will replicate the exact same experiment.
    The second biggest issue is the frequent lack of supervision, and that most research is done by, basically, noobs. How often did my professor come down to the lab to teach me during my PhD? ...Never! Who designed, ran, analyzed, and wrote papers? ...Me. Sure, we discussed, but at the end of the day, you are driving your research mostly independently. How many years of research experience did I have when I started being independent? ...Not much. When you need to learn a new lab technique, you'll be directed to a lab mate who has at most 2-4 years of experience and is most certainly not an expert on that technique. Heck, I've been labelled an "expert" on some techniques I had barely done twice. Experiments can be poorly designed and/or poorly executed despite good intentions. And peer review? ...Please, give me a break! I have yet to receive a good critique of my work through it - lab discussions with other labmates and my advisor were 1000x more helpful, but have their limitations. Plus, come on - who here hasn't been asked to peer review a paper on behalf of someone else?

    • @whycantiremainanonymous8091 • 10 months ago

      And that postdoc with *too* clean results ends up becoming a bigshot professor, while those who stayed away are out of academia...
      About fields where there's no employment market outside academia, well, much depends on the specifics of the field. You don't find data fraud in the humanities, because there's no data analysis. There's plenty of plain old BS, and of professors using peer review to push rubbish work by their cronies and block rival schools of thought, but that's not fraud, really.

    • @philkim8297 • 7 months ago +1

      The whole system sounds so flawed and in need of a major revamp.

  • @brhelm • 10 months ago +20

    In molecular biology, there is an increasingly common requirement/standard to make the direct outputs of various collection devices available to the public (i.e. "raw data"), including sequences (DNA/RNA sequencing), microarray outputs, flow reports (FACS, etc.), and qPCR raw data. Most of these are fairly standardized or follow just a couple of standards throughout the research community. If some kind of tampering is suspected, then other researchers can go directly to the raw data and attempt to recreate the analysis from scratch. I'm surprised the behavioral sciences haven't required that papers include deposition of the equivalent raw data (surveys, brain scan data, etc.). It doesn't completely eliminate the potential for fraud, but a LOT of fraud is conducted in the "analysis" part of the science--especially where that raw data may be impossible to go back to and/or recreate because of various limitations in collecting the data.

  • @whycantiremainanonymous8091 • 10 months ago +27

    You know, I keep coming back to that megastudy, and it keeps irking me. It reminds me of all the times I read a paper in Behavioral Science (henceforth referred to by the acronym BS), and had a strong feeling that in this field, somebody decided that quantitative methods should _replace,_ not supplement, good common sense.
    In this very comment section, ordinary people with plain common sense pointed out several potential problems with the design: the test of messages with humor might have had no effect because the joke is lame (and unclear too); the message "Your vaccine is waiting for you" might have been effective not because of the implied ownership, but because of the implication that this is a time-limited invitation.
    And if I know anything about BS literature, all such objections will be flatly ignored, and the "findings" reported as scientific truth.
    So, in the end, the main thing we "gained" from this megastudy is PR for Walmart.
    This is just a bastardization of true science. Fraud is the least of your problems, folks.

    • @mxvega1097 • 10 months ago +5

      Exactly. Without open data, it is just a very big closed study. How do you apply the replication/reproducibility function to it? Will another researcher have to do another study with 689,000 results? To what end? To study and learn, or to provide functional data for a health campaign which has already concluded its "right answer"?

    • @gaerekxenos • 6 months ago +4

      Agreed. The testing premise/methods are not good for a number of them. The logic just isn't there for a number of things. Humor that isn't appropriate for the situation isn't typically appreciated by certain people. Vaccinations are one of those things where the humor might not be appreciated; however, applications for something like University Deadlines are something where a bit of humor can actually be strongly appreciated. Hell, one of the reasons I am considering applying to a certain place for Graduate school that I ended up never finishing my application for Undergrad years ago was because they sent witty postcards with very punny reminders for completing my portfolio for submission back when I was applying for Undergrad. If you sent me a joke for vaccinations, I am either not going to take you very seriously or just treat it as if it were any other ordinary reminder.
      Some additional things with why "your vaccine is waiting for you" worked are things like the assumption that "oh, there was work that was done to make it simpler for me to go in and grab it and be done," or to guilt-trip people with "this is now labeled as yours and if you don't use it, then it will just sit there and be wasted." I didn't even think about the "time-limited invitation" aspect until you mentioned it

  • @fallenangel8785 • 10 months ago +37

    In addition to pre-registration on the side of researchers, journals should base their initial acceptance of papers on this preregistration (i.e., research idea, study design), not on the results.

    • @bram5683 • 10 months ago +9

      This is actually starting to become available; there are now quite a lot of journals in various fields that offer 'registered reports' - the study design gets reviewed first and the results will be published (in principle) regardless of outcome

    • @fallenangel8785 • 10 months ago +3

      @@bram5683 can you provide me with some examples?

    • @bram5683 • 10 months ago

      @@fallenangel8785 Ah sure; actually I just noticed even Nature has them now (see their editorial on preregistered reports from February 22nd of this year). But the Center for Open Science has a list of journals on their site. I don't think I can put a link here, but you'll find it if you search for registered reports cos / participating journals

    • @ohnenamen0992 • 10 months ago +4

      THIS! In Psychology and I would believe in other fields as well, there is a huge publication bias. This could only be solved if the journals accept papers based on the design rather than the results.

    • @niekverlaan7227 • 10 months ago +2

      I came to the comment section to say exactly this. I mean, if you're working for a popular glossy magazine, I understand that you like to publish the most spectacular(-sounding) articles. But in science, the result shouldn't determine whether the article is published or not.

  • @whycantiremainanonymous8091 • 10 months ago +9

    On multiple authors, recall that Gino's retracted studies had quite a few authors. In one case, at least two different fraudulent studies (by Gino and by Ariely) appear to have been included in one paper. Multiple sets of eyes, on multiple sets of fraudulent data...

  • @billscott1601 • 10 months ago +14

    Aren't all papers peer reviewed? When my wife, an MD, published her papers, they were all peer reviewed. She frequently reviews papers published by others in her field.

    • @FishSauc • 10 months ago +8

      I guess not well enough

    • @Heyu7her3 • 10 months ago +8

      @@HerrRotfuchs That, plus if you're prominent in the field / your field is novel, your peers are your friends, and it can be easier to identify a paper even in a double-blind process.

    • @TomJacobW • 10 months ago +8

      @@HerrRotfuchs And even if you get access to the original data (which is happening more and more), you can only check it. In essence: peer review is desk work, not lab work, so it's not the end-all-be-all.
      Actual "knowledge" is only formed through a rigorous, arduous process involving research, review, discussion, replication, model forming, predictions, more research and so on.
      People are naive and somewhat gullible about novel research, but we also aren't wizards or machines; if we want to fix it, we need something practical and feasible; trust will always play a role in human systems. Solutions that keep the status quo financing and predatory hierarchy, and that are expensive, take time and effort, or ignore the human element, won't work!

    • @falrus • 10 months ago +6

      In my field I have requested the original source code to verify that the graphs were indeed correct. This request was denied and our lab just refused to review the article. That doesn't mean the article won't be reviewed by somebody else.

    • @markjoseph2801 • 10 months ago

      Add a Consumers Union / Consumer Reports-style entity to validate the data and report fraud. In the end, the public pays the price as these universities suck up federal funds on inane and rigged studies. Crowd-sourced review with big-data analytics.

  • @whycantiremainanonymous8091 • 10 months ago +10

    On megastudies, don't we run a strong risk of false positives with these? If you test 40 hypotheses, on average two will give "significant" results (at p < 0.05).

    • @clankb2o5 • 10 months ago +2

      That's why they needed a massive sample size. They took it into account.

    • @whycantiremainanonymous8091 • 10 months ago +3

      @@clankb2o5 Isn't p

    • @clankb2o5 • 10 months ago

      @@whycantiremainanonymous8091 I should have been more clear. I do not believe that an absolutely huge team of researchers would forget something as basic as a Bonferroni correction. They must have ensured the statistical validity.
      My (dare I say reasonable) assumption is that the lower p-values that they used required a larger sample, that is why she brings up the extraordinarily large sample size. Because of course the effect size doesn't change.
      And no, p
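
A minimal simulation of the multiple-comparisons problem being debated above (illustrative only; the "40 hypotheses" figure comes from the comment, not from the megastudy itself):

```python
import random

# Why testing many hypotheses inflates false positives, and how a
# Bonferroni correction (divide alpha by the number of tests) compensates.
random.seed(42)

m = 40           # number of hypotheses tested, as in the comment above
alpha = 0.05     # conventional significance threshold
trials = 10_000  # simulated studies in which every null hypothesis is true

naive_hits = 0
bonferroni_hits = 0
for _ in range(trials):
    # Under the null, a p-value is uniformly distributed on [0, 1].
    pvals = [random.random() for _ in range(m)]
    naive_hits += sum(p < alpha for p in pvals)
    bonferroni_hits += sum(p < alpha / m for p in pvals)

print(naive_hits / trials)       # ~2 false positives per study (m * alpha)
print(bonferroni_hits / trials)  # ~0.05 false positives per study (alpha)
```

With the naive threshold, a study with no real effects still "finds" about two of them; the corrected threshold brings the expected number back down to alpha across the whole family of tests.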

  • @guard13007 • 10 months ago +11

    I hate that the solution is saying a company should stick its fingers into science.

  • @joaoneves4150 • 10 months ago +2

    When it says "waiting for you," it makes it seem that it won't wait forever, so either I get it now or I lose my chance.

  • @Sheikdaddy • 10 months ago +10

    We have a system of academia where one can create a spreadsheet with any data and as long as it looks legit nobody's verifying that the research was done?
    If you want to fix every soccer game you don't need to bribe entire teams. You just need 1 goalie.
    You only need a couple of fabricators in a system of a lot of people to be able to fabricate anything you want.
    How do you resolve your faith in a data world when any data could be fudged? When there will be scandals of published studies turning out to be ChatGPT-created in the future?

  • @luszczi • 10 months ago +19

    "Have you heard the one about the flu? Don't spread it around!". I think the effectiveness of this joke might have been moderated by its funniness.

  • @updatingresearch • 10 months ago +8

    Very wary about this "hope". I am sure megastudies can be defeated by those with enough malign motivation. Real validation is not peer review; it is repeatability and repeated studies by independent researchers.

  • @drbachimanchi • 10 months ago +5

    As an undergraduate I was part of a data collection team... I carefully copied data from my friend with minor modifications to save time for biking... it is cited as a groundbreaking study to this day.

  • @vparez4363 • 10 months ago +4

    No, this is completely wrong. We do not need more working principles from industry in academia, we need fewer! The greed which is ported from industry into science, with stupid measures like number of citations and h-index, is what causes academia to behave as it does. If it weren't for the incentive to commit wrongdoing, there would be no need for security.

    • @masterdecats6418 • 10 months ago

      1) Take away incentives
      2) Stop copyrighting data so no one can see it
      3) Make the data available for scrutiny
      4) Maybe have a programmer create a program that scrutinizes data statistically, and keep it away from the people trying to publish so they can’t try to “game” the software
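
One concrete sketch of the kind of automated screen item 4 imagines is the GRIM test: for integer-valued responses (e.g. Likert items), the sum of scores is a whole number, so only certain means are arithmetically possible for a given sample size. A minimal, illustrative version (the example numbers are made up):

```python
# GRIM-style consistency check: can a reported mean, rounded to `decimals`
# places, arise from n integer-valued responses? Impossible means flag
# fabricated or mistyped statistics.
def grim_consistent(mean: float, n: int, decimals: int = 2) -> bool:
    # Candidate integer sums whose exact mean could round to the reported one.
    lo = round((mean - 0.5 / 10**decimals) * n)
    hi = round((mean + 0.5 / 10**decimals) * n)
    return any(round(total / n, decimals) == mean for total in range(lo, hi + 1))

print(grim_consistent(3.48, 25))  # True: a sum of 87 gives exactly 87/25 = 3.48
print(grim_consistent(3.51, 25))  # False: sums of 87 and 88 give 3.48 and 3.52
```

A screen like this only needs the summary statistics in the paper, not the raw data, which is why it has been useful for flagging suspect psychology results after publication.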

  • @mxvega1097 • 10 months ago +4

    I absolutely disagree that a centralized data and oversight system is going to solve more problems than it creates. C'mon, this is game theory and institutional economics 101. When researchers come to rely on a centralized system, the inputs will fit the parameters and methods of the system, and the outputs will invariably be force-normed. Participants will not internalize the methodological and epistemic solutions and express them in better studies, they will likely do bog standard research and claim verification based on acceptance by the central system. Call it the Ministry of Scientific Accuracy approach. A better system would be more transparency, more challenge, acceptance of audit at any stage, incentivized replication, and ownership by the researcher of the integrity of the process. Integrity can't be outsourced. It can be reinforced, including through a well-designed whistleblower function, an ombudsman, etc. Sounds burdensome? Not really, if the alternative is competitive lawsuits and even lawfare. Try defending a lawsuit for years and maintaining focus, funding, and prospects.
    [interesting that Pete is working in internal audit - my field is mechanism design and risk management, incl in large banks]

  • @JEBavido • 10 months ago +2

    Wild to hear about the pharmacy/vaccine encouragement wording today because I just got one of those exact messages from CVS. They said MY vaccine awaited me.

  • @1789Bastille • 10 months ago +2

    It is actually quite surprising how most scientists are clueless about data. I wish there was something like a never-ending peer review as part of an everlasting metastudy.

    • @masterdecats6418 • 10 months ago

      Cool. Who's gonna pay for it? Science and capitalism only mix when capitalists want it to.

  • @jota5044 • 10 months ago +4

    4:30 I find it bold to assume that a bank can store the data. The original source of the data can and most likely will have a personal interest in the outcome of any study using its data.

  • @Armz69 • 10 months ago +5

    Can you do one on social desirability bias in behavioral studies and strategies to overcome that?

  • @AbbaKovner-gg9zp • 10 months ago

    the reaction shots of you nodding like a goon while she's talking were top notch keep it up

  • @splatsma • 10 months ago +3

    I wonder if there are any attempts to critically analyze the validity of whole fields. I got a couple of years into my chosen field (international studies), only to realize it's entirely dependent on opinion, yet it presents itself as clinical, fact-based critique. Which is far from reality.

    • @masterdecats6418 • 10 months ago

      Can’t go after psychology. How else could businesses falsify studies and publish them as fact to chase profits while harming people.

  • @yemiojo2265 • 10 months ago +2

    Even if you choose to put a stamp of authenticity on papers, crooks will still devise other means to get that stamp! It is like getting the "Organic" or "Green" badge on food products.

  • @champagne.future5248 • 6 months ago +1

    My takeaway is that behavioural science has some creepy ramifications in that it’s used by governments to refine their propaganda techniques

  • @surajsajjala2857 • 10 months ago +25

    Harvard is a big L.

    • @ahmedaliraqi17 • 10 months ago +1

      What does L stand for?

    • @dengesizd • 10 months ago

      Liar?

    • @lisleigfried4660 • 10 months ago +2

      @@ahmedaliraqi17 L = loss

    • @TomJacobW • 10 months ago +4

      @@ahmedaliraqi17 Internet lingo. "W" means winner or win, "L" means loser or loss.

    • @ahmedaliraqi17 • 10 months ago +1

      @@lisleigfried4660 thx man

  • @davidBTAS • 10 months ago +4

    Have you had a chance to watch, or are you aware of, a video by YouTuber Quant stating that Dan Ariely may actually be a fraud?

    • @PeteJudo1 • 10 months ago +10

      I’ve seen the video. Have something in the works, can’t say too much right now.

  • @caglayanozdemir348 • 10 months ago

    Awesome work

  • @cipaisone • 10 months ago +14

    While in academia I was, like many, frustrated by the amount of papers of dubious validity, especially those in "high impact" journals. This, together with the sheer amount of papers published, many of little or no relevance, convinces me that academia will collapse within a few decades at most, unless something changes.
    I believe it is about time that states invest in some "parallel" institutions to research centers, whose aim should be not to do research, but to try to replicate available research studies, so as to check, at least, the fraction of studies becoming popular and potentially relevant (i.e., worth preserving for the future, as it is unlikely that most of the "science" will survive in the decades or even centuries to come).
    I think checking available research data is becoming as relevant as, or in fact more relevant than, doing new research, and states should support such activities. It would also be a way to give work to some of the many valid researchers who cannot continue in the very competitive market of academia (where indeed the extreme competitiveness and lack of control of outcomes is, I believe, the main source of fraud).

    • @salganik • 10 months ago

      1. Science has existed for hundreds of years, and now all of a sudden it will collapse. Sounds legit.
      2. The vast majority of publications just don't get any attention, so the state doesn't need to do anything to know that such researchers are doing a bad job.
      3. How can a parallel institution, without experts in most niche fields, check or replicate anything when it comes to state-of-the-art equipment, computations, or theoretical complexity?
      4. The state hires researchers for many reasons, including producing independent thinkers who can lead research in academia or industry. And how would revealing the 1% of cheating researchers significantly help the research or the state?

    • @cipaisone • 10 months ago

      @@salganik
      1) How many people were doing research two or three hundred years ago, compared to the last 20-30 years? How many publications were produced back then, per year, compared to now? My man, things in humanity have changed exponentially lately, I do not know where you have been…
      2) The vast majority of publications are not getting any attention from people, but not from search engines, so what happens when you search for a trivial spectroscopic feature today, or the composition of some industrially well-known coating, is a never-ending list of garbage. I do not know about you, but I do not think this is a useful way of managing knowledge.
      3) It is not even clear what you mean.
      4) The "state" (I do not know which state you refer to, but very broadly, for most states) invests very little in research, and the little spent on science is to a large extent spent on exotic "hot topics" and cryptic nonsense, with only a small fraction leading to innovation in science or industry… I think your 1/100 estimate of unreliability in science is way lower than the reality (and by the way, where did you get this statistic? Or is it just BS? Just curious…)… I think there is an old Veritasium video on YouTube making a better estimation of how much published data is wrong, go check it out.

    • @salganik • 10 months ago

      @@cipaisone My third point is very simple: if an institution wants to replicate a fraction of studies as you suggested, it should have funding comparable to all universities, and employees with similar qualifications. And even this would not eliminate cheating, as not all studies are based on data you can reproduce: this includes theoretical studies, heavy simulations, and observations of nature. And, of course, funding an institution with a budget comparable to that of the universities and institutes is unaffordable for most countries. Norway spends around 8% of its budget on education, so a substantial fraction goes to universities, and this doesn't include governmental research institutes.
      And the Veritasium video was not at all about the fraction of research which is falsified, but about studies which make statements not supported by data. The fraction of retracted papers is way less than 1%; there are a number of papers about it. And there are many anonymous questionnaires where researchers were asked if they ever cheated with their results. There is a range of numbers, but on average something close to 1%.

  • @benjaminkuhn2878 • 10 months ago +3

    Okay, so you just want to throw tech at the issue. Let's hope there is valid training data for the AI.

  • @haroldbridges515 • 10 months ago +1

    Actually, he has no basis to be sanguine about the extent of data fraud, since scrutiny of the type that exposed Gino is rare.

  • @Planetoid52 • 10 months ago +3

    Great interview. It's a happier world when people of integrity are doing the research and are designing processes and systems to reduce fraud and also to incentivize studies that may not produce 'wow' results but that still contribute to mega-studies. Love your channel.

  • @parrotraiser6541 • 10 months ago +1

    Studies of failure may be boring and unpublishable by themselves, but they are valuable and should be seen, to avoid future mistakes. Engineers study failures for that very reason. Mega studies make that possible, by including the failed hypotheses.

  • @lisleigfried4660 • 10 months ago +7

    2:16 bro's acting like a stock footage actor

  • @d3202s • 10 months ago +5

    Behavioral "science." Pleas.

  • @123-ig9vf • 9 months ago +1

    What about funding systems? There is more harm to science in how the funding agencies operate. Why are proposals not reviewed in a double-blinded mechanism?

  • @Heyu7her3 • 10 months ago +3

    Thank you for providing strategies to use in qualitative research!

  • @gaerekxenos • 6 months ago

    Funny enough, the vaccination prompt of "waiting for you" or "reserved for you" isn't actually just 'ownership'; it's a way of guilt-tripping people: "We've gone out of our way to make a reservation for you," "this is a resource that will be wasted if you do not take it," etc. Another thing related to that is "we have taken the work out for you" or "we've made it easier for you to complete this task" - basically the removal of barriers to make it simpler and easier to access, which is implied if they have 'reserved' the vaccination for you, since there would be an assumption that whatever complicated paperwork or coordination effort for securing it has already been done, and that there would be less of a wait time to go through with the vaccination. (There isn't all that much of a complicated process in the first place, as far as I am aware, but the illusion that whatever hurdles might exist are now gone can be a motivator.)

  • @niekverlaan7227 • 10 months ago

    I love this comment section! It's full of like-minded people who make critical remarks in a mostly positive way. It really adds to the video itself. Thanks all!
    And to add something to the discussion: I've always learned that one example is no example. You always need a few examples to understand the essence of the examples. That same principle might apply to studies too. You need more than one study to prove a hypothesis.

  • @andrewmiller3055 • 10 months ago +3

    First, Prof. Milkman is saying better safeguards to minimize fraud are necessary (aka let's not deny the obvious: cheating scandals require reform beyond colleagues chastising each other behind closed doors or commiserating over coffee). Professor Milkman shows a lot of poise and leadership in moving quickly toward solving a huge problem while not taking a potshot at anyone, e.g. "the solution will sideline more bad actors." Unfortunately she doesn't make any waves in terms of highlighting some egregious bad actors that need to be dismissed. I'm glad Pete Judo does this for everyone - aka cutting through bad-faith arguments and pointing out the field has a problem without rushing to the solution end. He's done a really good job of handling the dumpster fire affecting the field rather than avoiding it, and has even said that it's important to take a second look at references so that only behavioral science that is correctly vetted is merited. I am also glad Pete Judo squarely puts the onus on the people involved AND the incentives, rather than merely the incentives. That's the right thing to do, because at the end of the day people are still responsible for their work, no matter what that means in terms of professional consequences. Anyway, thanks for looking at all the dimensions and going where Professor Milkman can't, but also giving Professor Milkman a chance to express what will make a difference, both for better science and a better profession beyond the scandal.

    • @masterdecats6418
      @masterdecats6418 10 months ago

      Universities and labs are still businesses. Of course they're going to be predatory to everyone involved.

  • @charlesdarwin5185
    @charlesdarwin5185 10 months ago

    A raw data set has to be sealed and sequestered in a repository with the IRB or equivalent before analysis is done.
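In software terms, "sealing" a raw data set can be as simple as recording a cryptographic digest at deposit time, so any later change is detectable. A minimal Python sketch (the function names are illustrative, not any IRB's actual API):

```python
import hashlib

def seal_dataset(raw_bytes: bytes) -> str:
    """Return a SHA-256 fingerprint of the raw data at deposit time.

    The repository (e.g. the IRB) records this digest; any later
    change to the data produces a different digest.
    """
    return hashlib.sha256(raw_bytes).hexdigest()

def verify_dataset(raw_bytes: bytes, recorded_digest: str) -> bool:
    """Check a dataset against the digest recorded at deposit time."""
    return seal_dataset(raw_bytes) == recorded_digest

# Example: seal at deposit, then detect tampering later.
original = b"participant_id,response\n1,4\n2,5\n"
digest = seal_dataset(original)
tampered = b"participant_id,response\n1,5\n2,5\n"
assert verify_dataset(original, digest)
assert not verify_dataset(tampered, digest)
```

This only proves the data didn't change after deposit; it says nothing about data faked before it was sealed.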

  • @internetmovieguy
    @internetmovieguy 10 months ago +6

    Hot take: pier review is just academic circle jerk. If we want the field of research (for all subjects) to grow then the pier review system needs a complete overhall.

    • @OptimalOwl
      @OptimalOwl 10 months ago +5

      Isn't it really weird that no one has ever done a really thorough systematic review of the efficacy of the journal & peer-review system?
      Researchers basically donate their work to journals for free, and then the journals turn around and sell that work at exorbitant prices. That's how you get those stories about various journals clearing 30+% profit margins.
      I don't think it's unreasonable for society to demand some quality assurance in return for that privilege.

    • @TheAlison1456
      @TheAlison1456 10 months ago +1

      this isn't hot at all
      it is just obvious

    • @blujaebird
      @blujaebird 9 months ago

      Peer review, not pier

    • @blujaebird
      @blujaebird 9 months ago

      Also...*overhaul

  • @falrus
    @falrus 10 months ago +11

    Megastudies should be secure enough even against Gino-type data manipulations.

  • @antsmith739
    @antsmith739 10 months ago +1

    Having questionnaire results published directly to a blockchain may help.

  • @JennaHartDemon
    @JennaHartDemon 10 months ago +5

    It's interesting. This is all great. With deepfakes, we are going to have to have recordings cryptographically signed on the collection hardware to verify their authenticity. It's good to see all these other branches of STEM focusing on authentication of data as well.

  • @zxdc
    @zxdc 9 months ago

    @1:30 which website is that?

    • @PeteJudo1
      @PeteJudo1  9 months ago +1

      Ground News! Use my link in the description for a discount :)

  • @TripImmigration
    @TripImmigration 10 months ago +1

    None of this avoids the ghost-people problem, and megastudies are only available to influential academics.
    It's good, but the measures are still very naive given the reality.

  • @Dragoon91786
    @Dragoon91786 9 months ago

    Maybe I'm a tad absurd, but providing researchers with the means to test (as you said) "absurdly large sample sizes" seems to me to be what *_should_* be the norm. I realize why it isn't (and there are a *statistically significant number of reasons why 🤣), but when setting goals for a planet's worth of people, larger sample sizes can help even out all the craziness that is the human condition.
    There are so many variables that it seems absurd to use smaller sample sizes unless one is trying to figure out how to model the study: pre-studies to help improve the actual study's design. This might benefit these so-called "mega studies" by giving researchers the opportunity to control for legitimate variables that would otherwise impact the study's results in a detrimental way. By "detrimental" I mean that they skew results away from a more accurate model or description of reality.
    While this will certainly limit the scope of a study, such as limiting it to certain characteristics (say, people with a particular genotype), those specifications can then be clearly stated.
    Basically: controlling for variables and stating those variables so that more information about some aspect of reality can be parsed.
    "When we controlled for ambient temperature we saw greater results than when testing during inclement weather." Say the study in question includes results from people in areas where a massive heat wave or cold front was occurring. When new data sets were tested accounting for weather, you could see how weather impacted the types of text messages sent to remind people to get vaccinated. Would cold days in a given region have a greater impact on subjects' tendency to attend their flu shot when reminded?
    Or, say, accounting for ADHD. What happens if the sample had an unusually high number of people with executive function impairment? Compared to a control group and a group with greater executive function, how might the results differ? What could be meaningfully said by controlling for these variables, etc.?
    It would be nice to see more studies getting the opportunity to control for more variables, and to have all of this data, as well as the pre-study (or pre-studies), be registered/verified along with the main study.
    The cherry-picking of data, as opposed to transparently controlling for variables, is notorious.
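The stratified comparison described above ("controlled for ambient temperature", ADHD subgroups) can be sketched as computing per-stratum, per-arm means; the data and field names here are entirely hypothetical:

```python
from collections import defaultdict

def effect_by_stratum(rows):
    """Mean outcome per (stratum, arm): a minimal way to see whether a
    treatment effect holds within each level of a covariate (e.g. weather)."""
    sums = defaultdict(lambda: [0.0, 0])  # (stratum, arm) -> [total, count]
    for stratum, arm, outcome in rows:
        cell = sums[(stratum, arm)]
        cell[0] += outcome
        cell[1] += 1
    return {key: total / count for key, (total, count) in sums.items()}

# Hypothetical rows: (weather, message_arm, got_flu_shot)
rows = [
    ("cold", "reminder", 1), ("cold", "reminder", 1), ("cold", "control", 0),
    ("cold", "control", 1), ("hot", "reminder", 0), ("hot", "reminder", 1),
    ("hot", "control", 0), ("hot", "control", 0),
]
means = effect_by_stratum(rows)
assert means[("cold", "reminder")] == 1.0
assert means[("hot", "control")] == 0.0
```

Comparing reminder vs. control within each weather stratum is the transparent alternative to quietly dropping the inconvenient subgroup.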

  • @blujaebird
    @blujaebird 9 months ago

    It's interesting to me that this video has such low views compared to the other ones.

  • @giovannigiorgio42069
    @giovannigiorgio42069 10 months ago +2

    I have an idea that uses the blockchain to validate the data used for research; however, I'm unsure how viable it is, as I'm not very knowledgeable about the inner workings of blockchain.
    Would a system which uploads raw data from a study or field research on a predefined schedule, alongside the date and time of the upload, potentially reduce the likelihood of a tampered dataset?

    • @bigboi1004
      @bigboi1004 10 months ago

      Blockchains are just a worse version of the good old append-only database, which would be sufficient to implement your idea. A realistic/easy implementation is that raw data is pushed to a version control system (think GitHub), and the researcher has no permission to rebase (meaning to alter the past). This allows researchers to modify data, which can be used to anonymize it or correct mistakes, but any changes would be visible to an auditor. Auditors would see timestamps along with what data was changed and exactly how. This, however, doesn't prevent a malicious researcher from tampering with the data *before* it hits the database. That problem alone renders the idea pretty much a non-starter.
      It's not a problem that can be solved with software at all, and I say this as a computer science student. People get extremely clever when they're motivated, and "fading into obscurity because you aren't publishing groundbreaking research" is unfortunately a strong motivator for some to cheat. A smart enough researcher will bypass the anti-fraud mechanism, and can then claim that their data is legitimate *because* it made its way through the system (imagine someone responding to "Do you have the key to that door?" with "Well, I'm inside, aren't I?").
      I think the problem is ultimately incentives. There are strong reasons to cheat and, as things stand, it can take years to get caught. I'll admit that I don't have a real solution in mind, because the scope of the problem is too large, but I'm certain that it isn't a software solution.
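The append-only idea can be sketched as a hash-chained log: each entry commits to the previous one, so silently rewriting history invalidates every later hash. A toy illustration, not a production design:

```python
import hashlib
import json

class AppendOnlyLog:
    """Toy append-only log: each entry commits to the previous entry's
    hash, so editing past data breaks verification."""

    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
        entry_hash = hashlib.sha256(body.encode()).hexdigest()
        self.entries.append({"record": record, "prev": prev_hash, "hash": entry_hash})
        return entry_hash

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            body = json.dumps({"record": e["record"], "prev": prev}, sort_keys=True)
            if e["prev"] != prev or hashlib.sha256(body.encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AppendOnlyLog()
log.append({"participant": 1, "response": 4})
log.append({"participant": 2, "response": 5})
assert log.verify()
log.entries[0]["record"]["response"] = 7  # tamper with history
assert not log.verify()
```

As the comment notes, this only detects tampering after ingestion; it cannot authenticate data that was faked before it was appended.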

  • @garyquinn8014
    @garyquinn8014 6 months ago

    One thing which really strikes me about this whole episode is the amateurishness of it all.
    From the original experiment, to the simple types of data collected, to the data fakery itself, it's all so trivial and basic.
    This is (was?) a highly regarded professor at Harvard, earning over $1m a year, and all she can think of is an extremely simple experiment involving where to sign a document?
    Even the alleged fake data is so, so simple: not some sophisticated exercise in subtle data manipulation, just some basic data juggling.
    I really worry about the future of US academia.

  • @Dragoon91786
    @Dragoon91786 8 months ago

    Did they sort this data for ADHDers? Cuz we'll majorly throw off your stats if not accounted for in the "reminder" department! 😅

  • @jloiben12
    @jloiben12 1 month ago

    So a mega-study is basically a super meta-analysis

  • @estern001
    @estern001 8 months ago

    What does it mean that a study "failed?" Layperson here. I understand that science is about collecting data. I was told that all data is important. Don't we learn something even when we don't get the expected result? Shouldn't we value that research just as much as data that supports a hypothesis?

  • @ArturEjsmont
    @ArturEjsmont 10 months ago

    For the behavioural science community not to push for a change in incentives is surprising. Control and bureaucracy is a losing battle.

  • @byronhunter6893
    @byronhunter6893 9 months ago

    idk about ownership 🤔
    I'd think most people would be more attracted to hospitality for a vaccination, something a bit distant from the ironically cold mechanisms of a hospital. A bit anecdotal perhaps, but I've never known of anyone that's entirely comfortable with a vaccine "for them".

  • @RemotHuman
    @RemotHuman 10 months ago +1

    do we want humanity to know how to manipulate humans with things like ownership language

  • @sacman3001
    @sacman3001 10 months ago +3

    Just nudging ain't science

  • @killa3x
    @killa3x 10 months ago +2

    Has he done a video on Dan Ariely? That dude a straight fraud, no?

  • @opheliaelesse
    @opheliaelesse 10 months ago

    Who cares about the millions of wasted, tortured animals?
    Few.

  • @FinnBrownc
    @FinnBrownc 6 months ago

    You need git-based change tracking for data. Tech has been doing this for literally decades.

  • @Ganntrey
    @Ganntrey 10 months ago

    It seems to me that "mega-studies" are just pre-emptive meta-analyses. This is definitely good, but it's not inherently new. It's just academic responsibility preempted.

    • @whycantiremainanonymous8091
      @whycantiremainanonymous8091 10 months ago +2

      No. Meta-analyses cover many studies testing the same hypothesis. Mega-studies cover many hypotheses in one study. That's much more methodologically questionable.
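One standard guard when a single study tests many hypotheses, as mega-studies do, is to tighten the per-test significance threshold, e.g. with a Bonferroni correction. A minimal sketch using hypothetical p-values:

```python
def bonferroni(p_values, alpha=0.05):
    """Return which hypotheses survive a Bonferroni correction:
    each p-value must clear alpha divided by the number of tests."""
    m = len(p_values)
    return [p < alpha / m for p in p_values]

# Hypothetical p-values from 10 nudge conditions tested in one megastudy.
p_values = [0.001, 0.004, 0.012, 0.03, 0.04, 0.2, 0.5, 0.6, 0.8, 0.9]
survivors = bonferroni(p_values)
# With 10 tests the per-test threshold drops from 0.05 to 0.005,
# so only the first two results remain significant.
assert survivors == [True, True] + [False] * 8
```

Bonferroni is deliberately conservative; the point is just that "many hypotheses in one study" changes what counts as a significant result.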

  • @plugplagiate1564
    @plugplagiate1564 10 months ago

    ... and to comment on the megastudy topic: why would a survey of 680,000 people be unreliable? Compared with the data the NSA holds, that's a rather humble number.

    • @meneldal
      @meneldal 10 months ago

      @JS-oh2dp Not to mention a bunch of studies are actually megastudies in disguise; they just remove the questions that didn't lead to any interesting results.

  • @MadsterV
    @MadsterV 10 months ago +1

    Studies on how to manipulate people
    neat.

  • @stephmaccormick3195
    @stephmaccormick3195 10 months ago +1

    Didn't one of them Trumps go to Wharton? 🤣🤣

  • @morgengabe1
    @morgengabe1 10 months ago

    The recurrent replication crisis in psychology was never a threat to academia.

  • @GutsofEclipse
    @GutsofEclipse 5 months ago

    7:25 It's ironic that he's talking about doing exactly the kind of thing that's making people view academia as a left wing partisan machine that's abandoned all of its principles without any disclaimers. He didn't have any other examples?

  • @asadchoudhrya
    @asadchoudhrya 10 months ago +6

    None of this gives hope. This guy's very biased and extremely shy about approaching real and more substantial academic fraud. You're picking socially safe and easy topics lol.

    • @masterdecats6418
      @masterdecats6418 10 months ago

      He chose a cringe af career path. Now that the openly corrupt field is even more openly corrupt, they have to triple down to justify their degrees and semi-wasted time.

  • @lukasbormann4830
    @lukasbormann4830 10 months ago +7

    Harvard is done I’d say

    • @franciscody9622
      @franciscody9622 10 months ago +4

      Stanford is also done.

    • @TomJacobW
      @TomJacobW 10 months ago +1

      unlikely

    • @saraluvsyuo
      @saraluvsyuo 10 months ago

      it will never be lmao no one will care

    • @andrewmiller3055
      @andrewmiller3055 10 months ago +1

      Ha. If only I'd been given a dollar over the years for every time someone said that Harvard was done. Harvard will outlive all of us, our descendants, and theirs too.

  • @Ganntrey
    @Ganntrey 10 months ago +1

    I've left similar comments on every video in this series. PEER REVIEW!!!!! IF A FINDING IS REPEATABLE, THEN IT IS VALID; IF NOT, INVESTIGATE THE ORIGINAL PUBLICATION!!!! The whole scientific method is subject to and validated by the process of peer review and repeatability.

    • @masterdecats6418
      @masterdecats6418 10 months ago

      Unless the PR makes these businesses $1 Million+, they won’t pay for it.

  • @markwest1963
    @markwest1963 2 months ago

    Penn ✊

  • @masterdecats6418
    @masterdecats6418 10 months ago

    Imo, always trust a neurologist or an endocrinologist before you believe a psychologist.

  • @rubberduck2078
    @rubberduck2078 10 months ago +4

    the "ownership language" sounds a lot like a lie

  • @redoktopus3047
    @redoktopus3047 10 months ago

    >wharton

  • @brownieboiii
    @brownieboiii 10 months ago +2

    Penn > Harvard frfr

    • @MadocComadrin
      @MadocComadrin 10 months ago

      I agree, but Penn (and especially Wharton) is also filled to the brim with rich kids so out of touch with the rest of us that they couldn't tell you the rough price of a banana.

  • @PeGaiarsa
    @PeGaiarsa 10 months ago

    Damn... the megastudy reminds me of a fundamental concept in free-market capitalism: competition. Having multiple competing ways of evaluating the same phenomena leads to a clearer and more concise picture of what actually helps or not. Maybe the solution is to increase competition between different researchers and methods to find the ones that most accurately describe reality.

  • @stanleyklein524
    @stanleyklein524 10 months ago +6

    Katy Milkman is not a scientist (behavioral science is a conceptual oxymoron -- unless you think a discipline that violates two of the most basic criteria for X to be considered a science still merits the status of "science").

    • @MadocComadrin
      @MadocComadrin 10 months ago +6

      While I have serious concerns about behavioral science programs in business schools (due to weird and misplaced incentives), any field that uses the scientific method is a science.

    • @luszczi
      @luszczi 10 months ago +8

      Hey it's the pretentious "professor" and his insider knowledge again. 😂 Is there any other type of oxymoron than a conceptual oxymoron? And what are those criteria you're referring to? You speak of things nobody has heard of before, educate us! 🤣

    • @TheAlison1456
      @TheAlison1456 10 months ago

      why do you get to decide what is science?

    • @masterdecats6418
      @masterdecats6418 10 months ago

      @MadocComadrin Yeah, but what if that "science" is routinely bastardized by fake results?
      Your hypothesis and results all mean shi* if you're going to fake it.

    • @stanleyklein524
      @stanleyklein524 10 months ago +1

      @MadocComadrin You are confusing a necessary condition with a sufficient condition.

  • @erandeser5830
    @erandeser5830 10 months ago +2

    In universities "professors" walk free, teaching that there is no difference between men and women. Go after their publications.

  • @zhenyaka13
    @zhenyaka13 10 months ago +1

    Love it! So… how do we know that you or your guest aren't lying?
    Isn't what you practice a lie? Just another way to manipulate humans into doing what you think they should?
    What happened to persuasion with truth?

  • @MarkMackenzievortism
    @MarkMackenzievortism 10 months ago +6

    en.wikipedia.org/wiki/Grievance_studies_affair

    • @Heyu7her3
      @Heyu7her3 10 months ago

      😮 That's about as bad as Mindy Kaling's brother's med school acceptance...

    • @TomJacobW
      @TomJacobW 10 months ago +3

      We are all just humans; if someone wants to be a "gender researcher", it's just a reality that it attracts… well, you know which kinds of people. And those people will have strong extra-scientific influences, like politics, peer pressure / gaining the respect of your peers, biases and so on. This inevitably seeps into the research, which is an immense problem, and it's especially transparent in the fields you referred to, which is why they are so openly criticized!
      Also an issue in journalism!
      We need to find actually viable, fair and "human" solutions to these "outer" problems that go beyond the populist (and also purely political) criticism prevalent in more right-wing media.
      Ironically, a solution is indeed "more diversity"! 😅 But maybe next time diversity in thought & not being a "gender minority"…
      We have a lot to lose!