ACADEMIA IS BROKEN! Stanford Nobel-Prize Scandal Explained
- Date added: 4 Jul 2024
- My Website: petejudo.com
Follow me:
Behavioral Science Instagram: @petejudoofficial
Instagram: @petejudo
Twitter: @petejudo
LinkedIn: Peter Judodihardjo
Good tools I actually use:
Shortform: www.Shortform.com/pete
Ground News: ground.news/Pete
Please subscribe!!! Also, PubPeer comments can be found here: pubpeer.com/search?q=Thomas+Südhof
6:47 Wait, what!? Okay, that's amazing. Does Elisabeth Bik have some incredible neurodivergence that allows her to spot this by eye!?
She's like a detective born at just the right time in the right era to catch these types of. . . anomalies.
Ask people to "like" more. How did a video this good get only 3K likes on a 60K view count? Seems low to me.
I think you are wrong that we need to encourage "good" neuroscience, because the more they know about the brain the more power they have and the more they will use it against us. They are not using science for good purposes, they are using it for social control.
Well would these photographic inconsistencies alter the general results and general findings of the paper? From what I can see it would not. So where is the scandal?
These alterations look more like some idiot assistant scribbled with a pen on the original chromatography, forgetting that the pen ink would also show up in the chromatograph. And somebody removed the ink-pen scribblings and replaced them with the typed letters in the column on the right, basically bringing the chromatography back to its original state before somebody spoiled it.
This all looks like clarification, not forgery. If a scientist doubts the results of a paper, the scientist repeats the experiment and sees if the results match. They don't look for digital inconsistencies.
You should interview Elisabeth Bik.
Science is now doing what peer review should have been doing the last 2 decades.
Pretty wild when you think about it like that, but yeah.. I, as a PhD student was put through some very harsh and frankly cynical questions before I managed to publish my first paper. A lot of rejection even though I am confident to say it was all honest work that I was able to repeat many times.
How come established scientists are not put to the same level of thorough scrutiny?
Surprisingly, a Joe Rogan interview with Terrence Howard went over how peer review is a scam created by Maxwell, the guy who had something to do with Epstein.
Perverse incentives.
Problems with peer review would disappear if we valued experimental replication more.
"I'm first!" isn't science; science is different people working on the same problem coming to the exact same conclusion.
Jan Hendrik Schön... that's the only name I'm going to say. Anything coming after him labeled "peer-reviewed" isn't worth a minuscule amount of consideration on its own. Peer review has been broken since that guy, and until the scientific community comes up with something that FINALLY fixes the hole Schön shot in it, I'm going to consider cases like this an intended feature, not a bug.
another month another fraud
Careful, that's a really big accusation. Until there's a formal investigation it's best not to assume that
@@rickyc46 it's a YouTube comment, for fuck's sake
Always run a few blur tools in Photoshop to cover your tracks!
Idiocracy
ALL past Nobel laureates' works should be reexamined for potential fraud. It is highly unlikely that this human behavior only began in recent years
If there is ever going to be a prize for exposing fraud, it should be called the Elisabeth Bik Prize.
[Laughs in Hindenburg Research]
According to Wikipedia, in 2021, she was awarded the John Maddox Prize for "outstanding work exposing widespread threats to research integrity in scientific papers".
@@sophigenitor - The woman is on a mission! It can be lonely and thankless, though. We should support her efforts.
Well, if a proper scientist doubts the results of a paper, the scientist repeats the experiment and sees if the results match or not. They don't look for digital inconsistencies.
That's not scientific.
@@winstonchurchill8300 "If a proper scientist doubts the results of a paper, the scientist repeats the experiment." There are no proper scientists then, I guess. Only broke losers in dire need of funding.
When does Elisabeth Bik get a Nobel Prize?
There should be a Nobel Prize for proving fraud or outright disproving the truthfulness in important papers. That might actually create a healthy tension between the scientists doing research and the people keeping them from using tricks and/or deception to gain status and wealth.
@@hungrymusicwolf That is an excellent idea because that person clearly has a better understanding than the peers who reviewed those published papers.
Exactly
A Nobel prize on unmasking fraudsters
You get a Nobel! You get a Nobel! You get a Nobel!
Professor at a top university, at the forefront of medical research:
"lemme just turn my pictures 90 degrees, that'll fool them"
"those are compression artifacts" lmao what a joke. This dude's supposed to be smart but doesn't even have a grasp of how unlikely it'd be for random noise in two areas of an image to match up perfectly? 😅😂😂
I mean... it's kind of worked for a while. If it wasn't so easy to slip through, prominent people would at least put in more effort.
Surprisingly lazy at cheating!
@@pablovirus Not unlikely. Modern compression algorithms do that deliberately.
@@joshuahudson2170 Like what? They encode an area as "clone-brush from that other area that has the same noise spectrum"?
This is why I always photograph extra unpublished blots to create unique forgeries.
I'm amazed that these PHDs use such rudimentary methods. I may be a low-rank engineer, but I'm pretty sure I can do significantly better forgeries.
@@federicolopezbervejillo7995 I think it's because the fraud is often not premeditated. Imagine grad students and postdocs working in a pressure-cooker lab where an overbearing supervisor demands you crank out amazing results that fit their preconceived notions and must make it into a top journal by some arbitrary deadline. Honest scientists too often get stymied by negative results, or are sidelined into the annals of mediocrity.
@@federicolopezbervejillo7995 Much of it is laziness and arrogance. They're in a field so used to publishing reams upon reams of nonsense that no one will ever read, other than a few lazy reviewers, that I'm sure they never actually expected anyone to give it more than a cursory glance.
They’re sure to try harder in the future. They’re smart, science is hard.
@@federicolopezbervejillo7995 the concerning thing is, if undetectable forgeries are that simple, then they are out there undetected
A med student friend of mine asked her adviser if she should go into research, or medical practice.
He asked her how important it was to her to be able to look at herself in a mirror. Abby asked him to clarify, and he said, "They won't order you to commit fraud, but they'll press you to find a way to get their products approved - whether they help patients or not. Could you still look at yourself in the mirror after doing that?"
That was 20 years ago, almost. I never found out which way she went.
Scientists can't do science if they can be pressed to be biased or commit outright fraud.
""They won't order you to commit fraud, but they'll press you to find a way to get their products approved" -- Sounds a lot like the computer software industry.
@@JakeStine But computer software rarely kills tens of thousands of people, like Vioxx, Fen-Phen, and Fentanyl.
Makes software design sound almost benign... except they still cheat customers out of their money. Just not their lives. 💩🤡🤯😵💫
@@JakeStine Sounds like every job I have ever had in every industry I have worked. Construction, contracting, software, food and sales. If management is not requiring deception, the customers or culture are.
@@luck484either you're too stupid/lazy to find a field where that isn't the case, or too amoral to care
Elisabeth Bik is the real MVP. When these institutions fire some of these fraudsters, they should send the discoverer that employee's would-be bonus or six months' salary upon termination. It would show that they actually care about integrity and encourage academic honesty, rather than just acting aghast and brushing things under the rug. It would encourage honesty and send a message that academia can have a future at a time when trust in institutions is at an all-time low.
That would create another whole problem of people being shady just to collect money. People should do good for the sake of it, once people get rewarded for something they do the bare minimum in order to obtain that reward.
They don't care, they probably get more cash committing the fraud. People without integrity will act in their own best interest, so you can suspect even rewarding "honesty" would be filled with fraud as well.
Trust in institutions should be at an all time low. The neoliberals that dominate all of our institutions have disdain for the public and choose to implement their agenda through social engineering, manipulation, and collusion with various central gov agencies.
The bigger problem is that negative results don’t get published. So everyone will try to fudge their data, conclusions and say there is something significant.
As long as the people doing the experiments have a dog in the race there always will be these kind of problems.
True. I think negative results may often be more interesting than positive.
I read an article in a newspaper 35 years ago, or so. Researchers found that, "College women who got pregnant were more likely to finish successfully if they got an abortion." Besides the, "Well duh," factor, I marveled at the assumption on the direction of causality. Like maybe it's, "Women who prioritize college over motherhood are more likely to get an abortion."
I would find more usefulness from, "I tried this promising medical treatment and didn't get the result I hoped for." Others might try variations and find success, or the medical community learns and tries something else.
Some time ago I was thinking the same, but now I think that if we add a whole new batch of papers about negative results, it would be impossible (it may already be) to stay up to date on even the smallest, least significant micro-area of research. So many journals, so many papers, so much systemic pressure… publish or perish… a big business with many journals that will support your work for a reasonable amount. Are positive results more easily faked than negative results? That's another question. Do a lazy experiment with a sick hypothesis, nothing comes out, publish in The Journal of Irreproducible Results. Done. Next.
@@joajoajoaquin It is exactly this "publish or perish"-nonsense that pretty much forces people to forge. You *need* to publish, *only* positive results get published.
Guess what people will produce.
If your grad research gets negative results your academic career dies.
The desperate need for funding has corrupted ALL the sciences 😢
Did you mean "greed for funding"?
Nothing new sadly. Even 10 years ago when I was studying biochem it was an open secret that if you don't get the results they want they'll find someone else who will to give a grant. It's all crooked.
> need for funding
lol, hard to sympathize with scientists over this supposed desperate lack of funding when one of the best-funded areas of research, cancer research, is filled with fraud and mismanagement of resources all the same. Not to mention, fundamentally this (lack of funding) is a problem that will never go away, due to the nature of resource allocation and what we study in the first place.
@@thomasdykstra100 Considering not getting funding leads to people getting fired, and you need to work to live, it's need
@@TeamSprocket So what does that mean for integrity in the sciences? I am tired of businesses being demonized for seeking profit (even when they are acting ethically by anyone's standards) when the sciences are considered noble for seeking funding even as they use the most unethical means to get it.
As someone who has owned quite a few dogs, I find them eating my homework more believable than it being "compression artifacts," as would happen with low-resolution rendering.
I've had my birds eat my homework once.
@@noelsteele My cat Grey tore up my finished assignment once. I had to redo the whole thing, internally crying all the while.
My dog chewed up my notes once. Fortunately never anything important. It was weird to find him chewing on paper
The second one would be the closest. But it just doesn't look like JPEG. Not to mention it wouldn't be copied around neatly like that. It would also need to be a very very high JPEG setting, as that content looks really difficult to compress - it's pretty close to random. And true random data cannot meaningfully be compressed.
Fun maths fact though: you can have an infinite string of random data. The chance that you can compress it by 90% is exactly zero. But it can still happen! It's the difference between surely and almost surely, where you can have probabilities of 1 or 0 but still have things not happen or happen.
@@lost4468yt there are compression algorithms that do exact duplicates
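The claim above, that truly random data cannot be meaningfully compressed, is easy to check empirically. Here is a quick, illustrative sketch using Python's standard zlib module (not from the video; the sizes are arbitrary):

```python
import os
import zlib

# Random bytes have maximal entropy: a lossless compressor cannot
# shrink them meaningfully (on average it slightly inflates them).
random_data = os.urandom(100_000)
compressed_random = zlib.compress(random_data, 9)

# Highly repetitive data, by contrast, compresses enormously.
repetitive_data = b"lane background pixel " * 5_000
compressed_repetitive = zlib.compress(repetitive_data, 9)

print(len(compressed_random) / len(random_data))          # ~1.0, no real savings
print(len(compressed_repetitive) / len(repetitive_data))  # well below 0.01
```

Which is why identical "noise" in two places of a blot image points to copying, not to a compressor getting lucky.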
I love the fact that you took note of your audience's reaction to your first academia is broken videos and took action. Your action resulted in this channel becoming one of the most unique channels covering these subjects.
what reaction? what action did he take?
@@mrosskne What I mean is, I think his channel was about marketing, if I'm not mistaken, but his first "academia is broken" video got a lot of views, so he took note and started to dig into cases like these, which I believe had a good return in terms of audience count.
Oh yeah, science is science and people are flawed. That's why we need people like Pete and Elizabeth Bik to identify flaws and perform corrections.
Sorry to burst your bubble, but good old Pete here is guarding his backside by never calling a spade a spade. He always skirts the edges and gives the liars weasel room.
@@biggseye No bubble to be burst; I am a PhD myself, spent 6 years in academia and am still active.
Humans are flawed, but the scientific principles are sound, seeing as I am typing up this response using incredibly advanced phone technology and transmitting this text to a site for the whole globe to see at any time of day or night.
@@biggseye I imagine unless you have rock solid evidence that someone has done something themselves, an accusation of that calibre is very dangerous. Even the fact that this kind of stuff is being brought to attention is commendable I think. You don't *need* to point fingers and call people explicitly liars. The scientific community is smart and can draw conclusions.
Yeah gonna have to side with the PhD guy here. Channels like this are misleading.
This isn't so much "Science" is flawed as it is "Human integrity is insufficient".
As a supervisor, it is also your role to verify the data and ask uncomfortable questions. At least that is the case in Germany... Maybe you can mess it up once or twice, but not 30 times... You get so much money BECAUSE you have to do this tedious task; you cannot just rest on your past distinctions...
When your success in academia depends largely on the quality and relevance of your research, it’s no surprise that people fabricate data. There are essentially an infinite number of directions you can research, and the majority are dead ends. Imagine working your whole life in a merit-based system, climbing to the very top, then getting passed over because you happened to pick a research direction that was a dead-end, which you couldn’t have known was a dead end ahead of time. You can either attribute the last few years of your life to “at least others know not to do it this way now”, but the temptation to fabricate some results is very real.
Ironically, the negative result could, as you say, be beneficial to research as a whole. But it would hardly be a satisfying outcome for the researcher.
@@joannleichliter4308 Absolutely. Knowing what doesn’t work is extremely valuable and necessary for scientific development. It’s just not recognized as especially valuable as it’s a far more likely outcome.
@@tomblaise Moreover, in testing various compounds unsuccessfully for one purpose, researchers can inadvertently find that it is efficacious for something else. I don't think that is particularly unusual.
@@joannleichliter4308 University (at least Canadian university) is not entirely merit-based. I would go as far as to say that merit is not the main predictor of success. I have experienced, as well as seen others experience, being looked over for a multitude of reasons that have nothing to do with merit. If you want to include things like politicking and one's ability to blindly follow orders in "merit" then sure, but let's not confuse merit (marks and significant findings) with other things. It's sad to see bright minds overlooked because they refuse to toe the line. People with power in these institutions would rather lie and manipulate people and data to remain "successful" than accept that they are perhaps wrong.
Merit should lie in well-executed research work and not in the results of that work. Researchers (should) have no influence on the results of their research. They shouldn't be blamed for results which are unfavorable to society; reality being a certain way is not their fault.
I think
1. The peer-review system needs an overhaul because it’s completely failing at its task.
2. It’s too easy for a “supervisor” to put their name on a paper without doing the leg work and then expect blame to lie elsewhere when they fail at supervising.
3. We need some kind of reward for the person/people who find the scam artists/Lazy work in the system.
That's the way to do it... incentivize peer review studies. These irregularities only were found due to the authors' sloppiness, and someone who was looking for duplicate data in the results. Other than that sort of double checking, nobody has done a review study to find out whether it's repeatable with similar results... because who will pay for that secondary research? It happens, but not on most studies. If the person who stitched the western blot image had used different "empty" patches for each coverup, then it wouldn't have been caught. Going forward, most embellishers won't be so sloppy.
They rubber stamp each other.
@@ksgraham3477 That's not quite it. Working in academia is a full-time job. Would you like to spend several dozen hours monthly working FOR FREE just to earn brownie points from journals you're reliant on to publish your own work? Probably not. So conscientious scientists actually read the manuscript, look at the figures and provide comments. But essentially nobody will go to the trouble of doing a thorough check for fraud. We have daytime jobs!
Rubber stamping proper is a small, if related, problem. When you know that rejected manuscripts will be published eventually, even if not in the same journal, you're incentivized to provide helpful comments on semi-trash research (and even outright trash) rather than rejecting it. But I wouldn't accept something I knew was fraudulent.
Number 2, the supervisor putting his name on a paper without doing the leg work, is how you get ahead in Academia. It is proving that the academic system is broken and we need a different way to choose and evaluate research.
Right? Supervisors expect to like, privatise all the credit and fame and socialise all the blame and mistakes. Fuck them. If you're the lead author and your paper is fraudulent, THAT IS ON YOU! None of this bollocks 'oh it must have been a research assistant sneaking in bad data not my fault!'
'Hotshot Bot Sought, Caught Fraught Blot Plot'
I love it
Princess Carolyn is that you?
@@maina.wambui Who?
Amazing 😄
Retracted PNAS!... It's just cold 😐
penis or peanuts
😂
The passive aggression in his response. 😂
It would be more understandable if the concerns were minor, but these seem to indicate serious data manipulation. He just looks guilty, even if in reality he knew nothing about the data manipulation.
"Yeah but I have a ***NOBEL*** !!! Who are YOU to question ME?!!"
***sigh***
Typical academic.
Lying for money? Who would do that? Certainly all academics have waaay too much integrity to do that...
Many do have that integrity and also don't have a job.
"As the final author on the paper & as a scientist of his caliber" he should NOT just be there in an advisory role: he should be reviewing & confirming the data collected & the work of his co-authors. Let's face it, as a Nobel Prize winner, the paper is flying under his authority & will attract attention in the marketplace because of his name on it.
If he's just surfing fame, resting on his laurels & gaining continued fame by co-opting the work of his co-authors, he deserves to go down in flames if THEY fabricated evidence.
For real. If he had so little to do with the paper that he couldn't even determine the veracity of the output, then he should be stripped of his nobel prize for not actually contributing anything.
Alternative title, Stanford professor has 35 papers scrutinized, penas retracts
Which of these two headlines do you find more informative? "Some uniformed Germans in 41,140 armored vehicles seen traveling on French border road" or "Hitler invades France!" ?
@@jorgechavesfilhofirst one obviously
*PNAS
@@BankruptGreek You've just lost the war.
Whoosh.
$350,000 a year in California. With that, he should be able to afford a 1-bedroom apartment and a pizza from Domino's once a week.
California is a joke😂
It's extra if you want running water or toppings.
don't tell me you joke about the same shit at every social gathering on top of all these videos too any time california is mentioned. come on bro
Good ooooone
California is just fine.
Richest state in the union by far.
Texas's cost of living, adjusted for purchasing parity, is actually higher than California's if you make $80,000 per year or less.
Texas is a rich man's tax haven, pulling companies from every state in the union by bribing them with tax credits.
Without oil, TX would be Mississippi.
I understand why the process from research to clinical applications is so slow: there's a bunch of contradictory BS data out there.
The senior author is 100% responsible for the content of the paper.
The idea of AI going through old publications to find duplications like this is actually a fascinating idea (even if that's not what happened this time)! We already have a reproducibility crisis as it stands, so being able to weed out at least the most obvious fraud would be immensely beneficial for science as a whole.
Of course, people would learn to "cheat better", but there's no reason why algorithms in the future couldn't be produced to catch that later as well.
As sad as it may seem for science now, this could be a major step on the path to "fixing" it.
Even if people learn to cheat better it would increase the amount of effort necessary to do so (and the tools to catch them would also improve as long as we reward people cheating).
There is no interest in fixing this problem, as it is a systemic issue created from the top.
"Cheating better" actually wouldn't be a problem at all; AI generally greatly benefits from having an increasingly adversarial dataset, particularly when the data is genuine competition and not the arbitrary adversaries researchers are fabricating. The more effective way to beat this would be to intentionally create so many false positives that the model loses credibility permanently, and continuing to train the model to beat them becomes impossible to fund, but that would require the cheaters to be honest, which would never happen.
This AI will also never happen, though, since after a single false accusation, the project would probably get axed and ruin the creator's career, with the people who cheated their way to the top throwing around weight to blacklist them. It's too risky of a product to make for anyone inside academia.
@@cephelos1098 Which is how we can have tens of thousands of car fatalities a year from human error, but if we get 1 from a self-driving car, everyone loses their mind, even though it would still be objectively safer for everyone to have the AI do the driving.
The pushback against common sense AI solutions can only last for so long before the benefits present themselves and people no longer desire what life was like before it. Keep in mind that electricity had doom ads against it and people were protesting to keep electricity out of their towns. Either way, the passing of time itself is all that is needed to overcome that particular qualm. Especially if the leading experts keep losing credibility and their "weight" means less and less as time passes. Ironically, this is one of the few ways for them to maintain that weight long-term.
You could take the offensive aspect out by using it like a plagiarism checker and making it a standard procedure when accepting a paper for peer review.
If there are any anomalies, just ask for clarification.
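The automated duplication check this thread proposes can be sketched in a few lines. The following is a minimal, hypothetical example in pure NumPy: it only catches exact, grid-aligned copies, whereas real image-forensics tools also have to handle rotation, scaling, and lossy recompression:

```python
import numpy as np

def find_duplicate_patches(img, size=8):
    """Return pairs of grid positions whose size x size patches are
    pixel-for-pixel identical: a crude stand-in for the duplication
    screening an automated reviewer's tool might run on figure images."""
    h, w = img.shape
    seen = {}    # patch bytes -> first (y, x) where that patch appeared
    pairs = []
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            key = img[y:y+size, x:x+size].tobytes()
            if key in seen:
                pairs.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return pairs

# Synthetic "blot": pure noise, with one patch cloned elsewhere.
rng = np.random.default_rng(0)
blot = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
blot[40:48, 16:24] = blot[8:16, 8:16]  # the forged duplication

print(find_duplicate_patches(blot))  # → [((8, 8), (40, 16))]
```

Because the rest of the image is independent noise, the only match found is the deliberately cloned region, which is exactly why duplicated "background" in a published blot is so damning.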
Last name and being corresponding author on the article means that he is the most responsible. First author is usually a PhD student or postdoc. The Nobel Prize is now just for propaganda purposes.
???? What the hell are you talking about? He's the last author with about 8 people. There's a good chance he hasn't even read the paper. The last author is typically a courtesy. I'm guessing he lent a piece of equipment for the experiment. Being last means your cat probably had more involvement in the paper.
Kinda disappointed in this channel. It's pretty much click bait with the Nobel Prize spin.
@@johnsmithers8913 found the p hacker
@@johnsmithers8913 That may be true for some fields, but the comment above is correct that in many cases, the first author will be the PhD student responsible for the project, and the last (and more importantly, corresponding) author will be their supervisor, whose direct involvement depends a bit on the dynamics of a given research group but takes much of the long-term responsibility for the paper. The least involved are usually the group in the middle, which will be students (+supervisors) who just did maybe a small supporting measurement or calculation.
@@michaelbartram9944
?? I have my Ph.D. The first author, assuming a doctoral student, will publish papers based on his Thesis work. The Supervisor at the beginning of the Thesis will work with the Student to set up a structure of the thesis. From then on, the student is on his own but will periodically meet with his supervisor on a weekly or monthly basis. In my personal experience, I would go months without meeting my supervisor, and honestly, his input was minimal until the end of the thesis and was then a reviewer of the thesis and papers.
Yes, the supervisor is generally second author; the third is either a second researcher or someone who participates by providing something key, such as the samples or the specialized technique/equipment, and is likely to review the thesis and publications before publishing. You could break down the contribution to the research as 80, 15, and 5%, respectively, for the first three authors. All other authors after that are "courtesy" authors: generally lab technicians, professors who donated their moth-balled equipment for the thesis, or possibly people outside academia who provided help in excess of what an acknowledgement would cover. Whether these authors actually read the paper before publication (although I'm sure they received a copy) or after publication is questionable.
In summary, one could make an argument that the first 3 authors are the only authors that actually put work into the paper, had enough exposure to the data and procedure to determine whether the data was good or bad.
If you look at the papers that were shown in this video, Südhof was dead last in the list of authors. He must have contributed almost nothing to the work, and anyone in academia would know this.
@@johnsmithers8913 he is last and corresponding author on the article Jude speaks about in the video which means that he is the most responsible. do you think that if is author on papers he didn't even read that somehow it makes it better?
If an accountant or lawyer signs off on something that ends up being false, they don't get to blame an associate.
This is just sad. Südhof would have reacted with interest and joined in reviewing all the work submitted if he had not known of the false data. Science: you seek the truth; accolades are supposed to be the perks. Outstanding job, Dr. Elisabeth Bik. We need many more of you out there.
His response is not good!
"And as the result his paper was retracted from PNAS"
I think the correct term is “pull out” 😂😂
@@realGBx64 in this case it would be, "PNAS made the decision to pull out"
@@14zrobot pull your pnas out of my articles
"This cannot have happened by chance. As in: a chance of much, much less than 1 in 10^986"
Brutal.
I thought peer review was supposed to prevent this from happening.
Sorry to have to tell you this, but you are very naive. Peer review has become a charade, and indeed, a significant proportion of it is faked. Also, even if one reviewer says a paper is rubbish but another says it is quite interesting, the editor will probably publish it anyway.
Having been a peer reviewer myself, I know it can be difficult. Elisabeth Bik has an amazing visual cortex. She sees things I would miss. We need more like her.
Data manipulation can be difficult to spot, at least initially. I have just finished a battery of tests on a published data set. This set actually looks good despite it being almost impossible to recreate. Science isn't yet totally broken 😂.
Peer review is very poorly rewarded. You have a few days to review material that can be very technical. You don't get paid for this. You do not get thanked or acknowledged for the time and effort. Some put a lot of effort in. Others less so. One set of papers in Nature had an overall error rate of approximately 30%. The final figure when I had it sorted out was 32%, if I remember correctly. I found these problems the day the papers came out. I wrote a note to Nature giving examples of the problems. Nature decided that my letter was not of sufficient interest. Huh?
I tried to get this published elsewhere. The authors of the original papers did everything they could to stop it being published. I eventually got this published in the middle of a different paper.
It was an unreal experience.
The authors later admitted the presence of errors and, several years later, published a revised paper. The residual error rate was only about 10%.
Peer review is not easy. Journal editors don't like admitting mess-ups. Authors can make life difficult for you.
The whole process is messy.
@@binaryguru I should add that this is not speculation; it's a fact, because I review papers for journals as a world expert in the particular field and have seen several papers that I said were rubbish published anyway. Needless to say, there was no communication from the editor justifying his decisions.
No, peer reviewers have no access to the raw data, they assess the logic of the conclusions and the relevance of the findings. They have no way or incentive to actually check the validity of the data itself.
@@psychotropicalresearch5653 Has become?! It was always this way. Friends help each other even in deceit; just try questioning some inventor-lord or serial discoverer-searcher. It's all bullshit. Most of it, anyway.
Those duplicated boxes in the images reminded me of something... In 2013 there was a scandal where Xerox photocopiers would sometimes change numbers on copied papers; 6 became 8 most commonly. The cause was the JBIG2 compression algorithm, which detected repeated structures in the images and copied the same data onto all those places to save memory, but it clearly wasn't accurate enough to distinguish a small 6 from an 8. Perhaps this or a similar algorithm could cause such weird 'duplications'? There is plenty of information about this online if anyone wants to look more into it.
No, that makes zero sense that you would get entire sections of artifacting that were somehow identical in a blank section, multiple times, all within the same image.
Why are you trying to excuse what is OBVIOUSLY fraud by imagining some hypothetical scenario that you have zero evidence for, or understanding of? This attitude is how this fraudster managed to con his way to a nobel prize
No, that literally makes zero sense and could not possibly apply to the doctored images presented here. There are MULTIPLE perfectly rectangular, identical sections that appear "randomly" across the same data in otherwise "completely blank space." The odds of these being true artefacts created by compression are so infinitesimally small that you would be as likely to get multiple perfect bridge hands in a row as for this to occur the multiple times it did in this paper.
@@kezia8027 I don't follow your reasoning regarding them being perfectly rectangular and appearing in practically blank spaces. How would this make it less likely? Since all pixels are near-white, wouldn't that make it more likely to consider the areas 'similar enough'? And the areas being rectangular I'd almost take as a given for said type of compression artifacts. Much easier and more efficient to have an algorithm look for rectangular areas than any other shape.
Looking at the video again, there are other things that point to this not being the cause, however, such as the fact they only seem to appear in specific 'lanes' of the results, and only in some of the results (assuming the highlighting squares are after an exhaustive search). If these specific spots are of high importance to the results, that would increase suspicion further. The rotated images mentioned earlier are much more difficult to explain away as well.
@@Jutastre yeah you're right, the rectangular aspect was an asinine point that is irrelevant and doesn't help my case, but the point is that all that 'white' is really just noise. One pixel is more cream, one is more gray, one is more beige, and to the naked eye there are no perceptible differences. But for these sections to be modified to have the exact same characteristics, seemingly ONLY in areas where we would expect to see relevant data and not in random, completely irrelevant sections, shows a clear level of intent that chance or an algorithm could only produce in an absurdly unlikely situation.
And as you say, there are many other indications that this is false. That is the issue, you cannot look at a case like this out of context. Yes specific data is being accused of being inaccurate/forged, but to determine the likely root cause, we have to look at comparable information, and the surrounding context. Given these various red flags (which is admittedly all they are) there are more than enough of them to seriously doubt this work, and those who took part in it.
These red flags warrant MORE scrutiny, not people making excuses or attempting to paint a narrative of innocence, based entirely on their own personal beliefs and understanding (or lack thereof). What benefit is there in making excuses for this behaviour? At best it was negligent scientific method and reporting, at worst its explicit fraud.
The scientific community should be able to hold up to any and all scrutiny, otherwise what is the point of science?
As someone who went through the submission process before: JPEG is not an accepted submission format for images; rather, ALL images have to be TIFF, precisely to avoid compression artifacts. So I'm calling BS on that part. But more concerning is that there's definitely a pattern of copy-paste on 'several' occasions.
This is journal dependent. Plenty of journals accept jpeg format for images, but if you are submitting images where compression artifacts might affect how your images are interpreted then you probably shouldn’t submit compressed images.
@@thepapschmearmd I'm not sure which type of image you are referring to, but for me it was a western blot. But even if jpeg, there is no way it had anything to do with compression.
Sadly when large sums of money are involved as an incentive in almost any field, not just science, we get stuff like this happening.
This is disgustingly shameful and should be sanctioned harshly by the governing body.
They _are_ the governing body.
@@ferdinandkraft857 well who polices the police ?
@@jeskaaable No one, and that is why science has become a laughing stock. They are all afraid of getting defunded for whatever reason.
@@jeskaaable , I thought your responder succinctly mooted that point...
@@jeskaaable Unfortunately, no one.
Video idea: what happens to the professors after being caught in scandals.
I'm in research, and here are 2 interesting examples known in our field:
1. David Baker, who is often regarded by professors in my field as the next Nobel prize winner, recently created an extremely well-funded startup. He also hired Tessier-Lavigne as CEO. That's the Stanford/Genentech professor with potentially fraudulent papers on Alzheimer's.
2. David Sabatini, the famous ex-MIT and HHMI professor fired for sexual misconduct, recently got big funding for his own lab from two wealthy investors.
1. www.science.org/content/blog-post/another-new-ai-biopharma-company
2. www.science.org/content/article/sabatini-biologist-fired-sexual-misconduct-lands-millions-private-donors-start-new-lab
There is a German hacker conference talk exposing printers for sometimes actually producing these kinds of artifacts during scans. I don't wanna defend Südhof here, but that really ought to be taken into account.
The talk is called "Traue keinem Scan, den du nicht selbst gefälscht hast" ("Don't trust any scan you didn't forge yourself").
mfw the kid doesn't know the difference between a printer and a scanner.
If those duplicated areas were caused by scanning compression then why were they only duplicated in a few specific areas of some images? Wouldn’t you expect them throughout all the background areas that looked similar?
@@Sashazur It's truly staggering the number of people speaking out of abject ignorance, defending someone who clearly has been deceptive, and who without a doubt, does not need 'joe blow' imagining impossible scenarios that excuse deception.
If someone doesn't make a joke about the guy retracting his PNAS, I'm going to burst.
PNAS stands for Paper not Accepted in Science, not what you think
@@periotromsoe “not what you think” 🤡 oh my gosh so hes not talking about the sex organ??!!!? Wow thank you so much for educating me. Without your comment I would never have known. If only I had watched the video.
5:24 I'd guess that the duplicated regions in the noise are due to some lossy image compression used some step along the way. However I'd expect them in all the images, not just one or two of them.
I agree. If it was a compression artifact it would be more pervasive.
I'm always amazed at how mind-numbingly dumb they are in faking their data. They don't even add random "noise" onto their Excel copypasta.
What's shocking to me about this is how crude the manipulations are. I am a researcher myself, and if I ever had the intention to make fake data for a paper, I can think of so many ways of doing it in a much more sophisticated manner so that it would be much harder to find indications for the manipulation than looking for duplicates. And there are MANY people with the technical skills to pull this off -- therefore it stands to reason that people probably are cheating in more sophisticated ways while evading detection.
AI engineer here, modern AI image-generating algorithms cannot produce exact duplication of the kind observed in these plots. They can produce very similar regions, but because of the way they create the image the exact pixel values will never be identical in different regions of the image. (It's similarly as unlikely to random noise being identical in two regions of a natural image.) This seems more like manual photoshopping.
The paper in question was published in 2010; there is no way that any form of generative AI was used in creating these plots. ML models were much less common and way, way less user-friendly back then.
Issues like this are partially why public trust in academia has fallen drastically, as government policy decisions and products such as medical treatments are developed based on academic findings. I'm extremely happy to see public peer reviews occurring to start weeding out the snakes in the grass that have been an issue, imo, for far too long. Time to bring credibility back.
It is soooooo common for scientists to photoshop their images to make them look better, even if the general results would not change. And this has real implications: investors make decisions based on how good a discovery appears, and perfect images can convince them to invest.
“Trust the science” has forever become a meme.
I wouldn't call academia broken just from the misconduct of one scientist. And academia being able to flag and retract work that might be a product of misconduct and dishonesty, from a Nobel prize winner at that, sounds like a good system to me. It's not perfect by a long shot, but hey, it's there.
Having identical blocks in an image is actually not as unlikely as the video suggests. It really depends on the encoding method used, digital artifacts really can produce these effects with non-trivial probability. For example, JPEG has a low-quality setting in which entire regions of the image will have the exact same pixel value; i.e., the pattern that's repeating across the image is just a constant value. In audio codec these things happen a lot too, and in measurement devices you have quantization errors... two random floating point values are very unlikely to be the same, but two measurements of very low amplitude signals are very very likely to be the same. Much like the image of basically a white background with low real-life variation being photographed.
I'm not sure which codec was used and I can't rule out wrongdoing just from this video, but I think people throw out these "it's as likely as 1 in 10^900" way too much without it being correct. The actual digital processes generating these numbers and images are complex, and NOT purely random, and it's usually really hard to tell the actual probability of something like this happening.
On top of that, for someone like Sudhof who has hundreds of published papers, with thousands of images in total, it might not be as unlikely as you might think to have 35 papers with random artifacts. That's before accounting for human error like the paste error in the video (which is really an honest mistake that can happen), and before accounting for the hundreds of thousands of scientists out there that are making mistakes all the time. I'd wager that finding a high-profile scientist with a lot of problematic papers due to no wrongdoing but simple pure chance is not an unlikely event.
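As a toy illustration of the "constant value" point above (a sketch only; real JPEG quantizes DCT coefficients rather than raw pixels, but the many-to-one collapsing effect is the same idea):

```python
# Toy sketch of how coarse quantization can make two genuinely
# different blocks of faint background noise come out identical.

def quantize_block(block, step):
    """Round every pixel value to the nearest multiple of `step`."""
    return tuple(step * round(p / step) for p in block)

# Two distinct near-white noise blocks (8-bit values near 250)
noise_a = (249, 251, 250, 252, 248, 250, 251, 249)
noise_b = (250, 249, 252, 251, 250, 248, 249, 251)

print(noise_a == noise_b)  # False: genuinely different noise
print(quantize_block(noise_a, 16) == quantize_block(noise_b, 16))  # True
```

This supports the "not 1 in 10^900" point, but it also cuts the other way: quantization like this flattens whole regions uniformly, so duplicates confined to a few specific lanes remain hard to explain.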
These artifacts may be way more common than you think - czcams.com/video/7FeqF1-Z1g0/video.htmlsi=hXJ-J96cgwGXdM6r (or, if you prefer English, search for "Xerox glitch changes documents"). In short: a memory-saving algorithm looked for similar blocks of pixels in scanned documents and replaced them. The block size was similar to the letter size in common documents. The effect: multiple documents where significant numbers were replaced with wrong ones (6 -> 8, 8 -> 9). And since, by definition, the algorithm tried to find matching blocks, the changes weren't obvious when reading. I don't think matching blocks of, basically, noise is proof of wrongdoing. Sure, Südhof could have written a better answer, but when someone throws a "career-ending" accusation at you and you don't know what's going on...
Sadly, arrogance isn't a good defense in science.
When I worked in academia, I was there long enough to hear rumors about specific professors. Some of these rumors even came from their own students. Only a few profs names came up again and again, but it became predictable after a time. There were certain names you came to mistrust, even if you had no direct proof or contact, because the stories never stopped coming.
Whether it was an academic integrity problem, or a problem with interpersonal conduct, some names became associated with this stigma. But the truly discouraging part were the recurring counterpart tales about students who supposedly HAD direct involvement in these issues, and took them to the administration, and their concerns were buried or ignored. These stories ALSO came up again and again.
These problems don't exist JUST because there are bad actors in academia. They also exist because *administrators* would rather sweep the concerns under the rug, and avoid a scandal that might hurt donations or grants, rather than maintain a rigid standard of integrity. I'm convinced that a lot of the fraudsters are given cover, intentionally or not, by their departments and institutions who are desperate for the gravy train to continue at any cost.
My neighbor has a sign on her lawn: "Science is real". I was always taught that science should always be questioned. I remember all the scientists saying things like "salt causes high blood pressure", "trans fat is better than saturated fat", "tobacco is NOT addictive". Yes, even published science papers are often proven wrong.
And what saddens me is the realization that a lot of that “wrong” science isn’t an inevitable side effect of the scientific method (where you expect conclusions to be revised as better data can be had and paradigms shift), or even the result of honest mistakes, but rather because of outright fraud.
Salt does cause high blood pressure. It is basic chemistry.
@@Sashazur What can you expect when you need money to live, and without positive results you are far less likely to get funding?
@@wumi2419 So that makes it OK?...
It’s the advertising that makes these claims, FFS, not necessarily the scientists. Why are people so obsessed with tearing science/academia & scientists down? Drive at the examples, not the entire edifice….
Südhof seems to be pretty offensive, which in and of itself is absolutely terrible behavior from such a revered scientist. 😮😮😮😮
but absolutely unsurprising for anyone who has spent any time in academia
@@not_ever The guy even looks like a few I know who are as dodgy and arrogant as hell. Carbon copy.
Considering that he is about to become unemployable, have his tenure and pension taken away, and possibly have to pay damages, it makes a lot of sense!
Science is a search for truth. If such discrepancies appeared in my work, rather than being offended I would want to learn why they are there, and what they mean. The off-hand, poorly thought out dismissals are concerning. Whenever someone seems to be saying “don’t look over there”, I have the overwhelming desire to look.
Sounds like you want to get turned into a pillar of salt!
@@inthefade Sounds like the humans who invented that myth wanted to give their "don't look at it" a veneer of divine validation!
@@inthefade Thanks for the laugh. 👍
Thank you for making these videos!
Academia isn't broken in isolation, it has caught the industry bug. Industry is more scammy, and as industry and academia are quite fused, this isn't surprising.
When the CEO of a public company engages in activity that indirectly causes substantial reputational harm to that company, they typically get sacked or at least their bonuses get reduced or clawed back. It doesn't matter if they aren't directly responsible. It was their job to vet their people.
I don't see a good reason why that shouldn't also be true for a scientist, whose mandate is to advance the best evidence possible in support of their hypotheses. He should be put on some form of probation. His NSF (or relevant) grants should be frozen and limited only to paying existing employees (students/postdocs): no more expensing the taxpayer for conferences and other fun stuff until he's demonstrated that his data-reliability processes are more stringent. If pharma companies want to keep paying him for dodgy research, I guess there's nothing that can be done there. But anything public should absolutely come with strings to punish this type of behavior.
Kind of interesting that the cheaters seem pretty lazy about cheating, too.
*the cheaters who get caught
It's amazing how little effort these academia top brass put into forging a lie.
Why put in more effort? This amount got them a nobel prize. Elizabeth Holmes was worth BILLIONS.
More effort wouldn't get them any better results, so why bother?
Sounds to me like a large part of the issue is that the system is built such that professors that are in an advisory role (and a loose enough role that they can't spot faked data in the study) are getting the paper published under their own name.
If they didn't do the bulk of the work, their name should not be first on the paper.
Instead, established professors names are being put on papers that in reality they had very little to do with, and there is little incentive for those actually doing the work to do it right, because it is not their reputation on the line.
For my thesis, I brought the idea to my advisor, and we spoke for 30 minutes once a week, and she never even saw the experimental apparatus or the code.
For my cousin's thesis, the project was his advisor's initial idea, but the only time he got with his advisor was a 20-minute slot once a week, within which 15 students took turns updating the advisor on their different projects. If all of those are published, that's 15 papers with the professor's name on them, none of which the professor actually knows all that much about.
profs names should never be at the front - that is reserved for their trainees who did the work. Senior authors almost always have their names listed last, not first, on published manuscripts.
Take the fame... take the blame...
Thank you for publicising these matters. Thanks to Dr. Bik. for doing her work. Nobel prize or not this is science and all should have their work scrutinized. This is the only way to make progress.
I am loving this disobedience to "authority".
Been waiting for this
There are few people as utterly useless as "an academic".
These efforts remind me of the efforts people cheating at speedruns go through.
These might be artifacts from the Floyd-Steinberg dithering algorithm, which is used to reduce an image to fewer color or gray levels (e.g. grayscale to black-and-white). It is an 'error-diffusion' method: the quantization error at each pixel is spread to neighboring pixels with fixed weights, which can create visible texture patterns in flat areas.
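For reference, here is a minimal sketch of Floyd-Steinberg error diffusion (an illustrative toy, not any scanner's actual implementation):

```python
# Minimal sketch of Floyd-Steinberg error diffusion (1-bit output).
# The quantization error at each pixel is spread to unvisited
# neighbours with fixed weights 7/16, 3/16, 5/16, 1/16. Note it does
# not stamp predefined repeating patches, so large byte-identical
# rectangles are not a typical artifact of it.

def floyd_steinberg(image):
    """Dither a grayscale image (list of rows, values 0-255) to 0/255."""
    h, w = len(image), len(image[0])
    img = [row[:] for row in image]  # work on a copy (floats allowed)
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 255 if old >= 128 else 0
            img[y][x] = new
            err = old - new
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1][x + 1] += err * 1 / 16
    return img

# A flat mid-gray image dithers to a scattered mix of black and white
# pixels whose average approximates the original gray level.
gray = [[128] * 8 for _ in range(8)]
out = floyd_steinberg(gray)
print(sorted(set(v for row in out for v in row)))  # only 0s and 255s
```

The scattered, non-repeating texture this produces is quite different from the solid duplicated rectangles flagged in the paper.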
Then why wouldn’t it be noticeable in all areas of all the images?
I'm pretty sure I (and many other computer science students) have a good idea of what might have happened here. In 2013, Xerox (a major printer and scanner manufacturer, if you're unaware) had a PR disaster when an independent researcher, David Kriesel, found that a large-scale business scanner a friend of his used had scanned different numbers than were shown on the original piece of paper. He found out that Xerox used an image compression format known as JBIG2, which creates "patches" of the images it scans to be reused in the document. The default settings of the Xerox machines had a very high tolerance (i.e. patches that looked nothing alike were used as copies), causing even numbers in spreadsheets to be entirely incorrect. I'm not saying that's 100% what happened here, but copied regions in images can certainly be explained by lossy image compression.
Yeah, I remember the video I watched about this somewhere on YouTube. Was really interesting. However, that does not explain the numbers that appear to be reused multiple times, as they were not scanned but directly printed (I would assume?). Also, if that were the case, you could definitely reverse-engineer the results to identify whether those or different numbers were being used (assuming a digital copy still exists; its absence would be weird in itself, but possible).
@@maudbrewster9413 Ah sorry, I was not referring to the spreadsheet in the video. I meant scanned spreadsheets in general, just to give an example.
I do recall reading about that. However I don’t think that explains the two instances described here. For the first one where the blot image backgrounds were replicated in some areas, if it were due to a compression algorithm reusing the same patterns, one would expect those duplicated patterns to appear more frequently than they actually did. For the second one where groups of numbers were the same, the sample page shown in the video looked very high resolution with no artifacts - like something screen shot from a PDF, rather than something scanned from a printed page.
@@Sashazur Why would the patterns be more frequent? Maybe the chosen parameters just happened to result in these areas having a high "similarity score" and therefore being duplicated. Concerning the spreadsheet, see my previous reply.
@@mikee. That doesn't make any sense at all. Why would two different blank areas have LITERALLY THE EXACT SAME COMPRESSION ARTEFACTS? This is so impossibly absurd as to be asinine. Your ignorance is astounding, and your hubris is worrying. Attitudes like this are how this con man became a nobel prize winner.
This has happened before, many, many times. Nobel Prizes seem to bring out the worst in institutions and people. Examples include Jerome Friedman owing his physics Nobel Prize to a graduate student who never got credit for anything. Or Samuel C.C. Ting and Burton Richter, who owe their prizes to a third party. The initial discovery was at Brookhaven National Laboratory on Long Island. Ting doubted the results and dithered. In desperation, a phone call was made to Stanford telling Richter where to look. Stanford had the better PR team, with book authors etc., and they won the war of words. The guy who did the work first got tenure at MIT but is pretty much only known to insiders; he does not even have a Wikipedia page! Also, Katalin Karikó and Drew Weissman would have stayed unknown too, considering their low status in the academic community, if the pandemic hadn't happened.
@PeteJudo1, the cloned rectangles shown around 5:30 in the video can actually be artifacts of an image upscale. If you upscale an image that has a one-colored area, the algorithm will make the same "decisions" about what to do there; seeded with the only non-blank area, it can repeat the same output until it fills the blank area, producing repeated upscale artifacts. I am just a software developer and have no idea about the subject or composition of the photos, but I have enough experience to say "well... I have seen this before" ;)
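The determinism behind this point can be sketched in a few lines (a toy nearest-neighbour upscaler, not any real resizing algorithm): identical input patches necessarily produce identical output patches.

```python
# Toy sketch: any deterministic upscaler maps identical input patches
# to identical output patches, so flat regions can legitimately yield
# repeated pixel patterns after resizing. (It does not explain repeats
# that appear only in a few meaningful lanes of a blot.)

def upscale_2x(patch):
    """Nearest-neighbour 2x upscale of a patch (list of pixel rows)."""
    out = []
    for row in patch:
        doubled = [p for pixel in row for p in (pixel, pixel)]
        out.append(doubled)
        out.append(list(doubled))  # repeat the row vertically
    return out

flat_a = [[250, 250], [250, 250]]
flat_b = [[250, 250], [250, 250]]  # a second, separate flat region
print(upscale_2x(flat_a) == upscale_2x(flat_b))  # True: identical output
```

The catch, as others note, is that real blot backgrounds contain sensor noise, so two regions are never pixel-identical to begin with; determinism only clones what was already identical.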
Elizabeth is like Lone Ranger kicking ass. Respect.
The correct response would be to be concerned about the anomaly and to follow it up with the first author. The actual response leads me to think that there was either direct complicity, or an indirect culture that makes it acceptable to manipulate results. Although I would add that the blot images may have been manipulated for aesthetic reasons... like someone put their coffee mug down on them and someone made the call not to repeat the experiment.
edit* Also to add, some supervisors can be really supportive of their teams and phd students, and will defend them really strongly.
To be fair, someone who blindly supports their team in spite of evidence that they should be inquiring into their accuracy/efficacy is not someone who SHOULD be supervising.
I know a few people who have worked in pharma research which overlaps very closely with academic study. I summarised some of what I heard about as "interpolate, extrapolate, exaggerate, fabricate". And that was 30 years ago.
My science teacher used to say: "manipulation is wise estimation", but that was in the context of conducting the most basic experiments in practical examinations where students would sometimes get weird readings / measurements due to some instrument errors or fundamental mistakes, and then would have to rely on knowing the topic of the experiment enough to fabricate numbers to get through the exam.
5:38: Not sure how the images were digitized, but if they were scanned, these kinds of artifacts might occur due to the scanner's image-size reduction method (i.e. the scanner might replace similar-looking regions with identical ones to reduce the file size)...
As a visual digital artist of 25 years, that happens when you use a rectangular selection to copy and paste over a region. It looks like they used several small samples, put them together, then copied that again to move it to another area. Yes this is used to cover blemishes
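For what it's worth, the kind of duplicate-region check the image sleuths run to catch exactly this can be sketched in a few lines (illustrative only; real forensic tools also tolerate rescaling, rotation, and recompression):

```python
# Hedged sketch of a duplicate-region check: record every small pixel
# window and flag any window that appears, pixel-for-pixel, in more
# than one position of the image.

from collections import defaultdict

def find_duplicate_windows(image, size):
    """Map each `size` x `size` pixel window that occurs at more than
    one position to the list of its top-left coordinates."""
    h, w = len(image), len(image[0])
    seen = defaultdict(list)
    for y in range(h - size + 1):
        for x in range(w - size + 1):
            window = tuple(tuple(image[y + dy][x:x + size])
                           for dy in range(size))
            seen[window].append((y, x))
    return {win: pos for win, pos in seen.items() if len(pos) > 1}

# A tiny "blot background" where one 2x2 patch was pasted in twice:
img = [
    [9, 7, 0, 9, 7],
    [5, 6, 0, 5, 6],
    [1, 2, 3, 4, 8],
]
dups = find_duplicate_windows(img, 2)
print(((9, 7), (5, 6)) in dups)  # True: same patch at (0, 0) and (0, 3)
```

In genuine sensor noise an exact repeat of even a small window is vanishingly unlikely, which is why exact matches get treated as evidence of copy-paste rather than coincidence (overlapping matches in truly flat regions would need filtering in practice).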
I came for P NAS jokes, I got them. Never change, internet.
I'm a photographer and have edited many photos with the goal of getting them to "look right". Sometimes the unmanipulated result looks misleading, so you tweak it. That doesn't excuse tweaking scientific images, but if an image in 2010 was OK in the meaningful area and had some artefact in the area of the coloured boxes, I could see someone reaching for a band-aid type tool to erase the artefact and clone some background in. "No one will ever know!" Wrong: this kind of edit is pretty easy to detect if you know how it's done. I can't tell if this is corrupt science or just the unwise application of photo-editing software in the publication process. I would say there are two conflicting ethical standards here, and the higher scientific standard rather than the image-preparation standard should apply.
So many people who are attempting to claim it could have been caused by a XEROX. Clearly these people haven't seen what artefacts look like, or what copying and pasting "blank" areas over them looks like either. When you know, it is painfully obvious what has occurred.
WHY it occurred is still technically up for debate (though I have my suspicions) but without a doubt, these images have been deliberately doctored for SOME reason.
If people put in so little effort to fake data, they must be getting away with it quite a lot and be very confident
Just "what" ISN'T broken?
Nothing. Everything is broken. Because bad people in every field of life have noticed that scams work, and there are no repercussions for being caught.
Yet another sad nail in the coffin of the integrity of peer reviewing. How can research be done, and actual advancement made, when no one's results can be trusted?
I hear stories that if you reject a paper from a prestigious university, they will just write an angry e-mail to the editor who will publish anyways.
Or maybe it's just hard for the human eye to spot 90/180-degree rotations compared to AI.
Peer review, however flawed, is not at stake here. Not everyone, in fact almost no one, has E. Bik's visual gift.
If a person is included in the list of authors of a paper a reasonable assumption is that person is familiar with the contents of the paper and has at the very least reviewed the entire paper, agrees with the hypothesis, methods and materials, data collection and analysis and the conclusions of that paper. If that is not the case then that person has no business being a listed author of the paper.
From Scooby Doo: and I would have gotten away with it if it wasn’t for you meddling kids.😂
PNAS retractions are tight!
You are our youtube news source of academia drama and, for that, we thank you.
Boosting for the algorithm 🙌 Love your work, keep it up! 🌻🐝
Keep doing what is right and exposing these. Society and institutions become better through honesty.
These stories tell us not to listen to authority but to judge by scientific merit alone.
I want to remind everyone of the Xerox bug: when scanning something, the machine compressed the image via pattern matching. That is absolutely what that looks like.
Interesting thought. However, the images being analyzed are the digitized images and not printouts.
@@arielspalter7425 Same difference unless the analyzed images are raw data themselves.
@@arielspalter7425 image compression is not only applied to printouts.
That's moronic. They explicitly point out that there are repeating sections within a single image. This is literally never caused by image compression. You have literally zero idea what you're talking about, and attempting to pass it off as fact. This is asinine and irresponsible and exactly why garbage like this paper manage to get so much traction.
@@kezia8027I advise you to read up on the Xerox scandal. Repeating patterns in the same image was exactly the problem. Image compression absolutely does this.
Checking papers for errors is honestly more important than the papers themselves in this state of academia.
Thanks for keeping us updated on all this stuff! I'm so glad to hear that there's a website where scientists publicly critique each others' papers! That gives me a lot of hope for a large-scale house-cleaning of junk science. ^^b
Why, biology, whyy? Maybe because it is one of the subjects with the highest number of images and graphs that can be manipulated. Again, as a future biologist, this is really awful 😥. At least I can understand all of the content of your videos haha.
Because biologists are particularly plagued by competition within the field combined with experiments which are presently extremely time consuming
Most papers are done by MSc and PhD students. The profs just assume they made honest measurements.
That's rarely the case. Students are taught from the BSc level that there is a huge incentive to fix the data a bit to get a good grade and finish the lab report sooner.
This mentality obviously continues into graduate studies. They want their diploma and to move on with their lives.
Speaking as a person who has his name on a chemistry paper as part of a BSc course: I couldn't have given a rat's ass if cheating a bit would have gotten me the degree sooner.
(I didn't cheat, only because someone else continued my research and wrote the paper. Maybe she cheated, who knows 🤷🏻♂️)
Perhaps it should have been double checked by someone before they were awarded a NOBEL PRIZE then...
@@kezia8027 You don't get it. The paper he was awarded the prize for was done by him and, of course, was reproduced countless times. Also, this information is already common knowledge and used extensively.
The issue is never with the most groundbreaking papers, which go through rigorous post-testing.
It's the countless not-really-that-important papers the Academia factory is producing.
The entire system is designed to produce quantity, not quality.
Man, I remember it's been like a year since you said "nah, my chosen field is dodgy, I'm going to start focusing on bringing attention to how broken academia is." It's wild how we never know what we are going to experience growing up.
It always starts with one little "fix". And then it's two. And then it's just this paper. And then its...
Will you make a video about the consciousness field? There is a debate that the most popular theory is pseudoscience and is being popularized as new science. This is a scientific integrity issue too.
What do you mean by "consciousness field" precisley? Are you talking about the various academic fields that study conscioussness(i.e nueroscience, psychology, philosophy etc), or some concept of a "field of consciousness" as articulated in various spiritual belief systems& philosophical traditions& popular new age conceptions in the public? If the former- there is no single field that studies consciousness, it is studied across a variety of fields, & it isnt really possible for it to be a "psuedo science" when there is no established science or single approach to studying consciousness & zero consensus on the nature of conscioussness& how it arises& even what the defintion for consciousness is in ANY of the fields thay study it, let alone among all of them. And if the former, than unfortunatley that is a position that is FOREVER outside the scope of science. Currently neuroscience beleives it WILL EVENTUALLY be able to describe how consciousness functions& emerges from the brain, but spiritual beleivers will claim this is impossible, that doesnt make sciences attempts to exain it scientifically "psuedoscience", a spirtitual understanding on the nature of conscioussness is not something that can EVER be scientifically proven, so science will always keep trying to explain it...trying to find an answer to an unexplained topic isnt "pseudoscience", its actually science itself...it would ve psuedoscience if scientists stopped asking questions about it & started giving definitive answers that clearly dont align with observations, scientists are NOT saying theyve figured out how subjective conscious experience arises, merely that they beleive they should try to understand it & beleive theyll eventually be able to, which spirtual ideas will say is an impossible task since consciousness doesnt emerge from phsycial material reality or brains...niether of these are psuedoscience however. 
In fact they are philosophical positions and nothing more (i.e. science takes the philosophy of "physicalism", while the spiritual consciousness approaches take a few different philosophical forms: some of them metaphysical, others ontic, others logic- and self-inquiry-based, etc.). Neither is "pseudoscience", because in the first case they are doing real science and simply haven't arrived at an answer yet, and in the second case they are not "pretending" to be science, acknowledging instead that it is forever outside the scope of science.
@ar4203 Yeah, I think someone is confusing YouTube and similar propagation and popularization of bullshit consciousness (and other) studies with academic papers. This cycle and hype is ALL about social media and not academia, which then bears the brunt of the anti-intellectual brigade…
Academia has a massive integrity issue. I'm just trying to determine if I'm being unfair in associating it with the drastic shift in political affiliation (and lack of diverse opinions) in universities.
It likely has nothing to do with political affiliation, as these problems started becoming a thing long ago. This is more an issue of the university system not being designed for the size it has today. Societies rely on mechanics that enforce responsibility, ensuring people do their job and don't just make shit up. In universities that mechanic is basically "peer review", which relies almost entirely on good will, as it isn't rewarded at all compared to writing a paper.
No or poor checks and balances = a lot of frauds, liars, and cheats. The solution? Create effective mechanics for rewarding good work (e.g. don't reward people for the number of papers they push out, but for the value those papers contribute) and create mechanics for ensuring people are not rewarded for bad or fraudulent work.
This all happened because universities were effectively built by enthusiasts funded by nobility and powerful individuals, and that slowly grew into the later universities. The MASSIVE societal effort of pushing science at today's scale only really started in the 20th century. Before that, the simple social mechanics of being in a group where everyone knew each other held people somewhat accountable.
After all: get caught in a lie once and everyone knew not to trust you within the week. Today you get caught in fraud in your field and maybe a handful of people know at best (equal to the total number of people in the field a bit over a century ago). That's if you don't have friends higher up who keep it covered up, that is.
It’s the incentive structure inherent in academic research, not the beliefs of the people doing the research. Right wing Evo psych researchers are guilty of all sorts of sketchy practices, for example.
I agree
@seanbeadles7421 So no accountability??? If I had an apple stand with a sign saying $1/apple and walked away for a few minutes, are you saying it's OK to take my apples because I incentivized you to take them without paying while I was gone? That mindset is immoral and pathetic.
I'd be most concerned about the pharmaceutically derived "conclusions", which may well be harming patients and/or making lots of money for some pharma mafiosi. Has anyone investigated which drugs and drug-making companies benefit from these "errors"?
Excellent work! Thanks also to the two scientists who have painstakingly double-checked the published works.
Sucks to have to retract your Pnas
This problem with academia seems to be pretty pervasive throughout society. Could it be… capitalism?!
No, it is a human issue.
You can't take credit by having your name on a paper and then turn around and say you weren't really all that involved. If you want the credit, you have to take the responsibility: either don't take credit for papers you had no involvement in, or take responsibility for the content of the papers and take some credit.
Take the fame...take the blame...
Dude, been loving your vids the past year or so since I found you. It's so interesting how so many things we believe are one discovery of duplicated data away from being destroyed.