15. Hearing and Speech
- added 26. 10. 2021
- MIT 9.13 The Human Brain, Spring 2019
Instructor: Nancy Kanwisher
View the complete course: ocw.mit.edu/9-13S19
YouTube Playlist: czcams.com/play/PLUl4u3cNGP60IKRN_pFptIBxeiMc0MCJP.html
Humans use hearing in species-specific ways, for speech and music. Ongoing research is working out the functional organization of these and other human auditory skills.
* NOTE: Lecture 14: New Methods Applied to Number (student breakout groups-video not recorded)
License: Creative Commons BY-NC-SA
More information at ocw.mit.edu/terms
More courses at ocw.mit.edu
Support OCW at ow.ly/a1If50zVRlQ
We encourage constructive comments and discussion on OCW’s YouTube and other social media channels. Personal attacks, hate speech, trolling, and inappropriate comments are not allowed and may be removed. More details at ocw.mit.edu/comments.
Making sucking clicks happens with babies too, and with dogs - we even use clicker noisemakers to train dogs.
The fact that someone from Mozambique has access to an MIT lecture from the comfort of their couch is simply mind-blowing. By the way, I was surprised to hear that Professor Nancy has visited Mozambique! Anyways, thank you MIT for giving us access to high quality educational materials for free.
Dear Professor, the claim that nobody had done the reverbs before is not true. Many years ago I was an acoustic engineer, and my field was architectural acoustics. Specifically, I designed the acoustic environment of rooms for different purposes, e.g. music, speech, etc. One of the key properties we study is the reverberation time of the room; there are many, many measurements of different types of rooms, and the fact that sound level decays has been very well known in the field of architectural acoustics. Just thought I’d point it out. Happy to offer more details if you are interested.
Did you publish your results so other people can read it?
It's amazing how few upvotes these lectures have, especially when you compare them with other types of videos on YouTube.
I believe comments help with visibility, so I'm here just to say thank you very much! Give us more high quality content!
Try to find another video on YouTube with an upvote ratio of 3.2% or more...
@TheHuesSciTech First, I wasn't talking about the upvote ratio, but about interest in the topic in general. Second, I'm literally watching such a video right now - 89 views, 10 upvotes: czcams.com/video/q1wqDaNRuc0/video.html&ab_channel=OmarioRpg
Thanks for these free lectures!
Two random ideas off the top of my head about why the primary auditory cortex might have two areas for high frequencies.
1. Backup/error detection. It would be evolutionarily beneficial to still be able to hear if one side got damaged.
2. Each side might be subtly different: perhaps one is close to the true signal and the other is processed in some way. Some computation could take both signals and use them for another function, such as helping figure out sound direction, or performing noise reduction to help us pick out the sound we're interested in.
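The sound-direction part of that second idea can be sketched numerically: given two copies of a signal, one delayed relative to the other, cross-correlation recovers the delay (roughly how an interaural time difference might be estimated). This is a hypothetical toy with made-up numbers, not anything from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(1000)

delay = 7  # samples: the "true" offset between the two copies
left = signal
right = np.roll(signal, delay)

# Full cross-correlation; the position of the peak gives the delay
corr = np.correlate(right, left, mode="full")
estimated = corr.argmax() - (len(left) - 1)
print(estimated)  # 7
```

The peak sits where the shifted copy lines up best with the original; with a long noisy signal that lag is unambiguous.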
Thank you very much for enabling the Spanish subtitles. A great gift for non-bilingual Spanish speakers.
12:04 He asked about the "intensity or volume" and the answer was that it isn't well depicted on that graph. But I would have thought that the graph literally shows the overall loudness by how tall the squiggles are. They show the measured amplitude of the pressure wave at any given time, i.e. loudness/volume.
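The commenter's point can be checked numerically: the height of the "squiggles" is the pressure amplitude, and RMS amplitude is a standard proxy for loudness. A minimal sketch with made-up signals (NumPy assumed):

```python
import numpy as np

def rms_amplitude(signal):
    """Root-mean-square amplitude: a simple proxy for perceived loudness."""
    return np.sqrt(np.mean(np.square(signal)))

# Two 440 Hz tones, one at twice the peak amplitude of the other
t = np.linspace(0, 1, 44100, endpoint=False)
quiet = 0.1 * np.sin(2 * np.pi * 440 * t)
loud = 0.2 * np.sin(2 * np.pi * 440 * t)

# Taller squiggles -> larger RMS -> louder sound
print(rms_amplitude(quiet) < rms_amplitude(loud))  # True
```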
Thank you for lectures!
Thanks for the lecture on hearing and speech. The study on the vowels and the consonants is very interesting. One falls on the vertical and the other on the horizontal in the bar chart. Amazing. Thanks to Dr. Nancy Kanwisher and MIT.
A vowel is a long-lasting sound - a crotchet; a consonant is a fricative - a semiquaver - it doesn't last. The pictures don't reveal anything hidden in the sound, but they're pretty pictures :)
Thank you for the reply Dr. Nancy Kanwisher.
Great lecture
It is at the sulcus, I think, and the visual-spatial areas, but using the nearby regions. I have autism spectrum disorder (high-functioning Asperger's), it happens to me, and you helped me to know that.
You could think with different parts by focusing on different fields.
Amazing
Interesting!
I had to take a break halfway through. I mean, the brain is so amazing at carrying out these seemingly impossible tasks effortlessly... well, ironically, it's very hard to process that.
Good & nice
Very well put lecture, although I can't help asking: what essentially plays a role in the choice of colors attributed to the specific sound stimuli, like the ones depicted in the graphs? Is there a hidden psychoanalysis, or is it just a random choice? 😅
#OUTSTANDING
Correlation between input and how hearing impairment develops - the same as with plasticity.
Funny thing, for me the reverb example sounds quite obviously like a train station announcement. An alien example, I guess, for Americans. (And totally not like a cathedral.)
Hey MIT, please upload the other videos from the lecture series - the missing ones.
Details on the missing videos (see the course on MIT OpenCourseWare for more info, readings, lecture notes, etc. at ocw.mit.edu/9-13S19 ):
#3 did not have a lecture, it was an in-class dissection of a brain.
#12 was a guest lecturer on brain-machine interface (we don't always get permission from guests to publish.)
#14, 22 did not have lectures, they were student breakout groups/discussions.
#17 no notes given on why the video was not recorded. The topic was MEG Decoding and RSA.
#19 class canceled, video not recorded.
#23 is still being edited (the lecture is being updated with material.)
#25 was a recap of course and final review, video not recorded.
i closed my eyes and she walked around but i couldn't tell the difference at all lmao
that's because her mic moved along with her, so the relative offset between her voice and the auditory receptor (the mic, not just our ears) didn't change.
Btw, does anyone know by which mechanism neurons specialize to respond better to certain stimuli and not others (STRFs)? Is it analogous to what happens in deep learning neural networks? ( 54:59)
rofl - you betcha!
5 more to go
24:59
Amazing profile
How does any of this explain people hearing either Yanny or Laurel?
BINGO! THAT'S WONDERFUL 🤣😅🤣😅🤣
physical fourier transform!
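The comment alludes to the cochlea mechanically decomposing sound into its component frequencies along its length, much like a Fourier transform does digitally. A toy sketch of the digital analogue, with made-up tone frequencies:

```python
import numpy as np

# A mixture of two pure tones, 440 Hz and 1000 Hz
fs = 8000
t = np.arange(fs) / fs  # one second of samples
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

# The FFT separates the mixture into its component frequencies,
# roughly what the cochlea does mechanically along its length
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

peaks = freqs[spectrum > len(signal) / 8]
print(list(peaks))  # [440.0, 1000.0]
```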
"Talking up your ear is creepy" - seems like the prof hasn't been introduced to ASMR
ASMR is creepy!
@Zalmoksis44 I agree, but it still gives a nice relaxation.
22:30
40:40
🕊🌎🕊🕊
I wonder what type of people are following through with the lectures. The first one had like 200k views I think, the ones in the middle were 7k...
What are you like? Please reply to this comment!
I'm 24, design/engineering student, Brazil, I enjoy painting and creating music mashups.
Hello there! My name is Belen, I’m a 23 y/o biological engineer from Perú ! I absolutely love these lectures and I love how the studies are presented and how students are motivated to think critically about them. I feel lucky to have the rare opportunity to learn all of this for free.
I’m also a sci fi short story writer and im trying to learn digital painting to be able to add visual artistry to the short stories (^-^)
You sound like a really interesting and curious guy! Keep doing what you do and take care :)
Hi! I am 33 y.o English teacher and psychologist
A year late but anyways; I'm 21, physics student in Germany, also interested in informatics and neuroscience
♾️🦋
So there is the answer to an algorithm
Wow.
In Portuguese we have the word "confundir", which is a lot more of a common word than "confound" in English! I just commented this because I've used English for 14 years and had never directly translated the word "confundir".
Confound is a specific term used in scientific literature. I'm not completely certain of its meaning, but I'm still pretty confident. In academic English, confounding means the following:
something has happened and a researcher intends to map the causal relations, i.e., what caused the thing that happened. But instead of choosing the right cause, they choose the wrong one, thinking it's the right one.
The wrong cause would be called a confounding variable, because it appears to the researcher as the cause of something when in reality it's not. For example, you can have the following situation: "Students who have trouble understanding mathematics disturb the class."
You may think that having trouble with mathematics causes the students to disturb the class, so you map a causal relation between struggling with mathematics and disturbing the class. But it may be that they only disturb the class if they don't get help; then the real cause is not having trouble understanding mathematics, but not getting help.
Having trouble understanding mathematics would be the confounding variable in this case, because it only creates the conditions in which the real cause - not getting help - can take place.
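The classroom example above can be simulated. This is a toy sketch with entirely hypothetical numbers: "struggling with math" only leads to disturbing the class through the real cause, "not getting help", so the naive correlation vanishes once help is accounted for:

```python
import random

random.seed(0)

# Simulate 1000 hypothetical students
students = []
for _ in range(1000):
    struggles = random.random() < 0.5
    gets_help = random.random() < 0.5
    disturbs = struggles and not gets_help  # real cause: no help
    students.append((struggles, gets_help, disturbs))

# Naive analysis: struggling appears to predict disturbing the class...
strugglers = [s for s in students if s[0]]
rate_strugglers = sum(s[2] for s in strugglers) / len(strugglers)

# ...but among strugglers who get help, the effect vanishes
helped = [s for s in strugglers if s[1]]
rate_helped = sum(s[2] for s in helped) / len(helped)

print(rate_strugglers)  # roughly 0.5
print(rate_helped)      # 0.0
```

Conditioning on the real cause (help) makes the apparent effect of the confounding variable disappear.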
So attractive!
Beautiful woman