Jordan Boyd-Graber
  • 405
  • 1,195,072
What Makes for a Bad Question? [Lecture]
This is a single lecture from a course. If you like the material and want more context (e.g., the lectures that came before), check out the whole course: users.umiacs.umd.edu/~jbg/teaching/CMSC_470/ (Including homeworks and reading.)
What's a "Manchester" question? czcams.com/video/9peCcstwGtU/video.html
Adversarial questions: czcams.com/video/ipLDqWhobN8/video.html
Spiel des Wissens: www.amazon.de/Jumbo-Spiele-Spiel-des-Wissens/dp/B09J18F6HV
Answer equivalence: aclanthology.org/2021.emnlp-main.757/
Music: soundcloud.com/alvin-grissom-ii/review-and-rest
Views: 216

Video

Don’t Cheat with ChatGPT in my class! [Rant]
456 views · 3 months ago
This is a single lecture from a course. If you like the material and want more context (e.g., the lectures that came before), check out the whole course: users.umiacs.umd.edu/~jbg/teaching/CMSC_470/ (Including homeworks and reading.) Music: soundcloud.com/alvin-grissom-ii/review-and-rest
Is ChatGPT AI? Is it NLP? [Lecture]
538 views · 4 months ago
This is a single lecture from a course. If you like the material and want more context (e.g., the lectures that came before), check out the whole course: users.umiacs.umd.edu/~jbg/teaching/CMSC_470/ (Including homeworks and reading.) Music: soundcloud.com/alvin-grissom-ii/review-and-rest
What made ChatGPT Possible? [Lecture]
477 views · 4 months ago
This is a single lecture from a course. If you like the material and want more context (e.g., the lectures that came before), check out the whole course: users.umiacs.umd.edu/~jbg/teaching/CMSC_470/ (Including homeworks and reading.) Music: soundcloud.com/alvin-grissom-ii/review-and-rest
Are two Heads better than One also True for Large Language Models [Research]
234 views · 4 months ago
More details: users.umiacs.umd.edu/~jbg/docs/2023_findings_more.pdf
HackAPrompt Best Theme Paper Presentation at EMNLP 2023 [Research]
258 views · 4 months ago
Read the paper: umiacs.umd.edu/~jbg/docs/2023_emnlp_hackaprompt.pdf Project webpage: paper.hackaprompt.com/
What Makes for a Good Video Presentation: The Best ACL 2023 Videos
409 views · 4 months ago
Full list of the 2023 ACL Best Papers Here: 2023.aclweb.org/program/best_papers/ 6:21 - When Not to Trust Language Models: Investigating Effectiveness of Parametric and Non-Parametric Memories Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Daniel Khashabi, and Hannaneh Hajishirzi 12:07 - KILM: Knowledge Injection into Encoder-Decoder Language Models Yan Xu, Mahdi Namazifar, Devamanyu Haza...
How would you feel if this video's title didn't match what it's about? [Research]
191 views · 5 months ago
Full paper: arxiv.org/pdf/2310.13859.pdf
Helping Computers add "A Little Extra" when they Translate [Research]
162 views · 5 months ago
Research talk for our paper: Automatic Explicitation to Bridge the Background Knowledge Gap in Translation and its Evaluation with Multilingual QA arxiv.org/pdf/2312.01308.pdf Presented at EMNLP 2023
How is a good Question like an NP-Complete Problem? [Lecture]
176 views · 6 months ago
This is a single lecture from a course. If you like the material and want more context (e.g., the lectures that came before), check out the whole course: umiacs.umd.edu/~jbg/teaching/CMSC_848/ (Including homeworks and reading.) Music: soundcloud.com/alvin-grissom-ii/review-and-rest
How can Trivia Games around the World improve AI? [Lecture]
172 views · 6 months ago
This is a single lecture from a course. If you like the material and want more context (e.g., the lectures that came before), check out the whole course: umiacs.umd.edu/~jbg/teaching/CMSC_848/ (Including homeworks and reading.) Manchester Paradigm: czcams.com/video/JcNpiD4odT0/video.html ABC Dataset: www.nlp.ecei.tohoku.ac.jp/projects/jaqket/ ABC Competition: abc-dive.com/portal/ Manchester...
QA is not one size fits all: Getting different answers to the same question from an AI [Lecture]
163 views · 6 months ago
This is a single lecture from a course. If you like the material and want more context (e.g., the lectures that came before), check out the whole course: umiacs.umd.edu/~jbg/teaching/CMSC_848/ (Including homeworks and reading.) Ambiguous Question and Natural Questions Five Years Later: czcams.com/video/ZUN0GkaekHw/video.html Vector Retrieval Review: czcams.com/video/A5ounv0D_cs/video.html S...
My video making process (what not to do) [Lecture]
148 views · 7 months ago
This is a single lecture from a course. If you like the material and want more context (e.g., the lectures that came before), check out the whole course: boydgraber.org/teaching/CMSC_723/ (Including homeworks and reading.) Teleprompters: czcams.com/video/YeRu4xYH_W0/video.html Editing Tutorial: czcams.com/video/keoszhf4DZ8/video.html Music: soundcloud.com/alvin-grissom-ii/review-and-rest
Do iid NLP Data Exist? [Lecture]
263 views · 8 months ago
This is a single lecture from a course. If you like the material and want more context (e.g., the lectures that came before), check out the whole course: boydgraber.org/teaching/CMSC_723/ (Including homeworks and reading.) Why I call GPT Muppet Models: czcams.com/video/u0DgoRVLTE8/video.html NLI Artifacts: aclanthology.org/N18-2017/ Hypothesis-only baselines: aclanthology.org/S18-2023.pdf P...
Natural Questions: Google's QA Dataset Five Years Later and Why it's Impossible Today [Lecture]
224 views · 8 months ago
SQuAD Paper: aclanthology.org/D16-1264 NQ Paper: aclanthology.org/Q19-1026/ Cheater’s Bowl: aclanthology.org/2022.findings-emnlp.266/ AmigQA: aclanthology.org/2020.emnlp-main.466/ Lightbulb: aclanthology.org/2021.acl-long.304.pdf CREPE: aclanthology.org/2023.acl-long.583.pdf Overlap: aclanthology.org/2021.eacl-main.86.pdf QA Datasets: czcams.com/video/p8tnM1_waQ8/video.html This is a single lec...
The Fulfilling Straight Line Mission (from a Computer Science Perspective) [Rant]
208 views · 8 months ago
Update: Why you should call Large Language Models Muppet Models [Rant]
872 views · 8 months ago
Academic Conferences' Dark Secret and Why Virtual Conferences will never Improve [Rant]
235 views · 8 months ago
How to Know if Your Language is Broken [Rant]
197 views · 9 months ago
What I expect from TAs in my Course [Lecture]
282 views · 9 months ago
Recurrent Neural Networks as Language Models and the two Tricks that Made them Work [Lecture]
1.4K views · a year ago
Explaining Recurrent Neural Networks through a silly Word-Counting Sentiment Example [Lecture]
713 views · a year ago
What general term should you use for models like BERT and GPT? [Rant]
1.5K views · a year ago
No, CICERO has not "mastered" Diplomacy [Rant]
1.1K views · a year ago
Can ChatGPT and You.com answer questions I thought no AI can answer? [Rant]
1K views · a year ago
How to read my course webpage [Lecture]
514 views · a year ago
Why I Teach Using a Flipped Classroom and How it Works [Lecture]
432 views · a year ago
Cheater's Bowl: Human vs. Computer Search Strategies for Open-Domain QA [Research]
374 views · a year ago
Learning to Explain Selectively, EMNLP 2022 [Research]
526 views · a year ago
Re-Examining Calibration: The Case of Question Answering [Research]
521 views · a year ago

Comments

  • @michaelmoore7568
    @michaelmoore7568 · 3 days ago

    As much as I hate LLMs... do LLMs use Chinese Restaurant Processes and/or Kneser-Ney?

    • @JordanBoydGraber
      @JordanBoydGraber · 2 days ago

      Not really, this is older technology to relate similar contexts together. Modern LLMs (or Muppet Models, as I like to call them) use continuous representations to do that.
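For readers who have not seen the older technology the reply mentions, here is a minimal sketch of interpolated Kneser-Ney smoothing for bigrams. The toy corpus and the simplifications (fixed discount, no unknown-word handling) are mine, not from the video:

```python
from collections import Counter

def kneser_ney_bigram(tokens, d=0.75):
    """Interpolated Kneser-Ney for bigrams: discount observed counts, then
    back off to a continuation probability (in how many distinct contexts
    a word appears) rather than raw unigram frequency."""
    big_c = Counter(zip(tokens, tokens[1:]))
    hist_c = Counter(tokens[:-1])                 # count of each history u
    cont_c = Counter(w for (_, w) in big_c)       # distinct contexts per word
    followers = Counter(u for (u, _) in big_c)    # distinct continuations of u
    n_types = len(big_c)                          # number of bigram types

    def prob(w, u):
        discounted = max(big_c[(u, w)] - d, 0) / hist_c[u]
        backoff_weight = d * followers[u] / hist_c[u]
        return discounted + backoff_weight * cont_c[w] / n_types

    return prob

corpus = "the cat sat on the mat and the dog sat on the rug".split()
p = kneser_ney_bigram(corpus)
total = sum(p(w, "the") for w in set(corpus))  # a proper distribution: sums to 1
```

Modern LLMs replace these discrete count tables with continuous vectors, so similar (but not identical) contexts share statistical strength automatically.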

  • @maryam2677
    @maryam2677 · 20 days ago

    Perfect! Thank you so much.

  • @RajivSambasivan
    @RajivSambasivan · 26 days ago

    Thanks, that was informative. Learned something.

  • @420_gunna
    @420_gunna · a month ago

    I haven't finished the video, so apologies if you cover it, but in the 2023 CS224N NLP lecture on coreference resolution, Chris Manning introduces the (very complicated and demoralizing, to me) Hobbs algorithm, and then basically says something like "Hobbs HIMSELF said publicly that he didn't like the algorithm, and often pointed to it as an example of how we clearly needed something better."

  • @amoghmishra9222
    @amoghmishra9222 · a month ago

    Synthetic data generation has become so easy now thanks to LLMs!

  • @exploreyourdreamlife
    @exploreyourdreamlife · 2 months ago

    Your video has sparked a meaningful conversation. How has being a young-onset Parkinson's patient shaped Jessica's perspective on life? As the host of a dream interpretation channel, I'm curious to explore how her experiences with Parkinson's influence her dreams and subconscious mind. I truly appreciate the opportunity to learn more about Jessica's journey, and I've already liked and subscribed to the channel for more insightful content like this.

  • @donfeto7636
    @donfeto7636 · 2 months ago

    13:11 There is a mistake in the last line: it should be t(e1,f2) * ( t(e2,f0) + t(e2,f1) + t(e2,f2) ); the slide duplicates f2.

  • @user-nm8tj4rh2t
    @user-nm8tj4rh2t · 3 months ago

    Jordan is soooooooo cool ...🤭 I really want to meet you at the NLP conference ...!!!!

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w · 3 months ago

    Can you share this video with the president of Harvard? I don’t think she got the message. Yet somehow DEI still think it was okay for her to cheat. DEI is accusing everyone of racism.

  • @RajivSambasivan
    @RajivSambasivan · 3 months ago

    Awesome, can't believe guys tried doing this in your class. This is like committing a burglary and leaving a confession note and a business card. This is really funny.

    • @JordanBoydGraber
      @JordanBoydGraber · 3 months ago

      Not just that. I'm not sure what the right analogy is, but it's that *plus*: trying to rob the safe company, the thieves' guild, or the police station.

  • @lianghuang3
    @lianghuang3 · 3 months ago

    thanks for using my slides! :)

  • @gametimewitharyan6665
    @gametimewitharyan6665 · 3 months ago

    My book mentioned continuous and discrete data but did not explain anything. Your video clarified it so well for me. Thanks a lot!!!

    • @JordanBoydGraber
      @JordanBoydGraber · 3 months ago

      You're welcome! Glad to be of help. This is an old video (pre-neural revolution), I just went through it again and it holds up pretty well (except for my not-so-great green screen).

  • @sebastianM
    @sebastianM · 4 months ago

    Fire video after fire video with this guy. Incredible.

    • @JordanBoydGraber
      @JordanBoydGraber · 4 months ago

      If you're a human, thank you! If you're a bot, you're an excellent example of the technology in the video, so thank you for providing a real-world example. :)

  • @leslietetteh7292
    @leslietetteh7292 · 4 months ago

    Great intro video, and lovely coverage of the key concepts there. I listened to the guy credited with coming up with the transformer model, and I think in adjusting the word vectors to predict the next word in a sequence more effectively, it's also mapping phrases, sentences, ideas and concepts into multidimensional space, up to its input context length. So it ends up having what Isaac Asimov described as a "perceptual schematic" of the world, how everything relates to everything else, encoded in multidimensional space. Then all the behaviours it's trained to perform based on RLHF are possible because it has this initial perceptual schematic.

    • @JordanBoydGraber
      @JordanBoydGraber · 4 months ago

      Yes, but that schematic isn't a schematic (yet). It's just a vector space, which means that the exact meanings can get fuzzy. This association can only get us so far, which is why we're starting to see the technology's limits. Exciting to see what happens!

    • @leslietetteh7292
      @leslietetteh7292 · 4 months ago

      @JordanBoydGraber I'm not sure we are starting to see the technology's limits? I appreciate your breadth and depth of knowledge in the field, but all of the indications from these companies would appear to suggest that we're not close to approaching an asymptote with these models yet. I do think I know what you're saying, though, and I agree: what it has is a set of interrelated numbers, it has no actual "knowledge" per se; it's what it's trained to do with these interrelated numbers, really. I think the best analogy to get at what I'm saying is the vision transformer model. It starts off representing small patches of the image as vectors, like words, and has an associated positional encoding vector for each patch too. It learns not only to classify the entire image, and to cluster similar images in dimensional space when it classifies them, but it also learns positional encodings for each patch, adjusting them to orient each patch correctly in terms of the image, so it has a much better chance of classifying the whole image. I see the same with the language transformer model. It's adjusting vectors on a word level, but because it's using these word vectors to do something with the whole block of text, it's still learning to place the entire block of text, in one-word iterations, up to its context length, in certain positions in interrelated dimensional space, just like it does with images, even though it only has vectors for words, like it only has vectors for small image patches. Then further training helps it prune down this vast interrelation to a conceptual map (second part is just a theory from me here). I think there may be a limit with purely language-based models, but potentially the sky is the limit with multimodality. The constraining factor appears to be hardware at the moment, imo.

  • @dipaco_
    @dipaco_ · 4 months ago

    This is an amazing video. Very intuitive. Thank you.

  • @sebastianM
    @sebastianM · 4 months ago

    Incredible work. Sharing with my class.

  • @Kaassap
    @Kaassap · 5 months ago

    This was very helpful tyvm!

  • @yusufahmed2233
    @yusufahmed2233 · 5 months ago

    9:42 For Rm(H), what is the use of taking the expectation over all samples? As we saw previously (e.g., at 6:12), the calculation of the empirical Rademacher complexity does not use the true labels of the samples, just the sample size.

  • @grospipo20
    @grospipo20 · 5 months ago

    Interesting

  • @sebastianM
    @sebastianM · 6 months ago

    It's really wonderful when no-nonsense science communication comes with a generous helping of low-key courage. Dope.

  • @sebastianM
    @sebastianM · 6 months ago

    Really excellent like the other videos on this series. I am sharing the course with colleagues and hoping to go thru the syllabus in the Spring. Thank you for the excellent work, Prof!

  • @taofiqaiyeloja1820
    @taofiqaiyeloja1820 · 6 months ago

    Excellent

  • @sebastianM
    @sebastianM · 6 months ago

    This is incredible. Thanks!

  • @user-qx9cg5hx9w
    @user-qx9cg5hx9w · 7 months ago

    At 6:55 it is said that H(x, M) = sum(log(M(xi))), but according to the definition of cross entropy it should be H(P, Q) = sum(-1 * P(x) log(Q(x))), so are we assuming P(x) is always one when computing perplexity?

    • @JordanBoydGraber
      @JordanBoydGraber · 7 months ago

      This is a really good point. Typically when you evaluate perplexity you have one document that somebody actually wrote. E.g., you're computing the perplexity of the lyrics of "ETA". In that case we have a particular sequence of words: given the prefix "He's been totally", the probability of x_t = "lying" is one and everything else is zero. For some generative AI applications this might not be true; e.g., for machine translation you might have multiple references. Thanks for catching this unstated assumption!
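Under that unstated assumption (the reference distribution puts probability one on the word the author actually wrote), perplexity reduces to the exponentiated average negative log probability of the observed tokens. A minimal sketch:

```python
import math

def perplexity(token_probs):
    """Perplexity of one observed sequence, where token_probs[t] is the
    model's probability M(x_t | x_<t) of the token that actually occurred.
    Because the reference puts mass 1 on that token, cross-entropy reduces
    to the average negative log probability."""
    avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_nll)

# A model that gives every observed token probability 1/4 is exactly as
# "perplexed" as a uniform choice among four words.
uniform4 = perplexity([0.25, 0.25, 0.25, 0.25])
```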

  • @AlbinAichberger
    @AlbinAichberger · 7 months ago

    Excellent interview. Excellent YT Channel, thank you!

  • @heyman620
    @heyman620 · 7 months ago

    That's just a brilliant video, I appreciate the fact that your videos always introduce an uncommon point of view that still makes a lot of sense.

  • @jeromeeusebius
    @jeromeeusebius · 7 months ago

    Is there a link to the "Mark Riedl(?)" transformer diagram? Can't find it in the description.

  • @candlespotlight
    @candlespotlight · 7 months ago

    Amazing video!! I’m so glad you covered this. Your passion and enjoyment about the subject really comes through. Thanks so much for this ☺️

  • @tombuteux9294
    @tombuteux9294 · 8 months ago

    should equation 6) be: 2e^(-epsilon*m/2)? This is because the chance of sampling from the whole highlighted region is epsilon, so the probability of sampling from a specific region is epsilon/2? Thank you for the great lecture!

    • @andyvon034
      @andyvon034 · 8 months ago

      Yes I think so too, epsilon/2 for each side
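The correction in this thread matches the standard PAC argument for interval learning (assuming that is the setting of the slide): each boundary region is chosen to carry probability mass $\epsilon/2$, and using $1 - x \le e^{-x}$,

```latex
\Pr\left[\text{all } m \text{ samples miss one region}\right]
  = \left(1 - \frac{\epsilon}{2}\right)^{m} \le e^{-\epsilon m/2},
\qquad
\Pr\left[\text{some region is missed}\right] \le 2\,e^{-\epsilon m/2}
```

where the final factor of 2 comes from a union bound over the two regions.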

  • @dundeedideley1773
    @dundeedideley1773 · 8 months ago

    Cool idea! Other rating ideas: how evenly does the straight line cut the country into two pieces? Are they the same size? Same population on each side of the line? This way you can allow for easy countries and hard countries, where you can score the "even" dissection of countries irrespective of how long the line is. Also a hint: your microphone has some awful automatic gain setting or something, where all the quiet sounds are amplified and all the loud sounds are quieted down, so your tiniest breathing in is the same volume as your loudest talking bits. It's really annoying.

    • @JordanBoydGraber
      @JordanBoydGraber · 8 months ago

      1) I like the population bisection idea. It's obviously easier to go through less popular areas. 2) Thanks for mentioning that, it's easy to tune these sorts of things out.

  • @kwesicobbina9207
    @kwesicobbina9207 · 8 months ago

    Loved this video 😅 for some reason ❤

    • @JordanBoydGraber
      @JordanBoydGraber · 8 months ago

      Thanks! Good to know. Perhaps I'll do more things like this. Not relevant to any of my classes, really, but I enjoyed doing it.

  • @mungojelly
    @mungojelly · 8 months ago

    The name muppet models is super cute, but alas the perspective that muppet models just make stuff up is misplaced: true in some ways, but also dangerously wrong. They do get things wrong or out of place when speaking off the top of their head, but statistically far less than humans already do. The confusion is that they're so much better at talking than humans that they can give almost-accurate, coherent essays about stuff completely off the top of their heads, while a human would just be saying "uhhhhh". If you give them the equivalent of a human salary worth of compute, they can also check the accuracy of things a zillion times better than any human could ever check.

  • @DomCim
    @DomCim · 8 months ago

    Dance your cares away <clap><clap> Worries for another day <clap><clap>

    • @JordanBoydGraber
      @JordanBoydGraber · 8 months ago

      Sing und schwing das Bein, <klatschen> lass die Sorgen Sorgen sein. (German Fraggle Rock theme: "Sing and swing your leg, let worries be worries"; <klatschen> = <clap>.)

  • @darkskyinwinter
    @darkskyinwinter · 8 months ago

    It's canon now.

  • @shakedg2956
    @shakedg2956 · 8 months ago

    You don't have enough views.

  • @JordanBoydGraber
    @JordanBoydGraber · 9 months ago

    Yuval Pinter makes the excellent point that I shouldn't conflate "writing system" and "language". Indeed, this video should have been titled "How to Know if Your Writing System is Broken". See more in their excellent position paper on the subject: aclanthology.org/2023.cawl-1.1/

  • @jayronfinan
    @jayronfinan · 9 months ago

    Lol what was that short powermark on question 32

  • @triton62674
    @triton62674 · 9 months ago

    Interesting way of presenting while teaching, never seen this method before!

  • @giacomofrascarelli8853
    @giacomofrascarelli8853 · 9 months ago

    Loved the lecture

  • @Dnlrmrez
    @Dnlrmrez · 10 months ago

    Stop teleporting!!!

    • @JordanBoydGraber
      @JordanBoydGraber · 10 months ago

      Thanks (honestly) for the feedback. I was trying out a new multicam setup and I agree that it didn't work out as well as I would have hoped.

  • @JordanBoydGraber
    @JordanBoydGraber · 10 months ago

    Content starts at 3:13

  • @Lannd84
    @Lannd84 · 10 months ago

    Thank you master

  • @bhesht
    @bhesht · 11 months ago

    Splendid explanation!

  • @Saiju.
    @Saiju. · 11 months ago

    I need to classify Google reviews into n categories. Will this model give me 💯 accurate results? I am worried about the accuracy 😮

    • @JordanBoydGraber
      @JordanBoydGraber · 11 months ago

      No, probably not! Indeed, 100% accuracy of a model is only possible if you have training data where the annotators are 100% consistent, which is usually not the case for any interesting data. However, in settings where you don't have a lot of training data, topic models can help you figure out what the labels should be and can sometimes be a useful feature when you do build a classifier: mimno.infosci.cornell.edu/papers/baumer-jasist-2017.pdf

  • @creativeuser9086
    @creativeuser9086 · a year ago

    What if the calibration error is not uniform across all topics? Then we cannot use generalized bins, since sometimes the model would output the correct accuracy estimate, right?

    • @JordanBoydGraber
      @JordanBoydGraber · a year ago

      Yes, that's right. This is a problem in similar models like item response theory or ideal point models. Then you can imagine the calibration term being a vector that has weightings depending on how prominent a given topic is in a given example. Hoping to get a video about this out for my fall seminar.
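To make the question concrete, here is a toy (entirely hypothetical) illustration of why a single aggregate estimate can hide per-topic miscalibration: standard expected calibration error (ECE) is zero on the pooled data below even though each topic is off by 0.25.

```python
def ece(confidences, corrects, n_bins=10):
    """Expected calibration error: bin predictions by confidence, then take
    the size-weighted average gap between mean confidence and accuracy."""
    bins = [[] for _ in range(n_bins)]
    for conf, corr in zip(confidences, corrects):
        bins[min(int(conf * n_bins), n_bins - 1)].append((conf, corr))
    err = 0.0
    for b in bins:
        if b:
            avg_conf = sum(c for c, _ in b) / len(b)
            accuracy = sum(r for _, r in b) / len(b)
            err += len(b) / len(confidences) * abs(avg_conf - accuracy)
    return err

# Hypothetical model: overconfident on topic A, underconfident on topic B.
topic_a = ([0.75] * 8, [1] * 4 + [0] * 4)   # 75% confident, 50% correct
topic_b = ([0.75] * 8, [1] * 8)             # 75% confident, 100% correct
pooled = (topic_a[0] + topic_b[0], topic_a[1] + topic_b[1])
```

Pooled, the accuracy in the 0.75 bin is exactly 0.75, so ECE reports perfect calibration; computed per topic, both topics are miscalibrated, which is what motivates topic-weighted calibration terms.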

  • @MiladAmiri
    @MiladAmiri · a year ago

    Thank you for the video, would you be able to share a link to Mark Riedl paper showing the detailed architecture of transformers?

    • @JordanBoydGraber
      @JordanBoydGraber · a year ago

      Sadly, no. He tweeted the image out, but he's not sure if he's allowed to make the underlying materials freely available. (I asked, because it's so awesome.) You can always sign up for GATech's online education offerings! I'm sure it's higher quality than the stuff I give out for free on CZcams. I'm really jealous of their support for teachers offering courses online.

  • @creativeuser9086
    @creativeuser9086 · a year ago

    Hey Jordan, there’s a product called gptzero which says it can detect whether a certain piece of text is generated by an LLM like ChatGPT or not, they do that by outputting a perplexity score for the text you provide and assume that it’s most likely LLM-generated if it has a low PPL. My question: how can they calculate the PPL score if they do not have the actual model that can give them the entropy loss?

    • @JordanBoydGraber
      @JordanBoydGraber · a year ago

      Great question! I have no inside information and the project isn't open source, but from what I've seen, I think it looks at a corpus of generated text, computes a perplexity estimate from that, and then uses that to make the decision. An alternate approach without access to the model weights, but with access to the model as a black box, would be to compute an empirical distribution and then use that for the perplexity calculation. I haven't seen any evidence that it's doing that, though. I think a better approach would be to use watermarking, but that requires the cooperation of those making the language models.

    • @creativeuser9086
      @creativeuser9086 · a year ago

      @@JordanBoydGraber But I'm trying to wrap my mind around what you said regarding computing a perplexity estimate from a corpus of generated text. Not sure I get what you meant, and I would appreciate it if you'd expand a little.

    • @JordanBoydGraber
      @JordanBoydGraber · a year ago

      @@creativeuser9086 This used to be the (only) way you built language models back in the day! Here's the simplest way to do it: czcams.com/video/rO80BH5FI3s/video.html Here's a more complex way to do it: czcams.com/video/4wa2WyDrgMA/video.html In any event, you have text that comes in and then you have a probability distribution over sequences that comes out. This would be one way of estimating the probability distribution; then you could see how likely or unlikely a query text is with respect to that model.

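As a concrete (hypothetical) version of the reply above, "estimate a distribution from a corpus, then score a query text against it": a smoothed bigram model assigns lower perplexity to text resembling its training corpus than to unrelated text. This is only a sketch of the classical idea, not GPTZero's actual method; the corpora and names here are invented for illustration.

```python
import math
from collections import Counter

def train_bigram_lm(tokens, alpha=1.0):
    """Add-alpha-smoothed bigram model estimated from a reference corpus."""
    vocab = set(tokens) | {"<unk>"}
    big = Counter(zip(tokens, tokens[1:]))
    hist = Counter(tokens[:-1])
    V = len(vocab)

    def prob(w, prev):
        w = w if w in vocab else "<unk>"
        prev = prev if prev in vocab else "<unk>"
        return (big[(prev, w)] + alpha) / (hist[prev] + alpha * V)

    return prob

def bigram_perplexity(prob, tokens):
    nll = -sum(math.log(prob(w, u)) for u, w in zip(tokens, tokens[1:]))
    return math.exp(nll / (len(tokens) - 1))

reference = "the model writes smooth text and the model repeats itself".split()
lm = train_bigram_lm(reference)
similar = bigram_perplexity(lm, "the model writes text".split())
unrelated = bigram_perplexity(lm, "quantum weasels juggle flaming chainsaws".split())
# similar < unrelated: in-distribution text is less surprising to the model.
```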
  • @hengzhezhang6507
    @hengzhezhang6507 · a year ago

    Super clear!😀😀

  •  a year ago

    You are a great teacher! Love your videos. And I am so happy that you made the whole course available online!

  • @omarthefabulous9967

    Hello, may I ask: if I study computational linguistics, will I find good job opportunities and a good salary, or would it be better to take general linguistics?

    • @brunogatti383
      @brunogatti383 · 3 months ago

      Join the cult, we're all going to starve together bro