Stanford Online
Information Session: Stanford Graduate Degrees, Certificates, and Courses I 2024
Looking to enhance your knowledge and build expertise in your career? Consider enrolling in a graduate course or program offered through Stanford Online! In this online information session you will hear more about our portfolio of graduate program options, what you can expect to experience, and key information to know before enrolling. Stanford Online allows students to take Stanford University graduate courses remotely: online students take the exact same courses as on-campus students, and on-campus lectures are recorded and available to stream.
The session includes:
Overview of what you can expect while taking a graduate course
Key information about applying and enrolling
Q+A from the audience
Learn more about Stanford Online's graduate education options: online.stanford.edu/graduate-education
#gradschool #graduateprogram #onlineeducation
views: 834

Video

Information Session: Leading People, Culture, and Innovation Program
762 views · 9 hours ago
Ready to level up your leadership skills? Check out our new Leading People, Culture, and Innovation Program: online.stanford.edu/programs/leading-people-culture-and-innovation-program Led by Assistant Director of Creativity and Innovation Programs, Robyn Woodman, this session provides an engaging dive into the core curriculum and unique benefits of the program. Whether you're a seasoned executi...
Stanford CS25: V4 I Overview of Transformers
29K views · 12 hours ago
April 4, 2024 Steven Feng, Stanford University [styfeng.github.io/] Div Garg, Stanford University [divyanshgarg.com/] Emily Bunnapradist, Stanford University [www.linkedin.com/in/ebunnapradist/] Seonghee Lee, Stanford University [shljessie.github.io/] Brief intro and overview of the history of NLP, Transformers and how they work, and their impact. Discussion about recent trends, breakthroughs, ...
Stanford Seminar - Towards Safe and Efficient Learning in the Physical World
1.8K views · 21 hours ago
April 5, 2024 Andreas Krause of ETH Zurich How can we enable agents to efficiently and safely learn online, from interaction with the real world? I will first present safe Bayesian optimization, where we quantify uncertainty in the unknown objective and constraints, and, under some regularity conditions, can guarantee both safety and convergence to a natural notion of reachable optimum. I will ...
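
For readers new to the setup, here is a minimal sketch of the kind of safe Bayesian optimization loop described in this abstract: fit Gaussian-process surrogates to the unknown objective and constraint, and only evaluate candidates whose pessimistic (lower-confidence) constraint estimate is still safe, while picking optimistically among them. The toy objective, constraint, kernel, and confidence parameter below are illustrative assumptions, not the method from the talk.

```python
# Illustrative safe Bayesian optimization loop (toy problem; not Krause's algorithm).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def hidden_objective(x):           # unknown function we want to maximize
    return -(x - 0.3) ** 2

def hidden_safety(x):              # unknown constraint; "safe" means value >= 0
    return 0.5 - np.abs(x - 0.4)

X = np.array([[0.35], [0.45]])     # known-safe seed points
y_obj = hidden_objective(X).ravel()
y_saf = hidden_safety(X).ravel()
candidates = np.linspace(0.0, 1.0, 201).reshape(-1, 1)
beta = 2.0                         # confidence-interval width

for _ in range(10):
    gp_obj = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-6).fit(X, y_obj)
    gp_saf = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-6).fit(X, y_saf)
    mu_o, sd_o = gp_obj.predict(candidates, return_std=True)
    mu_s, sd_s = gp_saf.predict(candidates, return_std=True)
    safe = (mu_s - beta * sd_s) >= 0.0                 # pessimistic safety check
    if not safe.any():
        break
    ucb = np.where(safe, mu_o + beta * sd_o, -np.inf)  # optimistic objective among safe points
    x_next = candidates[np.argmax(ucb)]
    X = np.vstack([X, [x_next]])                       # evaluate and add the new observation
    y_obj = np.append(y_obj, hidden_objective(x_next))
    y_saf = np.append(y_saf, hidden_safety(x_next))

print("best safe point found:", X[np.argmax(y_obj)])
```
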
Stanford EE274: Data Compression I 2023 I Lecture 18 - Video Compression
903 views · 1 day ago
To follow along with the course, visit the course website: stanforddatacompressionclass.github.io/Fall23/ Tsachy Weissman Professor of Electrical Engineering at Stanford University web.stanford.edu/~tsachy/ Shubham Chandak shubhamchandak94.github.io/ Pulkit Tandon Learn more about the online course and how to enroll: online.stanford.edu/courses/ee274-data-compression-theory-and-applications To ...
Stanford EE274: Data Compression I 2023 I Lecture 8 - Beyond IID distributions: Conditional entropy
481 views · 1 day ago
To follow along with the course, visit the course website: stanforddatacompressionclass.github.io/Fall23/ Tsachy Weissman Professor of Electrical Engineering at Stanford University web.stanford.edu/~tsachy/ Shubham Chandak shubhamchandak94.github.io/ Pulkit Tandon Learn more about the online course and how to enroll: online.stanford.edu/courses/ee274-data-compression-theory-and-applications To ...
Stanford EE274: Data Compression I 2023 I Lecture 5 - Asymptotic Equipartition Property
604 views · 1 day ago
To follow along with the course, visit the course website: stanforddatacompressionclass.github.io/Fall23/ Tsachy Weissman Professor of Electrical Engineering at Stanford University web.stanford.edu/~tsachy/ Shubham Chandak shubhamchandak94.github.io/ Pulkit Tandon Learn more about the online course and how to enroll: online.stanford.edu/courses/ee274-data-compression-theory-and-applications To ...
Stanford EE274: Data Compression I 2023 I Lecture 3 - Kraft Inequality, Entropy, Introduction to SCL
638 views · 1 day ago
To follow along with the course, visit the course website: stanforddatacompressionclass.github.io/Fall23/ Tsachy Weissman Professor of Electrical Engineering at Stanford University web.stanford.edu/~tsachy/ Shubham Chandak shubhamchandak94.github.io/ Pulkit Tandon Learn more about the online course and how to enroll: online.stanford.edu/courses/ee274-data-compression-theory-and-applications To ...
Stanford EE274: Data Compression I 2023 I Lecture 11 - Lossy Compression Basics; Quantization
228 views · 1 day ago
To follow along with the course, visit the course website: stanforddatacompressionclass.github.io/Fall23/ Tsachy Weissman Professor of Electrical Engineering at Stanford University web.stanford.edu/~tsachy/ Shubham Chandak shubhamchandak94.github.io/ Pulkit Tandon Learn more about the online course and how to enroll: online.stanford.edu/courses/ee274-data-compression-theory-and-applications To ...
Stanford EE274: Data Compression I 2023 I Lecture 6 - Arithmetic Coding
210 views · 1 day ago
To follow along with the course, visit the course website: stanforddatacompressionclass.github.io/Fall23/ Tsachy Weissman Professor of Electrical Engineering at Stanford University web.stanford.edu/~tsachy/ Shubham Chandak shubhamchandak94.github.io/ Pulkit Tandon Learn more about the online course and how to enroll: online.stanford.edu/courses/ee274-data-compression-theory-and-applications To ...
Stanford EE274: Data Compression I 2023 I Lecture 16 - Learnt Image Compression
212 views · 1 day ago
To follow along with the course, visit the course website: stanforddatacompressionclass.github.io/Fall23/ Tsachy Weissman Professor of Electrical Engineering at Stanford University web.stanford.edu/~tsachy/ Shubham Chandak shubhamchandak94.github.io/ Pulkit Tandon Learn more about the online course and how to enroll: online.stanford.edu/courses/ee274-data-compression-theory-and-applications To ...
Stanford EE274: Data Compression I 2023 I Lecture 1 - Course Intro, Lossless Data Compression Basics
1.6K views · 1 day ago
To follow along with the course, visit the course website: stanforddatacompressionclass.github.io/Fall23/ Tsachy Weissman Professor of Electrical Engineering at Stanford University web.stanford.edu/~tsachy/ Shubham Chandak shubhamchandak94.github.io/ Pulkit Tandon Learn more about the online course and how to enroll: online.stanford.edu/courses/ee274-data-compression-theory-and-applications To ...
Stanford EE274: Data Compression I 2023 I Lecture 17 - Humans and Compression
303 views · 1 day ago
To follow along with the course, visit the course website: stanforddatacompressionclass.github.io/Fall23/ Tsachy Weissman Professor of Electrical Engineering at Stanford University web.stanford.edu/~tsachy/ Shubham Chandak shubhamchandak94.github.io/ Pulkit Tandon Learn more about the online course and how to enroll: online.stanford.edu/courses/ee274-data-compression-theory-and-applications To ...
Stanford EE274: Data Compression I 2023 I Lecture 2 - Prefix Free Codes
369 views · 1 day ago
To follow along with the course, visit the course website: stanforddatacompressionclass.github.io/Fall23/ Tsachy Weissman Professor of Electrical Engineering at Stanford University web.stanford.edu/~tsachy/ Shubham Chandak shubhamchandak94.github.io/ Pulkit Tandon Learn more about the online course and how to enroll: online.stanford.edu/courses/ee274-data-compression-theory-and-applications To ...
Stanford EE274: Data Compression I 2023 I Lecture 9 - Context-based AC & LLM Compression
196 views · 1 day ago
To follow along with the course, visit the course website: stanforddatacompressionclass.github.io/Fall23/ Tsachy Weissman Professor of Electrical Engineering at Stanford University web.stanford.edu/~tsachy/ Shubham Chandak shubhamchandak94.github.io/ Pulkit Tandon Learn more about the online course and how to enroll: online.stanford.edu/courses/ee274-data-compression-theory-and-applications To ...
Stanford EE274: Data Compression I 2023 I Lecture 4 - Huffman Codes
328 views · 1 day ago
Stanford EE274: Data Compression I 2023 I Lecture 7 - ANS
269 views · 1 day ago
Stanford EE274: Data Compression I 2023 I Lecture 10 - LZ and Universal Compression
196 views · 1 day ago
Stanford Seminar - Silicon Valley & The U.S. Government: DoD’s Office of Strategic Capital
1.4K views · 21 days ago
Stanford Seminar - Silicon Valley & The U.S. Government: Vannevar Lab's Brett Granberg
1.3K views · 21 days ago
Stanford Seminar - Silicon Valley & The U.S. Government: Anduril Industries’ Trae Stephens
577 views · 21 days ago
Stanford Seminar - Silicon Valley & The U.S. Government: Former U.S. JSOC Commander Scott Howell
653 views · 21 days ago
Stanford Seminar - Silicon Valley & The U.S. Government: The Honorable Sue Gordon
481 views · 21 days ago
Stanford Seminar - Silicon Valley & The U.S. Government: In-Q-Tel’s Steve Bowsher
455 views · 21 days ago
Stanford EE364A Convex Optimization I Stephen Boyd I 2023 I Lecture 18
1.4K views · 1 month ago
Stanford EE364A Convex Optimization I Stephen Boyd I 2023 I Lecture 17
1K views · 1 month ago
Stanford EE364A Convex Optimization I Stephen Boyd I 2023 I Lecture 16
1.2K views · 1 month ago
Stanford EE364A Convex Optimization I Stephen Boyd I 2023 I Lecture 15
1.1K views · 1 month ago
Stanford EE364A Convex Optimization I Stephen Boyd I 2023 I Lecture 14
1.3K views · 1 month ago
Stanford EE364A Convex Optimization I Stephen Boyd I 2023 I Lecture 13
1.4K views · 1 month ago

Comments

  • @KrishnanshAgarwal
    @KrishnanshAgarwal 38 minutes ago

    The 5 topics of this class: 1) Supervised Learning, 2) Machine Learning Strategy, 3) Deep Learning, 4) Unsupervised Learning, 5) Reinforcement Learning

  • @Muthumalai_Tiger_Reserve_Aana

    The audio is so bad..

  • @RishiKaura
    @RishiKaura 5 hours ago

    Sincere and smart students

  • @TheNewton
    @TheNewton 7 hours ago

    19:47 so is there a functional difference between calling the usage of softmax `attention` instead of the simpler word `search` beyond trying to be catchy?
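
For context, a minimal NumPy sketch of the scaled dot-product attention that timestamp refers to: the query-key similarity scores are the "search"-like part, and softmax turns them into soft, differentiable weights over the values. Shapes and names are illustrative.

```python
# Scaled dot-product attention: softmax over query-key similarities weights the values.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)    # similarity of each query to each key ("search")
    weights = softmax(scores)        # soft, differentiable selection
    return weights @ V               # weighted mixture of values

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))          # 4 queries, dimension 8
K = rng.normal(size=(6, 8))          # 6 keys
V = rng.normal(size=(6, 8))          # 6 values
print(attention(Q, K, V).shape)      # (4, 8)
```
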

  • @dukensonguerrier5369
    @dukensonguerrier5369 11 hours ago

    It’s mind-blowing how much overlap there is between startup material (lean startup & Y Combinator) and design thinking.

  • @ferencszalma7094
    @ferencszalma7094 14 hours ago

    0:00:30 Review: Excess risk bound; uniform convergence for 1) finite hypothesis class √{log|H|/n} 2) with p parameters √{p/n}; complexity measures of a hypothesis
    0:03:50 Limitations of these simple bounds
    0:06:40 This lecture: more elaborate bounds, based on √{complexity(Θ,P)/n}
    0:10:55 Def: Rademacher complexity, RC or R_n(H), a bound for uniform convergence
    0:13:50 Correlation of f and σ: output {f(z_i)} and {σ_i}, loss fn and σ
    0:15:00 High-complexity function family F
    0:18:55 Theorem: E_{~data}[sup_{h ∈ H}(training loss − population loss)] ≤ R_n(H)
    0:24:35 Interpretation of Rademacher complexity: how well the losses can correlate with random data
    0:25:30 Binary classification example, 0-1 loss: l_01
    0:32:15 Still binary: R_n(F) = 1/2 R_n(H); how well can H memorize randomized labels: h(x_i) = σ_i
    0:35:25 Intuition on R_n(F) = 1/2 R_n(H) (true only for binary with 0-1 loss)
    0:38:50 Connection between RC and degrees of freedom
    0:39:45 How to think about the family of hypotheses vs. the family of losses: same cardinality, but very different outputs, nonlinear loss fns (e.g. exponential)
    0:42:20 Symmetrization proof
    0:56:35 Removed expectation, removed randomness beyond that of the σ_i randomness
    1:01:25 RC can depend on the distribution P
    1:09:35 Empirical Rademacher complexity (ERC) R_S(F) for dataset S; concentration around R_n(F) with high probability 1−δ
    1:13:35 Theorem: R_n(F) bounded by ERC + √{log δ/n} with high probability (whp)
    1:15:45 Remark 1: the log δ term is tiny compared to ERC
    1:17:15 Remark 2: Rademacher complexities are translation invariant
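
For reference, the two central quantities in this outline written out (standard form; the lecture's constant conventions may differ):

```latex
% Rademacher complexity and the symmetrization bound sketched above.
\[
  R_n(\mathcal{F}) \;=\; \mathbb{E}_{z_{1:n},\,\sigma}\!\left[\,\sup_{f \in \mathcal{F}} \frac{1}{n}\sum_{i=1}^{n} \sigma_i\, f(z_i)\right],
  \qquad \sigma_i \overset{\text{iid}}{\sim} \mathrm{Unif}\{-1,+1\},
\]
\[
  \mathbb{E}\!\left[\sup_{h \in \mathcal{H}} \bigl(L(h) - \hat{L}_n(h)\bigr)\right] \;\le\; 2\,R_n(\ell \circ \mathcal{H}).
\]
```
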

  • @rudraprasaddash3809
    @rudraprasaddash3809 16 hours ago

    The funniest part comes when the prof talks about booking the flight 😂😂

  • @benjaminy.
    @benjaminy. 21 hours ago

    Hello Everyone! Thank you very much for uploading these materials. Cheers

  • @fgfanta
    @fgfanta 1 day ago

    Pity the resolution is so low; it is hard to read the code. Great content nevertheless!

  • @marcinkrupinski
    @marcinkrupinski 1 day ago

    Amazing stuff! Thank you for publishing this valuable material!

  • @ferencszalma7094
    @ferencszalma7094 1 day ago

    0:00:15 Rademacher complexity: RC ≤ formula for the linear model; classification; RC bound
    0:01:35 Binary classification setup
    0:03:15 Finite hypothesis class: Theorem: if every f ∈ F is bounded by M, then R_S(F) ≤ √{2M²log(|F|)/n} on S = {z_1,...,z_n}
    0:07:15 0-1 loss l_{01}((x,y),h) = 𝟙(y sgn[h(x)] ≤ 0); sign issue → margin idea to resolve it; apply to linear model
    0:09:15 Margin intuition: zero training error, perfect separation, can define margin(x) = y h_θ(x); dataset margin γ = min_i{y^(i) h_θ(x^(i))}
    0:16:15 Margin-dependent loss: l_γ((x,y),h) = l_γ(y h(x)), ramp/margin loss, γ margin region/size
    0:21:45 Ramp loss fn ≥ 0-1 loss fn: l_γ ≥ l_{01}
    0:24:10 Test loss L_γ ≤ training loss \hat L_γ + 2 R_S(F). Look for bounds on R_S(F)! F = loss fn space indexed by h_θ ∈ H
    0:26:20 Talagrand's lemma, R_S(F) vs R_S(H): R_S(l_γ∘H) ≤ (1/γ) R_S(H), where l_γ is (1/γ)-Lipschitz
    0:48:30 Test loss bound is invariant to scaling, RC/γ_min
    0:51:50 Linear model's RC: R_S(H) = BC/√{n}, with H = {h(x) = w^T x}, ||x_i||_2 < C, ||w||_2 < B
    1:07:25 Linear model's RC: R_S(H) = BC√{lg(d)/n}, with H = {h(x) = w^T x}, ||x_i||_∞ < C, ||w||_1 < B
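
Written out, the chain of bounds this outline traces (standard form, ignoring the usual high-probability concentration term; constants may differ from the lecture's convention):

```latex
% Margin bound via Talagrand's lemma, plus the linear-model Rademacher complexity.
\[
  L_{0\text{-}1}(h) \;\le\; L_\gamma(h) \;\le\; \hat{L}_\gamma(h) + 2\,R_S(\ell_\gamma \circ \mathcal{H})
  \;\le\; \hat{L}_\gamma(h) + \frac{2}{\gamma}\,R_S(\mathcal{H}),
\]
\[
  R_S(\mathcal{H}) \;\le\; \frac{BC}{\sqrt{n}}
  \quad\text{for}\quad
  \mathcal{H} = \{\,x \mapsto w^\top x : \|w\|_2 \le B\,\},\qquad \|x_i\|_2 \le C .
\]
```
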

  • @slymastera
    @slymastera 1 day ago

    very informational

  • @milakohen630
    @milakohen630 1 day ago

    Here's an engaging summary of the YouTube lecture you linked, focusing on the key highlights of Natural Language Understanding (NLU):
    **Natural Language Understanding: A Look Back and a Leap Forward 🚀**
    * **The Exciting Evolution of NLU:** Professor Manning kicks off the lecture by highlighting how far Natural Language Understanding has come. Starting from 2012's early wave of interest to today, the field has exploded! This makes it a super interesting time to study NLU, as things are changing fast!
    * **NLU's Amazing New Skills:** The things we can do with language models today are honestly mind-blowing. They can:
      * Write different styles of text (think poems, code, scripts... you name it!) ✍
      * Translate between languages like a pro 🌎
      * Answer your questions in actually helpful ways 🤔
    * **The Tricky Question Test:** One way to see NLU progress is to ask the same question to models over time. The question "Which U.S. states border no U.S. states?" is surprisingly hard due to the word 'no'. Here's how models tackled it:
      * 1980: SHRDLU, a classic system, could reason about complex situations... but failed on basic things outside its knowledge 🙅‍♂
      * 2009: Wolf... pretty much just listed all U.S. states 😅
      * 2020-2021: OpenAI models start getting the idea of 'no other states', but make some geographical blunders (Puerto Rico isn't a state!)
      * 2022: DaVinci nails it! Alaska and Hawaii are the champs of isolation 🥇
    * **Challenges and the Future:** This is just a taste of NLU's journey. There are still things these models struggle with, but the field is moving rapidly. It's an awesome time to be involved in shaping where NLU goes next! ✨

  • @ferencszalma7094
    @ferencszalma7094 2 days ago

    41:25 Lemma 1: Maximizing margin w/ l_2 regularization
    51:55 Lemma 2: Maximizing margin w/ regularized cross-entropy loss
    58:55 Connection to l_1-SVM
    1:12:25 Tools for multi-layer networks, intro

  • @ferencszalma7094
    @ferencszalma7094 2 days ago

    3:13 Intro to this lecture: Test loss ≤ poly(Lipschitzness of f(x) on x1,...,xn; norm of θ)
    4:14 Classical uniform convergence: 2 types
    6:16 Union bound
    7:15 Data-dependent generalization bound
    8:18 Data as regularizer
    17:18 Generalized margin
    21:53 Infinite covering number
    23:11 Loss of f is bounded by complexity/(min gen margin · √n)
    38:07 Intuition: linear model, normalized margin, perturbation to cross the separating plane
    41:03 All-layer margin, perturbation in each layer
    46:55 Theorem: L_{01}(f) loss is bounded by O(sum of ||w_i||_{1,1}/(min gen margin · √n))
    52:03 Lemma (decomposition)
    56:32 Corollary
    58:18 Proof
    1:19:18 Comparison with Bartlett et al. '17
    1:22:48 SGD prefers Lipschitzness on data points, implicitly maximizing the all-layer margin
    1:25:00 SAM (Sharpness-Aware Minimization): Lipschitzness in the parameters, not in the latents
    1:27:20 Average margin test error, sum of complexities of each layer

  • @ferencszalma7094
    @ferencszalma7094 2 days ago

    4:35 Non-convex optimization, local minima, global minima
    8:20 Neural Tangent Kernel (NTK) approach: Taylor expansion (to linear) of f_θ(x) around θ^0 ≈ g(x)
    14:25 Tangent feature map, tangent kernel
    17:48 Extent of the validity of the linearization
    20:57 When linearization works: 1, 2, 3 - idea of good neighborhoods B(θ_0)
    30:40 The n tangent features, one at each data point x; mean squared loss of the linearized f(x)
    33:32 Lemma: Which neighborhoods are good? Based on the lowest singular value of the tangent matrix
    39:00 Lemma: β-Lipschitzness in θ of ∇_θ f_θ(x) affects the size of the neighborhood
    48:00 β/σ² is not invariant to the scale of f(x)
    49:05 Case 1: Reparametrize the model with a scalar multiplier
    58:22 Case 2: Overparametrize the model
    1:15:20 Key quality: more β-Lipschitz, i.e., smoother (smaller β), as the number of neurons m grows
    1:21:50 Remark on online gradient descent
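
The linearization and kernel referred to at 8:20 and 14:25, written out:

```latex
% NTK approach: first-order Taylor expansion of the network around its initialization.
\[
  f_\theta(x) \;\approx\; g_\theta(x) \;:=\; f_{\theta^0}(x) + \nabla_\theta f_{\theta^0}(x)^{\top}\,(\theta - \theta^0),
\]
\[
  \phi(x) = \nabla_\theta f_{\theta^0}(x) \quad\text{(tangent feature map)},
  \qquad
  K(x, x') = \langle \phi(x), \phi(x') \rangle \quad\text{(tangent kernel)}.
\]
```
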

  • @ferencszalma7094
    @ferencszalma7094 2 days ago

    59:45 Theorem: NTK kernel sample efficiency vs NN sample efficiency

  • @ddxoxbb9106
    @ddxoxbb9106 3 days ago

    dead zone 💀

  • @ShahNawazKhan-jz8wl

    When I am listening to these lectures alone in my room or my office and the professor says "good, good, good question" with a unique energy, it makes me smile silently. This man is amazing!!!!!

  • @420_gunna
    @420_gunna 3 days ago

    Re: 2 vs 3 in the distillation objectives slide, he doesn't make it super clear, but 2 (output scores) refers to the softmax-produced probability vectors, and 3 refers to the vector of raw logits pre-softmax.
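
A tiny illustration of that distinction; the specific losses used here (temperature-softened KL for the probability vectors, MSE for the raw logits) are common choices picked for concreteness, not necessarily the slide's exact objectives.

```python
# (2) match the teacher's softmax probability vectors vs. (3) match the raw pre-softmax logits.
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

teacher_logits = np.array([[4.0, 1.0, -2.0]])
student_logits = np.array([[3.0, 1.5, -1.0]])

# (2) output scores: KL divergence between temperature-softened probability vectors
T = 2.0
p_teacher, p_student = softmax(teacher_logits, T), softmax(student_logits, T)
kl_loss = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)))

# (3) raw logits: direct regression (mean squared error) on the pre-softmax values
mse_loss = np.mean((teacher_logits - student_logits) ** 2)

print(kl_loss, mse_loss)
```
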

  • @NathanSimmonds
    @NathanSimmonds 3 days ago

    He's talking about Dungeon Keeper, which was a great game, I look down and see the likes at "666" - hilarious 😂 Great lecture!

  • @IamPotato_007
    @IamPotato_007 3 days ago

    Where are the professors?

  • @alexbui0609
    @alexbui0609 3 days ago

    Francis Galton's quote at the end and students clapping. Wow a wonderful lecture again! I am speeding through this like watching Netflix. Chris Piech is an awesome professor!

  • @dougmicheals6037
    @dougmicheals6037 3 days ago

    When the power goes out, where can I find a Stanford post-grad with a master's degree to pedal a generator for me?

  • @YS-VALUED
    @YS-VALUED 3 days ago

    I love the internet because I can learn from top universities

  • @1919Nada
    @1919Nada 4 days ago

    This is very, very helpful. I would like to know if you have a course or if you teach product management; PM courses almost always share the same knowledge, but in this 5-minute video I gained knowledge that I literally didn't gain in a 2-hour course. THANK YOU <3

    • @stanfordonline
      @stanfordonline 2 days ago

      Thanks for your comment and feedback! You can browse our product management program and courses here: online.stanford.edu/programs/product-management-program

  • @Anbu_Sampath
    @Anbu_Sampath 4 days ago

    It would be great if CS25: V4 got its own playlist on YouTube.

  • @williamss4277
    @williamss4277 4 days ago

    Thank you sooooooo much and missssssss the psets eaaaaaaagerly.

  • @therealjohnshelburne

    Finally an explanation of the acronym INTER PLANETARY FILE SYSTEM. Amazing presentation!!!! Love the comment "if you do anything that will cause people to change their application system it will never be deployed"

  • @emc3000
    @emc3000 4 days ago

    Thank you again to Stanford for making these materials accessible online. I'm still working to fully grasp how the data is stored (I understand semantic triples) in terms of the actual programming languages used.

  • @GerardSans
    @GerardSans 4 days ago

    Be careful using anthropomorphic language when talking about LLMs. Eg: thoughts, ideas, reasoning. Transformers don’t “reason” or have “thoughts” or even “knowledge”. They extract existing patterns in the training data and use stochastic distributions to generate outputs.

    • @ehza
      @ehza 3 days ago

      That's a pretty important observation imo

    • @junyuzheng5282
      @junyuzheng5282 2 days ago

      Then what is “reason” “thoughts” “knowledge”?

  • @420_gunna
    @420_gunna 4 days ago

    Chris is an excellent lecturer! I love his sense of humor, it's a little absurd.

  • @proreduction
    @proreduction 4 days ago

    It is still lost on me why we would pass the training images to an RNN that outputs the learned parameters of an entirely different deep neural network (an MLP) that is useful in a future, similar task. The more straightforward method seems to be passing the training images directly to the MLP itself and having its parameters be learned that way. What am I missing here?
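
For what it's worth, the wiring being asked about resembles a hypernetwork-style meta-learner: the recurrent network is the model that is actually trained (across many tasks), and emitting the MLP's weights in a single forward pass is what lets it adapt to a new task without retraining the MLP from scratch. A toy, untrained sketch of that wiring (all names, sizes, and the single-layer "MLP" are illustrative assumptions, not the lecture's model):

```python
# A recurrent meta-learner reads the support examples and emits the weights of a
# separate classifier, which is then applied to a new query example.
import numpy as np

rng = np.random.default_rng(0)
emb_dim, hid, out_dim = 16, 32, 2
n_mlp_params = emb_dim * out_dim + out_dim   # weights + biases of the generated classifier

# Parameters of the recurrent meta-learner (these are what such setups actually train)
W_xh = rng.normal(scale=0.1, size=(emb_dim, hid))
W_hh = rng.normal(scale=0.1, size=(hid, hid))
W_out = rng.normal(scale=0.1, size=(hid, n_mlp_params))

def generate_classifier(support_embeddings):
    """Run the RNN over the support set; its final state parameterizes the classifier."""
    h = np.zeros(hid)
    for x in support_embeddings:
        h = np.tanh(x @ W_xh + h @ W_hh)
    theta = h @ W_out
    W = theta[: emb_dim * out_dim].reshape(emb_dim, out_dim)
    b = theta[emb_dim * out_dim :]
    return W, b

support = rng.normal(size=(5, emb_dim))   # embeddings of 5 "training images" for a new task
query = rng.normal(size=(emb_dim,))       # a new example from the same task
W, b = generate_classifier(support)
print(query @ W + b)                      # the generated classifier scores the query
```
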

  • @gemini_537
    @gemini_537 4 days ago

    Gemini: This lecture is about prompting, instruction fine-tuning, and RLHF, which are all techniques used to train large language models (LLMs). LLMs are trained on a massive amount of text data and are able to communicate and generate human-like text in response to a wide range of prompts and questions.

    The lecture starts by going over zero-shot and few-shot learning, which are techniques for getting LLMs to perform tasks they weren't explicitly trained for. In zero-shot learning, the LLM is given a natural language description of the task and asked to complete it. In few-shot learning, the LLM is given a few examples of the task before being asked to complete a new one.

    Then the lecture dives into instruction fine-tuning, which is a technique for improving the performance of LLMs on a specific task by fine-tuning them on a dataset of human-written instructions and corresponding outputs. For example, you could fine-tune an LLM on a dataset of movie summaries and their corresponding reviews to improve its ability to summarize movies.

    Finally, the lecture discusses reinforcement learning from human feedback (RLHF), which is a technique for training LLMs using human feedback. In RLHF, the LLM is given a task and asked to complete it. A human expert then evaluates the LLM's output and provides feedback, which is used to improve the LLM's performance on the task.

    The lecture concludes by discussing some of the challenges and limitations of RLHF, as well as potential future directions for the field. One challenge is that it can be difficult to get humans to provide high-quality feedback, especially for complex tasks. Another is that RLHF can be computationally expensive. Nevertheless, RLHF is a promising technique for training LLMs to perform a wide range of tasks, and it is an area of active research.
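
A tiny illustration of the zero-shot vs. few-shot distinction described above; the prompts and the placeholder send_to_llm function are illustrative, not a real API.

```python
# Same task, two prompting styles: zero-shot gives only the instruction,
# few-shot prepends worked examples before the new input.
zero_shot = (
    "Classify the sentiment of the review as positive or negative.\n"
    "Review: The plot dragged but the acting was superb.\n"
    "Sentiment:"
)

few_shot = (
    "Classify the sentiment of the review as positive or negative.\n"
    "Review: An instant classic, I loved every minute.\nSentiment: positive\n"
    "Review: Two hours of my life I will never get back.\nSentiment: negative\n"
    "Review: The plot dragged but the acting was superb.\n"
    "Sentiment:"
)

def send_to_llm(prompt: str) -> str:
    """Placeholder for a call to a language model API."""
    raise NotImplementedError

print(zero_shot)
print(few_shot)
```
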

  • @ashishkatiyar4240
    @ashishkatiyar4240 4 days ago

    😂 what a joke

  • @DeepakYadav-rr6dx
    @DeepakYadav-rr6dx 4 days ago

    9:20 - BERT
    33:17 - RoBERTa
    34:40 - XLNet
    38:28 - ALBERT
    41:18 - T5
    43:00 - ELECTRA

  • @TV19933
    @TV19933 4 days ago

    future artificial intelligence i was into talk this probability challenge Gemini ai talking ability rapid talk i suppose so it's splendid

  • @ddxoxbb9106
    @ddxoxbb9106 5 days ago

    just do it

  • @RajdeepKauramar26
    @RajdeepKauramar26 5 days ago

  • @EstellaWhite-ws7gh
    @EstellaWhite-ws7gh 5 days ago

  • @lollollollol1622
    @lollollollol1622 5 days ago

    how are there no comments on this

  • @Drazcmd
    @Drazcmd 5 days ago

    Very cool! Thanks for posting this publicly, it's really awesome to be able to audit the course :)

  • @styfeng
    @styfeng 5 days ago

    it's finally released! hope y'all enjoy(ed) the lecture 😁

    • @laalbujhakkar
      @laalbujhakkar 4 days ago

      Don't hold the mic so close bro. The lecture was really good though :)

    • @gemini22581
      @gemini22581 4 days ago

      What is a good course to learn NLP?

    • @siiilversurfffeeer
      @siiilversurfffeeer 2 days ago

      hi feng! will there be more cs25 v4 lectures uploaded to this channel?

    • @styfeng
      @styfeng 1 day ago

      @@siiilversurfffeeer yes! should be a new video out every week, approx. 2-3 weeks after each lecture :)

  • @lebesguegilmar1
    @lebesguegilmar1 5 days ago

    Thanks for sharing this course and lecture, Stanford. Congratulations! Greetings from Brazil.

  • @laalbujhakkar
    @laalbujhakkar 5 days ago

    Stanford's struggles with microphones continue.

    • @jeesantony5308
      @jeesantony5308 5 days ago

      it is cool to see some negative comments in between lots of pos... ✌🏼✌🏼

    • @laalbujhakkar
      @laalbujhakkar 5 days ago

      @@jeesantony5308 I love the content, which makes me h8 the lack of thought and preparation that went into the delivery of all that knowledge even more. Just trying to reduce the loss as it were.

  • @poonhenry73
    @poonhenry73 5 days ago

    That was 12 years ago; if you had held $100 of Nvidia stock, it would be worth $200,000+ now.

  • @hussienalsafi1149
    @hussienalsafi1149 5 days ago

    ☺️☺️☺️🥰🥰🥰

  • @fatemehmousavi402
    @fatemehmousavi402 5 days ago

    Awesome, thank you Stanford online for sharing these amazing video series

  • @user-kt1cw1fg4t
    @user-kt1cw1fg4t 5 days ago

    What are the prerequisites for this course?

    • @stanfordonline
      @stanfordonline 2 days ago

      Thanks for your question! There are no required prerequisites for the courses in this program; it covers foundational knowledge. Let us know if we can answer any other questions you have! online.stanford.edu/programs/medical-statistics-program