Mod-01 Lec-05 Bayes Decision Theory

  • Published 5 Sep 2024
  • Pattern Recognition and Application by Prof. P.K. Biswas, Department of Electronics & Communication Engineering, IIT Kharagpur. For more details on NPTEL visit nptel.ac.in

Comments • 23

  • @adityaanand81
    @adityaanand81 7 years ago +30

    Bayes Decision Theory 32:00
    Probability of error 50:44

  • @sumitvaise5452
    @sumitvaise5452 4 years ago +1

    Amazing lecture. This difficult topic is simply explained. Salute to you, Sir.

  • @SheshadriMadhu
    @SheshadriMadhu 7 years ago +7

    Watch from 32:00 for Bayesian Decision Theory.

  • @Sanyat100
    @Sanyat100 6 years ago +2

    Best video, best teacher. Lots of respect!

  • @nuwanatthanayake
    @nuwanatthanayake 2 years ago

    Great, Sir

  • @RushikeshTade
    @RushikeshTade 9 years ago +1

    Very helpful, thanks for uploading.

  • @Code-09
    @Code-09 5 years ago

    Very helpful. Thank you very much.

  • @UraratkK
    @UraratkK 9 years ago +1

    Thank you very much professor.

  • @neomuks
    @neomuks 4 years ago

    Very helpful, Sir.

  • @sam41619
    @sam41619 6 years ago

    Very well explained!

  • @aanandakrishnan102
    @aanandakrishnan102 7 years ago +4

    Isn't the dimension of the hyperplane one less than the dimension of the vector space defining the pattern?

    • @nicolagnecco
      @nicolagnecco 7 years ago

      I agree with you

    • @aishwaryagoel149
      @aishwaryagoel149 6 years ago

      Same doubt.

    • @domste
      @domste 5 years ago

      The question is old, but someone may still be interested:
      It isn't:
      The hyperplane dividing the classes has the same dimension as the feature space.
      As you can see, he is using a line to divide examples in a 2-D space.
      If you want to classify a set of gym athletes as Tall or Short based only on their height, you can plot the height values in a 1-D space and use a point that acts as a threshold and divides the classes Tall and Short.
      You just have to extend this reasoning to higher-dimensional spaces.

    • @namangupta2228
      @namangupta2228 5 years ago

      The hyperplane will have the same dimension as the vector space, but the difference is that if the vector space has n independent variables, the hyperplane will have n-1 independent variables.
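The 1-D threshold example from the replies above can be sketched in code. This is a minimal illustration, not the lecture's method: the threshold value and the heights are assumed purely for demonstration. It shows that in a 1-dimensional feature space the decision boundary is a single point, i.e. a boundary of dimension n-1 with n = 1.

```python
# Sketch of the 1-D Tall/Short example from the thread.
# THRESHOLD_CM is an assumed cut-off, not a value from the lecture.
THRESHOLD_CM = 175.0

def classify_height(height_cm: float) -> str:
    """Decision rule: everything above the threshold point is 'Tall'.
    The threshold is the 0-dimensional boundary in the 1-D feature space."""
    return "Tall" if height_cm > THRESHOLD_CM else "Short"

# Assumed example heights, in centimetres.
heights = [160.0, 172.5, 180.0, 195.0]
labels = [classify_height(h) for h in heights]
print(labels)  # ['Short', 'Short', 'Tall', 'Tall']
```

Extending the same reasoning, a linear boundary in a 2-D feature space is a line, and in n dimensions a hyperplane of dimension n-1.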

  • @ConsuelaPlaysRS
    @ConsuelaPlaysRS 6 years ago

    At 51:12, shouldn't the vertical axis be labeled P(x|w)? Otherwise, the points along the graphs at any given x would have to sum to 1.
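The commenter's point can be checked numerically. The sketch below uses assumed Gaussian class-conditional densities and equal priors (not the lecture's actual figure): at any fixed x, the posteriors P(w|x) sum to 1 by construction, whereas the class-conditional densities p(x|w) do not, so curves whose values at a given x need not sum to 1 are naturally labelled p(x|w).

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Assumed class-conditional densities p(x|w) and equal priors P(w).
priors = {"w1": 0.5, "w2": 0.5}
likelihood = {
    "w1": lambda x: gaussian_pdf(x, mu=0.0, sigma=1.0),
    "w2": lambda x: gaussian_pdf(x, mu=2.0, sigma=1.0),
}

x = 1.0
evidence = sum(likelihood[w](x) * priors[w] for w in priors)  # p(x)
posteriors = {w: likelihood[w](x) * priors[w] / evidence for w in priors}

print(sum(posteriors.values()))               # sums to 1 (up to rounding)
print(sum(likelihood[w](x) for w in priors))  # not 1 in general
```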

  • @YoungAthleticclub
    @YoungAthleticclub 5 years ago

    Can we say the evidence, p(x), can be considered a scaling factor for determining the class?
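A small numerical sketch of the point behind this question (the priors and likelihood values are assumed, not from the lecture): the evidence p(x) is the same for every class at a given x, so dividing by it only rescales the posteriors, and the class chosen by the Bayes decision rule is unchanged.

```python
# Assumed priors P(w) and likelihood values p(x|w) at some fixed x.
priors = {"w1": 0.3, "w2": 0.7}
likelihoods_at_x = {"w1": 0.5, "w2": 0.2}

# Joint p(x, w) = p(x|w) * P(w); the evidence p(x) is their sum over classes.
joint = {w: likelihoods_at_x[w] * priors[w] for w in priors}
evidence = sum(joint.values())
posterior = {w: joint[w] / evidence for w in priors}

# The argmax over classes is the same with or without dividing by p(x).
decision_without_evidence = max(joint, key=joint.get)
decision_with_evidence = max(posterior, key=posterior.get)
print(decision_without_evidence == decision_with_evidence)  # True
```

So p(x) acts only as a normalizing scale factor; it makes the posteriors sum to 1 but never changes which class wins.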

  • @dr.neerajkumar4264
    @dr.neerajkumar4264 4 years ago

    Nice lecture on this topic, but the sound is very poor.

  • @chinnakotlasreenath8337

    Bayes decision theory 31:00

  • @vaibhavsharda9443
    @vaibhavsharda9443 6 years ago

    Watch at 1.25× speed.

  • @chinmayajitm
    @chinmayajitm 6 years ago +1

    Very low sound

  • @KunalSaini97
    @KunalSaini97 4 years ago

    The volume is way too high; lower it a bit.