Lecture 5 - Big O Notation | DSA Basics For Beginners | Placement Course

  • Published: 10. 09. 2024
  • Welcome to Lecture 5 - Big O Notation of our DSA Placement Course. In this lecture, we will build an understanding of the time complexities of algorithms and learn how to analyse them in code.
    What is notation in data structures?
    - Asymptotic notation is used to describe the running time of an algorithm - how much time the algorithm takes as a function of its input size, n.
    What is Big O Notation?
    - Big O notation is a mathematical notation used in computer science to describe the upper bound, or worst-case scenario, of an algorithm's runtime complexity. It's a concise way to express how an algorithm's performance scales as the size of the input increases.
    Why is it called Big O?
    - Big O notation is named after the term "order of the function", which refers to the growth rate of a function. Big O gives an upper bound on that growth rate: it bounds the longest time the algorithm can take to produce its output as the input size grows.
    What is the best Big-O notation?
    - O(1), which stands for constant time complexity, is the best. It means the algorithm performs a fixed number of operations regardless of the input size, so its running time does not grow with n (see the code sketch below).
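    To make these complexities concrete, here is a minimal C++ sketch (not taken from the lecture; the function names getFirst, sumAll and countPairs are illustrative only) comparing an O(1), an O(n) and an O(n^2) routine:

    #include <iostream>
    #include <vector>
    using namespace std;

    // O(1) - constant time: a fixed number of operations, independent of n.
    int getFirst(const vector<int>& arr) {
        return arr[0];
    }

    // O(n) - linear time: the loop body runs once per element.
    int sumAll(const vector<int>& arr) {
        int sum = 0;
        for (int x : arr) sum += x;
        return sum;
    }

    // O(n^2) - quadratic time: a nested loop visits every pair of indices.
    int countPairs(const vector<int>& arr) {
        int pairs = 0;
        for (size_t i = 0; i < arr.size(); i++)
            for (size_t j = i + 1; j < arr.size(); j++)
                pairs++;
        return pairs;
    }

    int main() {
        vector<int> arr = {3, 1, 4, 1, 5};
        // n = 5 here; doubling n leaves getFirst's work unchanged, doubles
        // sumAll's work, and roughly quadruples countPairs' work.
        cout << getFirst(arr) << " " << sumAll(arr) << " " << countPairs(arr) << endl;
        return 0;
    }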
    This lecture is part of our Data Structures and Algorithms playlist: • Video
    🔗Follow Us:
    - Our Website: myprojectideas...
    - Github: github.com/myp...
    Join this channel to get access to perks like 1-to-1 error resolution:
    / @myprojectideas
    #dsa #algorithm #notation #omega #placement #course #coding
