What is LOAD BALANCING? ⚖️

  • added 4 Jun 2024
  • Load Balancing is a key concept in system design. One simple approach is to hash every request and send it to the server that hash maps to.
    The standard way to hash objects is to map them into a search space and then send the load to the mapped machine. A system using this policy is likely to suffer when nodes are added to or removed from it (a small code sketch of this approach appears at the end of this description).
    Two terms you will hear in system design interviews are Fault Tolerance, meaning the system keeps working when a machine crashes, and Scalability, meaning machines must be added to process more requests. Another term used often is request allocation, which means assigning a request to a server.
    Load balancing is often tied to service discovery and global locks. The kind of load we balance here involves sticky sessions, where requests from the same user should keep going to the same server.
    Looking to ace your next interview? Try this System Design video course! 🔥
    interviewready.io
    00:00 Load Balancing - Consistent Hashing
    00:33 Example
    01:29 Server-Client Terms
    02:12 Scaling
    02:40 Load Balancing Problem
    03:58 Hashing Requests
    06:37 Request Stickiness
    08:00 Splitting the Pie
    10:35 Request Stickiness
    13:29 Next Video!
    With video lectures, architecture diagrams, capacity planning, API contracts, and evaluation tests. It's a complete package.
    Code: github.com/coding-parrot/Low-...
    References:
    stackoverflow.com/questions/1...
    www.citrix.co.in/glossary/loa...
    www.nginx.com/resources/gloss...
    en.wikipedia.org/wiki/Load_ba...
    www.tomkleinpeter.com/2008/03/...
    michaelnielsen.org/blog/consis...
    • Consistent Hashing - G...
    System Design:
    highscalability.com/
    • What is System Design?
    www.palantir.com/how-to-ace-a...
    #LoadBalancer #Proxy #SystemDesign
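
A minimal sketch of the hash-and-mod routing described above, assuming requests carry a string ID and servers are numbered 0..n-1 (the IDs, counts, and hash choice are illustrative, not from the video):

```python
import hashlib

def server_for(request_id: str, n_servers: int) -> int:
    """Route a request to hash(request_id) mod n_servers."""
    digest = hashlib.sha256(request_id.encode()).hexdigest()
    return int(digest, 16) % n_servers

ids = [f"user-{i}" for i in range(1000)]

before = {rid: server_for(rid, 4) for rid in ids}   # 4 servers
after = {rid: server_for(rid, 5) for rid in ids}    # add a 5th server

moved = sum(1 for rid in ids if before[rid] != after[rid])
print(f"{moved / len(ids):.0%} of requests now map to a different server")  # roughly 80%
```

With 4 servers growing to 5, roughly four out of five request IDs land on a different server, which is exactly the cache-unfriendly behaviour the video (and consistent hashing) addresses.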

Comments • 485

  • @bithon5242
    @bithon5242 7 months ago +44

    For anyone confused by the pie chart, his explanation makes sense only once you watch the whole video. In a nutshell, when you at first have 4 servers, each server handles 25% of the users. The hashing function takes the user's user id or some other information that encapsulates the user data (and is consistent), so any time you want to, for example, fetch a user profile, you do it via the same server over and over again, since the user id never changes (therefore the hash of the user id never changes and will always point to the same server). The server remembers that and creates a local cache of that information for that user, so that it doesn't have to execute the (expensive) action of calculating the user profile data, but can just fetch it from the local cache quickly instead. Once your userbase becomes big enough and you require more processing power, you will have to add more servers. Once you add more servers to the mix, the user distribution among servers will change. Like in the example from the video, he added one server (going from 4 servers to 5 servers). Each server now needs to handle 20% of the users. So here is where the explanation for the pie chart comes from.
    Since the first server s0 handles 25% of the users and should now handle 20%, you take that extra 5% and assign it to the second server s1. The first server s0 no longer serves the 5% of the users it used to, so the local cache for those users becomes invalidated (i.e. useless, so we need to fetch that information again and re-cache it on the different server that is now responsible for those users). The second server s1 now handles 25%+5%=30% of the traffic, but it needs to handle only 20%. We take 10% of its users and assign them to the third server s2. Like before, the second server s1 lost 10% of its users, and with that the local cache for those users becomes useless. Those 10% of users become the third server's users, so the third server s2 handles 25%+10%=35% of the traffic. We take the third server's extra 15% (remember, it needs to handle only 20%) and give it to the fourth server s3. The fourth server now handles 25%+15%=40% of the traffic. Like before, the fourth server lost 20% of its users (if we're unlucky and careless with the re-assignment of numbers, it lost ALL of its previous users and got other servers' users instead), and those 20% of users' local cache becomes useless, adding to the workload of the other servers. Since the fourth server handles 40% of the traffic, we take 20% of its users and give them to the new fifth server s4. Now all servers handle users uniformly, but the way we reassigned those users is inefficient. To remedy that, we need to look at how to perform our hashing and mapping of users better when expanding the system. (A short worked version of this arithmetic follows this thread.)

    • @samjebaraj24
      @samjebaraj24 7 months ago +3

      Nice one

    • @swatisinha5037
      @swatisinha5037 6 months ago +4

      amazing explanation dude

    • @hetpatel1772
      @hetpatel1772 2 months ago +1

      Thanks buddy, it got cleared up here. Now I want to ask: how would we utilize this and make it scalable, because losing cache data will be costly?

    • @Pawansoni432
      @Pawansoni432 2 months ago +1

      Thanks buddy ❤

    • @suvodippatra2704
      @suvodippatra2704 1 month ago +2

      thanks dude
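
A short worked version of the hand-off arithmetic in the thread above, assuming the sequential scheme from the video where each server passes its surplus to the next one (the 25%-per-server starting point is the illustrative number used in the video):

```python
# Going from 4 servers (25% each) to 5 servers (20% each),
# where each server hands its surplus to the next one.
loads = [25, 25, 25, 25]
target = 100 // (len(loads) + 1)   # 20

handed_off = []
carry = 0
for load in loads:
    surplus = load + carry - target   # slice passed on to the next server
    handed_off.append(surplus)
    carry = surplus

print(handed_off)       # [5, 10, 15, 20]
print(sum(handed_off))  # 50 -> half of the pie ends up on a different server
```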

  • @Karthikdravid9
    @Karthikdravid9 4 years ago +89

    I'm a UX Designer, so it's irrelevant for me to know this, but I'm just watching your video and sitting here willing to complete the whole series. That's how brilliantly you explain things, chap.

  • @nxpy6684
    @nxpy6684 1 year ago +57

    If anyone is confused by the pie diagram,
    We need to reduce the distribution from 25 each to 20 each. So we take 5 from the first server and merge it with the second one. Then we take 10 (5 originally from the first one and 5 from the second one) and merge it with the third. So now, servers one and two have 20 each. Then we go on, taking 15 from the third and merging it with the fourth, and finally taking 20 from the fourth to create the fifth server's share.
    Please correct me if I'm wrong. This is just a simple breakdown, which I think is what he intended. (A quick check of this breakdown follows this thread.)

    • @jasminewu1847
      @jasminewu1847 1 year ago +11

      your comment saved my day.
      I replayed the video for the pie diagram so many times but didn't get it.

    • @sairamallakattu8710
      @sairamallakattu8710 1 year ago +2

      Thanks for the explanation...
      Same here...

    • @wendyisworking2297
      @wendyisworking2297 1 year ago +1

      Thank you for your explanation. It is very helpful.

    • @arymansrivastava6313
      @arymansrivastava6313 10 months ago

      Can you please tell me what this pie chart signifies? What are these 25 buckets: storage space, or the number of requests handled? It will be very helpful if you could help with this question.

    • @vanchark
      @vanchark 8 months ago +1

      @@arymansrivastava6313 I think the numbers represent the # of users. Let's say each user has one request. Before, users 1-25 were mapped to server 0, 26-50 to server 1, 51-75 to server 2, and 76-100 to server 3. By adding another server (server 4), we have to redistribute/remap these users across 5 servers instead of 4. The redistribution process he showed in the video made it so that each user is now assigned to a new server. This is problematic because server 0 used to cache the information of users 1-25, but now that entire cache is useless. Instead, it's better to minimize the changes we make to each server. That's how I understood it, please correct me if I'm wrong
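
A quick check of the range-shift breakdown discussed in this thread, assuming 100 users split into contiguous ranges (the boundaries are illustrative):

```python
def server_for(user_id: int, boundaries: list[int]) -> int:
    """Return the index of the range (server) a user id falls into."""
    for i, upper in enumerate(boundaries):
        if user_id < upper:
            return i
    return len(boundaries) - 1

old = [25, 50, 75, 100]        # 4 servers, 25 users each
new = [20, 40, 60, 80, 100]    # 5 servers, 20 users each

users = range(100)
stayed = sum(1 for u in users if server_for(u, old) == server_for(u, new))
print(stayed)   # 50 -> only half the users keep their old server (and its cache)
```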

  • @proosdisanayaka1900
    @proosdisanayaka1900 4 years ago +80

    First of all, this is a perfect lesson and I have absorbed 100% of it as a school student. The pie chart was a little confusing at first because with 4 servers it's like 25 buckets each, and then when you add 1 server it's pretty much 25 - 5 = 20 buckets each. So dividing the pie into 5 and marking each slice as 20 buckets is the easiest way to see it.

  • @valiok9880
    @valiok9880 5 years ago +78

    This channel is a total gem. I don't think I've seen anything similar on YouTube in regards to quality. Really appreciate it!

  • @manalitanna1685
    @manalitanna1685 1 year ago +12

    I love how you've taken the time and effort to teach complex topics in a simple manner with real world examples. You also stress on words that are important and make analogies. This helps us students remember these topics for life! Thank you and really appreciate the effort!

  • @sumitdey827
    @sumitdey827 4 years ago +67

    seems like a 4th-year senior teaching the juniors...your videos are Beast :)

  • @UlfAslak
    @UlfAslak 2 years ago +79

    Notes to self:
    * Load balancing distributes requests across servers.
    * You can use `hash(r_id) % n_servers` to get the server index for a request `r_id`.
    -> Drawback: if you add an extra server `n_servers` changes and `r_id` will end up on a different server. This is bad because often we want to map requests with the same ids consistently to the same servers (there could e.g. be cached data there that we want to reuse).
    * "Consistent hashing" hashes with a constant denominator `M`, e.g. `hash(r_id) % M`, and then maps the resulting integer onto a server index. Each server has a range of integers that map to their index.
    * The pie example demonstrates that if an extra server is added, the hashing function stays the same, and one can then change the range-to-server-index mapping slightly so that an `r_id` most likely gets mapped to the same server as before the server addition. (A minimal ring sketch of this follows the thread.)

    • @naatchiarvel
      @naatchiarvel 2 years ago +2

      Thanks for this. I have a question which may be easy, but I am not sure about it.
      Basically, based on the hash value we decide which server (out of n database servers) we save our data on.
      What is the guarantee that the hash function returns values that spread the requests equally across the n servers?

    • @raghvendrakumarmishra8035
      @raghvendrakumarmishra8035 2 years ago +3

      Thanks for the notes. They're good for lazy people watching the video, like me :)

    • @adityachauhan1182
      @adityachauhan1182 2 years ago +1

      @@naatchiarvel Take a few requests with different r_id values and let's say there are 5 servers... now take the mod and see which request lands on which server... You will get your answer.

    • @naatchiarvel
      @naatchiarvel 2 years ago

      @@adityachauhan1182 Thank you, it answered my question.

    • @RohitSharma-ji2qh
      @RohitSharma-ji2qh 2 years ago

      thanks for the summary
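
A minimal sketch of the consistent-hashing idea summarized in the notes above, with a single position per server and no virtual nodes (the server names and the choice of hash function are illustrative assumptions):

```python
import bisect
import hashlib

M = 2**32  # fixed hash space, independent of the number of servers

def h(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % M

class ConsistentHashRing:
    """Each server owns the arc between its predecessor's position and its own."""
    def __init__(self, servers):
        self.ring = sorted((h(s), s) for s in servers)

    def server_for(self, request_id: str) -> str:
        pos = h(request_id)
        idx = bisect.bisect(self.ring, (pos,)) % len(self.ring)  # clockwise successor
        return self.ring[idx][1]

ring = ConsistentHashRing(["s0", "s1", "s2", "s3"])
print(ring.server_for("user-42"))
# Adding "s4" only moves the keys that fall on the arc s4 now owns;
# every other request id keeps hashing to the same server as before.
```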

  • @ruchiragarwal4741
    @ruchiragarwal4741 1 year ago +4

    That's an eye-opener. Have been working in industry for a few years now but never realised how small changes like this can affect the system. Thank you so much for the content!

  • @aryanrahman3212
    @aryanrahman3212 1 year ago +4

    This was really insightful, I thought load balancing was simple but the bit about not losing your cached data was something I didn't know before.

  • @sadiqraza1658
    @sadiqraza1658 3 years ago +3

    Your way of explanation with real-life examples is really effective. I can visualize everything and remember it easily. Thanks for this.

  • @sagivalia5041
    @sagivalia5041 1 year ago +8

    You seem very passionate about the subject.
    It makes it 10x better to learn that way.
    Thank you.

    • @gkcs
      @gkcs 1 year ago

      Thank you!

  • @Codearchery
    @Codearchery 6 years ago +169

    Notification Squad :-)
    The problem with having awesome teachers like Gaurav Sir, is that you want the same ones to teach you in college too :-) Thanks Gaurav sir for System Design series.

    • @gkcs
      @gkcs 6 years ago +7

      Thanks CodeArchery!

    • @hiteshhota6519
      @hiteshhota6519 5 years ago

      relatable

    • @yedneshwarpinge8049
      @yedneshwarpinge8049 1 year ago

      @@gkcs Sir, could you please explain how the server goes up to 40 buckets... I did not understand that part at 8:46.

  • @KarthikaRaghavan
    @KarthikaRaghavan 5 years ago +22

    Hey man, cool explanation for all the advanced system design learners... nice! keep it coming!!

    • @gkcs
      @gkcs 5 years ago +4

      Thanks!

  • @ShubhamShamanyu
    @ShubhamShamanyu 3 years ago +5

    Hey Gaurav,
    You have a knack for explaining things in a very simple manner (ELI5).
    There is one part of this discussion which I feel conveys some incorrect information (or I might have understood it incorrectly). You mention that 100% of requests will be impacted on addition of a new server. However, I believe that only 50% of the requests should be impacted (server 1 retains 20 percentage points of the pie, server 2 retains 15, server 3 retains 10, and server 4 retains 5).
    In fact, it's always exactly 50% of the requests that are impacted on addition of 1 new server, irrespective of the number of original servers. This turned out to be a pretty fun math problem to solve (it boils down to a simple arithmetic progression at the end).
    The reason your calculation results in a value of 100% is double counting: each request is accounted for twice, once when it is removed from its original server and again when it is added to the new server.
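
The arithmetic-progression claim above can be checked directly, assuming the sequential hand-off scheme described in the video (server i passes its surplus to server i+1):

```python
from fractions import Fraction

def fraction_moved(n: int) -> Fraction:
    """Fraction of all requests that change servers when going from n to n+1 servers."""
    old, new = Fraction(1, n), Fraction(1, n + 1)
    # server i (0-indexed) hands (i+1) * (old - new) of the total pie to its neighbour
    return sum((i + 1) * (old - new) for i in range(n))

print([fraction_moved(n) for n in (2, 4, 10, 100)])  # always 1/2
```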

  • @mukundsridhar4250
    @mukundsridhar4250 5 years ago +63

    This method is OK when accessing a cache, and the problems that arise are somewhat mitigated by consistent hashing.
    However, there are two things I want to point out:
    1. Caching is typically done using a distributed cache like memcached or redis, and the instances should not cache too much information.
    2. If you want requests for a particular request id to keep going to the same instance, you should configure your load balancer to use sticky sessions. The mapping between the request id and the EC2 instance can be stored in a git repo, or cookies can be used, etc. (A rough sticky-session sketch follows this thread.)

    • @gkcs
      @gkcs 5 years ago +30

      Yes, distributed caches are more sensible to cache larger amounts of data. I read your comment on the other video and found that useful too.
      Thanks for sharing your thoughts 😁
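
A rough illustration of the sticky-session idea mentioned in this thread; the class and its in-memory mapping are hypothetical, and real load balancers typically carry this binding in a cookie or connection table rather than a Python dict:

```python
import random

class StickySessionBalancer:
    """Pin each session to the server that first handled it."""
    def __init__(self, servers):
        self.servers = list(servers)
        self.assignments = {}   # session_id -> server (could live in a cookie instead)

    def route(self, session_id: str) -> str:
        if session_id not in self.assignments:
            self.assignments[session_id] = random.choice(self.servers)
        return self.assignments[session_id]

lb = StickySessionBalancer(["ec2-a", "ec2-b", "ec2-c"])
print(lb.route("sess-123"))
print(lb.route("sess-123"))  # same server both times
```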

  • @vanshpathak7565
    @vanshpathak7565 4 years ago +1

    You give great explanations!! And the little video-editing touches make the videos so interesting. Going to watch all the videos uploaded by you.

  • @Varun-ms5iv
    @Varun-ms5iv 2 years ago

    I subscribed/bookmarked your channel I don't know when; I knew that I'd need it at some point in time, and that time is now. Thank you for the series...❤️❤️

  • @ryancorcoran3334
    @ryancorcoran3334 3 years ago

    Man, thanks! You made this easy to 100% understand. Your teaching style is excellent!

  • @sekhardutt2457
    @sekhardutt2457 4 years ago

    you made it really simple and easy to understand. I don't bother searching any other video on system design, I can simply look into your channel. Thank you very much, appreciate your efforts .

    • @gkcs
      @gkcs 4 years ago +1

      Thank you!

  • @umangkumar2005
    @umangkumar2005 1 month ago

    You are an amazing teacher, teaching such a complicated topic in such an efficient manner. I haven't even gotten my first job yet, and I am able to understand this.

  • @maxwelltaylor3544
    @maxwelltaylor3544 5 years ago +2

    Thanks for the concept lesson !

  • @akashpriyadarshi
    @akashpriyadarshi 1 year ago

    I really like how you explained why we need consistent hashing.

  • @mostinho7
    @mostinho7 1 year ago

    Done thanks
    Traditional hashing will make requests go to different servers if new servers are added, and ideally we want requests from the same user to hit the same server to make use of local caching

  • @adheethathrey3959
    @adheethathrey3959 3 years ago

    Quite a smart way of explaining the concept. Keep up the good work. Subscribed!

  • @ayodeletim
    @ayodeletim 3 years ago

    Just stumbled on your channel, and in a few minutes, I have learned a lot

  • @deepanjansengupta7944
    @deepanjansengupta7944 3 years ago

    amazing video with extremely lucid explanations. wishing you the best, keep growing your channel. from a Civil Engineer just randomly crazy about Comp Science.

  • @nomib_k2
    @nomib_k2 1 year ago

    The Indian accent is one accent that makes my learning process easier. It sounds clearer to my ears than the native English spoken by most western tutors. Great job man

  • @SK-ju8si
    @SK-ju8si 1 month ago

    Brilliant. thank you!

  • @SusilVignesh
    @SusilVignesh 6 years ago +4

    This is informative and thanks once again :-)

  • @arvindgupta8991
    @arvindgupta8991 5 years ago

    Thanks bro for sharing your knowledge. Your style of explaining is great.

  • @tacowilco7515
    @tacowilco7515 4 years ago +4

    The only tutorial with an Indian accent which I enjoy watching :)
    Thanks dude! :)

  • @farflunghopes
    @farflunghopes 5 years ago +1

    Love this!

  • @_romeopeter
    @_romeopeter 1 year ago

    Thank you for putting this out!

  • @dhruvseth
    @dhruvseth 3 years ago +115

    Hi Gaurav, great video. Can you quickly elaborate on the pie chart you made and the 5+5+10... maths? You kinda lost me there, and I am trying to figure out intuitively what you were showing with the pie chart example when a new server is added. Thank you!

    • @gkcs
      @gkcs 3 years ago +92

      The pie represents the load on each server, based on the range of requests it shall handle.
      The requests have random ids between 0 and M. I draw the pie chart with each point on the circle representing a request ID. A range of numbers can then be represented by a slice of the pie.
      Since the request IDs are randomly and uniformly distributed, I assume that the load on each server is proportional to the thickness of the pie slice it handles.
      IMPORTANT: I assume the servers cache data relevant to their request ID range, to speed up request processing. For example, take the profile service. It caches profiles. Its pie chart will be all profiles from 0 to M. The load balancer will assign loads to different nodes of this service.
      Suppose one node in this service handles the range 10 to 20. That means it will cache profiles from 10 to 20. Hence the cached profiles are (20 - 10) = 10.
      The pie calculations in the video show the number of profiles a node has to load or evict when the number of nodes changes. The more a node has to load and evict, the more work it needs to do. If you put too much pressure on one node (as shown here with the last node), it has a tendency to crash. (A small numeric version of these load/evict counts follows this thread.)
      The consistent hashing video talks about how we can mitigate this problem :)

    • @dhruvseth
      @dhruvseth 3 years ago +15

      @@gkcs Thank you so much for a fast and well detailed response! I understand perfectly now. Very much appreciative of your hard work and dedication when it comes to making videos and reaching out to your audience. Keep up the great work and stay safe! Sending love from Bay Area! 💯

    • @jameysiddiqui6910
      @jameysiddiqui6910 3 years ago +9

      thanks for asking this question, I was also lost in the calculation.

    • @Purnviram03
      @Purnviram03 3 years ago +2

      ​@@gkcs This comment really helped, was a bit confused about the pie chart explanation at first. Thanks.

    • @sonnix31
      @sonnix31 3 years ago +2

      @@gkcs Still not very clear. Don't know how you jump to 40 :(
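
To put numbers on the load/evict explanation in the reply above, here is a small sketch assuming 100 profile IDs and contiguous ranges per node (the ranges are illustrative):

```python
def profiles(lo: int, hi: int) -> set[int]:
    """Profiles cached by a node responsible for the range [lo, hi)."""
    return set(range(lo, hi))

# Ranges before (4 nodes) and after (5 nodes), out of 100 profile IDs.
old_ranges = [(0, 25), (25, 50), (50, 75), (75, 100)]
new_ranges = [(0, 20), (20, 40), (40, 60), (60, 80), (80, 100)]

for i, (lo, hi) in enumerate(new_ranges):
    before = profiles(*old_ranges[i]) if i < len(old_ranges) else set()
    after = profiles(lo, hi)
    print(f"node {i}: load {len(after - before)}, evict {len(before - after)}")
# The later nodes load and evict the most, which is the pressure the video warns about.
```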

  • @ShahidNihal
    @ShahidNihal 8 months ago

    10:26 - my expression to this video. Amazing content!

  • @sivaramakrishnanj300
    @sivaramakrishnanj300 5 years ago

    Helped me a lot! Thank u👏

  • @akshayakumart5117
    @akshayakumart5117 5 years ago +10

    You got a subscriber!!!

  • @vinayakchuni1
    @vinayakchuni1 4 years ago +8

    Hey Gaurav, in the first part of the video you mention that by taking the mod operation you distribute the requests uniformly. What kind of assumptions do you make (and why) about the hash function which ensure that the requests are uniformly distributed? I could come up with a hash function which would send all the requests to, say, server 1.

  • @paridhijain7062
    @paridhijain7062 1 year ago

    Nice explanation. I was trying to find a perfect playlist on YT to teach me SD. Now it's solved. Thank you for such valuable content.

    • @gkcs
      @gkcs 1 year ago

      Awesome!
      You can also watch these videos ad-free (and better structured) at get.interviewready.io/learn/system-design-course/basics/an_introduction_to_distributed_systems.

  • @voleti19
    @voleti19 5 years ago

    Very well explained!!

  • @thomaswiseau2421
    @thomaswiseau2421 5 years ago

    Thank you Gaurav, very cool!
    Seriously though, thanks, your videos are really helping me through college. Very epic.

  • @tejaskhanna5155
    @tejaskhanna5155 3 years ago

    This is like the closest thing to an ELI5 on YouTube!! Great stuff man 😀🙌

  • @Grv28097
    @Grv28097 5 years ago +6

    Really amazing lecture. Waiting for your B+ Trees video as promised :P

    • @gkcs
      @gkcs 5 years ago

      Haha, thanks!

  • @adeepak7
    @adeepak7 5 years ago +1

    Thanks for sharing. The concept of changing the cache on every server is cool.

  • @jagadeeshsubramanian239

    Simple & clean explanation

  • @AbdulBasitsoul
    @AbdulBasitsoul 4 years ago

    Hi Gaurav, you explained the concepts properly. Keep it up.
    Also focus on the tools you use:
    use a better and darker marker, and
    manage the light reflection from the board.
    In this video these 2 were a bit annoying.
    Your way of teaching is good and to the point.
    Good luck.

  • @amonaurel3954
    @amonaurel3954 3 years ago

    Great videos, thank you!

  • @saransh9306
    @saransh9306 4 years ago

    amazing info..thank you for sharing

  • @alitajvidi5610
    @alitajvidi5610 2 years ago

    You are a great teacher!

  • @sundayokoi2615
    @sundayokoi2615 4 months ago

    This is what I understand about the pie chart explanation. The objective is to reduce the amount of randomness in the distribution of requests among the servers, because of the caching done on each server and the fact that user requests are mapped to specific servers. So you have 100% of requests being served by 4 servers, which means the load is distributed as 25% to each server. When you scale up and add a new server, you reduce the load on each server by 5% to make up the 20% needed by the new server, that way limiting how much the user-to-server distribution changes and ensuring that the user data stored in the caches stays relatively consistent. Correct me if I'm wrong about this, thank you.

  • @valentinfontanger4962
    @valentinfontanger4962 3 years ago +1

    Thank you sensei !

  • @abhishekpawar921
    @abhishekpawar921 2 years ago +1

    I'm late here but the videos are amazing. I'm going to watch the whole playlist

  • @samitabej9279
    @samitabej9279 2 years ago +2

    Hi Gaurav, the concept of load balancer design is very well explained and understandable. Thanks for your effort.
    I have a small doubt: if the same number of old servers are removed and new servers are added to the distributed system, will this affect the load balancing? Or will the consistent hashing mechanism behave the same, with no extra cost?

  • @vijayjagannathan5164
    @vijayjagannathan5164 4 years ago

    Hey @Gaurav! Amazing set of videos on system design for newbies like me. I would just like to know if there is any suggested order in which to go through the material. After the 8th video in this playlist, we have the WhatsApp system design, then What is an API, NoSQL databases, and so on. So, if any order exists which allows one to learn/grasp the concepts in a better way, what order/approach would you suggest? Thanks :)

  • @kabiruyahaya7882
    @kabiruyahaya7882 4 years ago +1

    You got a new subscriber.
    You are very very great

  • @mayureshsatao
    @mayureshsatao 6 years ago +1

    Thanks Gaurav for this interesting series
    Keep it up... : )

  • @mahendrachouhan7788
    @mahendrachouhan7788 5 years ago

    nice explanation of the concepts

  • @Arunkumar-eb5ce
    @Arunkumar-eb5ce 5 years ago +2

    Gaurav, I just love the way you deliver the lectures.
    I have a query: you spoke about sending people to specific servers and having their relevant information stored in the cache. But wouldn't it be a good idea to have an independent cache server running master-slave?

    • @gkcs
      @gkcs 5 years ago +1

      That's a good idea too. In fact, for large systems, it's inevitable.

  • @vulturebeast
    @vulturebeast 3 years ago

    On all of YouTube this is the best ❤️

  • @akashtiwari7270
    @akashtiwari7270 5 years ago +14

    Hi Gaurav, awesome video. Can you please do a video on Distributed Transaction Management?

    • @gkcs
      @gkcs 5 years ago +3

      Thanks! I'll be working on this, yes 😁

  • @yourstrulysaidi1993
    @yourstrulysaidi1993 2 years ago

    Gaurav, your explanation is awesome. I'm addicted to your way of teaching.
    God bless you with more power :-)

  • @knightganesh
    @knightganesh 5 years ago

    Superb bro .. thanks a lot for your efforts to make videos...

  • @nishantdehariya5769
    @nishantdehariya5769 4 years ago

    Awesome explanation

  • @TheSumanb
    @TheSumanb 4 years ago

    Awesome, I subscribed ...thanks

  • @lovemehta1232
    @lovemehta1232 1 year ago +1

    Dear Mr Gaurav,
    I am a civil engineer currently working on a project where we are trying to add some technology to construction activity.
    I was struggling to understand what system design is, which is the best combination of front-end and back-end languages, which system design I should adopt, and many more things like this, as I am not from an IT field. But I must say you made me understand so many technical things in very layman language.
    Thank you so much for that

    • @gkcs
      @gkcs 1 year ago

      Thanks Love!
      Check out the first video in this channel's system design playlist for a good definition of system design 😁
      I would suggest using python and JavaScript (React or Vue.js) as backend and frontend tech respectively, to start with your software development journey.

  • @apratim1919
    @apratim1919 5 years ago +2

    Nice video.
    I feel that you are meant to be a teacher :) ! You have the flair for it. Not everyone with knowledge can teach it. Do consider when you get time in life ;) ! Cheers, all the best!

  • @lyhov.2572
    @lyhov.2572 5 years ago +1

    Nice topic, thank you

  • @NeverMyRealName
    @NeverMyRealName 3 years ago

    awesome video brother. keep it up!

  • @DarshanSenTheComposer
    @DarshanSenTheComposer 4 years ago +2

    Hello @Gaurav Sen. Started watching this awesome series because I wanted to get some good knowledge about building large scale applications. You teach extremely well (as usual).
    However, I can't really wrap my head around what the pie chart at 8:16 exactly means and how it changes when we add the fifth server. It would be really awesome if you could kindly explain that here or share a link that discusses it instead.
    Thanks a lot! :)
    Edit:
    I finally understood it! Yusss! The issue was in the positioning of the numbers in the pie chart. I think the initial numbering was supposed to be 0, 25, 50 and 100 instead of all 25's. Thanks again. :)

    • @singhanuj620
      @singhanuj620 2 years ago +1

      How come he added 5+5+10, etc., on addition of the 5th server? Can you help me understand?

  • @rajatmohan22
    @rajatmohan22 2 years ago +3

    Gaurav, one genuine question. All these concepts are so well taught and I'd love to buy your interview prep course. Why aren't concepts like these covered there?

    • @gkcs
      @gkcs 2 years ago

      These are fundamental concepts which I believe should be free. You can watch them ad-free and better structured there.

    • @hossainurrahaman
      @hossainurrahaman 2 years ago

      @@gkcs Hi Gaurav... I have an onsite interview the day after tomorrow... and I have only given 1 day to system design... I am not so proficient in this... moreover, I think I will not do well in the coding interviews... Is 1 day enough for system design?

    • @gkcs
      @gkcs 2 years ago

      @@hossainurrahaman It's less. Are you a fresher or experienced?

    • @hossainurrahaman
      @hossainurrahaman 2 years ago

      @@gkcs I have 4 years of experience...in a service based company

  • @pallav29
    @pallav29 2 years ago +3

    Hi Gaurav. Thanks for the informative video. Quick question: how did you get the values h(10), h(20) and h(35)? Are these random numbers generated by the server for each request? Also, how did each of these requests generate the hash values of 3, 15 and 12 respectively?

  • @manishasinha6694
    @manishasinha6694 5 years ago

    Gaurav, your videos are amazing! It's been a great help. Thank you so much :-)

  • @letslearnwi
    @letslearnwi 3 years ago

    Very well explained

  • @mikeendsley8453
    @mikeendsley8453 4 years ago

    Damn man, you are really good at instruction... sub’d!

  • @brunomartel4639
    @brunomartel4639 4 years ago

    awesome! consider blocking the background light with something to reduce light reflection!

  • @dilawarmulla6293
    @dilawarmulla6293 6 years ago +2

    Awesome gk. I would suggest you focus more on system design videos, as resources for them are scarce.

  • @tsaed.9170
    @tsaed.9170 3 years ago

    That moment at 10:25.... 😂😂 I actually felt the server engineer cry for help

  • @user-oy4kf5wr8l
    @user-oy4kf5wr8l 4 years ago

    Amazing!

  • @nareshkaktwan4918
    @nareshkaktwan4918 5 years ago +3

    @Gaurav Sen: I want to summarize what I understood from the pie chart part; just correct me if I understood it wrong.
    Directly adding another server will definitely reduce the load on each server, but if not implemented properly it could impact the cache. If a majority of the requests get redirected to different servers, then our cache will not be of much help to us.
    Moving only a small % of the request load will not impact the cache much.

    • @gkcs
      @gkcs 5 years ago +2

      Exactly!

  • @aishwaryaramesh4877
    @aishwaryaramesh4877 6 years ago +2

    Neat explanations :))

  • @rohithegde9239
    @rohithegde9239 6 years ago +7

    Hey, one suggestion before making new videos: you could list in the description the prerequisites for watching the video. In this video, for example, I didn't know about hashing, so I had to learn about that first. If you could list the prerequisites, we could get to know those terms beforehand and understand the video better.

    • @gkcs
      @gkcs 6 years ago +9

      Ahh, that's a very good suggestion. I'll take it, thanks!

  • @anon_1858
    @anon_1858 4 years ago +1

    Very well explained Sir. Can you please suggest any books for the same?

  • @mohitmalhotra9034
    @mohitmalhotra9034 4 years ago

    Great job

  • @josephrigby7743
    @josephrigby7743 3 years ago

    love this guy

  • @m13m
    @m13m 6 years ago +2

    More system design Videos

  • @VishwajeetPandeyGV
    @VishwajeetPandeyGV 5 years ago +5

    I like your videos. However in this one, the ending was a bit abrupt. Could you explain a bit more through an example how that is done? How we keep 'empty slots' when dividing the requests on that pie chart? I mean the way it's done is we don't immediately use the whole 360° for "100%" (dividing by N servers). We use like only 90° for 100% and then we have flexibility to add/autoscale 3x more servers with moving minimal amount of cache data from older ones to newer ones. And of course distributed cache is also useful in these cases. Makes adding/removing servers easier. However, there's one place where even distributed cache doesn't work: to scale out writes. That's one place where consistent hashing is the only way out. The id generator needs to be a function of that hash.

    • @luuuizpaulo
      @luuuizpaulo 4 years ago

      Exactly. I also enjoy the videos, but the ending of this one is confusing. I could also not understand the pie explanation, i.e., after minute 8.

  • @adiveppaangadi7107
    @adiveppaangadi7107 3 years ago

    Awesome explanation... Waiting for middleware admin related issues... WAS, Tomcat, WebLogic

  • @yizhihu8477
    @yizhihu8477 3 years ago +1

    Great content!
    Why wouldn't the first approach work for a newly added server? Are we worried more about cache misses? I believe that if the cache has a timeout and new servers are not added often, the system will perform well again once the old cache entries time out.

    • @nezukovlogs1122
      @nezukovlogs1122 1 year ago

      One more problem other than cache misses:
      let's say request r1 always goes to server s1, and r1 has session info stored on s1. If the number of servers changes, chances are that request r1 will go to a different server and not s1, where its session info is stored. So the user will lose the session info.

  • @deepakudaymuddebihal7772
    @deepakudaymuddebihal7772 4 years ago +1

    Like the videos! Really useful! What videos/courses would you recommend for those starting with System design interview preparations??

    • @gkcs
      @gkcs 4 years ago +2

      1) Playlist: czcams.com/play/PLMCXHnjXnTnvo6alSjVkgxV-VH6EPyvoX.html
      2) Video course: get.interviewready.io/courses/system-design-interview-prep
      3) Designing Data Intensive Applications: amzn.to/2yQIrxH

    • @deepakudaymuddebihal7772
      @deepakudaymuddebihal7772 4 years ago

      @@gkcs Thank you!!

  • @aditigupta6870
    @aditigupta6870 4 months ago

    Hello Gaurav, when the number of servers increases from 4 to 5 for scaling reasons, why do the previous requests, which were being routed to a server calculated using %4, need to change? Those requests were already served in the past using %4, and from the moment we introduce the new server, the requests from that point onwards can be routed to a server using %5, right?

  • @AshokYadav-np5tn
    @AshokYadav-np5tn 4 years ago +2

    I wish I could have watched this video earlier so I could have answered this in an interview. Thanks Gaurav for teaching. It is quite similar to how a hashmap works, isn't it?

  • @HELLDOZER
    @HELLDOZER 4 years ago +2

    Dude... great vids. But you can't expect uniformity out of randomness (it wouldn't be random anymore if it were uniform)... Apache, nginx, etc. all look at the number of requests each machine is currently processing, figure out the one with the lowest count, and send the next request to that machine. (A toy version of this follows this thread.)

    • @gkcs
      @gkcs 4 years ago

      Thanks Harish!
      crypto.stackexchange.com/questions/33387/distribution-of-hash-values
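
A toy version of the least-connections strategy mentioned above; this sketches the general idea only, not how nginx or Apache actually implement it:

```python
class LeastConnectionsBalancer:
    """Send each new request to the server currently handling the fewest requests."""
    def __init__(self, servers):
        self.active = {s: 0 for s in servers}

    def acquire(self) -> str:
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server

    def release(self, server: str) -> None:
        self.active[server] -= 1

lb = LeastConnectionsBalancer(["s0", "s1", "s2"])
print([lb.acquire() for _ in range(5)])  # spreads requests across the least-busy servers
```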

  • @PradeepKumar-ie3vu
    @PradeepKumar-ie3vu 5 years ago

    Hi, you are just great!! When I watched your video for the first time, I subscribed to your channel at once... I want to know whether this system design playlist is enough for placement interview preparation?

  • @rahulsharma5030
    @rahulsharma5030 3 years ago

    Nice video, thanks.
    In the case of the pie chart explanation, I got the overall idea, but when you sum up the delta changes you get 100. My question is: if one server has lost 10 keys and another has gained those 10 keys, why are we counting this 10 twice, once as a gain and once as a loss? We could have counted the 10 only once.
    It seems you arrive at a total change of 100 keys, but the total change is actually less than 100, as some keys stayed at their original location. Did I miss anything?

  • @algoseekee
    @algoseekee 4 years ago +2

    Hey, thanks for this video. I have a question. If request IDs are randomly generated (with equal probability) why don't we just have them in a range 0..n which maps directly to servers?

    • @algoseekee
      @algoseekee 4 years ago

      Answer: there is an infinite number of requests and only n servers, that's why we need modulo here.

  • @gatecomputerscience1484
    @gatecomputerscience1484 2 years ago +1

    Impressed 🙂

  • @amolnagotkar3037
    @amolnagotkar3037 1 year ago

    thnx for this video

  • @MrKridai
    @MrKridai 3 years ago +1

    Hi Gaurav, nice and informative video, thanks for that. I have a question: what happens when one server gets overloaded? How does the request get transferred to the next server? Do we just send it to the next server, or is there a better approach, like keeping track of requests and sending it to the one with the fewest requests?

    • @MrKridai
      @MrKridai 3 years ago

      Saw the next video, found the answer , thanks