How load balancing and service discovery works in Kubernetes

  • Published 1 Oct 2019
  • Subscribe to show your support! goo.gl/1Ty1Q2
    Patreon 👉🏽 / marceldempers
    In this video we dive into service discovery and how load balancing works in Kubernetes. Kubernetes automates some of the Linux container networking features, and we're going to demystify all the magic in this video.
    Check out part 1 for how to install Kubernetes on Windows:
    • Kubernetes Getting Sta...
    Check out part 2 of how to use KUBECTL:
    • Kubectl basics for beg...
    Check out part 3 for how to do deployments:
    • Kubernetes Deployments...
    Check out part 4 for how to manage application configurations:
    • Configuration manageme...
    Check out part 5 for secret management explained:
    • Kubernetes Secret Mana...
    Like and Subscribe for more :)
    Source Code
    github.com/marcel-dempers/doc...
    Also if you want to support the channel further, become a member 😎
    marceldempers.dev/join
    Checkout "That DevOps Community" too
    marceldempers.dev/community
    Follow me on socials!
    Twitter | / marceldempers
    GitHub | github.com/marcel-dempers
    Facebook | thatdevopsguy
    LinkedIn | / marceldempers
    Instagram | / thatdevopsguy
    Music:
    Track: Dixxy. - bounce your head | is licensed under a Creative Commons Attribution licence (creativecommons.org/licenses/...)
    Listen: / bounce-your-head
    Track: SACHKO - ChillHop Instrumental - "Meant to be" | is licensed under a Creative Commons Attribution licence (creativecommons.org/licenses/...)
    Listen: / chillhop-instrumental-...
  • Science & Technology

Comments • 57

  • @crikxouba
    @crikxouba 1 year ago +2

    1:43 I love how you used that opportunity to flash those biceps 🤣🤣🤣

  • @abhishekpadadale
    @abhishekpadadale 4 years ago +6

    Thank you for explaining kube-proxy!!! Your explanation is simple yet very effective and easy to grasp.

  • @Tech-ub8dd
    @Tech-ub8dd 6 months ago

    Thank you for demystifying the k8s services behind the scenes, it was a clear and simple explanation! Subbed and looking forward to exploring more of your videos that break down the 'magic' behind the scenes.

  • @ashwaniahuja
    @ashwaniahuja 3 years ago

    Super video to explain the concepts. I love all your other tutorials/demos as well.

  • @pallavkanaujiya
    @pallavkanaujiya 4 years ago +3

    this is the best explanation i have came across!! Bloody brilliant!!

    • @OggerFN
      @OggerFN 3 years ago

      *I have come across
      the 'have' is already past tense

  • @Achillerostand
    @Achillerostand 3 years ago +6

    Hello, thank you for your explanations. I really appreciate the "behind the scenes" part. Could you please make a full "behind the scenes" video (or videos) explaining the most important Linux features used by Docker/Kubernetes? Best regards

  • @djstr0b3
    @djstr0b3 4 years ago

    Excellent video. Thanks heaps for the deep dive

  • @umermustafa2560
    @umermustafa2560 4 years ago +1

    I can't thank you enough for all your videos👍

  • @ravinathedirisinghe4024

    This is 2021 and still a valuable video

  • @stefanw8203
    @stefanw8203 3 years ago

    Great, really great. This was an awesome explanation!

  • @hariomgarg1127
    @hariomgarg1127 2 years ago

    Thank you sir, I really appreciate it. Nice explanation and easy to learn.

  • @randulakoralage5743
    @randulakoralage5743 2 years ago

    such a clear explanation! good job

  • @kevinyu9934
    @kevinyu9934 3 years ago

    Very clear. Awesome content!

  • @patricknelson
    @patricknelson 2 years ago +3

    I actually usually just standardize on port 8080 instead. It's still easy to remember but has the advantage of not requiring the container user itself to run with special privileges (helps a bit with security).
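
    For illustration, a minimal sketch of that idea in a pod spec (the names and image here are hypothetical): listening on 8080 means the container needs no root user or extra capabilities to bind its port.

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-app            # hypothetical name
    spec:
      securityContext:
        runAsNonRoot: true         # fine, because 8080 is an unprivileged port (>= 1024)
      containers:
      - name: app
        image: example/app:1.0     # hypothetical image
        ports:
        - containerPort: 8080      # still easy to remember, but unlike 80 it needs no special privileges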

  • @paradoxfx
    @paradoxfx 4 years ago

    Thank you. Very detailed and clear.

  • @wanderless3863
    @wanderless3863 2 years ago

    very well explained. it really helped.

  • @massgo969
    @massgo969 4 years ago

    Yeah! :) Great explanation, thank you.
    Please add more and more real-world stuff based on microservices.

  • @zmxn007
    @zmxn007 1 year ago

    Thanks for the very clear explanation

  • @timurjoro1995
    @timurjoro1995 2 years ago +1

    You made it look so easy. I think my 4-year-old daughter would understand this :)

  • @vishekkumar3184
    @vishekkumar3184 2 years ago

    Great explanation thanks :)

  • @MohammadMoynulHaqueBiswas

    Great video

  • @MZ-ii4bb
    @MZ-ii4bb 3 years ago

    Thank you

  • @wayne1435
    @wayne1435 4 years ago

    Nice one

  • @mitch5222
    @mitch5222 2 years ago

    Is there an option like in Docker Swarm (mode: host)?

  • @zaibakhanum203
    @zaibakhanum203 2 years ago

    I am not able to access my application using the LoadBalancer service type, can you please help?

  • @herousall9353
    @herousall9353 3 years ago

    Well done

  • @wolfdhib1
    @wolfdhib1 3 years ago

    Great explanation, but I think you missed NodePort services.

  • @Itsme-kx1us
    @Itsme-kx1us 1 year ago

    Thank you sir

  • @alxx736
    @alxx736 2 years ago +1

    What is the difference between service discovery in Kubernetes and Service Discovery in Istio?

  • @SunnyG1987
    @SunnyG1987 4 years ago

    Awesome

  • @prashantjha439
    @prashantjha439 4 years ago

    Can a single worker node run pods with different labels, or is that not recommended? Or can the k8s controller manager schedule a pod on any worker node irrespective of label?

    • @MarcelDempers
      @MarcelDempers  4 years ago +1

      A single node can run multiple pods, or you can pin pods to a single node with node selectors. However it's best avoided, since a node outage will sink all the pods on that node. It's also best to decouple your application from the node. Best to schedule on any node unless you have pods that have state on certain nodes, like a data store. In that case you want to pin pods to the node that has its data and use a StatefulSet. Managing stateful workloads also depends on how well the software in the pod can react to restarts. Databases like CockroachDB are good at dealing with this.

    • @prashantjha439
      @prashantjha439 4 years ago

      @@MarcelDempers Thanks Marcel
      Can you also please upload a video where you touch on Envoy proxy and Istio, how they run alongside k8s pods as sidecars for communication, and how Istio/a service mesh helps with service discovery? I know these concepts at a broad level, but learning the same things from you would be great, given your expertise and way of explaining things.
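
    To illustrate the node pinning mentioned above, a minimal sketch using a nodeSelector (the pod name, image and node label are hypothetical); as noted, this is usually only worth it for stateful workloads:

    apiVersion: v1
    kind: Pod
    metadata:
      name: data-store             # hypothetical name
    spec:
      nodeSelector:
        disktype: ssd              # hypothetical node label; the pod only schedules onto nodes carrying it
      containers:
      - name: db
        image: example/db:1.0      # hypothetical image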

  • @PePTo-dx2yj
    @PePTo-dx2yj 3 months ago +1

    sorry, my English is bad)) but how does the Service choose the pod (for example, a ReplicaSet with 5 pods) where the network traffic must go? Which mechanism is used for it? ty

    • @MarcelDempers
      @MarcelDempers  3 months ago

      Service has a "selector" which selects the pods by label. Kubernetes will then define endpoints (local IP) for each pod that is "selected". When you `kubectl describe` a service you will see endpoints under it.
      The endpoints are what end up being load balanced.

    • @PePTo-dx2yj
      @PePTo-dx2yj 3 months ago

      @@MarcelDempers ty, sorry maybe I'm not explaining it well. I mean, between the endpoints, is round robin used or something else?
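
    A minimal sketch of the selector/endpoint mechanism described above (the service name and labels are hypothetical):

    apiVersion: v1
    kind: Service
    metadata:
      name: example-service        # hypothetical name
    spec:
      selector:
        app: example-app           # each pod carrying this label gets an endpoint (its pod IP)
      ports:
      - port: 80                   # port other pods use to reach the service
        targetPort: 80             # port the selected pods listen on

    Running kubectl describe service example-service then lists the selected pod IPs under Endpoints; traffic sent to the service is spread across those endpoints.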

  • @brianhorakh1104
    @brianhorakh1104 3 years ago

    This could have been an interactive CLI demo using _b00t_

  • @squalazzo
    @squalazzo 3 years ago

    A clarification: labels in the deployment spec are used to match labels in containers, while labels in the deployment metadata are used to be selected by the service, right? Even if they're named the same, labels in the deployment metadata and in its spec section are NOT related, or am I wrong? Thanks

    • @mgjulesdev
      @mgjulesdev 1 year ago +1

      I know this is 1 year old. A Deployment manages a ReplicaSet, which in turn manages pods. The Service has a selector to know which pods to target, not the Deployment or ReplicaSet. Similarly, the ReplicaSet has a selector to know which pods to manage.

    • @squalazzo
      @squalazzo 1 year ago

      @@mgjulesdev yeah, I know now, I got both the CKA and CKAD in the meantime :D
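
    A minimal sketch of the label relationships discussed above (all names, labels and the image are hypothetical): a Service selects on the pod labels defined in the template, not on the Deployment's own metadata labels.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-deploy
      labels:
        team: demo                 # label on the Deployment object itself; a Service does not select on this
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: example-app         # must match the pod template labels below
      template:
        metadata:
          labels:
            app: example-app       # stamped onto every pod; a Service selector matches these
        spec:
          containers:
          - name: app
            image: example/app:1.0 # hypothetical image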

  • @abhilashpatel6852
    @abhilashpatel6852 8 months ago

    This playlist just made it to my list. I will follow it one by one, the main reason being you are using VS Code.

  • @ajadavis2000
    @ajadavis2000 2 years ago

    Would you want to use port 80 across the whole org, or would 443 be better for more security?

    • @MarcelDempers
      @MarcelDempers  2 years ago

      Generally you offload SSL at the edge, so 443 for ingress controllers (public traffic) and all private connections over 80 (all pods) is the more common approach. So 80 across all internal private comms.

    • @ajadavis2000
      @ajadavis2000 2 years ago

      @@MarcelDempers thank you, ok I understand. So for the port mapping noted in the service.yaml @8:40, the pod is reachable via an internal network call on port 80 on its respective internal subnet, and port 5000 is the localhost port of the pod?

    • @MarcelDempers
      @MarcelDempers  2 years ago

      100%
      The pod should also have an IP and can be reached on port 5000 if you know the IP, but the service is the best approach since it's a DNS name, so you don't worry about changing pod IPs

    • @ajadavis2000
      @ajadavis2000 2 years ago

      @@MarcelDempers hm, I thought we were reaching pods on port 80 using the pod IP - I thought port 5000 would only be specific to the pod itself in its own specific namespace
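
    For the port mapping discussed in this thread, a minimal sketch (names and labels are hypothetical): port is what in-cluster callers use via the service's DNS name, and targetPort is what the container in each selected pod listens on.

    apiVersion: v1
    kind: Service
    metadata:
      name: example-service        # callers use http://example-service (port 80)
    spec:
      selector:
        app: example-app           # hypothetical pod label
      ports:
      - port: 80                   # the service's own port
        targetPort: 5000           # forwarded to port 5000 on each selected pod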

  • @laurentiuspurba2735
    @laurentiuspurba2735 3 years ago

    What if a pod in cluster A wants to talk to Service-1 in cluster B?

    • @MarcelDempers
      @MarcelDempers  3 years ago

      You would have to either 1) expose Service-1 in cluster B using service type=LoadBalancer or 2) Expose Service-1 using an ingress (Recommended)
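
    A minimal sketch of option 1, exposing the service with type LoadBalancer so it gets an externally reachable address (names and labels are hypothetical):

    apiVersion: v1
    kind: Service
    metadata:
      name: service-1
    spec:
      type: LoadBalancer           # the cloud provider provisions an external IP / load balancer
      selector:
        app: service-1             # hypothetical pod label
      ports:
      - port: 80
        targetPort: 5000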

  • @shannonchoy1449
    @shannonchoy1449 4 years ago

    Do the services refer specifically to REST services, or can they use any kind of protocol? Since K8s does the load balancing for us automatically, does that mean we no longer need to include load balancers in our systems? Thank you for the video!

    • @shannonchoy1449
      @shannonchoy1449 4 years ago

      I just realized I did not see your video on Ingress for Beginners.

    • @MarcelDempers
      @MarcelDempers  4 years ago +1

      You most likely don't need other proxies internally just to do basic load balancing. For REST services you will want an Ingress controller at your edge which can serve multiple services to the public. Be sure to check out the most recent Ingress video too (I have 2 now):
      czcams.com/video/u948CURLDJA/video.html
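
    To illustrate the "Ingress controller at your edge" approach above, a minimal sketch of one Ingress routing to multiple internal services (the hostname, paths and service names are hypothetical, and it assumes an ingress controller is already running in the cluster):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: public-ingress
    spec:
      rules:
      - host: api.example.com           # hypothetical public hostname
        http:
          paths:
          - path: /orders               # routed to one internal service
            pathType: Prefix
            backend:
              service:
                name: orders-service
                port:
                  number: 80
          - path: /payments             # routed to another internal service
            pathType: Prefix
            backend:
              service:
                name: payments-service
                port:
                  number: 80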

  • @johnlam900
    @johnlam900 4 years ago +1

    Came for the memes, accidentally learnt shit.

  • @harilakshminarayanaa9469
    @harilakshminarayanaa9469 2 years ago +1

    Who is listening to this in 2022?

  • @TythosEternal
    @TythosEternal 1 year ago

    Dear God this explained so much "behind the scenes" it's almost depressing

  • @trash2trash
    @trash2trash 1 year ago

    NOTHING about Load Balancing. Only one comment and that's all.

  • @coneryj
    @coneryj 1 year ago

    please talk slower and take a breath between sentences