How load balancing and service discovery work in Kubernetes
- Published 1 Oct 2019
- Subscribe to show your support! goo.gl/1Ty1Q2 .
Patreon 👉🏽 / marceldempers
In this video we dive into service discovery and how load balancing works in Kubernetes. Kubernetes automates some of the Linux container networking features, and we're going to demystify all the magic in this video.
Check out part 1 for how to install Kubernetes on Windows:
• Kubernetes Getting Sta...
Check out part 2 of how to use KUBECTL:
• Kubectl basics for beg...
Check out part 3 of how to do deployments
• Kubernetes Deployments...
Check out part 4 of how to manage application configurations
• Configuration manageme...
Check out part 5 of secret management explained
• Kubernetes Secret Mana...
Like and Subscribe for more :)
Source Code
github.com/marcel-dempers/doc...
Also if you want to support the channel further, become a member 😎
marceldempers.dev/join
Checkout "That DevOps Community" too
marceldempers.dev/community
Follow me on socials!
Twitter | / marceldempers
GitHub | github.com/marcel-dempers
Facebook | thatdevopsguy
LinkedIn | / marceldempers
Instagram | / thatdevopsguy
Music:
Track: Dixxy. - bounce your head | is licensed under a Creative Commons Attribution licence (creativecommons.org/licenses/...)
Listen: / bounce-your-head
Track: SACHKO - ChillHop Instrumental - "Meant to be" | is licensed under a Creative Commons Attribution licence (creativecommons.org/licenses/...)
Listen: / chillhop-instrumental-...
1:43 I love how you used that opportunity to flash those biceps 🤣🤣🤣
Thank you for explaining kube proxy !!! Your explanation is Simple yet very effective and easy to grasp
Thank you for demystifying the k8s services behind the scenes, it was clear and simple explanation! Subbed and looking forward to explore more of your videos where it breaks down the 'magic' behind the scenes
super video to explain the concepts , I love your all other tutorials/demos as well.
this is the best explanation i have came across!! Bloody brilliant!!
*I have come across
the 'have' is already past tense
Hello, thank you for your explanations. I really appreciate the "behind the scenes" part. Could you please make a full "behind the scenes" video (or videos) explaining the most important Linux feature concepts used by Docker/Kubernetes? Best regards
Excellent video. Thanks heaps for the deep dive
I can't thank you enough for all your videos👍
This is 2021 and still a valuable video
Great, - really great. This was an awesome explanation!
Thank you sir, I really appreciate it. Nice explanation and easy to learn.
such a clear explanation! good job
Very clear. Awesome content!
I actually usually just standardized on port 8080 instead. Still easy to remember but has the advantage of not requiring the container user itself to run with special privileges (helps a bit with security).
Thank you. Very detailed and clear.
very well explained. it really helped.
Yeah! :) Great explanation, thank you.
Please add more real-world content based on microservices.
Thanks for the very clear explanation
You made it look so easy. I think my 4-year old daughter would understand this :)
Great explanation thanks :)
Great video
Thank you
Nice one
Is there an option like in Docker Swarm (mode: host)?
I am not able to access my application using the LoadBalancer service type. Can you please help?
Well done
great explanation but I think you have missed NodePort services.
Thank you, sir
What is the difference between service discovery in Kubernetes and Service Discovery in Istio?
Awesome
Can a single worker node run pods with different labels, or is that not recommended? Or can the k8s controller manager schedule a pod on any worker node irrespective of labels?
A single node can run multiple pods, or you can pin pods to a single node with node selectors. However, it's best avoided since a node outage will sink all the pods on that node. Also best to decouple your application from the node. Best to schedule on any node unless you have pods that have state on certain nodes, like a data store. In that case you want to pin pods to the node that has their data and use a StatefulSet. Managing stateful workloads also depends on how well the software in the pod can react to restarts. Databases like CockroachDB are good at dealing with this.
@@MarcelDempers , Thanks Marcel
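The node pinning mentioned above can be sketched as a minimal manifest; the label key/value and image are hypothetical:

```yaml
# Pod pinned via nodeSelector: the scheduler will only place this
# pod on nodes carrying the (hypothetical) label disktype=ssd.
apiVersion: v1
kind: Pod
metadata:
  name: example-app
  labels:
    app: example-app
spec:
  nodeSelector:
    disktype: ssd
  containers:
    - name: example-app
      image: example/app:1.0
```

For the stateful case, the same idea is usually expressed with a StatefulSet and volume claims rather than pinning plain pods by hand.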
Can you also please upload a video where you also touch upon envoy proxy and istio and how it runs alongside k8 pods as sidecars for communicating and how istio/service mesh helps in service discovery. i know these concepts in broad level , but learning these same things from you would be great , given your expertise and way of explaining things
Sorry, my English is bad)) But how does a Service choose the pod (for example, a ReplicaSet with 5 pods) where network traffic must go? Which mechanism is used for it? Ty
Service has a "selector" which selects the pods by label. Kubernetes will then define endpoints (local IP) for each pod that is "selected". When you `kubectl describe` a service you will see endpoints under it.
The endpoints are what ends up being load balanced
@@MarcelDempers Ty, sorry, maybe I didn't explain well. I mean: between the endpoints, is round robin used or something else?
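The selector-to-endpoints flow described in this thread can be sketched as a minimal Service manifest; the names and ports are hypothetical:

```yaml
# Hypothetical Service: the selector matches pods labelled
# app=example-app, and Kubernetes maintains an Endpoints object
# with one pod IP per matching pod.
apiVersion: v1
kind: Service
metadata:
  name: example-app
spec:
  selector:
    app: example-app
  ports:
    - port: 80        # port the service listens on
      targetPort: 5000 # port the selected pods listen on
```

Running `kubectl describe service example-app` would then show the pod IPs under "Endpoints".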
This could have been an interactive CLI demo using _b00t_
A clarification: labels in the deployment spec are used to match labels on containers, while labels in the deployment metadata are used to be selected by the service, right? Even if they're named the same, labels in the deployment metadata and in its spec section are NOT related, or am I wrong? Thanks
I know this is 1 year old. A Deployment manages a ReplicaSet, which in turn manages pods. The Service has a selector to know which pods to target, not the Deployment or ReplicaSet. Likewise, the ReplicaSet has a selector to know which pods to manage.
@@mgjulesdev Yeah, I know now; I got both the CKA and CKAD in the meantime :D
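The label relationships from this thread can be sketched in one manifest; all names are hypothetical, and the comments mark which labels each selector actually uses:

```yaml
# Hypothetical Deployment: spec.selector.matchLabels must match the
# pod template labels (spec.template.metadata.labels). A Service's
# selector also targets those pod labels, NOT metadata.labels.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
  labels:
    team: demo             # informational; not used by any selector here
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app     # must match the pod template labels below
  template:
    metadata:
      labels:
        app: example-app   # what both the ReplicaSet and a Service select
    spec:
      containers:
        - name: example-app
          image: example/app:1.0
          ports:
            - containerPort: 5000
```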
this playlist just made it to my list. I will follow it up one by one. main reason being you are using vscode.
Would you want to use port 80 across the whole org, or would 443 be better for security?
Generally you offload SSL at the edge, so 443 for ingress controllers (public traffic) and all private connections over 80 (all pods) is the more common approach. So 80 across all internal private comms.
@@MarcelDempers Thank you, OK I understand. So for the port mapping notated in the service.yaml @8:40, the pod is reachable via an internal network call on port 80 on its respective internal subnet, and port 5000 is the localhost port of the pod?
100%
The pod also has an IP and can be reached on port 5000 if you know the IP, but the service is the best approach since it gives you a DNS name, so you don't have to worry about changing pod IPs.
@@MarcelDempers Hm, I thought we were reaching pods on port 80 using the pod IP. I thought the 5000 port would only be specific to the pod itself in its own namespace.
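The port mapping discussed in this thread can be sketched as a minimal Service manifest; the names are hypothetical, and the comments spell out which port is reachable where:

```yaml
# Hypothetical mapping from the thread above:
# - pod IP:5000          -> reaches the container directly
# - service DNS name:80  -> stable name; forwarded to the pod IP on 5000
apiVersion: v1
kind: Service
metadata:
  name: example-app
spec:
  selector:
    app: example-app
  ports:
    - port: 80         # port exposed on the service's cluster IP / DNS name
      targetPort: 5000 # containerPort the traffic is forwarded to
```

So port 80 belongs to the service, not the pod; the pod itself listens on 5000 on its own pod IP.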
What if a POD in cluster A wants to talk to a Service-1 in cluster B?
You would have to either 1) expose Service-1 in cluster B using service type=LoadBalancer, or 2) expose Service-1 using an Ingress (recommended)
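Option 1 above can be sketched as a minimal manifest; the service name, labels, and ports are hypothetical:

```yaml
# Hypothetical externally-exposed service in cluster B.
# type: LoadBalancer asks the cloud provider to provision an
# external load balancer with a reachable IP for this service.
apiVersion: v1
kind: Service
metadata:
  name: service-1
spec:
  type: LoadBalancer
  selector:
    app: service-1
  ports:
    - port: 80
      targetPort: 5000
```

The pod in cluster A would then call the external IP (or DNS name pointed at it) rather than the cluster-internal service name.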
Do the services refer specifically to REST services, or can they carry any kind of protocol? Since K8s does the load balancing for us automatically, does that mean we no longer need to include load balancers in our systems? Thank you for the video!
I just realized I did not see your video on Ingress for Beginners.
You most likely don't need other proxies internally just to do basic load balancing. For REST services you will want an Ingress controller at your edge, which can serve multiple services to the public. Be sure to check out the most recent Ingress video too (I have 2 now):
czcams.com/video/u948CURLDJA/video.html
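The edge setup described above can be sketched as a minimal Ingress manifest; the hostname, service name, and path are hypothetical:

```yaml
# Hypothetical Ingress: an ingress controller at the edge routes
# public HTTP(S) traffic by host/path to internal services on port 80.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-app
                port:
                  number: 80
```

TLS would typically be terminated here (the 443-at-the-edge pattern mentioned earlier), with plain port 80 used for the hop to the backing service.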
Came for the memes, accidentally learnt shit.
Who is listening to this in 2022?
Dear God this explained so much "behind the scenes" it's almost depressing
NOTHING about load balancing. Only one comment and that's all.
please talk slower and take a breath between sentences