Understanding Kubernetes Networking Part 3: Calico Kubernetes CNI Provider in depth.

  • Published 25 Feb 2021
  • In this video, I will comprehensively cover the Calico CNI for Kubernetes. I will start with an overview of the Container Network Interface (CNI) architecture before proceeding to cover Calico in depth. I will also cover some fundamental technologies, such as the Border Gateway Protocol (BGP) and the IP-in-IP encapsulation protocol, which Calico uses behind the scenes. And last but not least, I will discuss various Calico network options, such as non-overlay (ToR and single L2) and cross-subnet options.
    Complete playlist for these series: • Kubernetes Networking ...
    **Note: In this course, we will not go through the process of setting up a Kubernetes cluster and installing Calico; I have covered that in another course: "Setup a "Docker-less" Multi-node Kubernetes Cluster On Ubuntu Server" • Setup a "Docker-less" ... .
    In the video, I made a reference to another course of mine for creating a new cluster on CentOS; however, after that video was published, Red Hat announced that CentOS 8 will be the last supported version, so I don't recommend using CentOS.
    Course References:
    Calico install YAML file: raw.githubusercontent.com/oct...
    Calicoctl CLI: docs.projectcalico.org/gettin...
    Keywords: CNI, Flannel, BGP, IPIP, IP in IP
    My Other Videos:
    ► Cilium Kubernetes CNI Provider, Part 1: Overview of eBPF and Cilium and the Installation Process • Cilium Kubernetes CNI ...
    ►Cilium Kubernetes CNI Provider, Part 2: Security Policies and Observability Leveraging Hubble
    • Cilium Kubernetes CNI ...
    ► Cilium Kubernetes CNI Provider, Part 3: Cluster Mesh
    • Cilium Kubernetes CNI ...
    ► Managing Linux Log-ins, Users, and Machines in Active Directory (AD): Part 2- Join Linux Machines to AD:
    • Managing Linux Logins,...
    ► Managing Linux Log-ins, Users, and Machines in Active Directory (AD): Part 1- Setup AD:
    • Managing Linux Logins,...
    ► Sharing Resources between Windows and Linux:
    • Sharing Resources betw...
    ► Kubernetes kube-proxy Modes: iptables and ipvs, Deep Dive:
    • Kubernetes kube-proxy ...
    ►Kubernetes: Configuration as Data: Environment Variables, ConfigMaps, and Secrets:
    • Kubernetes: Configurat...
    ►Configuring and Managing Storage in Kubernetes:
    • Configuring and Managi...
    ► Istio Service Mesh - Securing Kubernetes Workloads:
    • Istio Service Mesh - S...
    ► Istio Service Mesh - Intro
    • Istio Service Mesh (si...
    ► Understanding Kubernetes Networking. Part 6: Calico Network Policies:
    • Understanding Kubernet...
    ► Understanding Kubernetes Networking. Part 5: Intro to Kubernetes Network Policies:
    • Understanding Kubernet...
    ► Understanding Kubernetes Networking. Part 4: Kubernetes Services:
    • Kubernetes services - ...
    ► Understanding Kubernetes Networking. Part 2: POD Network, CNI, and Flannel CNI Plug-in:
    • Understanding Kubernet...
    ►Understanding Kubernetes Networking. Part 1: Container Networking:
    • Video
    ► A Docker and Kubernetes tutorial for beginners:
    • A Docker and Kubernete...
    ► Setup a "Docker-less" Multi-node Kubernetes Cluster On Ubuntu Server:
    • Setup a "Docker-less" ...
    ►Step by Step Instructions on Setting up Multi-Node Kubernetes Cluster on CentOS:
    • Step by Step Instructi...
    ►Setup and Configure CentOS Linux Server on a Windows 10 Hypervisor:
    • Setup and Configure Ce...
    ►Setup NAT (Network Address Translation) on Hyper-V:
    • Setup NAT (Network Add...
    ► Enable Nested Virtualization on Windows to run WSL 2 (Linux) and Hyper-V on a VM:
    • Enable Nested Virtuali...
    ►Setup a Multi-Node MicroK8S Cluster on Windows 10:
    • Setup a Multi Node Mic...
    ► Detailed Windows Terminal, (WSL 2), Linux, Docker, and Kubernetes Install Guide on Windows 10:
    • Detailed Windows Termi...
  • Science & Technology

Comments • 140

  • @bijanpartovi9768 · 3 years ago +17

    Great video! This series has helped demystify Kubernetes POD networking. The breakdown of how Calico works was really brilliant, with animation and network captures. Well done!

  • @geetikabatra · 23 days ago

    This is great! For so many years, every book and person referred to a switch as a layer 2 device; nobody explained it in terms of subnets. Now I am actually able to distinguish between the data link layer and the network layer.

  • @rougearlequin · 3 years ago

    I love the detailed information of the traffic captures. Great tutorial!

  • @rakra4551 · 2 years ago

    You deserve kudos for this simple but in-depth video. Spectacular job.

  • @amitlpawar · 9 months ago

    Very nice video...Thanks !!!

  • @seyyidahmedlahmer1166

    The best networking lectures ever! I really appreciate what you are doing! thanks a lot

  • @mail2sirshendu · a year ago

    Exceptional content... Loved the details you have put in... Kudos!!!

  • @manedurphy · 2 years ago +7

    This series is insanely good! Love the detail in the slides and the live demos.

  • @ramprasad_v · a year ago +2

    Good explanation & I learnt a lot. Thanks

  • @jmmtechnology4539 · a year ago

    Great work, really pieced the detailed info together in a way that makes a lot of sense!

  • @tothetech · a year ago +1

    Wonderful tutorial, thanks a lot for valuable information

  • @user-fg6ng7ej6w · a year ago

    watching 10th video of yours - superb details and simplicity in explaining complex topics. thanks

  • @sSP1878 · a year ago

    One of the best explanations.

  • @user-tz5jz1yy8k · 5 months ago

    It is clear and detailed, great!

  • @anupmahajan1435 · 3 years ago

    Thanks ! Extremely resourceful and informative.

  • @javierpena3097 · 2 years ago

    Excellent video; it has helped improve my understanding of this topic a lot. Thank you for your work!

  • @guents · 2 years ago +1

    Great job again! You are one of the few people on youtube who does k8s videos and actually knows what they're talking about :D

  • @mikhailgorbov5265 · 2 years ago

    There are few such deep videos. Thanks for your hard work.

  • @robertscott5535 · a year ago +1

    Outstanding presentation! Thank you for using the KISS method!

  • @prkrng · a year ago

    Excellent content and thank you

  • @bommuu3524 · 2 years ago

    This is one of the best Kubernetes networking explanations. I really learned and enjoyed it. Thanks for the videos.

  • @khemrajdhondge · 8 months ago

    What great effort went into explaining these details!!! Great breakdown of all the steps.

  • @santosharakere · 11 months ago

    Excellent video as always sir, thank you very much.

  • @alexs4112 · 2 months ago

    I finally understand how BGP works, thanks for explaining!

  • @yasinlachini1791 · 2 years ago

    You are my hero!
    Please create more videos.

  • @robannmateja5000 · a year ago

    Great videos! They were really clear. Thank you!

  • @MixTuBOGirlBoY · 3 months ago

    Thanks a lot for this series! It’s been very very helpful for me

  • @PremKumar-kj5lr · 3 years ago

    Thanks for the detailed explanation, Great Video !!

    • @TheLearningChannel-Tech · 3 years ago

      Thank you very much and glad it was helpful! If you haven't done so please consider subscribing as I'm working on new materials. Many thanks again!

  • @Banjour9 · 2 years ago +3

    Best 1 hour I ever spent studying something on YouTube. Thanks for explaining Calico so wonderfully, especially the parts about Felix, BIRD, and IPIP mode. Could you do a small one on VXLAN mode and its benefits? Thanks!

    • @TheLearningChannel-Tech · 2 years ago

      @Saugato Banerjee , thank you very much for your feedback, much appreciated! I'll consider your suggestion for a future video. Thanks again.

  • @Sid-sl3xk · 2 years ago

    Wow...amazing series...this has helped me so much to understand k8s networking concepts...awesome work..big fan :)

  • @saparapaful · 2 years ago +2

    Wow... how did I miss this video all these days? Amazing explanation, way better than all the paid courses combined. Please do more videos.

    • @TheLearningChannel-Tech · 2 years ago +1

      Thank you for your feedback, much appreciated! I'm working on new materials; please subscribe to be notified when they are released. In the meantime, if you'd like to know more about networking, which is at the core of Kubernetes, please watch my networking playlist: czcams.com/video/B6FsWNUnRo0/video.html. Thanks again.

  • @sumithtm · a year ago

    Excellent.. thank you 🙏

  • @sami_rhimi · 11 months ago

    Great demonstration and very clear explanation thank you

  • @ankitbansal001 · 3 years ago

    Very detailed explanation, Thank-you

  • @oceanmih2646 · 16 days ago

    Great tutorial

  • @manuelmedina24 · 2 years ago

    Among the best... very good job! Thanks for sharing!

  • @igorfedorishchev9128 · a year ago

    Very good material! Keep going!

  • @bvr333 · a year ago

    fantastic, thank you

  • @DecodingGermany · a month ago

    Thanks for such a detailed video.

  • @123dearisit · 2 years ago

    Great Video, very good explanation.

  • @sci3ntist · 3 years ago

    Great video, thank you so much

    • @TheLearningChannel-Tech · 3 years ago +1

      @Ahmad Hamad, thank you very much for your kind feedback! Glad you found it helpful. If you haven't already done so please consider subscribing as I'm working on new materials. Thanks again!

  • @ucdavisvb · 2 years ago +1

    Thank you for the great step by step explanations. Can you provide links to some of your yaml scripts (like hello world) so we can deploy and follow along with our own setup?

    • @TheLearningChannel-Tech · 2 years ago

      @Van Phan, thank you for your feedback! I just uploaded the script file here: github.com/gary-RR/my_CZcams_CNI_And_Calico/blob/main/scripts.sh
      Please make sure to change the IP addresses (nodes and PODS) to match your installation! Hope this helps.

  • @parimi001 · a year ago

    Thanks!

  • @Anand171991 · 6 months ago

    Thanks for these videos, really helpful.
    Do you have (or are you planning) a video on how the kubelet connects to pods for readiness and liveness probes? Does it involve components like DNS, the kube-apiserver, etcd, etc.?

  • @karteekchalla7451 · 4 days ago

    Very good, informative video!
    I have a question. At the 17:00 timestamp, you mentioned that the tunnel interface masquerades the actual source IP of the pod, and the source IP in the inner IP header changes to tunl0's IP. But why is this required? Technically, even if the traffic kept the actual IP address of the source pod and then added the outer IP header with the source IP of kube-node1-cal's eth0 and the destination IP of the destination node kube-master-cal's eth0, the return traffic could still reach the pod on kube-node1-cal, as the destination node will have a BGP route towards the entire pod subnet used on the source node kube-node1-cal.

    • @TheLearningChannel-Tech · 3 days ago

      Hi, the reason is that these pods are not routable outside their host worker nodes. If the destination pod tries to send the response directly to the source pod, its host wouldn't know how to send it as there are no entries in the route table to assist it, so the tunnels play the middleman role facilitating this communication.
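The reply above comes down to routing: each node only knows how to reach whole pod CIDR blocks (learned via BGP), and the kernel picks the most specific route for a destination. A minimal longest-prefix-match sketch in Python; the subnets and next hops here are made-up examples, not taken from the video:

```python
from ipaddress import ip_address, ip_network

# Simplified view of a node's route table after BGP (BIRD) has
# distributed pod CIDRs. All subnets and next hops are hypothetical.
routes = {
    ip_network("172.16.94.0/24"): "tunl0 via 192.168.1.11",   # pods on node1
    ip_network("172.16.219.0/24"): "tunl0 via 192.168.1.12",  # pods on node2
    ip_network("0.0.0.0/0"): "eth0 via 192.168.1.1",          # default route
}

def lookup(dst: str) -> str:
    """Longest-prefix match, as the kernel forwarding table does."""
    dst_ip = ip_address(dst)
    matches = [net for net in routes if dst_ip in net]
    return routes[max(matches, key=lambda net: net.prefixlen)]

print(lookup("172.16.219.7"))  # -> tunl0 via 192.168.1.12
print(lookup("8.8.8.8"))       # -> eth0 via 192.168.1.1
```

The point of the reply is that only these aggregate routes exist; individual pod IPs are not advertised beyond their host, so traffic must go through the tunnel endpoint.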

  • @aylacaliskan5596 · 2 years ago

    excellent tutors :)

  • @manikishoresannareddy8387 · 10 months ago

    Awesome Stuff missed a diamond series

  • @nileshgore5499 · 3 years ago

    Thank you for your efforts in putting all these details together. Can you please help with the calicoctl installation?
    I used the "Install calicoctl as a Kubernetes pod" option
    and alias calicoctl="kubectl exec -i -n kube-system calicoctl -- /calicoctl",
    however I am not able to view the BGP status:
    calicoctl get bgpConfiguration (no output)
    calicoctl get bgpPeer (no output)
    The only command that gives me valid output is "calicoctl get ippool".
    When I run "calicoctl node status" I get: Calico process is not running.

    • @nileshgore5499 · 3 years ago

      After hours of troubleshooting, I was able to get output from "calicoctl node status" using "sudo -E env "PATH=$PATH" calicoctl node status". This was down to my poor Linux knowledge. In the Calico ConfigMap, calico_backend: bird, so BGP is enabled by default.

    • @TheLearningChannel-Tech · 3 years ago +1

      Glad you got it working!

  • @florianbachmann · a year ago

    yeah 🕺

  • @horizonbrave1533 · 2 years ago

    Great video! Do those eth0 interfaces on the pods have to be unique across hosts? So can pod 1 on host 1 have the same 172.16.94.5 IP as pod 1 on host 2?

  • @vinothkumaar2568 · 2 years ago

    Excellent video 😍. Can you tell us how to configure a non-overlay network, i.e., BGP peering with a ToR? Thanks in advance.

    • @TheLearningChannel-Tech · 2 years ago

      Hi, and thanks for your feedback!
      Unfortunately, I don't have the infrastructure required to create and demo this. The key things are that you'll need to disable Calico's default full mesh and peer Calico with your L3 ToR routers.

    • @vinothkumaar2568 · 2 years ago

      @@TheLearningChannel-Tech OK, can you share any resource that could help me out on this?

  • @jayantprakash6425 · 6 months ago

    Slight correction at 38:40:
    Shouldn't the destination IP in the second node's route table be the pod's IP, 172.16.94.5, and not the other end of the tunnel?

    • @TheLearningChannel-Tech · 5 months ago

      Hi, sorry for late response as your post had been flagged as spam and I just saw it. You are correct, that is a typo in the slide. Thanks for noticing it.

  • @youngbae7170 · 2 years ago

    In the demo where you were doing curl from one pod to another pod on a different node, the packet trace showed the source pod IP was the tunnel IP address. Why is the source IP changed to the tunnel IP, and how does the response reach the pod if the return traffic is destined for the tunnel IP?

    • @TheLearningChannel-Tech · 2 years ago

      As I explained in the video, when the POD on node1 calls a service on a POD on node2, the tunl0 interface on node1 does SNAT (Source NAT, meaning it changes the source IP from the POD's to its own IP in order to get through the tunnel to the other side). As far as the destination POD is concerned, the call was made by tunl0 and not the POD; in fact, the destination POD has no knowledge of the source POD. Once the response is received by tunl0 on node1, it does DNAT (Destination NAT), changing the destination IP address to the IP address of the POD that made the call, and the message is delivered.

    • @youngbae7170 · 2 years ago

      @@TheLearningChannel-Tech thank you. I must have missed that part. All clear now. Thank you for the helpful video!
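The SNAT/DNAT bookkeeping described in this thread can be sketched as a toy connection-tracking table. In a real cluster this is done by the kernel's conntrack machinery, not by a user-space dictionary; all IPs and ports below are hypothetical:

```python
# Toy model of the SNAT on the way out and DNAT on the way back.
TUNL0_IP = "172.16.94.0"  # hypothetical tunnel endpoint IP on node1

conntrack = {}  # (tunl0_ip, source_port) -> original pod source IP

def snat(src_ip, sport, dst_ip, dport):
    """Outbound: replace the pod's source IP with tunl0's and remember it."""
    conntrack[(TUNL0_IP, sport)] = src_ip
    return (TUNL0_IP, sport, dst_ip, dport)

def dnat(src_ip, sport, dst_ip, dport):
    """Return traffic: restore the original pod IP as the destination."""
    original_pod = conntrack[(dst_ip, dport)]
    return (src_ip, sport, original_pod, dport)

# Pod 172.16.94.5 curls a pod on another node:
out = snat("172.16.94.5", 40000, "172.16.219.7", 80)
# The server replies to what it thinks is the caller (tunl0):
back = dnat("172.16.219.7", 80, out[0], out[1])
assert back[2] == "172.16.94.5"  # reply delivered to the original pod
```

This mirrors the explanation above: the destination pod only ever sees the tunnel's address, and the tunnel-side state maps the reply back to the calling pod.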

  • @rahulsawant485 · 20 days ago

    Can you please explain how the packet is routed in the case where we get a response from the pod on the master with the destination IP of the tunnel?
    How is the response sent from the tunnel to the respective pod on the worker node?

    • @TheLearningChannel-Tech · 20 days ago

      I'm trying to understand your question, but if you are asking how a call from a pod on the master is routed to a pod on node 1, it is done exactly like the scenario I explained in the video, just routed through the tunnel on node 1. Nothing is different.

    • @rahulsawant485 · 19 days ago

      @@TheLearningChannel-Tech
      Correct, but as soon as it reaches the tunnel on node 1, how does it know which pod to send the response to? In the IP header we captured on the master, there was no information (IP) about the pod on node 1, as it was NATed to node 1's tunnel IP address.
      I am trying to understand how the packet is routed from the node 1 tunnel to the pod on node 1 when the response arrives.

    • @TheLearningChannel-Tech · 18 days ago

      @@rahulsawant485 This is a call/response situation. The tunnel on the calling server masquerades the calling pod's IP address and sends the request to the other side. The pod on the other side (the server) thinks the tunnel made the call and sends the response back to that tunnel. The tunnel is sitting there waiting for the result, and as soon as it gets it, it simply forwards it to the pod.

    • @rahulsawant485 · 18 days ago

      Thank you. The statement "The tunnel is sitting there waiting for the result, and as soon as it gets it, it simply forwards it to the pod" makes it clear.

  • @horizonbrave1533 · 2 years ago

    So does Calico's set of config files just overwrite the networking files for K8s? Like this ippool.yaml file... once you install Calico, does it just ignore the file from Kubernetes? What if you had pods deployed using the base K8s networking scheme and then installed Calico later; would the IPs of the already-deployed pods change to Calico's IPs?

    • @TheLearningChannel-Tech · 2 years ago +1

      You cannot deploy PODs without first installing a CNI provider such as Calico, Cilium, Flannel, etc. The CNI provider manages POD networking and POD IP management. Once the CNI provider creates the PODs through the Container Runtime Interface (CRI), it assigns them IPs and networks them; then, when you create a service, Kubernetes provides a load balancer over the PODs involved in that service. So, in brief, Kubernetes has no role in POD networking.

    • @horizonbrave1533 · 2 years ago

      @@TheLearningChannel-Tech But doesn't K8s have its own basic CNI built in? Or no?
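To illustrate the IP-management part of this thread, here is a toy allocator in the spirit of what a CNI IPAM plugin does with a per-node address block: hand each new pod a free address and reclaim it on deletion. This is an illustrative sketch, not Calico's actual IPAM code; the block size and pod names are made up:

```python
from ipaddress import ip_network

# Toy per-node IPAM: Calico carves the cluster pod CIDR into per-node
# blocks; a sketch of handing out addresses from one such block.
class BlockIPAM:
    def __init__(self, block: str):
        self.free = list(ip_network(block).hosts())  # usable host IPs
        self.assigned = {}                           # pod name -> IP

    def allocate(self, pod: str) -> str:
        ip = str(self.free.pop(0))       # take the next free address
        self.assigned[pod] = ip
        return ip

    def release(self, pod: str) -> None:
        self.free.append(self.assigned.pop(pod))  # return IP to the pool

ipam = BlockIPAM("172.16.94.0/26")    # hypothetical per-node block
print(ipam.allocate("nginx-1"))       # -> 172.16.94.1
print(ipam.allocate("nginx-2"))       # -> 172.16.94.2
```

Releasing a pod's IP puts it back in the pool, which is why, as discussed above, pod IPs are owned by the CNI provider rather than by Kubernetes itself.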

  • @spiraldynamics6008 · a year ago

    At 23:17:
    In the response from the pod on node2, the destination IP is the IP of tunl0 on node1.
    How does tunl0 know which pod on node1 to send this response to? (It has to do a DNAT.)

  • @simo47768 · 3 years ago

    Thank you. Awesome. Can you also do one on the Weave Net CNI?

    • @TheLearningChannel-Tech · 3 years ago

      Thank you @Mohamed Loudiyi! I will consider Weave for a future topic; I need to weigh the demand for this vs. other requests but will definitely put it on the list for consideration. Thank you again for watching, and please consider subscribing if you haven't already done so. Thanks.

  • @jayashankaradm1942 · 2 years ago

    Hi, thanks for the great content.
    I have doubts about namespace concepts w.r.t. container and pod network namespaces in Kubernetes.
    1> Let's say we have a container image for my application (example: a Docker image of my application), which runs in its own network namespace (isolated from the host network namespace) if we run the image using the "docker run" command on the host.
    2> A POD will also have a pod network namespace created by the CNI plugin (e.g., Calico).
    With this, if we create a deployment/POD manifest file for this application and deploy it to a k8s cluster, will this result in two namespaces: one for the container running inside the POD and another for the POD network namespace (which is shared by all the containers inside that POD)?
    Note: I am not considering the host network namespace here, since it will always exist.
    Could you please share some info around this? I am totally confused about container namespaces and POD namespaces.
    In case there is no container namespace as such, then how are the containers inside a POD isolated from each other?
    Thanks

    • @TheLearningChannel-Tech · 2 years ago

      In the Kubernetes environment, if a POD contains multiple containers, they all share the same network stack and the same IP address.

    • @jayashankaradm1942 · 2 years ago

      @@TheLearningChannel-Tech Thanks for the quick response.
      1> In this case, how do you send a request to a specific container from outside (from some other pod)?
      2> One more doubt: when we run the docker run command on a host, does the image run in a different namespace or in the same host network namespace?

  • @rewantasubba5180 · 2 years ago

    Wondering how the bridge fits in with the Calico CNI? It was mentioned in the video.

    • @TheLearningChannel-Tech · 2 years ago

      @Rewanta Subba, it doesn't. Where in the video are you referring to? Thanks.

    • @rewantasubba5180 · 2 years ago

      @@TheLearningChannel-Tech My apologies, I meant in the Flannel one. Is a bridge not needed when using Calico as the CNI?

    • @TheLearningChannel-Tech · 2 years ago

      @Rewanta Subba, Calico operates at layer 3, so no bridges are involved. Note that Calico can be configured in VXLAN mode, but even then the entire frame is sent over a UDP tunnel, so no bridges are involved. Thanks for watching and for your comments!

    • @rewantasubba5180 · 2 years ago

      @@TheLearningChannel-Tech Thanks. Just one more question: how do containers communicate in the case where there are multiple containers in a pod? I assume a bridge connects to the CNI plug-in in any case?

    • @TheLearningChannel-Tech · 2 years ago

      Containers in the same POD share the same network namespace and IP address; they communicate with each other through localhost.
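The localhost point above can be modeled in plain Python: two "containers" sharing one network namespace behave like two threads sharing one loopback interface, so one can dial the other at 127.0.0.1. The container names are, of course, just labels for this sketch:

```python
import socket
import threading

# "Container A" listens on loopback, as a sidecar in the same pod would.
def container_a(server_ready: threading.Event, port_holder: list) -> None:
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))            # any free port on loopback
    srv.listen(1)
    port_holder.append(srv.getsockname()[1])
    server_ready.set()                    # tell "container B" we're up
    conn, _ = srv.accept()
    conn.sendall(b"hello from container A")
    conn.close()
    srv.close()

ready, ports = threading.Event(), []
t = threading.Thread(target=container_a, args=(ready, ports))
t.start()
ready.wait()

# "Container B" reaches its pod-mate over localhost, no CNI involved.
cli = socket.socket()
cli.connect(("127.0.0.1", ports[0]))
msg = cli.recv(1024)
cli.close()
t.join()
print(msg.decode())  # -> hello from container A
```

Since pod-internal traffic never leaves the shared namespace, no bridge or veth is needed for it, which is the point of the reply above.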

  • @nileshgore5499 · 3 years ago

    Problem: after changing to ipipMode: Never, when I ping from pod-1 on host 1 to pod-2 on host 2 (both hosts in the same subnet), the source IP address is seen as the IP address of host 1's interface instead of pod-1's IP address.
    I spent many hours checking why the source IP was changed.
    Solution: I checked the NAT table on the node using "sudo iptables -t nat -L" and found "MASQUERADE all -- 172.17.0.0/16 anywhere".
    The command "firewall-cmd --add-masquerade --permanent" had been issued during the k8s node setup while following the cluster setup video.
    I then issued the command to disable masquerading: "sudo firewall-cmd --remove-masquerade".
    Pod to pod traffic between different hosts now uses the pod IP.

  • @aungsoemoe552 · 2 years ago

    Hi Sir! What is a POD network namespace? Does it mean a Kubernetes namespace? Is the POD network namespace created by the kubelet or the CNI plugin? And is the veth eth0 inside the POD network namespace or the POD? I read somewhere that the CRI needs to pass the container ID and network namespace to the CNI plugin; is that true? Does CRI mean the kubelet? Are a network namespace, a Kubernetes namespace, and a POD network namespace the same? I am a little confused there. Please, which video in your channel should I watch? In the Part 1 video, I saw a network namespace created using the 'ip netns add' command, but in real Kubernetes, is that command or the 'kubectl create namespace' command used? Also in the Part 1 video, a veth pair is attached to a network namespace; I would like to know whether in Kubernetes the veth is attached to the Kubernetes namespace or the POD. I clearly understood the Part 1 network namespace video, but I can't understand what happens in Kubernetes. Sorry for my poor English.

    • @TheLearningChannel-Tech · 2 years ago +1

      @Aungsoe Moe,
      Hi!
      Network namespaces are Linux networking constructs; they manage anything network-related, such as IP addresses, firewall rules, etc. When you create a new VM, Linux automatically creates a default namespace on that VM. In the case of Kubernetes PODs, when a new POD is about to be created, the kubelet instructs the Container Runtime Interface (CRI) to create the container(s) that constitute that POD. Once the container(s) are created, the CRI calls the CNI provider, which then creates the network namespace for the container(s) in that POD and sets up its IP address and gateway so the POD can communicate with the outside world. You can learn more about CRI in my video: czcams.com/video/H9YfKliGuUY/video.html
      To learn more about container networking and POD networking, make sure to watch the following videos. Hope this helps.
      Understanding Kubernetes Networking. Part 1: Container Networking
      czcams.com/video/B6FsWNUnRo0/video.html
      Understanding Kubernetes Networking. Part 2: POD Network, CNI, and Flannel CNI Plug-in.
      czcams.com/video/U35C0EPSwoY/video.html

    • @aungsoemoe552 · 2 years ago +1

      @@TheLearningChannel-Tech Thanks a lot. Sir. I will watch it now. Really Thanks.

  • @Fayaz-Rehman · 2 years ago

    Thank you for the good stuff. BGP does not work in Kubernetes 1.20 and above; you need to make changes in your BGP YAML file, especially the apiVersion.

    • @TheLearningChannel-Tech · 2 years ago

      @Fayaz Rehman, thank you very much for your kind words! Can you provide a link to where you read about BGP support and Kubernetes 1.20+? Thank you.

    • @Fayaz-Rehman · 2 years ago

      @@TheLearningChannel-Tech root@DESKTOP-8O0EMMT:~/calico# k get nodes
      NAME STATUS ROLES AGE VERSION
      master1.example.com Ready control-plane,master 134d v1.20.2
      master2.example.com Ready control-plane,master 134d v1.20.2
      worker1.example.com Ready 134d v1.20.2
      worker2.example.com Ready 134d v1.20.2
      ----------------------------------------------------
      root@DESKTOP-8O0EMMT:~/calico# k explain ippool
      KIND: IPPool
      VERSION: crd.projectcalico.org/v1
      ----------------------------------------------------
      NOTE:- watch for apiVersion change "crd.projectcalico.org/v1"

    • @TheLearningChannel-Tech · 2 years ago

      Hi @Fayaz Rehman, to ensure this does not trip people up: BGP does work in Kubernetes v1.20+. I just set up a cluster for my latest video and found no issues. Thanks.

    • @Fayaz-Rehman · 2 years ago

      @@TheLearningChannel-Tech Got it - Thank you - I tested BGP on Kubernetes 1.20 and 1.21 by replacing apiVersion with "crd.projectcalico.org/v1" and everything works fine. Kubernetes new releases keep on changing apiVersions for different reasons - my bad, I should look for apiVersion change before testing BGP out of box. Thank you for the new great detailed video " Setup a Linux-Windows (Calico based) Hybrid Kubernetes Cluster to Host .NET Containers ".

  • @shamstabrez2986 · a year ago

    I didn't get the part at 6:10 about the Ethernet frame and the wrapping inside it.

    • @TheLearningChannel-Tech · a year ago +1

      PODs on different nodes are in different subnets. When a POD needs to communicate with another POD on a different node, the message (frame) is put inside an outer frame that has the IP address of the destination server. Once the frame gets to the other side, the outer frame is discarded and the message is delivered to the destination POD.

    • @shamstabrez2986 · a year ago

      @@TheLearningChannel-Tech Thank you so much, man, for your reply.
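The "frame inside a frame" idea from this thread is exactly what IP-in-IP (IP protocol number 4) does at the packet level. A minimal sketch in Python that builds an inner pod-to-pod IPv4 header and wraps it in an outer node-to-node header; all addresses are made up, and checksums are left at zero for brevity:

```python
import struct

def ipv4_header(src: str, dst: str, proto: int, payload_len: int) -> bytes:
    """Build a minimal 20-byte IPv4 header (checksum left at 0 for brevity)."""
    def to_bytes(ip: str) -> bytes:
        return bytes(int(octet) for octet in ip.split("."))
    return struct.pack(
        "!BBHHHBBH4s4s",
        0x45, 0, 20 + payload_len,  # version/IHL, TOS, total length
        0, 0,                       # identification, flags/fragment offset
        64, proto, 0,               # TTL, protocol, checksum (0 here)
        to_bytes(src), to_bytes(dst),
    )

# Inner packet: pod to pod, protocol 6 (TCP); payload elided.
inner = ipv4_header("172.16.94.5", "172.16.219.7", 6, 0)

# Outer packet: node to node, protocol 4 (IP-in-IP), carrying the inner one.
outer = ipv4_header("192.168.1.11", "192.168.1.12", 4, len(inner)) + inner

assert outer[9] == 4        # outer protocol field says "IP-in-IP"
assert outer[20:] == inner  # inner packet carried intact as the payload
```

The receiving node sees protocol 4 in the outer header, strips those 20 bytes, and hands the untouched inner packet to its routing table, which is the "outer frame is discarded" step described above.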

  • @SacrificialGoat94 · 2 years ago

    Me this morning: "I'm pretty confident with k8s" - My colleague: "Do you know if this pod runs on host or CNI?" - Me: "YouTube: What is Kubernetes CNI? What is Calico"..... fuck

    • @SacrificialGoat94 · 2 years ago +1

      Thanks for doing this :) There is always more to learn

    • @TheLearningChannel-Tech · 2 years ago

      Lol! I know the feeling, no break for us in this business, there is always something new to learn.

  • @singalong8836 · 3 years ago

    Hi, I am Srini. The session was really awesome; I was able to follow it.
    Let me know if my understanding is right. I am using a single-node k3s cluster with Calico installed. All my pods are connected to eth0 with an IP from the subnet of the IPPool where we set the CIDR. Each pod's eth0 is mapped to a veth (Calico virtual ethernet); in my case the veths are calia10dfdd80c8, cali9b52c469db7, cali4c7f7d12ecf, and cali3443d4d9c67, and these veths are connected to the server host's eth0, which in my case is my host network 192.168.0.173. The calico-node pod is created outside the k3s cluster on the host server, which is why it takes the host IP. Now I have two pods running hello world. From one pod, say "example1-7d5df98f78-xznzg" with pod IP 192.168.106.5, I get into the shell and curl the IP of the other pod, "example2-69648c9799-6m6fj", with pod IP 192.168.106.6. The routing should happen as below. Let me know if my understanding is right?
    Source destination
    192.168.106.5 cali4c7f7d12ecf
    cali4c7f7d12ecf 192.168.0.173(host eth0)
    192.168.0.173 cali3443d4d9c67
    cali3443d4d9c67 192.168.106.6
    root@srini-Virtual-Machine:~# ip a
    1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
    valid_lft forever preferred_lft forever
    2: eth0: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:15:5d:00:a4:35 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.173/24 brd 192.168.0.255 scope global dynamic noprefixroute eth0
    valid_lft 4790sec preferred_lft 4790sec
    inet6 fe80::62c4:de6e:3d15:874e/64 scope link noprefixroute
    valid_lft forever preferred_lft forever
    11: vxlan.calico: mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 66:10:b3:68:05:8a brd ff:ff:ff:ff:ff:ff
    inet 192.168.106.0/32 scope global vxlan.calico
    valid_lft forever preferred_lft forever
    inet6 fe80::6410:b3ff:fe68:58a/64 scope link
    valid_lft forever preferred_lft forever
    14: calif8dc50caec7@if3: mtu 1450 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
    valid_lft forever preferred_lft forever
    15: cali1d07a8255da@if3: mtu 1450 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
    valid_lft forever preferred_lft forever
    16: calia10dfdd80c8@if3: mtu 1450 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
    valid_lft forever preferred_lft forever
    17: cali9b52c469db7@if3: mtu 1450 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
    valid_lft forever preferred_lft forever
    18: cali4c7f7d12ecf@if3: mtu 1450 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 4
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
    valid_lft forever preferred_lft forever
    19: cali3443d4d9c67@if3: mtu 1450 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 5
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
    valid_lft forever preferred_lft forever
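    (Editor's aside: the `mtu 1450` on vxlan.calico and the cali* interfaces above comes from VXLAN encapsulation overhead — the outer IP, UDP, and VXLAN headers plus the encapsulated inner Ethernet header add up to 50 bytes, leaving 1450 for the inner packet on a 1500-byte physical link. A quick sanity check:)

    ```shell
    # VXLAN overhead per packet: outer IP(20) + outer UDP(8) + VXLAN(8)
    # headers, plus the encapsulated inner Ethernet header(14) = 50 bytes.
    overhead=$((20 + 8 + 8 + 14))
    # Inner (pod-facing) MTU on a standard 1500-byte link:
    pod_mtu=$((1500 - overhead))
    echo "$pod_mtu"
    ```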
    root@srini-Virtual-Machine:~# kubectl get pods --all-namespaces -o wide
    NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    tigera-operator tigera-operator-7df96bbd5-2rsgr 1/1 Running 0 118m 192.168.0.173 srini-virtual-machine
    calico-system calico-typha-86558d66f9-d5q5l 1/1 Running 0 118m 192.168.0.173 srini-virtual-machine
    calico-system calico-node-v7n7m 1/1 Running 0 118m 192.168.0.173 srini-virtual-machine
    kube-system coredns-854c77959c-pg7r9 1/1 Running 0 119m 192.168.106.1 srini-virtual-machine
    kube-system metrics-server-86cbb8457f-6pjlc 1/1 Running 0 119m 192.168.106.2 srini-virtual-machine
    kube-system local-path-provisioner-5ff76fc89d-gg8nl 1/1 Running 0 119m 192.168.106.4 srini-virtual-machine
    calico-system calico-kube-controllers-5ccf85d9c8-nbflq 1/1 Running 0 118m 192.168.106.3 srini-virtual-machine
    default example1-7d5df98f78-xznzg 1/1 Running 0 41m 192.168.106.5 srini-virtual-machine
    default example2-69648c9799-6m6fj 1/1 Running 0 30m 192.168.106.6 srini-virtual-machine
    root@srini-Virtual-Machine:~# route -n
    Kernel IP routing table
    Destination Gateway Genmask Flags Metric Ref Use Iface
    0.0.0.0 192.168.0.1 0.0.0.0 UG 100 0 0 eth0
    169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 eth0
    192.168.0.0 0.0.0.0 255.255.255.0 U 100 0 0 eth0
    192.168.106.1 0.0.0.0 255.255.255.255 UH 0 0 0 calif8dc50caec7
    192.168.106.2 0.0.0.0 255.255.255.255 UH 0 0 0 cali1d07a8255da
    192.168.106.3 0.0.0.0 255.255.255.255 UH 0 0 0 calia10dfdd80c8
    192.168.106.4 0.0.0.0 255.255.255.255 UH 0 0 0 cali9b52c469db7
    192.168.106.5 0.0.0.0 255.255.255.255 UH 0 0 0 cali4c7f7d12ecf
    192.168.106.6 0.0.0.0 255.255.255.255 UH 0 0 0 cali3443d4d9c67
    root@srini-Virtual-Machine:~# calicoctl get ippool default-ipv4-ippool -o yaml
    apiVersion: projectcalico.org/v3
    kind: IPPool
    metadata:
      creationTimestamp: "2021-05-23T05:40:53Z"
      name: default-ipv4-ippool
      resourceVersion: "776"
      uid: 7381c41f-c9d7-4802-a4df-c311669239a5
    spec:
      blockSize: 26
      cidr: 192.168.0.0/16
      ipipMode: Never
      natOutgoing: true
      nodeSelector: all()
      vxlanMode: CrossSubnet
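    (Editor's note: the route table above is the key detail — Calico installs a /32 host route per pod pointing at that pod's cali* veth, so the kernel delivers same-node pod-to-pod traffic straight to the destination veth. A small sketch of that lookup against a saved copy of the `route -n` output; live you would pipe `route -n` itself, and `iface_for_pod` is just a hypothetical helper name for this sketch:)

    ```shell
    # Map a pod IP to its cali* interface by matching the /32 host route
    # (genmask 255.255.255.255). Field 8 of `route -n` is the interface.
    iface_for_pod() {
      awk -v ip="$1" '$1 == ip && $3 == "255.255.255.255" { print $8 }'
    }

    # Two of the host routes from the `route -n` output above:
    routes='192.168.106.5 0.0.0.0 255.255.255.255 UH 0 0 0 cali4c7f7d12ecf
    192.168.106.6 0.0.0.0 255.255.255.255 UH 0 0 0 cali3443d4d9c67'

    printf '%s\n' "$routes" | iface_for_pod 192.168.106.6
    ```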

    • @TheLearningChannel-Tech · 3 years ago

      @Sing Along, Hi Srini, that is mostly correct: the communication from pod1 to pod2 goes from pod1's eth0 to its companion veth on the host, then the host's routing table sends it directly to pod2's veth (for same-node traffic the host's eth0 is not actually traversed), and it finally arrives at pod2 through its eth0. Hope this makes sense. Thanks again for your kind words and thanks for watching!

    • @singalong8836 · 3 years ago

      ​@@TheLearningChannel-Tech Hey thanks for the quick response.
      I was thinking the same, but here is what's actually happening: when I capture with tshark on the host's eth0 (sudo tshark -i eth0 -V -Y "http") after making an HTTP call from pod1's shell to pod2's hello-world app, I don't see any calls landing there. But if I capture on the veth (sudo tshark -i cali3443d4d9c67 -V -Y "http"), I do see the calls.

    • @TheLearningChannel-Tech · 3 years ago

      @@singalong8836 In this case I think it's because the call never leaves the host (as opposed to calling a pod on a different node).

    • @singalong8836 · 3 years ago

      @@TheLearningChannel-Tech Hi, both pods, pod1 and pod2, are running on the same node. I am using a k3s single-node cluster with Flannel disabled and Calico installed. Also, when I curl pod2 from pod1's shell, I do get a 200 response.
      I have shared my config, network interface details, route table, and pod/node details in my comment above. Please let me know if I am doing something wrong.
      To install Calico on k3s, I followed this link:
      docs.projectcalico.org/master/getting-started/kubernetes/k3s/quickstart

    • @TheLearningChannel-Tech · 3 years ago · +1

      Sorry, maybe I'm not quite understanding your posts. Are you having an actual issue, or are you wondering why tshark is not capturing the pod-to-pod traffic on the host's eth0? If the latter: since all of this communication occurs internally on that single host, it never traverses the host's eth0, so it is not captured there — this is not a problem. If you have an actual issue, could you explain in more detail what it is? Cheers!