[ Kube 31 ] Set up Nginx Ingress in Kubernetes Bare Metal

  • Published 17 Mar 2019
  • In this video, I will show you how to set up an Ingress controller using Nginx in your Kubernetes cluster. Traffic routing in a Kubernetes cluster is taken care of automatically if you use one of the cloud providers. But if your cluster is on bare metal, you are left with only a few choices.
    In this demo, all the virtual machines I used are LXC containers.
    Github: github.com/justmeandopensourc...
    Nginxinc Ingress: github.com/nginxinc/kubernete...
    For any questions/issues/feedback, please leave me a comment and I will get back to you at the earliest. If you liked the video, please share it with your friends and do not forget to subscribe to my channel.
    Hope you found this video useful and informative. Thanks for watching this video.
    If you wish to support me:
    www.paypal.com/cgi-bin/webscr...
    #nginxingress #kubernetesingress #learnkubernetes #justmekubernetes
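
    The setup demonstrated here is an external HAProxy VM load balancing to every worker node on ports 80/443, where the Nginx ingress controller (deployed as a DaemonSet) is listening. Below is a minimal sketch of what such an HAProxy configuration looks like; the worker node names and IP addresses are placeholders, not the values used in the video.

        # haproxy.cfg (fragment) -- proxy HTTP traffic to the ingress controller
        # pods listening on every worker node (IPs are illustrative placeholders)
        frontend http_front
            bind *:80
            mode tcp
            default_backend k8s_workers_http

        backend k8s_workers_http
            mode tcp
            balance roundrobin
            server kworker1 172.16.16.201:80 check
            server kworker2 172.16.16.202:80 check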

Comments • 585

  • @tonytwostep_
    @tonytwostep_ Před 4 lety +8

    Thanks a ton for the tutorial. Got it up and running rather quickly with the examples you provided, now to take that knowledge into my own ingress ventures.

    • @justmeandopensource
      @justmeandopensource  Před 4 lety +1

      Hi Anthony, many thanks for watching this video and taking time to comment. Cheers.

    • @vinaytalla2905
      @vinaytalla2905 Před rokem

      Hi,
    How are the requests from HAProxy flowing to port 80 on the worker nodes? In my case, when I configured the HAProxy backend with the worker IPs on port 80, it reported 'connection refused' on port 80. How does an ingress controller open port 80 on the worker nodes? Any suggestions?

  • @Kumar-zq6xl
    @Kumar-zq6xl Před 2 lety +2

    Your videos are the best. Easy to follow and very clear. The context you provide at the beginning of each video is perfect. I have learned so much from your instructions in the last couple of weeks while setting up my K8s infrastructure. Thanks a lot for such great-quality content.

  • @dremedley970
    @dremedley970 Před 4 lety +4

    Thank you so much for all your tutorials. You do a fantastic job. I look forward to continue learning from you.

  • @felipecaetano15
    @felipecaetano15 Před rokem +1

    AMAZING work, your didactic is on point and doing it hands-on is exactly what I needed. Gonna watch the whole playlist for sure!

    • @justmeandopensource
      @justmeandopensource  Před rokem

      Hi Felipe, many thanks for watching. Bear in mind that some of the videos might be outdated; I am relying on viewers to tell me whether something is broken so that I can do a follow-up video with the latest versions of the software. Cheers.

  • @darlingtonmatongo9436
    @darlingtonmatongo9436 Před rokem +1

    This is a brilliant tutorial. I really enjoyed the simple, nicely paced, step-by-step approach. Great work.

  • @sivasankarramani6678
    @sivasankarramani6678 Před 4 lety +1

    Glad to hear that you are making these great videos for us💐💐

  • @jamesaker7048
    @jamesaker7048 Před 4 lety +2

    This is a great video that explains nginx ingress very well. Some viewers might be trying to run this on VPS servers in the cloud using the new lxd/lxc version and can't get haproxy to work. You run lxc config device add haproxy myport80 proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80 (note: haproxy in the command is the name of the lxc container). So if you followed the video and haproxy is not forwarding the traffic, you may need this command, or check whether a firewall is enabled. Another helpful command, from inside the haproxy container, is haproxy -c -V -f /etc/haproxy/haproxy.cfg, which checks that your configuration is valid before starting/restarting the haproxy service. Thank you for putting this video series together; it is one of the best ones out here.
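
    The commands mentioned in the comment above, laid out as they would be run (the container name "haproxy" and port 80 are taken from the comment; a similar proxy device would presumably be needed for port 443 if you also expose HTTPS):

        # Forward port 80 on the host into the LXC container named "haproxy"
        lxc config device add haproxy myport80 proxy \
            listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80

        # Validate the HAProxy configuration inside the container before restarting
        haproxy -c -V -f /etc/haproxy/haproxy.cfg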

    • @justmeandopensource
      @justmeandopensource  Před 4 lety +2

      Hi James, many thanks for sharing this info. It will really be of great help to others looking for a proper implementation. Cheers.

  • @vitusyu9583
    @vitusyu9583 Před 2 měsíci +1

    Interesting, informative and really illuminating for me as a K8s learner! Thanks!

  • @diegosantanadeoliveira9467

    Nice video, bro!! Your content on Kubernetes bare metal helped me a lot.

  • @arunsarma7997
    @arunsarma7997 Před 2 lety

    All your videos are excellent. Keep up your good job

  • @rsrini7
    @rsrini7 Před 4 lety +2

    Excellent! Simply wow. நன்றி (Thank you)

  • @godfreytan1001
    @godfreytan1001 Před 3 lety +1

    Great complete ingress controller. Thank you.

  • @alebiosulukmon3445
    @alebiosulukmon3445 Před 4 lety +2

    Great tutorial, clearly explained

  • @balakrishnag1707
    @balakrishnag1707 Před rokem +1

    Thanks a lot Bro for this tutorial. all my questions are clear.

  • @christophea.2145
    @christophea.2145 Před 4 lety +1

    Thanks a lot for this video; very clear, it helps me a lot !

  • @yohansutanto4195
    @yohansutanto4195 Před 2 lety +1

    Your channel is life saver

  • @trigun539
    @trigun539 Před 3 lety +3

    Great content, greatly appreciate all the kubernetes tutorials!

  • @senhajirhazihamza7718
    @senhajirhazihamza7718 Před rokem +1

    Wonderful work

  • @martin_mares_cz
    @martin_mares_cz Před 4 lety +9

    Best ingress tutorial I've ever seen. Great, man!

    • @justmeandopensource
      @justmeandopensource  Před 4 lety +1

      Hi Martin, thanks for watching. Cheers.

    • @martin_mares_cz
      @martin_mares_cz Před 4 lety +1

      @justmeandopensource Are you planning to release a Traefik v2 tutorial? There are big changes compared to v1, and there are also problems with the Kubernetes 1.16.2 API version where many things are deprecated. I can't get Traefik v2 up and running as a DaemonSet. Thank you in advance for your reply.

    • @justmeandopensource
      @justmeandopensource  Před 4 lety +1

      @@martin_mares_cz I don't have Traefik v2 on my list but will add it. I have videos scheduled for the next two months, and a lot more videos in the pipeline to be recorded. Thanks.

  • @chandrashekhar152
    @chandrashekhar152 Před 2 lety +1

    I loved it. Please post videos frequently with real-time scenarios, and I request you to do videos on Jenkins as well.

  • @rayehan30
    @rayehan30 Před rokem +1

    You're simply the best🤟

  • @nanocaf
    @nanocaf Před 4 lety +1

    Very good tutorial, thank you for sharing. Tested on a Kubernetes cluster that runs behind Rancher 2 on Hetzner servers. Next step is to test Traefik.

  • @dilamartins
    @dilamartins Před 4 lety +2

    Dude, you helped me a lot, thanks!

  • @puyansude
    @puyansude Před rokem +1

    Great demo!! Thank You

  • @aryadiadi6888
    @aryadiadi6888 Před 3 lety +1

    Great tutorial, thank you.

  • @larperdixon723
    @larperdixon723 Před 4 lety +1

    excellent video, thank you!

  • @gbrt9569
    @gbrt9569 Před 4 lety

    Great video Venkat

  • @HamitKumru
    @HamitKumru Před 2 lety +1

    Thank for sharing your knowledge

  • @billmcguire6128
    @billmcguire6128 Před 5 lety +1

    Great video - thanks!

  • @ranjbar_hadi
    @ranjbar_hadi Před 3 lety +1

    amazing video

  • @fabianbrash4356
    @fabianbrash4356 Před 4 lety +1

    Great vids!!

  • @mateuszgelmuda2656
    @mateuszgelmuda2656 Před 4 lety

    Great tutorial !

  • @nikhilwankhade3953
    @nikhilwankhade3953 Před 2 lety +1

    Excellent 👍

  • @devopssimon
    @devopssimon Před 3 lety +1

    Another brilliant video, very helpful and well explained. Thank you

  • @mikedqin
    @mikedqin Před 4 lety +2

    Hello Sir, I just watched your video, will follow your instructions to try it tomorrow, and get feedback to you. From what I've seen, you've made an excellent tutorial on Ingress Controller - application load balancer, and HAProxy - network load balancer for bare-metal Kubernetes cluster. That's exactly what I am looking for at this moment. You're very hands-on. Great Jobs. Subscribed. Thank you.

  • @aromals3871
    @aromals3871 Před 3 lety +1

    Awesome!! Thanks a ton!

  • @atostrife
    @atostrife Před 4 lety +2

    This video is magic. The best explanation of an Ingress controller for Kubernetes bare metal!!

  • @marcosfelipecarvalhonazari1509

    Thank you so much!!!!!

  • @taiwoesoimeme2383
    @taiwoesoimeme2383 Před 9 měsíci +1

    Nice Vid

  • @msahsan1
    @msahsan1 Před 3 lety +1

    Awesome thanks

  • @royals6413
    @royals6413 Před rokem +1

    Hello thank you for this video !
    Do we need to link the load balancer only to the nodes that have an ingress controller ? Or to all of them ?

  • @rikschaaf
    @rikschaaf Před 3 lety +1

    Just found your videos on Kubernetes, Kubespray and nginx ingresses. You are very good at explaining the default behaviours, which gives the highest chance of success.
    The nginx docs explain that in the default server secret file they provide a default self-signed cert and key, and that they recommend using your own certificate. Things to note: the cert and key are base64 encoded (again), so keep this in mind when you add the cert to the default-server-secret.yaml file.
    Also, if you are using Windows to generate the keys, make sure you remove the CR characters (^M) before base64 encoding the cert and key. Otherwise you'll get an error when trying to start the nginx-ingress pods.
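
    A sketch of the preparation steps described above, assuming a Linux shell and a self-signed certificate; the file names are illustrative, and the tls.crt/tls.key fields follow the layout of the nginxinc default-server-secret.yaml referenced in the comment:

        # Generate a self-signed cert/key pair (subject is illustrative)
        openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
            -keyout default.key -out default.crt -subj "/CN=NGINXIngressController"

        # Strip Windows CR characters if the files were created on Windows
        sed -i 's/\r$//' default.crt default.key

        # Base64-encode the cert and key for the Secret's tls.crt / tls.key fields
        base64 -w0 default.crt
        base64 -w0 default.key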

  • @saibabaachanna5542
    @saibabaachanna5542 Před rokem

    Thank you very much ❤ for providing such a good video.
    In this video you used HAProxy for routing and exposed it over a private IP; how will clients reach the nginx application using a domain name?

    • @justmeandopensource
      @justmeandopensource  Před rokem

      That will be via the ingress route resource which defines the hostname to service mapping. Thanks for watching.
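
      A minimal sketch of such a hostname-to-service mapping, using the current networking.k8s.io/v1 Ingress API (the video used the older extensions/v1beta1 form, and all names below are illustrative):

          apiVersion: networking.k8s.io/v1
          kind: Ingress
          metadata:
            name: nginx-app-ingress
          spec:
            rules:
            - host: nginx.example.com
              http:
                paths:
                - path: /
                  pathType: Prefix
                  backend:
                    service:
                      name: nginx-app-svc
                      port:
                        number: 80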

  • @xiuhuazhai1168
    @xiuhuazhai1168 Před 3 lety +2

    Great video, thanks. One quick question: is the ingress controller pod exposed to the HAProxy directly? I don't see you use "hostNetwork: true"?

    • @justmeandopensource
      @justmeandopensource  Před 3 lety +1

      Hi Xiuhua, thanks for watching. If you do kubectl describe on the ingress controller daemonset, you will see that it binds to the host port on the underlying worker node.
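
      The relevant part of the ingress controller DaemonSet's pod spec looks roughly like the fragment below (paraphrased from the nginxinc daemon-set/nginx-ingress.yaml manifest; check the repo for the authoritative version). It is hostPort, rather than hostNetwork, that exposes ports 80/443 on each worker node:

          ports:
          - name: http
            containerPort: 80
            hostPort: 80
          - name: https
            containerPort: 443
            hostPort: 443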

  • @mehdibakhtyari5861
    @mehdibakhtyari5861 Před 22 dny

    Thanks in advance for the great tutorials. By any chance, is there any load balancer that can support Diameter and be used inside a K8s cluster?

  • @Goalkickers21
    @Goalkickers21 Před 3 lety +2

    Hi There,
    Nice video bro!
    I have one question: on the HAProxy you configure all the IP addresses of the worker nodes.
    What if you scale the cluster out or in (add or remove worker nodes)? Do you then have to manually change the configuration on the HAProxy?
    Also, if the worker nodes are deployed via DHCP and somehow the IPs change, then the config also needs to be changed. Do you have a solution for this?
    Thank you very much.

  • @adityahpatel
    @adityahpatel Před 3 měsíci

    Fantastic video. I've seen your MetalLB videos too. My question is: if I deploy nginx-ingress-controller as a DaemonSet on the 4 physical nodes of my cluster at home, expose the ingress deployment as a NodePort service on port 31111, and then attach HAProxy to this, why do I need MetalLB to load balance?

  • @feezankhattak1573
    @feezankhattak1573 Před 2 lety

    Thanks for the video. Can you tell me what the other ways of creating a cluster are, instead of LXC?

  • @ajitsingh4346
    @ajitsingh4346 Před 5 lety +2

    Hello Venkat, again, an excellent explanation of the topic. I read the documentation about ingress where they mentioned nginx, ingress controller, load balancer etc. It was all with respect to some cloud provider and not about a bare-metal k8s cluster. It was all so confusing.
    Your component and flow diagram made the concept crystal clear. Since it is bare metal, I can practice in my home lab. Today, your video quality was max 360p, so it was difficult to read the text; maybe because the video was just uploaded. Tomorrow, I will do hands-on in my home lab.
    One suggestion on the demo container/pod. I generally use the hashicorp/http-echo image to show different pods, or different containers in a single pod, as below. It might make your demos easier than using nginx.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: fruit-deployment
      labels:
        app: fruit
    spec:
      replicas: 4
      selector:
        matchLabels:
          app: fruit
      template:
        metadata:
          labels:
            app: fruit
        spec:
          containers:
          - name: apple-app
            image: hashicorp/http-echo
            args:
            - "-text=response from apple-app"
            - "-listen=:6000" # default container port is 5678
            ports:
            - containerPort: 6000
          - name: banana-app
            image: hashicorp/http-echo
            args:
            - "-text=response from banana-app"
            - "-listen=:6001" # default container port is 5678
            ports:
            - containerPort: 6001

    • @justmeandopensource
      @justmeandopensource  Před 5 lety +1

      Hi Ajit, Thanks for the http-echo container suggestion. Looks good.
      I just checked my video and I can see all the video playback qualities. I can switch to 720p or 1080p for high resolution. Have you checked if you can change the video quality setting? Depending on your internet connection speed, youtube will automatically select appropriate quality.

    • @ajitsingh4346
      @ajitsingh4346 Před 5 lety +1

      @@justmeandopensource Strange. Regarding resolution, I watched your video in the Chromium browser on Windows, and there your video has a max resolution of 360p whereas other channels have the normal higher resolutions. I checked your video on Google Chrome and it has the higher resolution, 1080p. I will use that browser :)

    • @justmeandopensource
      @justmeandopensource  Před 5 lety +1

      Yeah, just googled the issue and there were lot of discussions around this where all the video qualities are not listed on some browsers.

  • @knightrider6478
    @knightrider6478 Před 4 lety +1

    Hello Venkat, I have a question regarding the HAProxy. Can this load balancer not be provisioned as a pod inside the k8s cluster?
    I saw that you made a separate VM for it.
    I'm asking because I use VPSs for my k8s cluster.
    Thanks and regards.

    • @justmeandopensource
      @justmeandopensource  Před 4 lety +1

      Hi Knight,
      Thanks for watching this video. Although I haven't tried it, Haproxy can be provisioned inside the cluster itself as a pod. But it involves lots of configurations to make it work. Deploying a haproxy as a container/pod isn't difficult. Then you will have to create a service for that to expose it outside of the cluster. Lots of ports mappings involved.
      The below link might give you some direction.
      www.bluematador.com/blog/running-haproxy-docker-containers-kubernetes
      You mentioned you are using VPS. You can install haproxy on the master node itself, and don't have to use a separate VM for it.
      Thanks

  • @rmnobarra
    @rmnobarra Před 4 lety +1

    Nice!!

  • @nevink3123
    @nevink3123 Před 4 lety +1

    Hi Venkat, thanks for this great video. One question though: I still could not understand how haproxy is able to connect to port 80 on the worker nodes. We only have the ClusterIP service created, and the ingress resource has routing in it pointing to the ClusterIP service. There is no NodePort service or LoadBalancer service to access it from outside the Kubernetes cluster. I was trying to get it working by following your video. If I check get all -n nginx-ingress after the steps, I see only the nginx-ingress pods and the daemonset in the nginx-ingress namespace. get all (without a namespace) only gives the nginx pod and the ClusterIP service pointing to the nginx pod. I am wondering how it works without a NodePort service or a LoadBalancer service running to connect to the worker node from haproxy? As per the haproxy configuration, it directly uses the IP addresses of the worker nodes and port number 80. Looks like I am missing something ...

    • @justmeandopensource
      @justmeandopensource  Před 4 lety +1

      Hi Nevin, thanks for watching. Have a look at the output of kubectl describe daemonset . The ingress controller pods are deployed as daemonset, so there will be one ingress controller pod on each worker node. They use hostport to bind to port 80 and 443. This will be clear when you look at the kubectl describe output. Cheers.

  • @LongNguyen-ur9co
    @LongNguyen-ur9co Před 4 lety +1

    Excellent session! I did come across a small snag when deploying nginx-ingress, in that creating the DaemonSet (kubectl apply -f daemon-set/nginx-ingress.yaml) as shown in your demo works. On the other hand, if I choose to create a Deployment (kubectl apply -f deployment/nginx-ingress.yaml) then all requests via HAProxy fail with 503! Is there a hack that needs to be applied? Thank you Venkat

    • @justmeandopensource
      @justmeandopensource  Před 4 lety +1

      Hi, thanks for watching. I haven't actually tried the deployment type. Always gone for the daemonset as my dev cluster has only few nodes. I think its the haproxy configuration that needs to be tweaked but not entirely sure.

  • @lachopaez3080
    @lachopaez3080 Před 3 lety +1

    Hi Venkat,
    Great video! I am planning to use a public DNS service such as noip.com and set up port forwarding on a router to reach a microservice backend. How can the HAProxy reach the service if the service has an internal IP from the cluster?

  • @Tshadowburn
    @Tshadowburn Před 4 lety +1

    Hello Venkat, it is me again :) with yet another question :). I was wondering how, with Kubernetes, I can send a request to a pod from another pod. I have a web service in Python with Flask inside a container that I can reach thanks to a NodePort, but I want that web service to also send a request to TensorFlow Serving (a container that, when requested, returns a series of probabilities). Should I expose a service for the TF Serving container too?

    • @justmeandopensource
      @justmeandopensource  Před 4 lety +1

      Yeah, that's the way to access another pod by exposing it as a service for the TensorFlow pod.

  • @rayehan30
    @rayehan30 Před rokem +1

    Thanks!

    • @justmeandopensource
      @justmeandopensource  Před rokem

      Hi Rayehan, many thanks for watching and for your contribution. Much appreciated.

  • @michelbisschoff6993
    @michelbisschoff6993 Před 5 lety +2

    Hi Venkat, thank you for another excellent video. I got it working on your vagrant environment. I also tried it on the cluster as created via the-hard-way (Kelsey Hightower). But that didn't work. Looks like iptables are blocking port 80. Just wondering how iptables are setup (probably done by kube-proxy). Hard to find this info. Maybe a suggestion to make a video about network setup and the protocols, ports and their flow and how iptables are setup. But again, thank you for taking time to make these videos and sharing them with us.

    • @justmeandopensource
      @justmeandopensource  Před 5 lety +1

      Hi Michel, thanks for watching this video. Thats interesting. When I get some time I will test ingress on the cluster set up the hard way. Thanks.

    • @michelbisschoff6993
      @michelbisschoff6993 Před 5 lety +1

      Hi Venkat, I've found the issue. It is actually quite simple. I got triggered when I was doing your video about Prometheus on my "the-hard-way" cluster. I noticed the prometheus-node-exporter pods got the IP addresses of the worker nodes. Normally it is more secure to have the pod IP address range used for the pods. So I noticed that if the hostNetwork parameter is set to true, the IP addresses of the hosts are used! So I changed the ingress file daemon-set/nginx-ingress.yaml by adding this parameter and now it all works!!!
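
      The change described above, as a fragment of the DaemonSet's pod template (a sketch only; the rest of the daemon-set/nginx-ingress.yaml manifest is left unchanged):

          spec:
            template:
              spec:
                hostNetwork: true   # pods share the worker node's network namespace,
                                    # so the controller listens on the node's ports 80/443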

    • @justmeandopensource
      @justmeandopensource  Před 5 lety +1

      @@michelbisschoff6993 Thats great. I learnt something new today. Thanks for that. Cheers.

  • @josepvindas
    @josepvindas Před 4 lety

    Very helpful tutorial, just one quick question. I assume that since you have the cluster running on containers, the reason you are able to execute kubectl commands from the host machine is some sort of rule on your .zshrc file? If so, could you please explain how that is accomplished? I tried using an alias such as alias kubectl='lxc exec kmaster kubectl'. And while this works just fine for listing resources and what not, the forwarding of the command breaks when you need to add flags. So while I can run 'kubectl get nodes', if I try to run 'kubectl get nodes -o wide' it breaks.

    • @justmeandopensource
      @justmeandopensource  Před 4 lety

      Hi Jose, thanks for watching this video. I covered the kubeconfig details in various other cluster provisioning videos. I had an assumption that viewers watched all my previous videos. That's why I don't repeat all the information in every video.
      So you are using lxc containers for kubernetes cluster?
      I copy the /etc/kubernetes/admin.conf file from the master node to my host machine as $HOME/.kube/config.
      I also download the kubectl binary and move it to /usr/local/bin.
      Hope this helps. If you are stuck, give me a shout again.
      Thanks.
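
      A sketch of that workflow, assuming the LXC container is named kmaster as in this video series (the kubectl download URL is the generic upstream one, not necessarily what was used in the video):

          # Copy the admin kubeconfig from the kmaster container to the host
          mkdir -p ~/.kube
          lxc file pull kmaster/etc/kubernetes/admin.conf ~/.kube/config

          # Install kubectl on the host
          curl -LO "https://dl.k8s.io/release/$(curl -Ls https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
          chmod +x kubectl && sudo mv kubectl /usr/local/bin/

          # If you prefer the alias approach, put the flags after "--" so lxc
          # does not try to parse them itself:
          lxc exec kmaster -- kubectl get nodes -o wide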

  • @0x037
    @0x037 Před 5 lety +1

    This is a great tutorial! One question: how would one easily define a default route to send users to if they ask for something that doesn't exist? It looks like by default it just returns a 404. Is there a way to make it redirect / show something else?

    • @justmeandopensource
      @justmeandopensource  Před 5 lety +3

      Hi, thanks for watching this video. You can do that by configuring the default backend. If there is no rule specified for a URL or path of a domain, then the ingress controller will redirect the traffic to the default backend service. Thanks
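
      A minimal sketch of a default backend, using the current networking.k8s.io/v1 Ingress API (the service name is illustrative, and whether spec.defaultBackend is honoured depends on the controller you run; the video used the older API where this was the backend: field):

          apiVersion: networking.k8s.io/v1
          kind: Ingress
          metadata:
            name: default-backend-ingress
          spec:
            defaultBackend:
              service:
                name: custom-404-svc
                port:
                  number: 80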

  • @seshreddy8616
    @seshreddy8616 Před 3 lety +1

    Thanks Venkat. It's such great stuff that I've missed for this long.
    In your k8s installation process you're installing the latest k8s version; you might want to pin it to a specific version, something like below. I'm using Ubuntu, so it looks as below.
    # Install Kubernetes
    echo "[TASK 9] Install Kubernetes kubeadm, kubelet and kubectl"
    apt-get install -y kubeadm=1.17.1-00 kubelet=1.17.1-00 kubectl=1.17.1-00
    apt-mark hold kubelet kubeadm kubectl

    • @justmeandopensource
      @justmeandopensource  Před 3 lety +1

      Hi Sesh, thanks for watching. Yes, I could have locked it down to a specific version. I have different Kubernetes setup videos and I think on some of them I do lock it down to a known working version of Docker and Kubernetes. I will have to update the GitHub docs. Cheers.

    • @seshreddy8616
      @seshreddy8616 Před 3 lety

      Thanks Venkat. Yeah, I realised it later while covering your other videos.
      Also, I have a scenario here and am not sure if you've covered it; if so, could you please point me to the correct clip.
      I have a k8s cluster (with 3 nodes) running in my local wifi network. The Vagrant network looks as below. I've chosen this way because I've built another db server (postgres) as a standalone box running outside k8s in the same wifi network (192.168.1.x subnet). I'd like the pods to communicate with it, and it works fine from a pod using the IP and port.
      If I try to create a headless service something like below, it didn't work. I use the service name from my pod. I'd like to use a name instead of the IP of my db server.
      Any suggestions please.
      apiVersion: v1
      kind: Service
      metadata:
        name: postgre
      spec:
        type: ExternalName
        externalName: 192.168.1.13
      Vagrantfile:
      kmaster.vm.network "public_network", bridge: "en0: Wi-Fi (Wireless)", ip: "192.168.1.30"

  • @IT_Consultant
    @IT_Consultant Před rokem

    Thanks very much for your tutorials. I have a question: I'm struggling to deploy an ingress controller of type LoadBalancer and let HAProxy give it an IP and connect to it.

  • @srikanthv8108
    @srikanthv8108 Před 2 lety

    Hi Venkat, thanks for the wonderful video. I have created instances in GCP and ingress setup is done. Please advise me how to check the setup is working or not in GCP.
    I have used 1 HAproxy server, 1 master and 2 worker nodes.

  • @ovnigaz
    @ovnigaz Před 3 lety +1

    Hello, thanks for your content.
    I have 2 small questions.
    First: what is the point of an HAProxy, since even if we point to the same node, the svc of type NodePort will load balance between pods?
    Second: as a Mac user, how can we communicate from the host to the cluster? On Mac, Kubernetes (Docker for Mac) uses a hidden VM.

    • @justmeandopensource
      @justmeandopensource  Před 3 lety +1

      Hi Gilles, thanks for watching.
      1. Yes but if you want to expose your application with a DNS name (eg: myapp.example.com), what entry would you add in your DNS? Would you add myapp.example.com with an IP address of one of the worker nodes? What if that worker node goes down? You will then have to update DNS for myapp.example.com with the ip address of another worker node. Just to simplify this process, we use HAproxy or any other load balancer so we don't have to worry about underlying servers (worker nodes) and you don't have to update DNS for myapp.example.com often.
      2. I haven't tried this on Mac with Docker for mac. So I am afraid I can't comment on that. I am a Linux person by birth.

  • @ryandangalan9173
    @ryandangalan9173 Před rokem

    Great tutorial! I'm just wondering how you set up your cluster. I mean, what is the cluster endpoint? Is it the HAProxy?

    • @ryandangalan9173
      @ryandangalan9173 Před rokem

      Also, you are doing the load balancing on the worker nodes. Is it also going to work if I do the load balancing on the multiple master nodes instead of the workers, in a high-availability cluster setup?

  • @raghuvaranchintakindi3331

    Hi Venkat, thanks for this class. I have tried this tutorial on AWS instances, but I am getting 'site can't be reached'. Which IP (private IP or public IP of the HAProxy server) do I have to place in /etc/hosts? Or should I do any other configuration since I am using AWS instances? I am using a security group in which all ports are open.

    • @dineshraj2304
      @dineshraj2304 Před 3 lety

      Hi Raghu, Have you fixed it? If yes.. what have you done? Thanks

  • @bhalchandramekewar6015
    @bhalchandramekewar6015 Před 4 lety +1

    hi Venkat,
    Wonderful session again; with hands on ingress setup.
    I tried similar things using a Vagrant setup instead of lxd; it worked well.
    I found one issue with Vagrant though ... simply hitting the hostname in the browser, the VM instance doesn't display anything on Windows. But within the HAProxy instance, if I simply curl the 3 host names, I get the expected output as mentioned in the session.
    How do I access a Vagrant VM instance in the browser using the hostname instead of the private_network IP?

    • @justmeandopensource
      @justmeandopensource  Před 4 lety +1

      Hi Bhalchandra, if you were using a Linux machine, then you can update /etc/hosts file with IP address and VM name and then you can access it through the name. Similarly you can do it in Windows as well. The below link might help you.
      www.howtogeek.com/howto/27350/beginner-geek-how-to-edit-your-hosts-file/
      Cheers.

  • @swarajgupta2531
    @swarajgupta2531 Před 3 lety

    You have given :80 in HAProxy default backend config but where have we configured Nginx ingress controller to listen on port 80 of worker nodes for incoming traffic from Load balancer? Thanks Venkat.

    • @swarajgupta2531
      @swarajgupta2531 Před 3 lety

      Have a look at the definition of the ingress daemonset: kubectl describe daemonset. You will find that the ingress controller pod on each worker node uses hostPort to bind to ports 80 and 443. (github.com/nginxinc/kubernetes-ingress/blob/master/deployments/daemon-set/nginx-ingress.yaml)

  • @kunchalavikram
    @kunchalavikram Před 4 lety +3

    Hi sir, just wondering if I can run haproxy on the master node itself, or do I need a separate VM for this?

    • @justmeandopensource
      @justmeandopensource  Před 4 lety +3

      Hi Kunchala, thanks for watching. Yes you can run haproxy on master node itself or on any of your existing Kubernetes nodes, if its for learning or development purpose.

  • @kunal050285
    @kunal050285 Před 5 lety +1

    Hi Venkat
    If we have set up a rule for node provisioning based on CPU, memory or user request, our total number of nodes will not be the same all the time. In that case, how do we maintain the haproxy entries?

    • @justmeandopensource
      @justmeandopensource  Před 5 lety +1

      Hi Kunal, thanks for watching this video. That's a good question.
      One other viewer asked a similar question, I think.
      I haven't researched this much, but the following reddit post seems to discuss a few possibilities.
      amp-reddit-com.cdn.ampproject.org/v/s/amp.reddit.com/r/devops/comments/50df4d/ways_to_dynamically_add_and_remove_servers_in/?amp_js_v=a2&_gsa=1&usqp=mq331AQCCAE%3D#referrer=https%3A%2F%2Fwww.google.com&_tf=From%20%251%24s&share=https%3A%2F%2Fwww.reddit.com%2Fr%2Fdevops%2Fcomments%2F50df4d%2Fways_to_dynamically_add_and_remove_servers_in%2F

  • @ashish1099
    @ashish1099 Před 4 lety

    Hi
    What do you use for the terminal zsh prompt? It looks nice, especially how the history commands come up automatically.
    I have been using python powerline, but haven't configured the internals.

    • @justmeandopensource
      @justmeandopensource  Před 4 lety +1

      Hi Ashish, thanks for your interest in this video. Actually I have done a video on my terminal setup.
      czcams.com/video/soAwUq2cQHQ/video.html
      But this was a long time ago. I have since moved to a whole different setup using the i3 tiling window manager.
      czcams.com/play/PL34sAs7_26wOgqJAHey16337dkqahonNX.html
      Cheers.

    • @khatmanworld
      @khatmanworld Před 3 lety

      @@justmeandopensource and your desktop theme? It looks cool too

  • @sticksen
    @sticksen Před 2 lety +3

    Hey, fantastic content, I’m a fan!
    Just one question: how would you manage if the worker nodes get scaled out or in or if the IP addresses change? Is there a way that the HaProxy Config automatically stays in sync with the cluster?

    • @justmeandopensource
      @justmeandopensource  Před 2 lety +2

      Hi, thanks for watching. In this video, I used HAProxy for proxying to worker nodes where ingress controllers are listening. But in recent versions of ingress, you don't need this external load balancer. You can make use of MetalLB. So don't worry about configuring and maintaining the haproxy with dynamic worker node details.
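
      For reference, a minimal MetalLB layer-2 configuration sketch using the current CRD-based config (metallb.io/v1beta1; older MetalLB releases used a ConfigMap instead, and the address range below is a placeholder for free IPs in your LAN). The ingress controller is then exposed as a Service of type LoadBalancer:

          apiVersion: metallb.io/v1beta1
          kind: IPAddressPool
          metadata:
            name: default-pool
            namespace: metallb-system
          spec:
            addresses:
            - 172.16.16.150-172.16.16.160
          ---
          apiVersion: metallb.io/v1beta1
          kind: L2Advertisement
          metadata:
            name: default-l2
            namespace: metallb-system
          spec:
            ipAddressPools:
            - default-pool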

    • @sticksen
      @sticksen Před 2 lety +1

      @@justmeandopensource thanks, very helpful tip! So you would consider MetalLB already fit for production environments?

    • @sticksen
      @sticksen Před 2 lety +1

      @@justmeandopensource Thanks Venkat!

    • @justmeandopensource
      @justmeandopensource  Před 2 lety +1

      @@sticksen you are welcome

  • @MuzammilShahbaz
    @MuzammilShahbaz Před 2 lety

    What if you are running the ingress controller only on worker1 and haproxy hits worker2?
    Secondly, what if we run ingress controller on the master node (non-HA)? In that case, should we only provide the IP address of the master in the haproxy backend?

  • @rangisettisatishkumar5491

    Hi bro... Thanks for the video... It's really great... Can we use the MetalLB load balancer instead of HAProxy?

  • @tylorbillings4065
    @tylorbillings4065 Před měsícem

    Shouldn't there also be a NodePort service created to allow access from the haproxy to the ingress controller?

  • @SanjeevKumar-nq8td
    @SanjeevKumar-nq8td Před 2 lety

    MetalLB and HAProxy are both load balancers? Can I use MetalLB instead of HAProxy?

  • @sanuboys9877
    @sanuboys9877 Před 4 lety

    Hi, thanks for sharing this video, it is very useful. I have a question on the installation. I have a k8s cluster with one master and one node. If I want to install the nginx-ingress load balancer within the k8s cluster, would I need to carry out the bare-metal installation of the same load balancer as a prerequisite? I have tried the nginx-ingress controller installation on the k8s cluster and the pod stays in the creating state forever. Thanks.

    • @justmeandopensource
      @justmeandopensource  Před 4 lety

      Hi, thanks for watching. First of all nginx ingress isn't a load balancer. You need an external load balancer and in this video I used haproxy. Ingress controllers just route the traffic to appropriate backend services. You will have to deploy ingress controller in your cluster and use some form of load balancer to access the worker nodes.

    • @sanuboys9877
      @sanuboys9877 Před 4 lety

      @@justmeandopensource Thank you. This has cleared the air. Now I understand what fits where.

    • @justmeandopensource
      @justmeandopensource  Před 4 lety

      @@sanuboys9877 Cool.

  • @musmanayub
    @musmanayub Před 4 lety +1

    Hi, thanks for the video. I have followed the exact same steps to create an nginx controller using a daemonset, however I am not able to browse the app deployed in the pods. I have noticed that ports 80 and 443 are not getting exposed on the worker nodes despite trying to create the daemonset multiple times. What can be the reason for this? I am using Weave Net.

    • @justmeandopensource
      @justmeandopensource  Před 4 lety

      Are all your ingress related pods running fine? Have you setup haproxy as shown in this video?

  • @ToallpointsWest
    @ToallpointsWest Před 3 lety

    Great video thank you for putting it out! I was wondering though, with the requirement of the HAProxy Loadbalancer , how do you prevent it from becoming a single point of failure?

  • @teamnetherland5553
    @teamnetherland5553 Před 4 lety +1

    Great tutorials. Do you have a video on connecting bare metal to GitLab cloud? Thanks

    • @justmeandopensource
      @justmeandopensource  Před 4 lety +1

      Hi, thanks for watching this video. I haven't explored that much. Thanks for suggesting that though. Cheers.

  • @hanumaadabala9541
    @hanumaadabala9541 Před rokem

    Very nicely explained. I just wanted to know about the widget showing system information, battery life, networking and processor, and which package is required to install it on Ubuntu.

    • @justmeandopensource
      @justmeandopensource  Před rokem

      Hi Hanuma, thanks for watching.
      The widget that you see on the right side of my screen that shows various system information is conky. You have to install conky software and have a conky configuration file. You can search online for ready to use conkyrc configuration or you can customize as per your need. Cheers,

  • @Siva-ur4md
    @Siva-ur4md Před 5 lety +1

    Nice video. I have a request: whenever you have time, please make a video on service account creation, ClusterRoleBindings and role-based authentication. It's a bit confusing when watching the ingress and NFS dynamic provisioning videos.. Thanks in advance...

  • @walidshouman
    @walidshouman Před 4 lety +3

    How does the haproxy discover the ingress pods through the node-ip:80 lines in the haproxy.cfg without any service defined with the nodeport set to 80?

    • @justmeandopensource
      @justmeandopensource  Před 4 lety +3

      Hi Walid, thanks for watching. When you deploy ingress controllers in your cluster, the ingress controller pods will bind to port 80 on the worker nodes they are running on. HAProxy load balances the traffic to all worker nodes. When a request is received, HAProxy will route it to one of the worker nodes on port 80 where the ingress controller pod is listening, which in turn will route it to the appropriate service. And the service will route it to one of the backend pods. Cheers.

  • @jadukori-animation
    @jadukori-animation Před 2 lety

    Nice tutorial. Can you show a TLS (https) example, please? I set it up like this but the ingress with a TLS host is getting too many redirections.

  • @smartaquarius2021
    @smartaquarius2021 Před 3 lety

    I have a precompiled image, and once I spin up the pod the container exposes itself as a REST API. Is there any way to enable HTTPS in this case? How do I add an SSL certificate so that I can call the API using HTTPS?

  • @jayaprakashr5691
    @jayaprakashr5691 Před 2 lety

    Hello, this setup works locally, right? I want to expose my application over the internet. I tried with a basic ingress YAML file, deployed the Laravel application and created a service. To expose it on the internet I just ran minikube tunnel, got the external IP and tried it in the browser, but the app is not loading. Is my approach correct, or what do I have to do to expose my app on the internet with minikube? Please guide me.

  • @mailsuryateja
    @mailsuryateja Před 3 lety

    Is there a reason why all the microservice ports are the same? What if the ports are different? Do we have to create that many backend entries in haproxy?

  • @deepdeep4629
    @deepdeep4629 Před 2 lety +1

    Good video, but it would be great if you could explain a bit more about the ingress controller.

  • @NiteshKumar-do4en
    @NiteshKumar-do4en Před 2 lety +1

    Hey, one request: can you make a video on how to attach a load balancer like an NLB in front of our Kubernetes cluster that can load balance between different nodes?

  • @alixak4304
    @alixak4304 Před 4 lety

    How are you actually exposing port 80 on the worker nodes for the ingress controller? My approach was to create a NodePort service for the nginx-ingress-controller and then forward on HAProxy to "nodeip:nodeport".

    • @justmeandopensource
      @justmeandopensource  Před 4 lety

      Hi Alix, thanks for watching.
      Actually when you deploy the nginx ingress controller, it will bind to ports 80 and 443 on the worker node where it is running. You can check "kubectl describe " command to look at the deployment or daemonset (whichever way you deployed). Then you configure HAproxy with workernode:80, workernode:443 for all worker nodes as backend.

  • @dakshithamevandias8949

    How does the haproxy LB point to the ingress controller? The configuration file only points the haproxy to the worker node IPs and port 80.

  • @usweta6358
    @usweta6358 Před 4 lety +1

    Hi Venkat.. Hope you are doing good.. Your videos are really very helpful.. Thanks a ton for the same.
    One doubt - we have a Kubernetes setup in an AWS environment (3-node setup, 1 master and 2 slaves) but we are not utilising EKS or anything; we have installed the cluster on EC2 instances, so we are just consuming the EC2 service. Suppose I have hosted a containerized application and it is listening on port 8443 and the pod IP is e.g. 10.244.3.251, and for accessing the UI of this application I have set up a load balancer (nginx) on another EC2 instance following the above tutorial. Now once I create a service and ingress for my deployment, I am able to access the application with the host name (as mentioned in the ingress yaml file) and it listens on port 80 by default.
    I hope I am clear until now.. My question is: for accessing the host name, every time we need to edit our hosts file on the local machine and add an entry of ('nginx ip' 'hostname as defined in ingress.yaml'), which is a bit of a difficult approach. We don't have access to the Route 53 service for setting up DNS or anything.. Is there any other method through which we can access our application easily?? Your comment would be very helpful..!! If need be, I can send you the detailed yaml files to your personal email for better clarity..

    • @justmeandopensource
      @justmeandopensource  Před 4 lety +1

      Hi Sweta, thanks for watching. If you don't have access to Route53, then the only way is to edit your /etc/hosts file. You only add an entry once, when you deploy an application/service. Because we are accessing the application using a dns name, the entry has to be somewhere that resolves to the HAProxy IP. Either Route53 or your local /etc/hosts file. Even if you use Route 53, you still have to add entry when you deploy a new app.
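
      For illustration, such a hosts entry is just one line mapping the Ingress host name to the load balancer's address (both values below are placeholders):

          # /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows)
          203.0.113.10   myapp.example.com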

  • @jinbaoxin
    @jinbaoxin Před 3 lety +1

    Hi, since the Nginx controller pod is inside the cluster, how can haproxy reach the ingress controller pods? Thanks

    • @justmeandopensource
      @justmeandopensource  Před 3 lety +2

      Hi Mike, thanks for watching. If you take a look at the output of kubectl describe of one of the nginx ingress controller pod, you will notice that it binds to the host port on the worker node it is running. And haproxy's backend configuration points to these worker nodes on the ports where the ingress controller pods are bound.

  • @premierde
    @premierde Před 2 lety

    Can you please do a session on Contour/Envoy?

  • @nusibusi4728
    @nusibusi4728 Před 2 lety +1

    If I understand you correctly, we need HAProxy for Ingress to work? Is HAProxy a prerequisite for Ingress?

  • @surendarm8698
    @surendarm8698 Před 3 lety +1

    Hi Venkat,
    I tried ingress in GKE (Google Kubernetes Engine).
    1) Created my pods.
    2) Exposed them through a LoadBalancer service type (it was working with http).
    3) Then I configured ingress, and it gives the error "Some backend services are in UNHEALTHY state".
    Can you please suggest any possibilities for configuring it with https?

    • @justmeandopensource
      @justmeandopensource  Před 3 lety +1

      HI Surendar, thanks for watching. I haven't used this setup in Google cloud yet. So I can't be sure of your problem. If I get some time I will test this.

  • @20kwok
    @20kwok Před 3 lety +1

    Thank you for the tutorial.
    May I know how I can set up sticky sessions (for a stateful application) in this environment?
    Should I configure it in haproxy or in the ingress?

    • @justmeandopensource
      @justmeandopensource  Před 3 lety +1

      Hi, thanks for watching.
      I believe it has to be done at the haproxy level.
      thisinterestsme.com/haproxy-sticky-sessions/

    • @justmeandopensource
      @justmeandopensource  Před 3 lety +1

      Actually it can be done at the ingress level as well it seems by adding appropriate annotations to the ingress resource.
      kubernetes.github.io/ingress-nginx/examples/affinity/cookie/#:~:text=Deployment,-Session%20affinity%20can&text=The%20affinity%20mode%20defines%20how,or%20persistent%20for%20maximum%20stickyness.&text=When%20set%20to%20false%20nginx,even%20if%20previous%20attempt%20failed.
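
      For reference, the cookie-affinity annotations from that linked page look like the fragment below. Note these are for the community kubernetes/ingress-nginx controller documented there; the nginxinc controller demoed in this video uses different annotations:

          metadata:
            annotations:
              nginx.ingress.kubernetes.io/affinity: "cookie"
              nginx.ingress.kubernetes.io/session-cookie-name: "route"
              nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"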

  • @MihirMishraMe
    @MihirMishraMe Před 3 lety +1

    Do we always need to register ingress controllers with load balancers (HAProxy in this case)?

    • @justmeandopensource
      @justmeandopensource  Před 3 lety +1

      Hi Mihir, thanks for watching. You need some form of load balancing.
      Take a look at the recent updated video on this topic czcams.com/video/UvwtALIb2U8/video.html.
      You can use load balancer solution like metallb and can get away without haproxy stuff.

  • @narendrabhupathiraju8986
    @narendrabhupathiraju8986 Před 2 lety +1

    Please do a blue/green deployment strategy video.

  • @RubenCordero
    @RubenCordero Před 3 lety

    Hi Venkat, thanks for this video. Whenever I try to deploy an ingress controller, when I do a describe ing I get:
    Default backend: default-http-backend:80 ()
    and I never get the ingress controller linked to the ClusterIP services. I don't know what is happening.
    Thanks

  • @gouravgoutam6309
    @gouravgoutam6309 Před 4 lety +1

    Thanks a lot for all your videos. Could you make a video on how I can listen to TCP traffic using ingress, as by default it listens to HTTP & HTTPS traffic only?
    I would also like to request a video covering the above using the Contour ingress controller.

    • @justmeandopensource
      @justmeandopensource  Před 4 lety +1

      Hi Gourav, thanks for watching. I will see if I can do those. I have already recorded videos for the next two months and it will be after that unfortunately. Cheers.