[ Kube 31 ] Set up Nginx Ingress in Kubernetes Bare Metal
- date added: 17 Mar 2019
- In this video, I will show you how to set up an Ingress controller using Nginx in your Kubernetes cluster. Traffic routing in a Kubernetes cluster is taken care of automatically if you use one of the cloud providers. But if your cluster is on bare metal, you are left with a few choices.
In this demo, all the virtual machines I used are LXC containers.
Github: github.com/justmeandopensourc...
Nginxinc Ingress: github.com/nginxinc/kubernete...
For any questions/issues/feedback, please leave me a comment and I will get back to you at the earliest. If you liked the video, please share it with your friends and do not forget to subscribe to my channel.
Hope you found this video useful and informative. Thanks for watching this video.
If you wish to support me:
www.paypal.com/cgi-bin/webscr...
#nginxingress #kubernetesingress #learnkubernetes #justmekubernetes
Thanks a ton for the tutorial. Got it up and running rather quickly with the examples you provided, now to take that knowledge into my own ingress ventures.
Hi Anthony, many thanks for watching this video and taking time to comment. Cheers.
Hi,
How are the requests from haproxy to the worker nodes flowing to port 80? In my case, when I configured the haproxy backend with the worker IPs on port 80, it reports "connection refused" on port 80. How does an ingress controller open port 80 on the worker nodes? Any suggestions?
Your videos are the best. Easy to follow and very clear. The context you provide at the beginning of each video is perfect. I have learned so much from your instructions in the last couple of weeks while setting up my K8s infrastructure. Thanks for such great quality content.
No worries. Thanks for watching.
Thank you so much for all your tutorials. You do a fantastic job. I look forward to continue learning from you.
Many thanks for watching. Cheers.
AMAZING work, your didactic is on point and doing it hands-on is exactly what I needed. Gonna watch the whole playlist for sure!
Hi Felipe, many thanks for watching. Bear in mind some of the videos might be outdated, and I am relying on viewers to tell me whether something is broken so that I can do a follow-up video with the latest versions of the software. Cheers.
This is a brilliant tutorial, I really enjoyed the simple, nicely paced, step-by-step approach. Great work.
Hi, Thanks for watching.
Glad to hear that you are making these great videos for us💐💐
Hi Siva, thanks for watching. Cheers.
This is a great video that explains nginx ingress very well. Some viewers might be trying to run this on VPS servers in the cloud using the new lxd/lxc version and can't get haproxy to work. You run: lxc config device add haproxy myport80 proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80 (note: haproxy in the command is the name of the lxc container). So if you followed the video and haproxy is not forwarding the traffic, you may need this command, or check whether there is a firewall enabled. Another helpful command, from inside the haproxy container, is haproxy -c -V -f /etc/haproxy/haproxy.cfg, which checks that your configuration is valid before starting/restarting the haproxy service. Thank you for putting this video series together; it is one of the best ones out here.
Hi James, many thanks for sharing this info. It will really be of great help to others looking for a proper implementation. Cheers.
Interesting, informative and really illuminating for me as a K8s learner! Thanks!
Glad it was helpful! Thanks for watching.
Nice video, bro!! Your content on bare-metal Kubernetes helped me a lot.
Hi Diego, Thanks for watching.
All your videos are excellent. Keep up your good job
Hi Arun, thanks for watching.
Excellent! Simply wow. Thank you!
Thanks for watching this video. My pleasure!
Great, complete ingress controller tutorial. Thank you.
Hi Godfrey, many thanks for watching. Cheers.
Great tutorial, clearly explained
Hi Ale, thanks for watching.
Thanks a lot, bro, for this tutorial. All my questions are answered.
Glad to hear that. Thanks for watching.
Thanks a lot for this video; very clear, it helps me a lot !
Hi Chris, thanks for watching. Cheers.
Your channel is life saver
Hi Yohan, thanks for watching.
Great content, greatly appreciate all the kubernetes tutorials!
Hi Edwin, thanks for watching.
Wonderful work
Thanks for watching.
Best ingress tutorial I've ever seen. Great, man!
Hi Martin, thanks for watching. Cheers.
Just me and Opensource Are you planning to release the Traefik v2 tutorial? There are big changes compared to v1, and there are also problems with the API versions in Kubernetes 1.16.2, where many things are deprecated. I can't get Traefik v2 up and running as a DaemonSet. Thank you in advance for your reply.
@@martin_mares_cz I don't have Traefik v2 in my list but will add it. I have videos scheduled for the next two months, and a lot more videos in the pipeline to be recorded. Thanks.
I loved it. Please post videos frequently with real-time scenarios; I'd also request you to do videos on Jenkins.
I will continue to do my best. Thanks for watching.
You're simply the best🤟
Hi Rayehan, thanks for watching. Cheers.
Very good tutorial, thank you for sharing. Tested on a Kubernetes cluster that runs behind Rancher 2 using Hetzner servers. Next step is to test Traefik.
Hi, Thanks for watching this video.
Dude, you helped me a lot, thanks!
No worries. You are welcome.
Great demo!! Thank You
Hi, thanks for watching.
Great tutorial, thank you.
Hi Aryadi, thanks for watching.
excellent video, thank you!
Hi Larper, thanks for watching.
Great video Venkat
Hi Gary, thanks for watching. Cheers.
Thanks for sharing your knowledge.
Hi Abdul, thanks for watching. Cheers.
Great video - thanks!
Hi Bill, thanks for watching this video.
amazing video
Hi, thanks for watching. Cheers.
Great vids!!
Hi Fabian, thanks for watching.
Great tutorial !
Hi Mateusz, thanks for watching.
Excellent 👍
Thanks for watching. Cheers.
Another brilliant video, very helpful and well explained. Thank you
Hi Simon, thanks for watching.
Hello Sir, I just watched your video, will follow your instructions to try it tomorrow, and get feedback to you. From what I've seen, you've made an excellent tutorial on Ingress Controller - application load balancer, and HAProxy - network load balancer for bare-metal Kubernetes cluster. That's exactly what I am looking for at this moment. You're very hands-on. Great Jobs. Subscribed. Thank you.
Hi Michael, thanks for watching. Cheers.
Awesome!! Thanks a ton!
Hi Aromal, thanks for watching.
This video is magic. The best explanation of the Ingress Controller for bare-metal Kubernetes!!
Hi Cedrick, thanks for watching.
Thank you so much!!!!!
Thanks for watching.
Nice Vid
Awesome thanks
Hi, thanks for watching. Cheers.
Hello thank you for this video !
Do we need to link the load balancer only to the nodes that have an ingress controller ? Or to all of them ?
Just found your videos on kubernetes, kubespray and nginx ingresses. You are very good at explaining the default behaviors, which gives the highest chance for success.
The nginx docs explain that in the default server secret file they provide a default self-signed cert and key, and they recommend using your own certificate. Things to note: the cert and key are base64 encoded (again), so keep this in mind when you add the cert to the default-server-secret.yaml file.
Also, if you are using Windows to generate the keys, make sure you remove the CR characters (^M) before base64 encoding the cert and key. Otherwise you'll get an error when trying to start the nginx-ingress pods.
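A minimal sketch of that clean-up step (file names are hypothetical; `base64 -w0` is the GNU coreutils flag for single-line output, which is what the Secret YAML needs):

```shell
# Simulate a cert file saved on Windows with CRLF line endings (hypothetical content)
printf 'line1\r\nline2\r\n' > server.crt

# Strip the CR (^M) characters, then base64-encode on a single line,
# ready to paste into default-server-secret.yaml
tr -d '\r' < server.crt > server-unix.crt
base64 -w0 server-unix.crt
```

The same pipeline works for the key file; without the `tr` step, the stray `\r` bytes end up inside the decoded PEM and nginx refuses to load it.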
Hi Rik, thanks for watching and sharing your thoughts.
Thank you very much ❤ for providing such a good video.
In this video you used haproxy for routing, exposed over a private IP. How will all clients access the nginx application with the domain name?
That will be via the ingress route resource which defines the hostname to service mapping. Thanks for watching.
Great video, thanks. One quick question: is the ingress controller pod exposed to the HAProxy directly? But I don't see you use "hostNetwork: true"?
Hi Xiuhua, thanks for watching. If you do kubectl describe on the ingress controller daemonset, you will see that it binds to the host port on the underlying worker node.
Thanks in advance for the great tutorials. By any chance, is there any load balancer that can support Diameter and be used inside a K8s cluster?
Hi There,
Nice video bro!
I have one question: on the HAPROXY you configure all the IP addresses of the worker nodes.
What if you scale the cluster out or in (add or remove worker nodes)? Do you then have to manually change the configuration on the HAPROXY?
Also, if the worker nodes are deployed via DHCP and somehow an IP changes, the config also needs changing. Do you have a solution for this?
Thank you very much.
Fantastic video. I've seen your MetalLB videos too. My question is: if I deploy nginx-ingress-controller as a daemonset on the 4 physical nodes of my cluster at home, expose the ingress deployment as a NodePort service on port 31111, and then attach haproxy to this, why do I need MetalLB to load balance?
Thanks for the video. Can you tell us the other ways of creating a cluster instead of LXC?
Hello Venkat, again, excellent explanation of the topic. I read the documentation about ingress where they mentioned nginx, ingress controller, load-balancer etc. It was all with respect to some cloud provider and not about a bare-metal k8s cluster. It was all so confusing.
Your component and flow diagram made the concept crystal clear. Since it is bare metal, I can practice in my home lab. Today, your video quality was max 360p, so I had difficulty reading the text; maybe it was due to the recent upload. Tomorrow, I will do hands-on in my home lab.
One suggestion on the demo container/pod. I generally use the hashicorp/http-echo image to show different pods, or different containers in a single pod, as below. It might make your demos easier than using nginx.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fruit-deployment
  labels:
    app: fruit
spec:
  replicas: 4
  selector:
    matchLabels:
      app: fruit
  template:
    metadata:
      labels:
        app: fruit
    spec:
      containers:
      - name: apple-app
        image: hashicorp/http-echo
        args:
        - "-text=response from apple-app"
        - "-listen=:6000" # default container port is 5678
        ports:
        - containerPort: 6000
      - name: banana-app
        image: hashicorp/http-echo
        args:
        - "-text=response from banana-app"
        - "-listen=:6001" # default container port is 5678
        ports:
        - containerPort: 6001
Hi Ajit, Thanks for the http-echo container suggestion. Looks good.
I just checked my video and I can see all the video playback qualities. I can switch to 720p or 1080p for high resolution. Have you checked if you can change the video quality setting? Depending on your internet connection speed, youtube will automatically select appropriate quality.
@@justmeandopensource Strange. Regarding resolution, I watched your video in the Chromium browser on Windows, where your video has a max resolution of 360p, whereas other channels have normal higher resolutions. I checked your video on Google Chrome; it has a higher resolution, 1080p. I will use that browser :)
Yeah, I just googled the issue and there were a lot of discussions around this, where not all the video qualities are listed on some browsers.
Hello Venkat, I have a question regarding the HAProxy. Can this load balancer be provisioned as a pod inside the k8s cluster?
I saw that you made a separate VM for it.
I'm asking because I use VPSs for my k8s cluster.
Thanks and regards.
Hi Knight,
Thanks for watching this video. Although I haven't tried it, HAProxy can be provisioned inside the cluster itself as a pod, but it involves lots of configuration to make it work. Deploying haproxy as a container/pod isn't difficult; you will then have to create a service for it to expose it outside of the cluster. Lots of port mappings involved.
The below link might give you some direction.
www.bluematador.com/blog/running-haproxy-docker-containers-kubernetes
You mentioned you are using VPS. You can install haproxy on the master node itself, and don't have to use a separate VM for it.
Thanks
Nice!!
Hi Leonardo, thanks for watching this video. Cheers.
Hi Venkat, thanks for this great video. One question though: I still could not understand how haproxy is able to connect to the worker nodes on port 80. We only have the ClusterIP service created, and the ingress resource has routing in it pointing to the ClusterIP service. There is no NodePort service or LoadBalancer service to access it from outside the Kubernetes cluster. I was trying to get it working by following your video. If I check get all -n nginx-ingress after the steps, I see only the nginx-ingress pods and the daemonset in the nginx-ingress namespace. The get all (without namespace) only gives the nginx pod and the ClusterIP service pointing to it. I am wondering how it works without a NodePort or LoadBalancer service to connect to the worker node from haproxy. As per the haproxy configuration, it directly uses the IP addresses of the worker nodes and port 80. Looks like I am missing something...
Hi Nevin, thanks for watching. Have a look at the output of kubectl describe daemonset. The ingress controller pods are deployed as a daemonset, so there will be one ingress controller pod on each worker node. They use hostPort to bind to ports 80 and 443. This will be clear when you look at the kubectl describe output. Cheers.
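For reference, the HAProxy side of that flow looks roughly like this in /etc/haproxy/haproxy.cfg (a sketch; the worker names and IPs are placeholders, not necessarily the ones from the video):

```
frontend http_front
    bind *:80
    mode tcp
    default_backend k8s_ingress_http

backend k8s_ingress_http
    mode tcp
    balance roundrobin
    # one line per worker node; the nginx ingress pods bind hostPort 80 there,
    # so no NodePort service is needed
    server kworker1 172.16.16.101:80 check
    server kworker2 172.16.16.102:80 check
```

A matching frontend/backend pair on port 443 handles the TLS traffic the same way.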
Excellent session! I do come across a small snag when deploying nginx-ingress: creating the DaemonSet (kubectl apply -f daemon-set/nginx-ingress.yaml) as shown in your demo works. On the other hand, if I choose to create a Deployment (kubectl apply -f deployment/nginx-ingress.yaml), then all requests via HAProxy fail with 503! Is there a hack that needs to be applied? Thank you Venkat
Hi, thanks for watching. I haven't actually tried the deployment type; I've always gone for the daemonset as my dev cluster has only a few nodes. I think it's the haproxy configuration that needs to be tweaked, but I'm not entirely sure.
Hi Venkat,
Great video! I am planning to use public DNS such as noip.com and set up port forwarding in the router to reach a microservice backend. How can the HAProxy reach the service if the service has an internal IP from the cluster?
Hello Venkat, it is me again :) with yet another question :). I was wondering how, with Kubernetes, I can manage to request a pod from another pod. I have a web service in Python with Flask inside a container that I can request thanks to NodePort, but I want that web service to also send a request to TensorFlow Serving (a container that returns a series of probabilities when requested). Should I expose a service for the TF Serving too?
Yeah, that's the way to access another pod: by exposing the TensorFlow pod as a service.
Thanks!
Hi Rayehan, many thanks for watching and for your contribution. Much appreciated.
Hi Venkat, thank you for another excellent video. I got it working in your vagrant environment. I also tried it on a cluster created via the-hard-way (Kelsey Hightower), but that didn't work. Looks like iptables is blocking port 80. Just wondering how iptables is set up (probably done by kube-proxy); it's hard to find this info. Maybe a suggestion: make a video about the network setup, the protocols, ports and their flow, and how iptables is set up. But again, thank you for taking the time to make these videos and sharing them with us.
Hi Michel, thanks for watching this video. That's interesting. When I get some time I will test ingress on the cluster set up the hard way. Thanks.
Hi Venkat, I've found the issue. It is actually quite simple. I got triggered when I was doing your Prometheus video on my "the-hard-way" cluster. I noticed the prometheus-node-exporter pods got the IP addresses of the worker nodes. Normally it is more secure to have the pod IP address range used for the pods. So I noticed that if the hostNetwork parameter is set to true, the IP addresses of the hosts are used! So I changed the ingress file daemon-set/nginx-ingress.yaml by adding this parameter and now it all works!!!
@@michelbisschoff6993 That's great. I learnt something new today. Thanks for that. Cheers.
Very helpful tutorial, just one quick question. I assume that since you have the cluster running in containers, the reason you are able to execute kubectl commands from the host machine is some sort of rule in your .zshrc file? If so, could you please explain how that is accomplished? I tried using an alias such as alias kubectl='lxc exec kmaster kubectl'. And while this works just fine for listing resources and whatnot, the forwarding of the command breaks when you need to add flags. So while I can run 'kubectl get nodes', if I try to run 'kubectl get nodes -o wide' it breaks.
Hi Jose, thanks for watching this video. I covered the kubeconfig details in various other cluster provisioning videos. I had an assumption that viewers watched all my previous videos. That's why I don't repeat all the information in every video.
So you are using lxc containers for kubernetes cluster?
I copy the /etc/kubernetes/admin.conf file from the master node to my host machine as $HOME/.kube/config.
I also download the kubectl binary and move it to /usr/local/bin.
Hope this helps. If you are stuck, give me a shout again.
Thanks.
This is a great tutorial! One question - how would one easily define a default route to send users to if they ask for something that doesn't exist? It looks by default it just returns a 404. Is there a way to make it redirect / show something else?
Hi, thanks for watching this video. You can do that by configuring the default backend. If there is no rule specified for a URL or path of a domain, then the ingress controller will redirect the traffic to the default backend service. Thanks
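As a sketch of that idea (the service names are hypothetical, and this uses the current networking.k8s.io/v1 syntax, which is newer than the API shown in the video):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: with-default-backend
spec:
  # requests matching no rule go here instead of the controller's built-in 404
  defaultBackend:
    service:
      name: fallback-svc        # hypothetical catch-all service
      port:
        number: 80
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-svc     # hypothetical app service
            port:
              number: 80
```

The fallback service can serve a friendly landing page or issue a redirect, whichever fits the application.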
Thanks Venkat. It's such a great stuff that I've missed this long.
I think in your k8s installation process you're installing the latest k8s version; you might want to pin it to a specific version, something like below. I'm using Ubuntu, so it looks as below.
# Install Kubernetes
echo "[TASK 9] Install Kubernetes kubeadm, kubelet and kubectl"
apt-get install -y kubeadm=1.17.1-00 kubelet=1.17.1-00 kubectl=1.17.1-00
apt-mark hold kubelet kubeadm kubectl
Hi Sesh, thanks for watching. Yes, I could have locked it down to a specific version. I have different Kubernetes setup videos, and I think in some of them I do lock it down to a known working version of docker and kubernetes. I will have to update the github docs. Cheers.
Thanks Venkat. Yeah, I realised that later while covering your other videos.
Also, I have a scenario here and I'm not sure if you've covered it; if so, could you please point me to the correct clip.
I have a k8s cluster (with 3 nodes) running in my local wifi network. The vagrant network looks as below. I chose this way because I built another db server (postgres) as a standalone box running outside k8s in the same wifi network (192.168.1.x subnet). I'd like the pods to communicate with it, and it works fine from a pod using IP and port.
If I try to create a headless service, something like below, it doesn't work. I use the service name from my pod. I'd like to use the name instead of the IP of my db server.
Any suggestions please.
apiVersion: v1
kind: Service
metadata:
  name: postgre
spec:
  type: ExternalName
  externalName: 192.168.1.13
Vagrant
kmaster.vm.network "public_network",bridge: "en0: Wi-Fi (Wireless)",ip:"192.168.1.30"
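One likely cause, offered as a guess: type ExternalName expects a DNS name, not an IP address. For a raw IP, the usual pattern is a Service without a selector plus a manual Endpoints object, roughly like this (port 5432 assumed for postgres):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgre
spec:
  ports:
  - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: postgre          # must match the Service name exactly
subsets:
- addresses:
  - ip: 192.168.1.13     # the standalone db box from the comment above
  ports:
  - port: 5432
```

With this in place, pods can reach the external database as postgre:5432 through the cluster DNS.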
Thanks very much for your tutorials. I have a question: I'm struggling to deploy an ingress controller of type LoadBalancer and to let haproxy give it an IP and connect to it?
Hi Venkat, thanks for the wonderful video. I have created instances in GCP and the ingress setup is done. Please advise me how to check whether the setup is working in GCP.
I used 1 HAProxy server, 1 master and 2 worker nodes.
Hello thanks for your content.
I got 2 small questions.
First: what is the point of an haproxy, since even if we point to the same node, the svc of type NodePort will load balance between pods?
Second: as a Mac user, how can we communicate from the host to the cluster? On Mac, Kubernetes (Docker for Mac) uses a hidden VM.
Hi Gilles, thanks for watching.
1. Yes but if you want to expose your application with a DNS name (eg: myapp.example.com), what entry would you add in your DNS? Would you add myapp.example.com with an IP address of one of the worker nodes? What if that worker node goes down? You will then have to update DNS for myapp.example.com with the ip address of another worker node. Just to simplify this process, we use HAproxy or any other load balancer so we don't have to worry about underlying servers (worker nodes) and you don't have to update DNS for myapp.example.com often.
2. I haven't tried this on Mac with Docker for Mac, so I am afraid I can't comment on that. I am a Linux person by birth.
Great tutorial! I'm just wondering how you set up your cluster. I mean, what is the cluster endpoint? Is it the HAProxy?
Also, you are doing the load balancing on the worker nodes. Will it also work if I do the load balancing on the multiple master nodes instead of the workers, in a high-availability cluster setup?
Hi Venkat, thanks for this class. I tried this tutorial on AWS instances, but I'm getting "site can't be reached". Which IP (private IP or public IP of the HAProxy server) do I have to place in /etc/hosts? Or should I do any other configuration since I am using AWS instances? I am using a security group in which all ports are open.
Hi Raghu, have you fixed it? If yes, what did you do? Thanks
Hi Venkat,
Wonderful session again, with hands-on ingress setup.
I tried similar things using a vagrant setup instead of lxd; it worked well.
I found one issue with vagrant though: simply hitting the hostname in the browser doesn't display anything for the VM instance on Windows. But within the HAProxy instance, if I simply curl the 3 host names, I get the expected output as mentioned in the session.
How do I access a vagrant VM instance using the hostname instead of the private_network IP in the browser?
Hi Bhalchandra, if you were using a Linux machine, you could update the /etc/hosts file with the IP address and VM name and then access it through the name. Similarly, you can do it in Windows as well. The link below might help you.
www.howtogeek.com/howto/27350/beginner-geek-how-to-edit-your-hosts-file/
Cheers.
You have given :80 in the HAProxy default backend config, but where have we configured the Nginx ingress controller to listen on port 80 of the worker nodes for incoming traffic from the load balancer? Thanks Venkat.
Have a look at the definition of the ingress daemonset: kubectl describe daemonset. You will find that the ingress controller pods on each worker node use hostPort to bind to ports 80 and 443. (github.com/nginxinc/kubernetes-ingress/blob/master/deployments/daemon-set/nginx-ingress.yaml)
Hi sir, just wondering if I can run haproxy on the master node itself, or do I need a separate VM for this?
Hi Kunchala, thanks for watching. Yes, you can run haproxy on the master node itself, or on any of your existing Kubernetes nodes, if it's for learning or development purposes.
Hi Venkat
If we have set up a rule for node provisioning based on CPU, memory, or user requests, the total number of nodes will not be the same all the time. How do we maintain the haproxy entries then?
Hi Kunal, thanks for watching this video. That's a good question.
One other viewer asked a similar question, I think.
I haven't researched this much, but the following reddit post seems to discuss a few possibilities.
amp-reddit-com.cdn.ampproject.org/v/s/amp.reddit.com/r/devops/comments/50df4d/ways_to_dynamically_add_and_remove_servers_in/?amp_js_v=a2&_gsa=1&usqp=mq331AQCCAE%3D#referrer=https%3A%2F%2Fwww.google.com&_tf=From%20%251%24s&share=https%3A%2F%2Fwww.reddit.com%2Fr%2Fdevops%2Fcomments%2F50df4d%2Fways_to_dynamically_add_and_remove_servers_in%2F
Hi
What do you use for the terminal zsh prompt? It looks nice, especially how the history commands come up automatically.
I have been using python powerline, but haven't configured the internals.
Hi Ashish, thanks for your interest in this video. Actually I have done a video on my terminal setup.
czcams.com/video/soAwUq2cQHQ/video.html
But this was a long time ago. I have since moved to a whole different setup using the i3 tiling window manager.
czcams.com/play/PL34sAs7_26wOgqJAHey16337dkqahonNX.html
Cheers.
@@justmeandopensource and your desktop theme? It looks cool too
Hey, fantastic content, I’m a fan!
Just one question: how would you manage it if the worker nodes get scaled out or in, or if the IP addresses change? Is there a way for the HAProxy config to automatically stay in sync with the cluster?
Hi, thanks for watching. In this video, I used HAProxy for proxying to worker nodes where ingress controllers are listening. But in recent versions of ingress, you don't need this external load balancer. You can make use of MetalLB. So don't worry about configuring and maintaining the haproxy with dynamic worker node details.
@@justmeandopensource thanks, very helpful tip! So you would consider MetalLB already fit for production environments?
@@justmeandopensource Thanks Venkat!
@@sticksen you are welcome
What if you are running the ingress controller only on worker1 and haproxy hits worker2?
Secondly, what if we run ingress controller on the master node (non-HA)? In that case, should we only provide the IP address of the master in the haproxy backend?
Hi bro... Thanks for the video... It's really great... Can we use the MetalLB load balancer instead of HAProxy?
Shouldn't there also be a NodePort service created to allow access from the haproxy to the ingress controller?
MetalLB and HAProxy are both load balancers? Instead of HAProxy, can I use MetalLB?
Hi, thanks for sharing this video, it is very useful. I have a question on the installation. I have a k8s cluster with one master and one node. If I want to install the nginx-ingress load balancer within the k8s cluster, would I need to carry out the bare-metal installation of the same load balancer as a prerequisite? I have tried the nginx-ingress controller installation on the k8s cluster and the pod is stuck in the creating state forever. Thanks.
Hi, thanks for watching. First of all nginx ingress isn't a load balancer. You need an external load balancer and in this video I used haproxy. Ingress controllers just route the traffic to appropriate backend services. You will have to deploy ingress controller in your cluster and use some form of load balancer to access the worker nodes.
@@justmeandopensource Thank you. This has cleared the air. Now I understand what fits where.
@@sanuboys9877 Cool.
Hi, thanks for the video. I have followed the exact same steps to create an nginx controller using a daemonset; however, I am not able to browse the app deployed in the pods. I have noticed that ports 80 and 443 are not getting exposed on the worker nodes despite trying to create the daemonset multiple times. What can be the reason for this? I am using weavenet.
Are all your ingress related pods running fine? Have you setup haproxy as shown in this video?
Great video thank you for putting it out! I was wondering though, with the requirement of the HAProxy Loadbalancer , how do you prevent it from becoming a single point of failure?
Great tutorials. Do you have a video on connecting bare metal to GitLab cloud? Thanks.
Hi, thanks for watching this video. I haven't explored that much. Thanks for suggesting that though. Cheers.
Very nicely explained. I just wanted to know about the system information widget (battery life, networking, processor) and which package is required to install it on Ubuntu.
Hi Hanuma, thanks for watching.
The widget that you see on the right side of my screen showing various system information is conky. You have to install the conky software and have a conky configuration file. You can search online for a ready-to-use conkyrc configuration, or you can customize it as per your needs. Cheers.
Nice video. I have a request: whenever you have time, please make a video on service account creation, clusterrolebinding and role-based authentication. It's a bit confusing when watching the ingress and NFS dynamic provisioning videos. Thanks in advance...
Yeah. Sure. Thanks.
How does the haproxy discover the ingress pods through the node-ip:80 lines in the haproxy.cfg without any service defined with the nodeport set to 80?
Hi Walid, thanks for watching. When you deploy ingress controllers in your cluster, the ingress controller pods bind to port 80 on the worker nodes they are running on. HAProxy load balances the traffic to all worker nodes. When a request is received, HAProxy routes it to one of the worker nodes on port 80 where the ingress controller pod is listening, which in turn routes it to the appropriate service. And the service routes it to one of the backend pods. Cheers.
Nice tutorial. Can you show a TLS (https) example, please? I set it up like this, but the ingress with a TLS host is getting too many redirections.
I have a precompiled image, and once I spin up the pod, the container exposes itself as a REST API. Is there any way to enable https in this case? How do I add an SSL certificate so that I can call the API using https?
Hello, this setup works locally, right? I want to expose my application through the internet. I tried with a basic ingress YAML file, deployed the Laravel application and created a service. To expose it on the internet I just ran minikube tunnel, exposed the external IP and tried it in the browser, but the app is not loading. Is my approach correct, or what do I have to do to expose my app on the internet with minikube? Please guide me.
Is there a reason why all the microservice ports are the same? What if the ports are different? Do we have to create that many backend entries in haproxy?
Good video, but it would be great if you could explain a bit more about the ingress controller.
Yeah, maybe in another video. Thanks for watching.
Hey, one request: can you make a video on how to attach a load balancer like an NLB in front of our Kubernetes cluster that can load balance between different nodes?
How are you actually exposing port 80 on the worker nodes for the ingress-controller? My approach was to create a NodePort service for the nginx-ingress-controller and then forward from HAProxy to "nodeip:nodeport".
Hi Alix, thanks for watching.
Actually, when you deploy the nginx ingress controller, it binds to ports 80 and 443 on the worker node where it is running. You can use the "kubectl describe" command to look at the deployment or daemonset (whichever way you deployed it). Then you configure HAProxy with workernode:80 and workernode:443 for all worker nodes as the backend.
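The relevant part of the daemonset's container spec looks roughly like this (a sketch; see the nginxinc manifest linked elsewhere in the thread for the exact file):

```yaml
ports:
- name: http
  containerPort: 80
  hostPort: 80      # binds port 80 on the worker node itself
- name: https
  containerPort: 443
  hostPort: 443     # binds port 443 on the worker node
```

Because of hostPort, no NodePort service is needed; HAProxy can target the worker nodes on 80/443 directly.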
How does the haproxy LB point to the ingress controller? The configuration file only points haproxy to the worker node IPs and port 80.
Hi Venkat, hope you are doing good. Your videos are really very helpful; thanks a ton for them.
One doubt: we have a Kubernetes setup in an AWS environment (3-node setup, 1 master and 2 slaves), but we are not utilizing EKS or anything; we installed the cluster on EC2 instances, so we are just consuming the EC2 services. Suppose I have hosted a containerized application listening on port 8443 with pod IP e.g. 10.244.3.251, and for accessing the UI of this application I have set up a load balancer (nginx) on another EC2 instance following the above tutorial. Now, once I create a service and ingress for my deployment, I am able to access the application with the host name (as mentioned in the ingress yaml file), and it listens on port 80 by default.
I hope I am clear until now. My question is: for accessing the host name, every time we need to edit the hosts file on the local machine and add an entry of ('nginx ip' 'hostname as defined in ingress.yaml'), which is a bit of a difficult approach. We don't have access to the Route53 service for setting up DNS or anything. Is there any other method through which we can access our application easily? Your comment would be very helpful!! If needed, I can send you the detailed yaml files to your personal email for better clarity.
Hi Sweta, thanks for watching. If you don't have access to Route53, then the only way is to edit your /etc/hosts file. You only add an entry once, when you deploy an application/service. Because we are accessing the application using a dns name, the entry has to be somewhere that resolves to the HAProxy IP. Either Route53 or your local /etc/hosts file. Even if you use Route 53, you still have to add entry when you deploy a new app.
Hi, since the Nginx controller pod is inside the cluster, how can haproxy reach the ingress controller pods? Thanks
Hi Mike, thanks for watching. If you take a look at the output of kubectl describe of one of the nginx ingress controller pod, you will notice that it binds to the host port on the worker node it is running. And haproxy's backend configuration points to these worker nodes on the ports where the ingress controller pods are bound.
Can you please do a session on contour envoy.
If I understand correctly, we need HAProxy for Ingress to work? Is HAProxy a prerequisite for Ingress?
No. HAProxy isn't a requirement for ingress.
Hi Venkat,
I tried ingress in GKE (Google Kubernetes Engine).
1) Created my pods.
2) Exposed them through a LoadBalancer service type (it was working with http).
3) Then I configured ingress; it gives the error "Some backend services are in UNHEALTHY state".
Can you please suggest any possibilities for configuring https?
Hi Surendar, thanks for watching. I haven't used this setup in Google Cloud yet, so I can't be sure of your problem. If I get some time I will test this.
Thank you for the tutorial.
May I know how I can set up sticky sessions (for a stateful application) in this environment?
Should I configure it in haproxy or in the ingress?
Hi, thanks for watching.
I believe it has to be done at the haproxy level.
thisinterestsme.com/haproxy-sticky-sessions/
Actually it can be done at the ingress level as well it seems by adding appropriate annotations to the ingress resource.
kubernetes.github.io/ingress-nginx/examples/affinity/cookie/#:~:text=Deployment,-Session%20affinity%20can&text=The%20affinity%20mode%20defines%20how,or%20persistent%20for%20maximum%20stickyness.&text=When%20set%20to%20false%20nginx,even%20if%20previous%20attempt%20failed.
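The ingress-side version from that link boils down to annotations on the ingress resource, roughly like this (the name, host and service are hypothetical; these annotations belong to the kubernetes/ingress-nginx controller, not the nginxinc one used in the video):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sticky-app
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-svc
            port:
              number: 80
```

The controller then sets a cookie on the first response and routes subsequent requests with that cookie to the same backend pod.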
Do we always need to register ingress controllers with load balancers (HAProxy in this case)?
Hi Mihir, thanks for watching. You need some form of load balancing.
Take a look at the recent updated video on this topic czcams.com/video/UvwtALIb2U8/video.html.
You can use load balancer solution like metallb and can get away without haproxy stuff.
please do blue/green deployment strategy video
I can try. Thanks for watching.
Hi Venkat, thanks for this video. Whenever I try to deploy an ingress controller, when I do a describe ing I get:
Default backend: default-http-backend:80 ()
and it never links the ingress controller to the ClusterIP services. I don't know what is happening.
Thanks
Thanks a lot for all your videos. Could you make a video on how I can listen to TCP traffic using ingress, as by default it listens to HTTP & HTTPS traffic only?
I would also like to request a video on the above using the Contour Ingress Controller.
Hi Gourav, thanks for watching. I will see if I can do those. I have already recorded videos for the next two months and it will be after that unfortunately. Cheers.