[ Kube 113.1 ] Learn Kubesphere | Provisioning Kubernetes cluster with kubekey
- Added 6 Sep 2024
- In this video I will show you how to provision a Kubernetes cluster with KubeKey.
KubeSphere is a distributed operating system for cloud-native application management, using Kubernetes as its kernel. KubeSphere is also a multi-tenant container platform.
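The provisioning flow shown in the video can be sketched roughly as below. This is a minimal sketch based on the KubeKey quick-start workflow; the version numbers and the config file name are illustrative assumptions, not taken from the video.

```shell
# Download the kk binary (official KubeKey installer script)
curl -sfL https://get-kk.kubesphere.io | sh -

# Generate a sample cluster configuration
# (versions shown here are illustrative; pick the ones you need)
./kk create config --with-kubernetes v1.28.0 --with-kubesphere v3.4.1 -f config-sample.yaml

# Edit config-sample.yaml to list your master/worker nodes and SSH details,
# then provision the cluster
./kk create cluster -f config-sample.yaml
```

Once `kk create cluster` finishes, `kubectl get nodes` on the first master should show all nodes joined.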
KubeSphere weblinks
kubesphere.io/
github.com/kub...
😺 Github:
github.com/jus...
📺 Learn Kubesphere Playlist:
• Learn Kubesphere
📺 Learn Kubernetes Playlist:
• Learn Kubernetes
Justmeandopensource Discord Server:
/ discord
Hope you enjoyed this video. Please share it with your friends and don't forget to subscribe to my channel. For any questions/issues/feedback, please leave me a comment and I will be happy to help.
👏 Thanks for watching.
💗 If you wish to support me:
www.paypal.com...
As always great content
Thanks for watching.
You should disable swap memory. It is always recommended to disable swap, since Kubernetes swap support is still in alpha.
I do it in my Vagrant bootstrap scripts, but I guess KubeSphere will do it for us. Not sure; it wasn't mentioned in the prerequisites, and I haven't checked after KubeSphere was installed. But good spot. Thanks for watching.
KK will set "vm.swappiness = 1" for you.
@@LeoAzong Thanks for confirming. Helpful. Cheers.
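For reference, disabling swap manually on each node usually looks like the sketch below. This is a generic approach, not something shown in the video; the claim that KubeKey sets `vm.swappiness=1` comes from the comment above and I have not verified it.

```shell
# Turn off swap immediately (kubelet by default refuses to run with swap on)
sudo swapoff -a

# Comment out any swap entries in /etc/fstab so swap stays off after reboot
# (a backup is written to /etc/fstab.bak)
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab

# Alternatively, per the comment above, KubeKey reportedly lowers swappiness
# instead of disabling swap entirely:
sudo sysctl vm.swappiness=1
```

Either way, verify with `free -h` that the swap line reads zero before bootstrapping the cluster.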
Hi, Venkat
so cool & perfect
Thanks a lot .🙏
Thanks for watching, Jamall.
Thanks for showing us way to deploy K8s
Thanks for watching.
Nice video ! Thanks !
Thanks for watching. Cheers.
Excellent videos!
Thank you for your excellent videos. I wanted to know your latest opinion on what technologies you use to run production Kubernetes clusters. There are so many technologies out there, it's mind-boggling.
Really useful.
Hi, Thanks for watching.
Did you run nerdctl as a replacement for Docker? If yes, what are your thoughts on it? In my opinion it's pretty good for running containers in containerd without Docker. Waiting for the next video, thank you.
No. I didn't use nerdctl.
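For anyone curious about the nerdctl question above: nerdctl is a Docker-compatible CLI for containerd, so everyday commands mirror Docker's. A minimal sketch (the container name and port mapping below are just illustrative):

```shell
# Run a container directly in containerd, Docker-style
sudo nerdctl run -d --name web -p 8080:80 nginx:alpine

# List running containers in the default containerd namespace
sudo nerdctl ps

# Containers managed by Kubernetes live in the "k8s.io" containerd namespace
sudo nerdctl --namespace k8s.io ps
```

The namespace flag is the main difference from Docker: Kubernetes-created containers will not appear unless you point nerdctl at the `k8s.io` namespace.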
Hi Venkat, I have created a multi-master setup (3 masters, 3 workers) in two ways:
1. Using kube-vip (static pod)
2. Using HAProxy and Keepalived
In both scenarios the cluster is able to tolerate a single master failure, but not two. Is it expected behavior that in a 3-master setup the cluster can tolerate only 1 inactive master? PS: built-in etcd cluster.
Maybe with an external etcd cluster it might be able to tolerate 2 inactive masters?
@@vinothkumaar2568 Hmm, I haven't tried with two master nodes unavailable. In theory at least it should work, as we still have 1 master node alive. But as you said, it could well be the etcd cluster on those 3 nodes that has gone mad.
@@justmeandopensource hmm yes not very sure. Will check 👍 Thank you for the response 🤝
@@vinothkumaar2568 no worries
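The behaviour described in this thread matches etcd's quorum rule: a cluster of n members stays writable only while floor(n/2)+1 members are up, so 3 stacked etcd members tolerate exactly 1 failure. Moving etcd external only helps if the external cluster has more members (e.g. 5 members tolerate 2 failures). A quick sketch of the arithmetic:

```shell
# etcd quorum: a cluster of n members needs floor(n/2)+1 to stay writable;
# it can therefore tolerate n - quorum member failures
for n in 1 3 5 7; do
  quorum=$(( n / 2 + 1 ))
  tolerated=$(( n - quorum ))
  echo "members=$n quorum=$quorum tolerated_failures=$tolerated"
done
```

So for n=3 the cluster tolerates only 1 failure regardless of kube-vip vs HAProxy: once 2 of the 3 etcd members are down, quorum is lost and the API server can no longer write, even though 1 master is still alive.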
# TIL