2/28/2024

High-Availability Rancher for the Home Lab

After playing with Rancher some on a small VM I decided I wanted to up my game and try a larger cluster. Having recently picked up a chunky new server I finally have the space to do just that!

Preparation - DNS

I decided that I wanted all of the various applications I was going to host on this cluster to live under a single subdomain. Since routing to specific applications will be handled by the Rancher cluster, and the cluster will be behind a load balancer, all I need to do is add a new CNAME record which points at the load balancer. I also added PTR records for the load balancer and each of the three nodes I would be creating to my DNS server so I can use names instead of IPs in my configuration.

For the load balancer I went with a CentOS 8 VM image because that was what was handy and set up a new VM. Since it will only be running HAProxy and will be keeping minimal logs, I kept the disk size at the default 10GB. I also went ahead and enabled Cockpit since it is now available by default and the web interface can be handy.

Just like in my previous post about Rancher, we're going to be combining the Kubernetes control plane, etcd nodes, and workload hosting onto the same machines. However, this time we're going to be beefing up each server, and we are also going to be deploying three of them. We could choose to have smaller control plane nodes with additional worker nodes, but I felt a three-master, all-in-one approach was a simple and effective test. I increased the VM resources from 2 CPUs and 4GB of RAM to 4 CPUs and 8GB of RAM per host and included a 100GB thin-provisioned disk for each as well. With those prepped and loaded with the RancherOS ISO, I was ready to boot them up and start installing.

The install followed the same pattern as before, but I went ahead and created separate cloud-config.yml files for each of the nodes so I wouldn't mix them up. I also recently learned that qemu-guest-agent does in fact come bundled with RancherOS, so I enabled that service under the rancher key in the cloud-config.

Then I installed Kubernetes with rke up --config rancher-cluster.yml:

    # rancher-cluster.yml
    ssh_agent_auth: true
    nodes:
      - address:
        user: rancher
        role:
      - address:
        user: rancher
        role:
      - address:
        user: rancher
        role:
    services:
      etcd:
        snapshot: true
        creation: 6h
        retention: 24h

The ssh_agent_auth line is only required because I have my SSH key loaded in an HSM and the only way to get a hold of it is through ssh-agent when trying to connect to the nodes. Even more impressive this time is that everything came up correctly during the first run, no network issues at all!

Install - Rancher

Installing Rancher itself was also pretty easy since I went with certificates that I had generated from my personal CA. The details are in my previous post, but the simple version is to create a configuration YAML file for Rancher which sets the cluster hostname to the DNS name of your load balancer.

I also installed the NFS client provisioner with Helm:

    $ helm install nfs-client stable/nfs-client-provisioner --values <values file>

This install method puts the provisioner in the global namespace instead of a single project. It would probably be more secure to do it on a per-project basis, but for now global is fine for testing purposes.

The last thing I wanted to test before I called this experiment a success was a full shutdown and restart of the cluster. As it turns out, that was also pretty easy to do, but I did run into a small gotcha with HAProxy.

First I gracefully drained two of my nodes before shutting them down, allowing all of my workloads to migrate onto the remaining node. I used the Rancher web UI to accomplish all of these tasks, but you could just as easily use kubectl if you prefer. This would allow all of the etcd pods to migrate and I wouldn't have issues when the cluster came back up. Once the first two nodes were down I performed a graceful shutdown on the last node before shutting down the HAProxy VM as well.

I began my restart procedure by rebooting the HAProxy VM and then rebooting the cluster nodes. As it turned out, this order was a mistake. When HAProxy started up it couldn't see Rancher running on port 80 or 443, so it actually failed to come up at all. The problem is that the cattle-node-agent pods and cattle-cluster-agent pods were also pinging the cluster hostname, AKA HAProxy, so they were failing as well. The solution ended up being to wait for the rancher pods to become ready and then start HAProxy. Once the other pods overcame their crash-loop backoff timers they were able to get the heartbeat successfully and the system came back up. From there all that I needed to do was uncordon the two drained nodes so they were schedulable again and I was good to go!

Conclusions

[Image: Rancher control plane]

There's still a lot of testing to do to see how fit for purpose this kind of cluster is for my workloads, but the initial install was smooth and painless.
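The DNS preparation described in the post (one subdomain for all apps, pointed at the load balancer, plus records for the LB and each node) could be sketched as a zone file fragment like the one below. All names and addresses are made-up placeholders, the A records are my assumption for the forward entries, and the matching PTR records the post mentions would live in the reverse zone:

```
; zone fragment sketch (names and IPs are placeholders, not from the post)
lb       IN A       192.0.2.10   ; HAProxy VM
node1    IN A       192.0.2.11   ; Rancher node 1
node2    IN A       192.0.2.12   ; Rancher node 2
node3    IN A       192.0.2.13   ; Rancher node 3
*.apps   IN CNAME   lb           ; every hosted app resolves to the load balancer
```

With a wildcard like this, each new application only needs an ingress rule in the cluster, not a new DNS change.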
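The qemu-guest-agent toggle the post enables "under the rancher key" could look roughly like the cloud-config sketch below. The SSH key is a placeholder, and services_include is my assumption for how RancherOS exposes its bundled services; treat this as a sketch rather than the author's exact file:

```yaml
#cloud-config
ssh_authorized_keys:
  - ssh-rsa AAAA... user@example   # placeholder key
rancher:
  services_include:
    qemu-guest-agent: true         # enable the bundled guest agent service
```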
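The "configuration YAML file for Rancher which sets the cluster hostname" can be sketched as a Helm values file for the rancher chart. rancher.example.com is a placeholder, and tls source "secret" plus privateCA are the chart options that fit the post's personal-CA certificates (the certificate itself would be loaded into the tls-rancher-ingress secret):

```yaml
# values sketch for the rancher chart (hostname is a placeholder)
hostname: rancher.example.com   # DNS name of the load balancer
ingress:
  tls:
    source: secret              # serve certs from the tls-rancher-ingress secret
privateCA: true                 # certs were signed by a personal CA
```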
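A load balancer definition in the spirit of the post might look like the HAProxy sketch below: plain TCP passthrough on ports 80 and 443 with health checks against the three nodes. Node hostnames are placeholders, and this is one common way to front a Rancher cluster, not necessarily the author's exact configuration:

```
# /etc/haproxy/haproxy.cfg sketch (node names are placeholders)
defaults
    mode tcp
    timeout connect 5s
    timeout client  60s
    timeout server  60s

frontend rancher_http
    bind *:80
    default_backend nodes_http

backend nodes_http
    balance roundrobin
    server node1 node1.example.com:80 check
    server node2 node2.example.com:80 check
    server node3 node3.example.com:80 check

frontend rancher_https
    bind *:443
    default_backend nodes_https

backend nodes_https
    balance roundrobin
    server node1 node1.example.com:443 check
    server node2 node2.example.com:443 check
    server node3 node3.example.com:443 check
```

Because the health checks probe the Rancher ingress on each node, backends only show healthy once Rancher is actually serving, which matches the restart-order gotcha the post describes.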