Deploying Kubernetes

Kubernetes is a nightmare for the exact same reason that it is awesome.

Setting up Kubernetes often feels like trying to assemble a jet engine while flying it. Trust me, I’ve been there, staring at networking errors and complex YAML files until my eyes crossed. But here’s a secret: if you strip away the enterprise-grade fluff, K8s is actually quite logical.

Today, we’re going the “less is more” route. We aren’t building a massive data center; we’re building a clean, functional environment on a single Ubuntu server using MicroK8s. It’s lightweight, it’s official, and it won’t break your Wi-Fi or mess with your PCIe configurations.

If you want the full benefits of Kubernetes, you have two choices: buying a solution or building one yourself.
And if you are anything like me, you know there is no better approach to learning than some good old DIY, hands-on work!

So in this guide I will go through the basic steps I had to learn to deploy my own Kubernetes cluster.

So, let’s spin up an Ubuntu Server and get on with it!

Step 1: Prep the Canvas

Before we touch Kubernetes, we need to make sure Ubuntu is ready. We want a clean slate so the installation doesn’t trip over itself.

  1. Update your system:

    sudo apt update && sudo apt upgrade -y

  2. Enable IPv4 forwarding: Kubernetes needs the kernel to route traffic so containers can talk to each other.

    echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf  && sudo sysctl -p

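To double-check that the flag actually took effect, you can read it straight from the kernel (this path is standard on Linux):

```shell
# Prints the current value of the forwarding flag: "1" means enabled.
cat /proc/sys/net/ipv4/ip_forward
```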

Step 2: The “One-Command” Install

We’re using MicroK8s because it packages everything into a single snap. No fussing with manual API server configs.

  1. Deploying MicroK8s:

    sudo snap install microk8s --classic

  2. Join the group: You don’t want to type sudo for every single command. Add your user to the MicroK8s group:

    sudo usermod -a -G microk8s $USER && mkdir -p ~/.kube && chmod -R 700 ~/.kube

    Note: Log out and back in (or run newgrp microk8s) for the group change to take effect!

  3. Check the status:

    microk8s status --wait-ready


Step 3: Enable the Essentials

By default, MicroK8s is a barebones engine. Let’s add a “Dashboard” (to see what’s happening) and “DNS” (so services can find each other).

microk8s enable dns dashboard
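
Once the dashboard add-on is up, you still need a way to reach it from your browser. A minimal sketch, assuming a recent MicroK8s release (which ships the dashboard-proxy helper) or plain port-forwarding:

```shell
# Easiest route (helper included in recent MicroK8s releases):
#   microk8s dashboard-proxy
# It prints a URL and a login token. The manual equivalent is to
# port-forward the dashboard service from the kube-system namespace:
#   microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard 10443:443
# and then browse to:
echo "https://127.0.0.1:10443"
```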


Step 4: Your First Project (The “Hello World” Webpage)

Now for the fun part. We’re going to deploy a simple Nginx web server. In the world of Kubernetes, we don’t just “run” a container; we define a Deployment (the brain) and a Service (the door).

  1. Create the Deployment: This tells Kubernetes to keep one “Pod” of Nginx running at all times.

    microk8s kubectl create deployment hello-k8s --image=nginx

  2. Expose it to the world: This creates a “NodePort,” which opens a port on your server so you can actually see the webpage.

    microk8s kubectl expose deployment hello-k8s --type=NodePort --port=80

  3. Find your port:

    microk8s kubectl get services

    Look for hello-k8s. You’ll see something like 80:31234/TCP. That 31234 is your magic number.
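
If you’d rather grab the port in a script than eyeball the table, you can query or parse it out. A sketch, working from a sample PORT(S) value (the jsonpath one-liner assumes the service already exists):

```shell
# Direct query (run on the server once the service is up):
#   microk8s kubectl get service hello-k8s -o jsonpath='{.spec.ports[0].nodePort}'

# Or parse the "80:31234/TCP" column from `kubectl get services`:
ports="80:31234/TCP"        # sample value copied from the output above
nodeport=${ports#*:}        # strip the internal port, leaving "31234/TCP"
nodeport=${nodeport%/*}     # strip the protocol, leaving "31234"
echo "$nodeport"
```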

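For reference, the two imperative commands above roughly correspond to this declarative manifest, which you could save as a file and apply with microk8s kubectl apply -f. This is a sketch; the names and labels mirror what kubectl create/expose generate by default:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-k8s
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-k8s
  template:
    metadata:
      labels:
        app: hello-k8s
    spec:
      containers:
        - name: nginx
          image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: hello-k8s
spec:
  type: NodePort
  selector:
    app: hello-k8s
  ports:
    - port: 80
      targetPort: 80
```

Writing it as YAML is what you’d graduate to next: the file can be version-controlled and re-applied, instead of living only in your shell history.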

Step 5: The Victory Lap

Open your browser and type in your server’s IP address followed by that port (e.g., http://192.168.1.50:31234). If you see “Welcome to nginx!”, you’ve officially conquered the Kubernetes learning curve.

Why this works:

  • Encapsulation: MicroK8s keeps the mess inside its own snap environment.

  • Scalability: If you wanted two web servers, you’d just type microk8s kubectl scale deployment hello-k8s --replicas=2.

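After a scale command like that, the READY column of the deployment tells you when the new replica is up. A quick sketch of checking it, using a sample value in kubectl’s standard table format:

```shell
# On the server:  microk8s kubectl get deployment hello-k8s
# shows a READY column such as "2/2" (ready/desired). Parsing a
# sample value to confirm the rollout finished:
ready="2/2"                  # e.g. copied from the READY column
available=${ready%/*}        # pods currently ready
desired=${ready#*/}          # replicas requested
[ "$available" = "$desired" ] && echo "deployment fully scaled"
```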
You’ve just moved from managing a server to orchestrating an environment. It’s a big shift, but as you can see, the entry fee doesn’t have to be a headache.