How to deploy MySQL on a Kubernetes Cluster?

Pre-Requisite

  1. A Kubernetes cluster is up and running (if not, follow this blog). Verify with:
kubectl get nodes -o wide

OUTPUT

NAME        STATUS   ROLES           AGE     VERSION    INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION         CONTAINER-RUNTIME
kubmaster   Ready    control-plane   5h26m   v1.28.15   192.168.10.6   <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-32-cloud-amd64   containerd://1.7.27
kubnode1    Ready    <none>          5h3m    v1.28.15   192.168.10.5   <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-35-cloud-amd64   containerd://1.7.27
kubnode2    Ready    <none>          5h2m    v1.28.15   192.168.10.7   <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-35-cloud-amd64   containerd://1.7.27

Create Namespace (Optional)

Create file kamailio-namespace.yaml

vim kamailio-namespace.yaml

paste the following content

# kamailio-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kamailio

Apply It

kubectl apply -f kamailio-namespace.yaml
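
Confirm the namespace was created; the STATUS should show Active

kubectl get namespace kamailio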

Create a Persistent Volume for the MySQL Database

create file mysql-pv.yaml

vim mysql-pv.yaml

paste the following content in the file

# mysql-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  storageClassName: manual
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data/mysql"

Apply it

kubectl apply -f mysql-pv.yaml

Confirmation command (PersistentVolumes are cluster-scoped, so no -A or namespace flag is needed)

kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                STORAGECLASS   REASON   AGE
mysql-pv   3Gi        RWO            Retain           Available   kamailio/mysql-pvc   manual                  5h16m

Important

The STATUS must be Available at this point; it will change to Bound once the PVC created below claims it.

Create ConfigMap

create file mysql-configmap.yaml

vim mysql-configmap.yaml

paste the following content in the file

# mysql-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-config
  namespace: kamailio
data:
  my.cnf: |
    [mysqld]
    default-authentication-plugin=mysql_native_password
    skip-name-resolve
    max_connections=1000
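
Once the ConfigMap is applied (see the "Apply all of the files" step below), you can verify that the my.cnf content landed in the cluster

kubectl describe configmap mysql-config -n kamailio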

Create Secret File

create file mysql-secret.yaml

vim mysql-secret.yaml

copy and paste the following content in the file

# mysql-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secrets
  namespace: kamailio
type: Opaque
data:
  mysql-root-password: MTIzNDU2Nw==
  mysql-password: MTIzNDU2Nw==

The password must be base64 encoded; note that base64 is an encoding, not encryption.

echo -n 1234567 | base64 

OUTPUT

MTIzNDU2Nw==
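
To double-check the value, decode it back; this should print 1234567

echo MTIzNDU2Nw== | base64 -d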

Create a PersistentVolumeClaim for MySQL Data

create file mysql-pvc.yaml

vim mysql-pvc.yaml

copy and paste the following content in the file

# mysql-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
  namespace: kamailio
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
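
After this PVC is applied (in the "Apply all of the files" step below), confirm that it bound to the PV created earlier; the STATUS should show Bound

kubectl get pvc -n kamailio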

Create the MySQL Deployment

create file mysql-deployment.yaml

vim mysql-deployment.yaml

copy and paste the following content in the file

# mysql-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  namespace: kamailio
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secrets
              key: mysql-root-password
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secrets
              key: mysql-password
        - name: MYSQL_USER
          value: "kamailio"
        - name: MYSQL_DATABASE
          value: "kamailio"
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
        - name: mysql-config
          mountPath: /etc/mysql/conf.d
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pvc
      - name: mysql-config
        configMap:
          name: mysql-config
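
Once the Deployment is applied (in the "Apply all of the files" step below), you can wait for it to become ready

kubectl rollout status deployment/mysql -n kamailio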

Create MySQL Service

create file mysql-service.yaml

vim mysql-service.yaml

copy and paste the following content in the file

apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: kamailio
spec:
  type: NodePort
  ports:
  - port: 3306
    targetPort: 3306
    nodePort: 30306
  selector:
    app: mysql

Apply all of the files

kubectl apply -f mysql-configmap.yaml -f mysql-secret.yaml -f mysql-pvc.yaml -f mysql-deployment.yaml -f mysql-service.yaml

Check the pod

kubectl get pods -n kamailio

OUTPUT

NAME                    READY   STATUS    RESTARTS   AGE
mysql-d4b767577-58v2n   1/1     Running   0          5h15m
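
Because the Service exposes NodePort 30306, you can also test connectivity from any machine that can reach a node IP, assuming a MySQL client is installed there

mysql -h 192.168.10.6 -P 30306 -u kamailio -p

Enter the kamailio user's password (1234567 in this example) when prompted.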

Check PV Status

kubectl get pv

OUTPUT

NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
mysql-pv   3Gi        RWO            Retain           Bound    kamailio/mysql-pvc   manual                  8m42s

Confirm the nameserver IP (substitute your own pod name from kubectl get pods -n kamailio)

kubectl -n kamailio exec -it mysql-d4b767577-58v2n -- cat /etc/resolv.conf

OUTPUT

search kamailio.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
options ndots:5
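
As an additional sanity check that cluster DNS resolves the mysql Service, you can run a throwaway busybox pod (this assumes the busybox image can be pulled)

kubectl run dns-test -n kamailio --rm -it --image=busybox:1.36 --restart=Never -- nslookup mysql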

Add Port Forwarding

kubectl port-forward --address 192.168.10.6 pod/mysql-d4b767577-58v2n -n kamailio 3306:3306

This command starts listening on the master node IP 192.168.10.6:3306 and forwards all traffic to port 3306 of the MySQL pod (again, substitute your own pod name).
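
With the forward running, a MySQL client on the LAN can connect through the master node, for example

mysql -h 192.168.10.6 -P 3306 -u root -p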

Enjoy 😉

How to configure a Kubernetes Cluster?


Pre-Requisite

  1. Three Linux Machines/VMs (Minimum Specs: 2 cores, 4GB RAM, 16GB Hard Disk)
  2. They can communicate with each other (preferably on the same LAN)
  3. Open ports: 22 (SSH), 6443 (Kubernetes API), 2379-2380 (etcd), 10250 (kubelet)

Linux Server Preparation for Kubernetes Cluster

Update system and install prerequisites

apt update && apt upgrade -y
apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates chrony

Disable swap (Kubernetes requirement)

swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
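
Verify that swap is off; the following command should print nothing

swapon --show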

Enable kernel modules and sysctl params

cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

Load kernel Modules

modprobe overlay
modprobe br_netfilter

Set system variables

cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

Load system variables

sysctl --system
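
Confirm the values took effect; both should report 1

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward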

Install Docker and Kubernetes

Note

Execute the following steps on all Three Nodes.

Add Docker Repo

curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian bookworm stable" | tee /etc/apt/sources.list.d/docker.list

Update and install Docker

apt update
apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Configure containerd

mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml

Set cgroup driver to systemd

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

Restart containerd

systemctl restart containerd
systemctl enable containerd
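
Confirm the cgroup driver change took effect; this should print SystemdCgroup = true

grep SystemdCgroup /etc/containerd/config.toml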

Add Kubernetes repo

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list

Install kubelet, kubeadm, kubectl

apt update
apt install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
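
Confirm the tools are installed and held at the expected version

kubeadm version -o short
kubectl version --client
apt-mark showhold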

Initiate Kubernetes Cluster

Note

Execute the following commands only on the Master Node.

Initialize the cluster (the Calico network plugin, recommended for Debian, is installed in a later step)

kubeadm init

Or, if you want to define custom CIDRs, do it as below

kubeadm init \
  --pod-network-cidr=192.168.100.0/24 \
  --service-cidr=192.168.101.0/24  # Example: Different CIDR for services

The OUTPUT of the above command will guide you through the further commands

I0522 10:17:11.162566   18880 version.go:256] remote version is much newer: v1.33.1; falling back to: stable-1.28
[init] Using Kubernetes version: v1.28.15
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0522 10:17:19.988252   18880 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local kubmaster] and IPs [10.96.0.1 192.168.10.6]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubmaster localhost] and IPs [192.168.10.6 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubmaster localhost] and IPs [192.168.10.6 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.002958 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kubmaster as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node kubmaster as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: oi42uq.qkhghzhbgbekm81a
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.6:6443 --token oi42uq.qkhghzhbgbekm81a --discovery-token-ca-cert-hash sha256:353a1ad9271fdfc883cf2a7fa47b232580dd10a5fc3dcce11377d79daeed7518

Copy commands from the OUTPUT

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubeadm join 192.168.10.6:6443 --token oi42uq.qkhghzhbgbekm81a --discovery-token-ca-cert-hash sha256:353a1ad9271fdfc883cf2a7fa47b232580dd10a5fc3dcce11377d79daeed7518

Now Execute it in the following order

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

After that, install the Calico network plugin using the Tigera operator

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml

Note that the manifest-based install (kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml) is an alternative to the operator install; use one method or the other, not both.
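
Watch the Calico pods come up before proceeding; with the operator-based install they appear in the calico-system namespace

kubectl get pods -n calico-system -w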

Keep the join command from the OUTPUT handy; it will be run later on each worker node, not on the master

kubeadm join 192.168.10.6:6443 --token oi42uq.qkhghzhbgbekm81a --discovery-token-ca-cert-hash sha256:353a1ad9271fdfc883cf2a7fa47b232580dd10a5fc3dcce11377d79daeed7518

Now verify the master node is Ready

kubectl get nodes -o wide

OUTPUT

NAME        STATUS   ROLES           AGE     VERSION    INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION         CONTAINER-RUNTIME
kubmaster   Ready    control-plane   5h26m   v1.28.15   192.168.10.6   <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-32-cloud-amd64   containerd://1.7.27

Confirm the CoreDNS service IP as well.

kubectl get svc -n kube-system | grep -E 'kube-dns|coredns'

OUTPUT

kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   3m2s

Add node to Kubernetes Cluster

Important

Please make sure that the following port is reachable: TCP/6443. Try telnet on port 6443.

apt install telnet
telnet 192.168.10.8 6443
Trying 192.168.10.8...
Connected to 192.168.10.8.
Escape character is '^]'.
^]
telnet> quit
Connection closed.

Note

Execute the following commands only on Worker Nodes.

Go to each worker node and execute the following command

kubeadm join 192.168.10.6:6443 --token oi42uq.qkhghzhbgbekm81a --discovery-token-ca-cert-hash sha256:353a1ad9271fdfc883cf2a7fa47b232580dd10a5fc3dcce11377d79daeed7518

Execute the following command again on the Master Node

kubectl get nodes -o wide

OUTPUT

NAME        STATUS   ROLES           AGE     VERSION    INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION         CONTAINER-RUNTIME
kubmaster   Ready    control-plane   5h26m   v1.28.15   192.168.10.6   <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-32-cloud-amd64   containerd://1.7.27
kubnode1    Ready    <none>          5h3m    v1.28.15   192.168.10.5   <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-35-cloud-amd64   containerd://1.7.27
kubnode2    Ready    <none>          5h2m    v1.28.15   192.168.10.7   <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-35-cloud-amd64   containerd://1.7.27

Troubleshooting

Commands which can help in troubleshooting.

Check kubelet logs

journalctl -u kubelet -f

Check if system pods are working fine

kubectl get pods -n kube-system 

OUTPUT

NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-658d97c59c-gvn5s   1/1     Running   0          5h21m
calico-node-hw42j                          1/1     Running   0          5h21m
calico-node-p55zs                          1/1     Running   0          5h11m
calico-node-qdwlh                          1/1     Running   0          5h12m
coredns-5dd5756b68-r85lr                   1/1     Running   0          5h35m
coredns-5dd5756b68-zxsss                   1/1     Running   0          5h35m
etcd-kubmaster                             1/1     Running   0          5h35m
kube-apiserver-kubmaster                   1/1     Running   0          5h35m
kube-controller-manager-kubmaster          1/1     Running   0          5h35m
kube-proxy-cllll                           1/1     Running   0          5h12m
kube-proxy-pkxcp                           1/1     Running   0          5h11m
kube-proxy-s5c2c                           1/1     Running   0          5h35m
kube-scheduler-kubmaster                   1/1     Running   0          5h35m

Check if there are any errors in containerd

systemctl status containerd
crictl ps  # Verify container runtime

How to Reset Master Node

Drain all Worker Nodes

for node in $(kubectl get nodes -o name | grep -v master); do
  kubectl drain $node --ignore-daemonsets --delete-emptydir-data
done
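
The drained workers should now show SchedulingDisabled in their STATUS

kubectl get nodes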

Reset and Stop Control Plane Services

sudo kubeadm reset -f
sudo systemctl stop kubelet
sudo systemctl stop docker

How to Reset a Worker Node

sudo kubeadm reset
sudo rm -rf /etc/kubernetes/
sudo rm -rf /var/lib/kubelet/
sudo rm -rf /var/lib/etcd/
sudo rm -rf $HOME/.kube

sudo iptables -F
sudo iptables -t nat -F
sudo iptables -t mangle -F
sudo iptables -X

sudo systemctl stop kubelet
sudo systemctl disable kubelet
sudo systemctl stop docker || sudo systemctl stop containerd
sudo systemctl daemon-reload

Confirm that there is no service listening on port 10250

sudo netstat -tulnp | grep 10250
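
netstat may require the net-tools package; ss ships with Debian 12 and works the same way here

ss -tulnp | grep 10250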

For the next steps, use this link.

Enjoy 😉
