diff --git a/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/_index.md b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/_index.md
index b3968c92bd..b65e279ef3 100644
--- a/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/_index.md
+++ b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/_index.md
@@ -1,16 +1,18 @@
---
title: Install and validate Helm on Google Cloud C4A Arm-based VMs

-minutes_to_complete: 45
+minutes_to_complete: 60

who_is_this_for: This is an introductory topic intended for developers who want to get hands-on experience using Helm on Linux Arm64 systems, specifically Google Cloud C4A virtual machines powered by Axion processors.

learning_objectives:
  - Provision an Arm-based SUSE Linux Enterprise Server (SLES) virtual machine on Google Cloud (C4A with Axion processors)
-  - Install Helm and kubectl on a SUSE Arm64 (C4A) instance
-  - Create and validate a local Kubernetes cluster (KinD) on Arm64
-  - Verify Helm functionality by performing install, upgrade, and uninstall workflows
+  - Install and configure Helm and kubectl on a SUSE Arm64 (C4A) instance
+  - Create and connect to a Google Kubernetes Engine (GKE) cluster running on Arm-based nodes
+  - Deploy PostgreSQL, Redis, and NGINX on GKE using custom Helm charts
+  - Validate Helm workflows by performing install, upgrade, rollback, and uninstall operations
+  - Verify application readiness and service access for PostgreSQL, Redis, and NGINX on GKE
  - Observe Helm behavior under concurrent CLI operations on an Arm64-based Kubernetes cluster

prerequisites:
@@ -32,8 +34,11 @@ armips:
tools_software_languages:
  - Helm
  - Kubernetes
-  - KinD
  - kubectl
+  - GKE
+  - PostgreSQL
+  - Redis
+  - NGINX

operatingsystems:
  - Linux
@@ -57,6 +62,11 @@ further_reading:
    link: https://kubernetes.io/docs/
    type: documentation

+  - resource:
+    title: Bitnami Helm Charts
+    link: https://github.com/bitnami/charts
+    type: documentation
+
weight: 1
layout: "learningpathall"
learning_path_main_page: "yes"
diff --git a/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/benchmarking.md b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/benchmarking.md
index e1aa5e051a..4e029ecd4b 100644
--- a/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/benchmarking.md
+++ b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/benchmarking.md
@@ -1,6 +1,6 @@
---
title: Benchmark Helm concurrency on a Google Axion C4A virtual machine
-weight: 6
+weight: 10

### FIXED, DO NOT MODIFY
layout: learningpathall
diff --git a/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/gke-cluster-for-helm.md b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/gke-cluster-for-helm.md
new file mode 100644
index 0000000000..5c46f3d2f3
--- /dev/null
+++ b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/gke-cluster-for-helm.md
@@ -0,0 +1,134 @@
---
title: Prepare GKE Cluster for Helm Deployments
weight: 6

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## Overview
This section explains how to prepare a **Google Kubernetes Engine (GKE) cluster** for deploying Helm charts.
The prepared cluster is used to deploy the following services using custom Helm charts:

- PostgreSQL
- Redis
- NGINX

This setup differs from the earlier KinD-based local cluster, which was intended only for local validation.
## Prerequisites

Before starting, ensure the following are already completed:

- Docker installed
- kubectl installed
- Helm installed
- Google Cloud account available

If Helm and kubectl are not installed, complete the **Install Helm** section first.

### Verify kubectl Installation
Confirm that kubectl is available:

```console
kubectl version --client
```
You should see output similar to:
```output
Client Version: version.Info{Major:"1", Minor:"26+", GitVersion:"v1.26.15-dispatcher", GitCommit:"5490d28d307425a9b05773554bd5c037dbf3d492", GitTreeState:"clean", BuildDate:"2024-04-18T22:39:37Z", GoVersion:"go1.21.9", Compiler:"gc", Platform:"linux/arm64"}
Kustomize Version: v4.5.7
```

### Install Google Cloud SDK (gcloud)
The Google Cloud SDK is required to create and manage GKE clusters.

**Download and extract:**

```console
wget https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-460.0.0-linux-arm.tar.gz
tar -xvf google-cloud-sdk-460.0.0-linux-arm.tar.gz
```

**Install gcloud:**

```console
./google-cloud-sdk/install.sh
```
Restart the shell or reload the environment if prompted.

### Initialize gcloud
Authenticate and configure the Google Cloud CLI:

```console
./google-cloud-sdk/bin/gcloud init
```

During initialization:

- Log in using a Google account
- Select the correct project
- Choose the default settings when unsure

### Set the Active Project
Ensure the correct GCP project is selected:

```console
gcloud config set project YOUR_PROJECT_ID
```

### Enable the Kubernetes Engine API
Enable the API required for GKE:

```console
gcloud services enable container.googleapis.com
```

### Create a GKE Cluster
Create a Kubernetes cluster that will host the Helm deployments:

```console
gcloud container clusters create helm-arm64-cluster \
    --zone us-central1-a \
    --machine-type c4a-standard-4 \
    --num-nodes 2
```

- This creates a standard GKE cluster
- Node count and machine type can be adjusted later
- The `c4a` machine series is Arm-based (Axion); availability depends on the selected region and zone

### Configure kubectl Access to GKE
Fetch the cluster credentials:

```console
gcloud container clusters get-credentials helm-arm64-cluster \
    --zone us-central1-a
```

### Verify Cluster Access
Confirm Kubernetes access:

```console
kubectl get nodes
```

You should see output similar to:
```output
NAME                                                STATUS   ROLES    AGE     VERSION
gke-helm-arm64-cluster-default-pool-f4ab8a2d-5h6f   Ready    <none>   5h54m   v1.33.5-gke.1308000
gke-helm-arm64-cluster-default-pool-f4ab8a2d-5ldp   Ready    <none>   5h54m   v1.33.5-gke.1308000
```

- Nodes are in the `Ready` state
- The Kubernetes control plane is accessible

### Outcome
At this point:

- Google Cloud SDK is installed and configured
- The GKE cluster is running
- kubectl is connected to the cloud cluster
- Helm is ready to deploy applications on GKE

The environment is now prepared to deploy Helm charts.
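Optionally, you can also confirm that the nodes report the `arm64` architecture, since the rest of this Learning Path assumes Arm-based nodes. This is a minimal check using kubectl's JSONPath output:

```console
# Print each node name with its reported CPU architecture
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.architecture}{"\n"}{end}'
```

Each node should print `arm64` next to its name, confirming that the C4A (Axion) node pool is Arm-based.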
diff --git a/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/images/nginx-browser.png b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/images/nginx-browser.png
new file mode 100644
index 0000000000..6415e5363d
Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/images/nginx-browser.png differ
diff --git a/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/nginx-helm.md b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/nginx-helm.md
new file mode 100644
index 0000000000..a094881360
--- /dev/null
+++ b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/nginx-helm.md
@@ -0,0 +1,172 @@
---
title: NGINX Deployment Using Custom Helm Chart
weight: 9

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## NGINX Deployment Using Custom Helm Chart
This document explains how to deploy NGINX as a frontend service on Kubernetes using a custom Helm chart.

## Goal
After completing this guide, the environment will include:

- NGINX deployed using Helm
- Public access using a LoadBalancer service
- External IP available for browser access
- Foundation for connecting backend services (Redis, PostgreSQL)

### Create Helm Chart
Generates a Helm chart skeleton that will be customized for NGINX.

```console
helm create my-nginx
```

### Resulting structure

```text
my-nginx/
├── Chart.yaml
├── values.yaml
└── templates/
```

### Configure values.yaml
Defines configurable parameters such as:

- NGINX image
- Service type
- Public port

Replace the contents of `my-nginx/values.yaml` with:
```yaml
image:
  repository: nginx
  tag: latest

service:
  type: LoadBalancer
  port: 80
```

Why this matters:

- Centralizes configuration
- Allows service exposure without editing templates
- Simplifies future changes

### Deployment Definition (deployment.yaml)
Defines how the NGINX container runs inside Kubernetes, including:

- Container image
- Pod labels
- Port exposure

Replace `my-nginx/templates/deployment.yaml` completely. Note that the container image is templated from `values.yaml`, so the chart actually uses the values defined above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "my-nginx.fullname" . }}

spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ include "my-nginx.name" . }}

  template:
    metadata:
      labels:
        app: {{ include "my-nginx.name" . }}

    spec:
      containers:
        - name: nginx
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: 80
```

### Service Definition (service.yaml)
Exposes NGINX to external traffic using a Kubernetes LoadBalancer.

Replace `my-nginx/templates/service.yaml` with:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "my-nginx.fullname" . }}
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: {{ include "my-nginx.name" . }}
```

Why LoadBalancer:

- Provides a public IP
- Required for browser access
- Common pattern for frontend services

### Install and Access

```console
helm install nginx ./my-nginx
```

```output
NAME: nginx
LAST DEPLOYED: Tue Jan  6 07:55:52 2026
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
   NOTE: It may take a few minutes for the LoadBalancer IP to be available.
   You can watch its status by running 'kubectl get --namespace default svc -w nginx-my-nginx'
   export SERVICE_IP=$(kubectl get svc --namespace default nginx-my-nginx --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
   echo http://$SERVICE_IP:80
```

### Access NGINX from Browser
Get the external IP:

```console
kubectl get svc
```

Wait until EXTERNAL-IP is assigned.

```output
NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
kubernetes                 ClusterIP      34.118.224.1     <none>          443/TCP        3h22m
nginx-my-nginx             LoadBalancer   34.118.239.19    34.63.103.125   80:31501/TCP   52s
postgres-app-my-postgres   ClusterIP      34.118.225.2     <none>          5432/TCP       13m
redis-my-redis             ClusterIP      34.118.234.155   <none>          6379/TCP       6m53s
```

**Open in browser:**

```bash
http://<EXTERNAL-IP>
```

Replace `<EXTERNAL-IP>` with the address shown for `nginx-my-nginx` (in this example, `34.63.103.125`).

You should see the default NGINX welcome page as shown below:

![NGINX default welcome page in a web browser on a GCP VM alt-text#center](images/nginx-browser.png)

### Outcome
This deployment achieves the following:

- NGINX deployed using a custom Helm chart
- Public access enabled via LoadBalancer
- External IP available for frontend access
- Ready to route traffic to backend services

diff --git a/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/postgresql-helm.md b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/postgresql-helm.md
new file mode 100644
index 0000000000..7e9003091e
--- /dev/null
+++ b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/postgresql-helm.md
@@ -0,0 +1,296 @@
---
title: PostgreSQL Deployment Using Custom Helm Chart
weight: 7

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## PostgreSQL Deployment Using Custom Helm Chart
This document explains how to deploy **PostgreSQL** on Kubernetes using a **custom Helm chart** with persistent storage.

### Goal
After completing this guide, the environment will include:

- PostgreSQL running inside Kubernetes
- Persistent storage using a PVC
- Secure credentials using Kubernetes Secrets
- Ability to connect using psql
- A clean, reusable Helm chart

### Prerequisites
Ensure Kubernetes and Helm are working:

```console
kubectl get nodes
helm version
```

If these commands fail, resolve the issues before continuing.

### Create Working Directory
Creates a dedicated folder to store all Helm charts for the microservices.

```console
mkdir helm-microservices
cd helm-microservices
```

### Create Helm Chart
Generates a Helm chart skeleton that will be customized for PostgreSQL.

```console
helm create my-postgres
```

**Directory structure:**

```text
helm-microservices/
└── my-postgres/
    ├── Chart.yaml
    ├── values.yaml
    └── templates/
```

### Clean the Chart
The default Helm chart contains several files that are not required for a basic PostgreSQL deployment. Removing these files prevents confusion and template errors.

Inside `my-postgres/templates/`, delete the following:

- hpa.yaml
- ingress.yaml
- serviceaccount.yaml
- tests/
- NOTES.txt
- httproute.yaml

Only PostgreSQL-specific templates will be maintained; one way to remove the files is shown in the sketch after this list.
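A minimal cleanup sketch, assuming you are still in the `helm-microservices` directory created above. Keep `templates/_helpers.tpl`, since it defines the `my-postgres.name` and `my-postgres.fullname` helpers used by the remaining templates:

```console
cd my-postgres/templates
# Remove the templates this guide does not use; -f ignores any that are absent
rm -f hpa.yaml ingress.yaml serviceaccount.yaml NOTES.txt httproute.yaml
rm -rf tests/
cd ../..
```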
### Configure values.yaml (Main Configuration File)
`values.yaml` centralizes all configurable settings, including:

- Container image details
- Database credentials
- Persistent storage configuration

Replace the entire contents of `my-postgres/values.yaml` with the following:

```yaml
replicaCount: 1

image:
  repository: postgres
  tag: "15"
  pullPolicy: IfNotPresent

postgresql:
  username: admin
  password: admin123
  database: mydb

persistence:
  enabled: true
  size: 10Gi
  mountPath: /var/lib/postgresql
  dataSubPath: data
```

Why this matters:

- Ensures consistent configuration
- Avoids Helm template evaluation errors
- Simplifies upgrades and maintenance

### Create secret.yaml (Database Credentials)
Stores PostgreSQL credentials securely using Kubernetes Secrets.

Create the following file:

`my-postgres/templates/secret.yaml`

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "my-postgres.fullname" . }}
type: Opaque
stringData:
  POSTGRES_USER: {{ .Values.postgresql.username }}
  POSTGRES_PASSWORD: {{ .Values.postgresql.password }}
  POSTGRES_DB: {{ .Values.postgresql.database }}
```

Why this matters:

- Prevents hard-coding credentials
- Follows Kubernetes security best practices

### Create pvc.yaml (Persistent Storage)
Requests persistent storage so PostgreSQL data remains available even if the pod restarts.

Create the following file:

`my-postgres/templates/pvc.yaml`

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ include "my-postgres.fullname" . }}-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: {{ .Values.persistence.size }}
```

Why this matters:

- Without a PVC, PostgreSQL data would be lost whenever the pod restarts

### deployment.yaml (PostgreSQL Pod Definition)
Defines how PostgreSQL runs inside Kubernetes, including:

- Container image
- Environment variables
- Volume mounts
- Pod configuration

Replace the existing `my-postgres/templates/deployment.yaml` file completely. Note that `volumes` sits at the pod spec level, as a sibling of `containers`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "my-postgres.fullname" . }}

spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ include "my-postgres.name" . }}

  template:
    metadata:
      labels:
        app: {{ include "my-postgres.name" . }}

    spec:
      containers:
        - name: postgres
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}

          ports:
            - containerPort: 5432

          envFrom:
            - secretRef:
                name: {{ include "my-postgres.fullname" . }}

          env:
            - name: PGDATA
              value: "{{ .Values.persistence.mountPath }}/{{ .Values.persistence.dataSubPath }}"

          volumeMounts:
            - name: postgres-data
              mountPath: {{ .Values.persistence.mountPath }}

      volumes:
        - name: postgres-data
          persistentVolumeClaim:
            claimName: {{ include "my-postgres.fullname" . }}-pvc
```

- PGDATA avoids the common lost+found directory issue
- Persistent storage is mounted safely
- Secrets inject credentials at runtime

### service.yaml (Internal Access)
Enables internal cluster communication so other services can connect to PostgreSQL.

Replace `my-postgres/templates/service.yaml` with:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "my-postgres.fullname" . }}
spec:
  type: ClusterIP
  ports:
    - port: 5432
      targetPort: 5432
  selector:
    app: {{ include "my-postgres.name" . }}
```

Why ClusterIP:

- PostgreSQL should remain accessible only inside the Kubernetes cluster

### Install PostgreSQL Using Helm
The `helm uninstall` line removes any earlier release so the install starts clean.

```console
cd helm-microservices
helm uninstall postgres || true
helm install postgres-app ./my-postgres
```

**Check:**

```console
kubectl get pods
kubectl get pvc
```

You should see output similar to:
```output
NAME                                        READY   STATUS    RESTARTS   AGE
postgres-app-my-postgres-6dbc8759b6-jgpxs   1/1     Running   0          40s

> kubectl get pvc
NAME                           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
postgres-app-my-postgres-pvc   Bound    pvc-5f3716df-39bb-4683-990a-c5cd3906fbce   10Gi       RWO            standard-rwo   <unset>                 33s
```

### Test PostgreSQL
Connect to PostgreSQL, replacing `<pod-name>` with the pod name from `kubectl get pods` (in this example, `postgres-app-my-postgres-6dbc8759b6-jgpxs`):

```console
kubectl exec -it <pod-name> -- psql -U admin -d mydb
```

You should see output similar to:
```output
psql (15.15 (Debian 15.15-1.pgdg13+1))
Type "help" for help.

mydb=#
```

**Run test queries:**

```psql
CREATE TABLE test (id INT);
INSERT INTO test VALUES (1);
SELECT * FROM test;
```

You should see output similar to:
```output
mydb=# CREATE TABLE test (id INT);
INSERT INTO test VALUES (1);
SELECT * FROM test;
CREATE TABLE
INSERT 0 1
 id
----
  1
(1 row)
```

### Outcome
You have successfully:

- Created a custom Helm chart
- Deployed PostgreSQL on Kubernetes
- Enabled persistent storage
- Used Secrets for credentials
- Verified database functionality

diff --git a/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/redis-helm.md b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/redis-helm.md
new file mode 100644
index 0000000000..c193587405
--- /dev/null
+++ b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/redis-helm.md
@@ -0,0 +1,166 @@
---
title: Redis Deployment Using Custom Helm Chart
weight: 8

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## Redis Deployment Using Custom Helm Chart
This document explains how to deploy Redis on Kubernetes using a custom Helm chart.

## Goal
After completing this guide, the environment will include:

- Redis running on Kubernetes
- Deployment managed using Helm
- Internal access using a ClusterIP Service
- Basic connectivity validation using redis-cli

### Create Helm Chart
Generates a Helm chart skeleton that will be customized for Redis.

```console
helm create my-redis
```

### Resulting structure

```text
my-redis/
├── Chart.yaml
├── values.yaml
└── templates/
```

### Clean Templates
The default Helm chart includes several files that are not required for a basic Redis deployment. Removing them avoids unnecessary complexity and template errors.

Inside `my-redis/templates/`, delete the following:

- ingress.yaml
- hpa.yaml
- serviceaccount.yaml
- tests/
- NOTES.txt

Only Redis-specific templates will be maintained; a quick check of the remaining files follows this list.
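As a sanity check (assuming a default `helm create` scaffold), list what remains after cleanup. `_helpers.tpl` should stay, since it defines the `my-redis.name` and `my-redis.fullname` helpers used by the templates below:

```console
# Expected to show roughly: _helpers.tpl  deployment.yaml  service.yaml
ls my-redis/templates/
```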
### Configure values.yaml
`values.yaml` stores all configurable parameters, including:

- Redis image version
- Service type and port
- Replica count

Replace the entire contents of `my-redis/values.yaml` with:

```yaml
replicaCount: 1

image:
  repository: redis
  tag: "7"
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 6379
```

Why this matters:

- Centralizes configuration
- Simplifies future updates
- Prevents Helm template evaluation issues

### Deployment Definition (deployment.yaml)
Defines how the Redis container runs inside Kubernetes, including:

- Container image
- Port configuration
- Pod labels and selectors

Replace the existing `my-redis/templates/deployment.yaml` completely.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "my-redis.fullname" . }}

spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ include "my-redis.name" . }}

  template:
    metadata:
      labels:
        app: {{ include "my-redis.name" . }}

    spec:
      containers:
        - name: redis
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: 6379
```

- Redis runs as a single pod
- No persistence is configured (suitable for learning and caching use cases)

### Service Definition (service.yaml)
Creates an internal Kubernetes Service to allow other pods to connect to Redis.

Replace `my-redis/templates/service.yaml` with:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "my-redis.fullname" . }}
spec:
  type: ClusterIP
  ports:
    - port: 6379
  selector:
    app: {{ include "my-redis.name" . }}
```

Why ClusterIP:

- Redis is intended for internal communication only within the cluster

### Install Redis Using Helm
Install the chart, then validate that Redis is running and responding. Replace `<redis-pod-name>` with the Redis pod name from `kubectl get pods`:

```console
helm install redis ./my-redis
kubectl get pods
kubectl get svc
kubectl exec -it <redis-pod-name> -- redis-cli ping
```

You should see output similar to:
```output
NAME                                        READY   STATUS    RESTARTS   AGE
postgres-app-my-postgres-6dbc8759b6-jgpxs   1/1     Running   0          6m38s
redis-my-redis-75c88646fb-6lz8v             1/1     Running   0          13s

> kubectl get svc
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
redis-my-redis   ClusterIP   34.118.234.155   <none>        6379/TCP   6m14s

> kubectl exec -it redis-my-redis-75c88646fb-6lz8v -- redis-cli ping
PONG
```

- Redis pod → Running
- Redis service → ClusterIP

### Outcome
This deployment achieves the following:

- Redis deployed using a custom Helm chart
- Internal access via a Kubernetes Service
- Successful connectivity validation
- A clean, reusable Helm structure
- Redis accessible to other pods via its service name
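As an optional final check that other workloads can reach Redis by its service name, you can run a short-lived client pod. This is a sketch; the service name `redis-my-redis` matches the example output above, and the pod is removed automatically when the command exits:

```console
# Run a throwaway Redis client pod and ping the service by name
kubectl run redis-client --rm -it --restart=Never --image=redis:7 -- redis-cli -h redis-my-redis ping
```

A `PONG` reply confirms that Redis resolves and responds by service name inside the cluster.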